SiteSucker Updated


This is a minor update to SiteSucker, but it is a program that I depend on, so I am posting it.

For the uninitiated, SiteSucker is a program that sucks everything (pages, images, etc.) out of a website and downloads it to your hard drive. I use it because I update my school web page at the beginning of each year, but I don’t want to lose all the information from before. I rarely go back and look at the old pages; I mainly save them as a backup of what happened the previous year, in case I lose some other record.

The updates are:

  • Allowed users to view the download settings while downloading.
  • Replaced wildcard support in paths settings with regular expressions.
  • Removed “Get Files via Image Links” from the Download Options and added an “Only Follow Image Links” option under the Advanced tab in the download settings.
  • Added an option to save log files in ~/Library/Logs/SiteSucker.
  • Added a Logs tab in the Download Settings window and reorganized the settings.
  • Added scanning of the style attribute in all tags for URLs.
  • Replaced URL parameters with a value in local file names.
  • Deleted empty folders in the download folder when all downloads are paused.
  • Modified the document format to improve performance when analyzing files.
  • Fixed an issue where some files failed to download when a download was resumed.
  • Fixed some issues with the Open File command.
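The switch from wildcards to regular expressions in the paths settings is the change most likely to trip up existing users, since old patterns need translating. As a rough illustration (not SiteSucker’s actual code), here is how a shell-style wildcard path pattern maps to an equivalent regular expression, using Python’s standard library:

```python
import fnmatch
import re

# A wildcard path pattern of the kind the old paths settings accepted
wildcard = "/images/*.jpg"

# fnmatch.translate converts a shell-style wildcard into regex syntax,
# which is the form the updated paths settings now expect
regex = re.compile(fnmatch.translate(wildcard))

print(regex.match("/images/photo.jpg") is not None)   # True
print(regex.match("/pages/index.html") is not None)   # False
```

The general rule of thumb: `*` becomes `.*`, `?` becomes `.`, and literal dots must be escaped as `\.`.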
