Maybe Get Some Coffee

I got this message when I was rebuilding my father-in-law’s 2007 iMac. It appeared while I was trying to install an OS on the boot drive.

This might be the most realistic time-to-complete estimate I’ve ever seen in an Apple installer:

(I haven’t talked about that computer. I got it a couple of years ago and I’ve been meaning to upgrade it, just like I did with our own 2007 iMac. The only difference from that upgrade plan was that I only used a 1 TB drive for the boot drive. The use case is backing up some computers at work, so I added a 4 TB external drive. I would have preferred making it internal, so I could use the SATA connection rather than a USB 2.0 external enclosure. But … sigh. Apple. There isn’t really room for another drive (yes, I know about replacing the SuperDrive with an internal drive), and beyond that, getting at the drive, in case it fails and needs replacing, is so incredibly hard that I decided I could live with slow backups.)

Duplicate Files

I’ve been hoarding data for more than 20 years. For backups, I used to burn a CD periodically, but I long since outgrew those limits. Today, my backups are hard drives. The hoard keeps growing partly because I’ve moved between computers several times during that period, and when I do, I find stuff I don’t know what to do with. So I copy all that data into a new folder, typically called something like temp/backup/that-system-name/tmp/old/save/keep/t.files/save.d.

After 20 years, that starts to add up. So I’ve been looking at programs to help me find and get rid of duplicates. (I’ve been using rsync -n, and occasionally diff -qr, to compare folders. But the problem is deciding which folders, at which places in the directory structure, to compare.)
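The manual comparison step looks roughly like this (the directory names here are made up for illustration):

```shell
# Two hypothetical backup trees: one file in common, one changed.
mkdir -p old-backup new-backup
echo "same"      > old-backup/a.txt
echo "same"      > new-backup/a.txt
echo "version 1" > old-backup/b.txt
echo "version 2" > new-backup/b.txt

# rsync dry run: -n copies nothing, just lists what WOULD be
# transferred, i.e. what differs. -c compares by checksum
# instead of mtime/size; -v prints the file list.
rsync -rcnv old-backup/ new-backup/

# diff -qr: one line per differing or missing file.
# (diff exits nonzero when it finds differences.)
diff -qr old-backup new-backup || true
```

The pain isn’t running the commands; it’s knowing which pair of trees to point them at.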

Looking around at what kind of tools are available to help, I came across duff, jdupes, and dupd.

So far, I’ve focused on dupd. It does what I was thinking needed to be done: crawl the entire hierarchy and save the result as a database.
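The core idea — walk the whole tree once, hash the files, and keep the results around so duplicate groups can be queried later — can be approximated with standard tools. This is just a slow sketch of the concept, not dupd’s actual algorithm (dupd stores its scan results in an SQLite database, and, as I understand it, separates scanning from reporting); the file names below are invented for the demo:

```shell
# A tiny tree with one duplicate pair.
mkdir -p hoard/2009 hoard/2015
echo "irreplaceable" > hoard/2009/notes.txt
echo "irreplaceable" > hoard/2015/notes-copy.txt
echo "unique"        > hoard/2015/other.txt

# Hash every file, sort by hash, then print only hashes that
# repeat: each blank-line-separated group is one set of
# duplicates. (-w32 compares only the 32-char md5 prefix;
# --all-repeated is GNU coreutils uniq.)
find hoard -type f -exec md5sum {} + \
  | sort \
  | uniq -w32 --all-repeated=separate
```

The part this sketch doesn’t give you — and the part that made dupd attractive — is persistence: re-running the hashing over the whole hoard every time would be painful, whereas a saved database can be queried repeatedly.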