Bombich Software recently released an upgrade to its backup and disk cloning application, Carbon Copy Cloner. Carbon Copy Cloner 5 offers complex task filtering, coaching tips, smarter pruning, task grouping and scheduling, and more. It still allows you to create exact, bootable clones of a startup disk, and it lets you verify that a backup or clone hasn’t been corrupted by bit rot or other errors. While I was at it, I had a short talk with Mike Bombich, the developer and publisher of Carbon Copy Cloner, about backing up, offloading and securing your backups.
The most important new feature of Carbon Copy Cloner 5 is task filtering. With version 4, you could have Carbon Copy Cloner copy an entire disk or select only those files you deemed important enough to back up. The interface for selecting files was adequate but left room for improvement. Well, it’s been improved alright. Instead of the small drop-down menu where you had to check and uncheck files, the new version offers a generous slide-down sheet with the files and folders clearly listed with checkboxes. You can still go through the list and check or uncheck items as before, but below the list panel you’ll now also find a panel where you can set up rules for including or excluding files and folders.
This task filter interface allows you to create complex filters from a combination of rules you define. The nice thing is that you can immediately see in the list above how those rules play out. For example, I set up a rule to exclude all files with a “gz” extension, and as soon as I finished defining it, the list updated to show that all the “gz” files would be excluded from the backup. In addition, you can import and export these filters so that you can reuse them or share them with co-workers.
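To make the idea concrete, here is a minimal sketch of rule-based file filtering, loosely modelled on what CCC 5’s task filters do from the user’s point of view. This is not Bombich’s implementation; the rule list, `should_copy` helper and sample paths are all hypothetical.

```python
import fnmatch

# Hypothetical rule list: each rule either excludes or re-includes
# paths matching a glob pattern; the last matching rule wins.
rules = [
    ("exclude", "*.gz"),         # drop gzip archives, as in the example above
    ("include", "backup/*.gz"),  # but keep the ones under backup/
]

def should_copy(path):
    """Apply the rules in order; the last matching rule decides."""
    decision = True  # copy by default
    for action, pattern in rules:
        if fnmatch.fnmatch(path, pattern):
            decision = (action == "include")
    return decision

files = ["notes.txt", "logs/app.log.gz", "backup/site.tar.gz"]
kept = [f for f in files if should_copy(f)]
```

The immediate feedback CCC shows in the list view corresponds to running every visible path through such a predicate each time a rule changes.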
The second most important new feature, in my eyes at least, is that Carbon Copy Cloner is now compatible with APFS, Apple’s new file system that will ship with the upcoming version of macOS.
A third reason to upgrade or buy Carbon Copy Cloner is task grouping and its associated scheduling capabilities. The task list in the sidebar now lets you create folders to group tasks and schedule them all at once. Grouping alone could be too rigid, so it’s been implemented in such a way that you can still schedule individual members of a group differently, with the software figuring out for itself how to deal with conflicts.
Scheduling options have also become more powerful. For example, you can schedule a backup task to run once at a specific time, after which it reverts to running only when you click the Clone button.
Carbon Copy Cloner now also detects when a volume used as a startup disk has previously been used as a Carbon Copy Cloner destination. If it has, the app offers a guided restore with automatic volume selection and coaching tips. The coaching tips are new as well. They make Carbon Copy Cloner easier on the brain for people who are less technically minded than is good for them: text balloons that briefly explain what key buttons and icons do when you click them, guiding you through setting up a backup or restore task. Even if you’re fine with backing up and setting up tasks, coaching tips can offer a comforting helping hand when restoring, which, hopefully, you will never be experienced at.
There are other improvements as well, such as task sorting options, the ability to run the “Find and replace corrupted files” check once a week or month, and the ability to mount or unmount a volume by clicking it.
Carbon Copy Cloner does not offer cloud-based backups, but in my book that only counts in its favour. I consider cloud-based backup worthwhile only if you have the budget to buy from service providers offering QoS (Quality of Service) contracts with the strictest privacy and security-breach compensation clauses. Those exist, but they are prohibitively expensive and really only feasible for big businesses.
For the rest of us, Carbon Copy Cloner, used as part of a sound backup strategy that ideally includes storing one backup copy in a remote, offline location, is the ideal solution. The app costs €34.40 for a household licence. An upgrade may cost 50% less, depending on your previous version of Carbon Copy Cloner.
A brief Q&A on backup integrity
I asked Mike Bombich whether I had got it right that, in order to be sure the source and destination copies are 100% identical, you need to read the source once to compute its checksum, then write it once, and then read the destination again.
Mike Bombich: “I’m always careful how I reply to this. Given reliable hardware, a single read from the source and a single write to the destination that occurs without errors is sufficient to conclude that the file was copied successfully to the destination. If there are any doubts about the hardware’s reliability (and to some extent, every piece of hardware has at least a small percentage of anticipated unreliability), then a second read of the destination would confirm that the destination file is still identical to the data that was written. In other words, I draw a distinction between the copy procedure’s success and the destination’s ability to record and retain the data. The checksums that CCC performs automatically during the copy are sufficient on their own to assess the reliability of the copy, but the second read of the destination file is a prudent way to assess the reliability of the media.”
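The read/write/read pattern Bombich distinguishes here can be sketched in a few lines. This is an illustrative stand-in, not CCC’s code: checksum the source while reading it, copy it, then re-read the destination to confirm the media actually recorded the data.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 (illustrative choice of hash)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src, dst):
    src_sum = sha256_of(src)   # first read: the source
    shutil.copyfile(src, dst)  # write: the destination
    dst_sum = sha256_of(dst)   # second read: checks the destination media
    return src_sum == dst_sum

# Demo with throwaway files in a temporary directory.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "source.bin")
dst = os.path.join(workdir, "copy.bin")
with open(src, "wb") as f:
    f.write(b"some file contents")

ok = copy_and_verify(src, dst)
```

The first read plus the write establishes that the copy operation succeeded; only the second read of `dst` speaks to the reliability of the destination media, which is exactly the distinction Bombich draws.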
ViP: Is it correct to assume that, once the data has been written successfully and 100% accurately on the media, the data itself can start “drifting” as time goes by? Do you know if this happens on optical media as well (I’m thinking of M-Disc, for example)?
Mike Bombich: “I don’t think I’d use “drifting” to describe it. “Decay” or “degrade” is probably more descriptive. But yes, magnetic media is imperfect and will eventually degrade. Optical is the same, though I don’t know how it compares in terms of reliability. If you leave the disc sitting on your desk and it gets sun exposure, for example, that could degrade the media. I believe there are different grades of optical media too, e.g. I recall seeing “Archival Grade” media once, though I’m not sure if there’s anything to that or if it was just marketing. Optical doesn’t get much attention these days because Apple declared it dead. There are still third-party optical writers, but not having it bundled with the initial purchase drives down the demand for those sorts of solutions. As such, I’m just not very familiar with optical media.”
ViP: “Can I ask you if this makes sense: in order to perform checksum comparisons the Read GBs should be double the Write GBs. Is that always the case? If you make the copy hash while you’re writing, then surely you wouldn’t see that behaviour?”
Mike Bombich: “If you’re referring to an initial backup to an empty volume, then yes, there would be 2x reads vs. writes. That’s a blatant inefficiency that I’m working to resolve, but I don’t typically recommend using the checksumming option for an initial backup. This is one of the reasons, the other being there’s no gain from doing that. The checksum is used to determine if a file should be copied and if the file doesn’t exist on the destination, it will definitely get copied. This makes the inefficiency obvious: if the file doesn’t exist on the destination, I shouldn’t be pre-reading the file on the source just to calculate a checksum. It’s the subsequent task where you’d want to use the checksumming option — that task would re-read the files on the source and destination and see if they’re still identical.”
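The copy decision Bombich describes can be sketched as a small predicate. Again, this is hypothetical illustration rather than CCC’s implementation (and CCC uses faster non-cryptographic checksums than the SHA-256 used here): a file missing from the destination is copied unconditionally, so pre-reading it just to checksum it would double the reads for no gain; only on subsequent runs do both sides get re-read and compared.

```python
import hashlib
import os
import tempfile

def file_checksum(path):
    """Hypothetical helper: hash an entire file (illustrative only)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        h.update(f.read())
    return h.hexdigest()

def needs_copy(src, dst):
    """Decide whether src must be copied to dst."""
    if not os.path.exists(dst):
        # Initial-backup case: the file will be copied regardless,
        # so checksumming src first is wasted I/O.
        return True
    # Subsequent run: re-read both sides and compare checksums.
    return file_checksum(src) != file_checksum(dst)

# Demo files: two identical copies in a temporary directory.
workdir = tempfile.mkdtemp()
a = os.path.join(workdir, "a.txt")
b = os.path.join(workdir, "b.txt")
with open(a, "wb") as f:
    f.write(b"data")
with open(b, "wb") as f:
    f.write(b"data")
```

On the first run every destination path fails the existence check, so only the copy itself reads the source; the 2x-read pattern appears only once checksum comparisons kick in on later runs.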
ViP: “Also, can you — if you’re a software vendor using your own app to offload terabytes/day — rely on accuracy statistics made from your own results?”
Mike Bombich: “Heh, this is an excellent question. I generally do rely on the results from my own software, but at the same time, I also perform extensive QA tests with other checksumming software to validate my results. Some of that is built into CCC. For example, I have a test procedure built into the Debug build of CCC that generates a test file on the source, checksums it, then validates the file on the destination at the end of the task (this happens every time I run a task, so this test is exercised constantly). This checksum analysis isn’t done with my internal tools, it’s done with the system utilities. There is a case to be made, though, to have some sort of external analysis of the resulting copy.”
ViP: “Finally, do you think xxhash is as robust as the slower methods like MD5 — xxhash not being based on encryption standards?”
Mike Bombich: “In general, I’d say yes, but if you’re backing up sensitive data (health records, government or trade secrets, etc), I probably wouldn’t accept anything less than the industry standards. I certainly like having the option. I’d choose the faster checksumming option if given the choice.”