The Backup Strategy That Survived the Test


My laptop died last month—complete hardware failure, no warning. Everything was theoretically backed up. I had Time Machine backups, cloud storage, and git repositories. Recovering should have been straightforward.

It was… mostly straightforward. But some things failed or were harder than expected in ways that backup advice doesn’t usually cover. Here’s what actually mattered when I needed to restore everything.

What Worked Perfectly

Code was fine. Everything’s in git, pushed to GitHub. New laptop, clone repositories, back to work. This is the easy case and worked exactly as expected.

Documents in cloud storage (Dropbox, Google Drive) were immediately accessible on the new laptop. Just install the sync client and everything appears. No restoration process needed.

Dotfiles in a git repository (shell config, editor settings, etc.) were quickly restored. Clone the dotfiles repo, run the setup script, and the environment is mostly configured. This saved hours of manual configuration.
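If you haven't set this up yet, the setup script doesn't need to be elaborate. Here's a minimal sketch of the idea in Python; the file names are placeholders rather than my actual repo layout:

    #!/usr/bin/env python3
    """Minimal dotfiles bootstrap: symlink files from the repo into $HOME."""
    from pathlib import Path

    REPO = Path(__file__).resolve().parent      # the cloned dotfiles repo
    HOME = Path.home()
    FILES = ["zshrc", "gitconfig", "vimrc"]     # placeholder names

    for name in FILES:
        source = REPO / name
        target = HOME / f".{name}"
        if not source.exists():
            continue
        if target.exists() or target.is_symlink():
            # keep whatever was already there, just in case
            target.rename(target.with_suffix(".backup"))
        target.symlink_to(source)
        print(f"linked {target} -> {source}")

Run something like this once after cloning and the shell and editor configs are back in place; anything app-specific still needs its own handling.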

Password manager (1Password) synchronized instantly. Logged in on the new laptop, all passwords were available. Authentication wasn’t a problem.

What Didn’t Work As Expected

My Time Machine backup was technically complete, but restoring from it took eight hours. The backup drive connects over USB, and transferring 500GB at USB speeds simply takes that long. During the restore, the laptop is partially functional but not really usable for work.

The Time Machine UI for selective restoration is clunky. Finding specific files from specific dates means paging through the timeline interface, which is slow when you’re trying to recover quickly.

Some applications don’t restore cleanly from Time Machine. They technically restore but require reactivation, reconfiguration, or have broken preferences. Manually reinstalling would have been faster than fixing broken restorations.

Local files not in cloud storage or git were the problem. Random downloads, project files I didn’t think to put in version control, screenshots and recordings—these only existed in Time Machine backups. Recovering them required the full backup restoration.

The Browser State Problem

Browser bookmarks, extensions, and settings theoretically sync via browser accounts. In practice, some extensions didn’t sync their settings. Some bookmarks existed but the organization was lost. It took hours to reconfigure browsers to match the previous setup.

Browser local storage and cookies don’t sync. This means you’re logged out of everything and need to re-authenticate to dozens of sites. With a password manager this is manageable but time-consuming.

Development tools that use browser storage lost their state. Saved queries in database tools, saved layouts in dev panels, temporary data you didn’t realize you depended on—all gone.

Application-Specific Data

Some applications store data in non-obvious locations that don’t get captured by normal backups. Database tools store connection configurations locally. Design tools store recent files lists and preferences. Chat apps store message history locally by default.

These aren’t critical, but rebuilding them is annoying. You don’t realize how much you depend on having your environment exactly configured until you’re starting over.

The lesson: know where your applications store data and ensure those locations are backed up. Don’t assume Time Machine gets everything, especially for apps that store data outside standard directories.
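On macOS you can at least verify coverage instead of assuming it. Here's a rough sketch that uses tmutil isexcluded to spot-check a handful of data directories; the list of paths is a guess at common locations, not an inventory of what your apps actually use:

    #!/usr/bin/env python3
    """Spot-check whether likely app data directories are covered by Time Machine."""
    import subprocess
    from pathlib import Path

    HOME = Path.home()
    CANDIDATES = [
        HOME / "Library" / "Application Support",   # most GUI app data
        HOME / "Library" / "Preferences",           # plist preference files
        HOME / ".config",                           # many CLI and dev tools
        HOME / ".ssh",                              # keys and host config
    ]

    for path in CANDIDATES:
        if not path.exists():
            continue
        result = subprocess.run(
            ["tmutil", "isexcluded", str(path)],
            capture_output=True,
            text=True,
        )
        # tmutil reports whether the path is included in or excluded from backups
        print(result.stdout.strip() or result.stderr.strip())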

What I Changed After This Experience

I now keep a documented list of essential applications with installation and configuration notes. When setting up a new machine, this list saves time and ensures nothing is forgotten.

Important project files now go directly into cloud storage rather than sitting in Downloads or Desktop. If it matters, it lives somewhere that syncs automatically. Local-only files are assumed to be disposable.

I make more frequent, explicit backups of application data directories: a script copies application support folders, database configurations, and other non-standard locations to cloud storage weekly.
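The script is nothing elaborate. It's roughly this shape, with the source directories and the Dropbox destination standing in as placeholders for my real list:

    #!/usr/bin/env python3
    """Copy non-synced application data into a cloud-synced folder."""
    import shutil
    from datetime import date
    from pathlib import Path

    HOME = Path.home()
    # destination inside a folder the cloud client already syncs
    DEST = HOME / "Dropbox" / "app-backups" / date.today().isoformat()

    # placeholder sources: app support folders and config directories
    SOURCES = [
        HOME / "Library" / "Application Support" / "Code" / "User",  # editor settings
        HOME / "Library" / "Application Support" / "TablePlus",      # DB connections
        HOME / ".config",
    ]

    for src in SOURCES:
        if not src.exists():
            continue
        # mirror the path relative to $HOME so nothing collides
        target = DEST / src.relative_to(HOME)
        shutil.copytree(src, target, symlinks=True, dirs_exist_ok=True)
        print(f"copied {src} -> {target}")

A weekly cron job or launchd agent runs it, and the dated folders make it obvious when the last copy actually happened.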

I also document manual configuration more thoroughly. Custom system settings, obscure preferences, network configurations: these are now written down so I can reproduce them rather than relying on backups to preserve them.

The 3-2-1 Rule Reality

The backup advice is 3-2-1: three copies, two different media types, one offsite. I technically had this—Time Machine, cloud storage, and git repositories. But the restoration experience wasn’t as smooth as this rule suggests.

The issue is that different backup types serve different purposes. Time Machine is comprehensive but slow to restore. Cloud storage is fast but only covers synced files. Git is perfect for code but useless for other data.

A better rule might be: ensure you can restore your essential work environment within an hour, and your complete system within a day. This focuses on recovery time rather than backup comprehensiveness.

What’s Actually Essential

If you can restore your authentication (password manager), code (git), and key documents (cloud storage), you can be productive within an hour. Everything else is convenience and can be rebuilt gradually.

The goal isn’t preserving everything perfectly. It’s minimizing disruption when hardware fails. Being 80% functional quickly matters more than being 100% functional eventually.

I could have been working productively on the new laptop within two hours if I’d been willing to skip the full Time Machine restore and manually reinstall applications. Instead, I chose the complete restoration, which took most of a day but was less mentally taxing.

Testing Backups

The real lesson: test your backup restoration before you need it. Most people verify backups are running but never actually try restoring from them. You discover problems when it’s too late to fix them.

Once a year, try restoring from backup to a test environment or a borrowed machine. See what works and what doesn’t. Document problems and fix them before you need the backup for real.

This feels unnecessary until you actually need it. Then it’s invaluable.

Pragmatic Backup Strategy

Here’s what I do now:

  • Code in git, pushed frequently
  • Documents in cloud storage automatically syncing
  • Dotfiles in git repository
  • Time Machine running continuously to external drive
  • Quarterly documentation review ensuring critical information is recorded
  • Annual restoration test

This isn’t perfect, but it’s realistic to maintain and proved adequate when tested. More elaborate backup schemes might be better theoretically but often fail in practice because they’re too complex to maintain consistently.

Backups are insurance. The best backup strategy is one you’ll actually follow and that provides adequate protection for the failure scenarios you’re likely to hit. Perfect backup schemes that you don’t maintain are worse than imperfect schemes you do.

When my laptop died, I was frustrated but not panicked. Most things restored smoothly. Some things required manual work. Nothing was lost permanently. That’s a successful backup system—not perfect, but good enough when it mattered.