The question isn’t if you’ll lose data; it’s when.
Drives fail. Houses flood. Ransomware hits.
If all your backups live next to your server, you don’t have backups. You have décor.
Disaster recovery sounds expensive. It doesn’t have to be.
Start With Threat Modelling
Ask: what’s the worst-case scenario?
- Fire or theft wipes all local gear
- ISP outage blocks remote access when you need it most
- Your own mistakes (deleting the wrong directory)
Design for realistic failure, not fantasy hacks.
Keep It Small and Encrypted
Off-site doesn’t mean a second data centre:
- Identify what’s truly critical: configs, irreplaceable files, credentials
- Compress and encrypt with keys you control
- Don’t ship raw, unencrypted data to a third party
A few gigabytes off-site is better than terabytes of local rubble.
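The compress-and-encrypt step can be done with nothing fancier than tar and openssl. A minimal sketch, assuming both are installed; the function names and paths are illustrative, and dedicated tools like restic or borg handle this for you:

```shell
#!/bin/sh
# encrypt_backup SRC_DIR KEYFILE OUTFILE
# Bundle and compress SRC_DIR, then encrypt it with a symmetric
# key file you control (e.g. openssl rand -hex 32 > keyfile).
encrypt_backup() {
  src="$1"; keyfile="$2"; out="$3"
  tar czf - -C "$(dirname "$src")" "$(basename "$src")" \
    | openssl enc -aes-256-cbc -pbkdf2 -pass "file:$keyfile" -out "$out"
}

# Matching restore, so you can prove the key actually works.
decrypt_backup() {
  enc="$1"; keyfile="$2"; dest="$3"
  openssl enc -d -aes-256-cbc -pbkdf2 -pass "file:$keyfile" -in "$enc" \
    | tar xzf - -C "$dest"
}
```

Keep the key file somewhere that survives the same disaster as the server; an encrypted archive with a lost key is just noise.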
Choose Redundant Locations
Pick at least one location away from home:
- Cheap object storage (Backblaze B2, Wasabi, Hetzner)
- A VPS with encrypted storage
- A trusted friend’s server with mutual backup agreements
Distance matters; a flood shouldn’t take out both copies.
Automate and Forget
Manual backups will fail eventually:
- Script incremental syncs with restic, borg, or rclone
- Schedule with cron and log results
- Add simple alerts for failures
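The script-log-alert loop can be sketched as a small cron-friendly wrapper. Everything here is an assumption: the log path, the crontab line, and the restic invocation are placeholders for whatever tool you actually run:

```shell
#!/bin/sh
# backup_run CMD [ARGS...]
# Run a backup command, append its output and a timestamped
# OK/FAIL line to a log, and return nonzero on failure so a
# notifier can hook in.
backup_run() {
  log="${BACKUP_LOG:-$HOME/backup.log}"
  if "$@" >>"$log" 2>&1; then
    echo "$(date -u +%FT%TZ) OK: $*" >>"$log"
  else
    echo "$(date -u +%FT%TZ) FAIL: $*" >>"$log"
    # Alert hook goes here, e.g.:
    # mail -s "backup failed" you@example.com < "$log"
    return 1
  fi
}

# Example crontab entry (repo and credentials assumed configured):
# 30 2 * * * BACKUP_LOG=/var/log/backup.log backup_run restic backup /home/me/docs
```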
Boring automation beats heroic recovery.
Test Restores Like It’s Real
A backup you’ve never restored isn’t a backup:
- Periodically restore a random file or directory to a test machine
- Verify checksums and permissions
- Document recovery steps; future you will panic
Testing builds muscle memory and trust.
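The checksum comparison above can be sketched with find and sha256sum. A minimal version, assuming GNU coreutils; the actual restore step (restic restore, borg extract, etc.) is deliberately left out, and a full test would also diff permissions:

```shell
#!/bin/sh
# verify_restore ORIGINAL_DIR RESTORED_DIR
# Checksum every file in both trees and diff the results;
# returns nonzero if any file differs or is missing.
verify_restore() {
  original="$1"; restored="$2"
  a=$(mktemp); b=$(mktemp)
  (cd "$original" && find . -type f -exec sha256sum {} + | sort) > "$a"
  (cd "$restored" && find . -type f -exec sha256sum {} + | sort) > "$b"
  diff -u "$a" "$b"
}
```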
Rotate and Prune
Don’t hoard endless copies:
- Keep sensible retention (e.g. daily for a week, weekly for a month, monthly for a year)
- Prune automatically to save costs
- Verify that recent snapshots actually restore; the only working copy shouldn’t be one you’re about to prune
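Snapshot tools prune natively (restic, for instance, has restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune). For plain file archives, a crude sketch, assuming GNU coreutils and a hypothetical backup-YYYY-MM-DD naming scheme:

```shell
#!/bin/sh
# prune_old DIR KEEP
# Keep the newest KEEP date-named archives in DIR, delete the rest.
# Relies on the YYYY-MM-DD names sorting chronologically.
prune_old() {
  dir="$1"; keep="$2"
  ls -1 "$dir"/backup-*.tar.gz.enc 2>/dev/null \
    | sort | head -n "-$keep" | xargs -r rm -f
}
```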
Disaster recovery is about sanity, not storage bragging rights.
Boring Wins
Off-site backups aren’t sexy.
They’re the unglamorous difference between an inconvenience and ruin.
Build it now, while you’re calm.