- A system on the network with sufficient free storage
- `rsync` installed on the remote system
- A user on the remote system, say `remoteuser`, that can do key-based passwordless login via SSH and has an entry in `/etc/sudoers` for passwordless sudo access to rsync (a sketch follows this list)
- Appropriate firewall ports are open for SSH and rsync
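The exact commands depend on your setup; one possible way to wire this up, with `remoteuser@example.org` and the rsync path used as placeholder assumptions:

```
# On the local system: create an SSH key (if needed) and install it for remoteuser
ssh-keygen -t ed25519
ssh-copy-id remoteuser@example.org

# On the remote system: grant remoteuser passwordless sudo for rsync only.
# Add this line via `visudo -f /etc/sudoers.d/remoteuser-rsync`:
#   remoteuser ALL=(ALL) NOPASSWD: /usr/bin/rsync
```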
Create a systemd service unit
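A minimal sketch of one way to lay this out, assuming the rsync invocation lives in a small wrapper script (say `/usr/local/bin/data-backup.sh`, a name picked purely for illustration) that the service unit runs; the flags shown are illustrative, not prescriptive:

```
#!/bin/bash
# Illustrative backup script; every path, flag, and host below is a placeholder.
set -euo pipefail

rsync --archive --delete --quiet \
    --rsync-path="sudo rsync" \
    -e ssh \
    /path/to/local/source/data \
    firstname.lastname@example.org:/path/to/remote/archive/data
```

And a matching oneshot unit, e.g. `/etc/systemd/system/data-backup.service`:

```
[Unit]
Description=rsync backup of local data to a remote system
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/data-backup.sh

[Install]
WantedBy=multi-user.target
```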
In the above script, change:
- `/path/to/local/source/data` to point to the source data directory
- `firstname.lastname@example.org` to the user and system to access via SSH
- `/path/to/remote/archive/data` to the destination directory on the remote system
You can copy the final rsync command and run it in a shell with the `--dry-run` switch (and remove `--quiet`) to ensure it works as intended.
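For example, with the placeholders from the sketch above:

```
rsync --archive --delete --dry-run \
    --rsync-path="sudo rsync" \
    -e ssh \
    /path/to/local/source/data \
    firstname.lastname@example.org:/path/to/remote/archive/data
```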
Create a systemd timer
Set up a systemd timer to run the backup task daily. There are many options for setting the frequency and nature of repetition (e.g. `OnUnitActiveSec=15min` under `[Timer]` to run every 15 minutes in a non-overlapping fashion).
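A minimal daily timer, assuming it is saved as `/etc/systemd/system/data-backup.timer` next to the service above (`Persistent=true` is an extra assumption that catches up on runs missed while the machine was off):

```
[Unit]
Description=Daily run of data-backup.service

[Timer]
OnCalendar=daily
Persistent=true
# Alternative mentioned above: OnUnitActiveSec=15min for non-overlapping 15-minute runs

[Install]
WantedBy=timers.target
```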
As root, enable and run both systemd units:
```
systemctl daemon-reload
systemctl enable data-backup.service
systemctl enable data-backup.timer
systemctl start data-backup.service
systemctl start data-backup.timer

# Ensure all is well
journalctl -f -u data-backup.service
```
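You can also confirm that the timer has been picked up and see when it will fire next:

```
systemctl list-timers data-backup.timer
```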
My system has a paltry 250 GB internal disk that keeps running out of space. This gets complicated when I build large, complex projects from source.
I don't like the idea of upgrading to a larger SSD because it makes full disk backups slower and harder. I (stubbornly) believe all programming-related data (code, not training data sets) that is useful and worth long-term storage should realistically fit in ~100 GB. That makes incremental backups easier. Everything else is transient stuff: node_modules, build objects and intermediate artifacts, docker images/volumes, npm/pip/gradle/mvn package caches, etc.
SSDs also have a finite lifespan, which means the idea of a 1 TB SSD just not waking up one day is a scary thought. A network backup is the least I can do.