Digital Ocean is a powerful yet simple-to-use Virtual Private Server (VPS) provider. But then you knew that, unless you’ve been living under a rock for the last couple of years ;-). The one area where I have struggled is the small disk sizes (at least until Block Storage reaches your preferred datacentre).
This post shows how to work around those size restrictions and back up and restore databases that fill most of those disks.
Before you start
Ensure you have two droplets: one containing the database you want to back up, and one to back up to and restore on.
Also ensure you can ssh between them; the easiest way to do this is to set up key-based authentication as described here: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2.
Creating the backup
This little beauty will dump all tables, run the result through gzip compression and pipe the lot to another server (droplet). Not only a backup but potentially an off-site backup.
mysqldump -u usr -p dbname | gzip | ssh backupserver "cat > dump.sql.gz"
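If you run this regularly, a date-stamped filename stops each run from overwriting the previous dump. A sketch of one way to do it; the backups/ directory on the backup server is an assumed name:

```shell
# Hypothetical variation: stamp each dump with the day it was taken,
# producing e.g. backups/dump-2024-05-01.sql.gz on the backup server.
# $(date +%F) expands locally before the command is sent over ssh.
mysqldump -u usr -p dbname | gzip | ssh backupserver "cat > backups/dump-$(date +%F).sql.gz"
```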
Or backup just a subset of tables:
mysqldump -u usr -p dbname table1 table2 | gzip | ssh backupserver "cat > dump.sql.gz"
A variation could even pipe the dump into the mysql client on the backup server to provide a warm standby, but for now I’ll leave that as an exercise for the reader!
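For the impatient, that variation might look something like the line below. This is a sketch rather than a tested standby setup: it assumes the backup server also runs MySQL with a dbname database, and that credentials there come from ~/.my.cnf (an interactive password prompt won't work over a non-interactive ssh pipe, since stdin is carrying the dump):

```shell
# Hypothetical warm standby: pipe the dump straight into mysql on the
# backup server instead of writing a file. The remote mysql reads its
# user and password from ~/.my.cnf on that machine.
mysqldump -u usr -p dbname | gzip | ssh backupserver "gunzip | mysql dbname"
```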
Restoring the backup
The reverse one-liner to restore direct into the database is this:
ssh user@backupserver "cat ~/dump.sql.gz" | gunzip | mysql -u usr -p dbname
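Before restoring, it is worth checking that the compressed dump is not truncated or corrupt; gzip can test an archive's integrity without unpacking it. Using the same host and path as the examples above:

```shell
# Verify the gzip stream on the backup server; gunzip -t exits non-zero
# if the archive is damaged, so a bad dump is caught before the restore
ssh user@backupserver "gunzip -t ~/dump.sql.gz" && echo "dump OK"
```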
Scheduling with cron
Placing these commands in a cron job is straightforward, except for the small matter of supplying the password, which is requested interactively in the examples above. This can be avoided using the client section of the MySQL configuration file. Since the documentation can be a bit dense, here’s a shortcut: put this into ~/.my.cnf (and don’t forget to change its permissions so others cannot read it, e.g. chmod 600 ~/.my.cnf).
[client]
user = dbusername
password = "dbpassword"
host = localhost
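With the credentials in ~/.my.cnf, mysqldump no longer needs the -u and -p flags, and the backup can run unattended from cron. A sketch of a crontab entry (edit with crontab -e), assuming the names from the examples above and a nightly 2 a.m. schedule:

```shell
# Example crontab line: dump, compress and ship to the backup
# server every night at 02:00 (credentials come from ~/.my.cnf)
0 2 * * * mysqldump dbname | gzip | ssh backupserver "cat > dump.sql.gz"
```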