We have a pre-existing procedure for backing up a MySQL database using hot backups with the wonderful Percona XtraBackup, in this case via the deprecated `innobackupex` binary.
XtraBackup is nice since it creates a clean backup of InnoDB, XtraDB, and MyISAM tables quickly and without locking. There are countless useful features including point-in-time recovery, incremental backups, single table recovery, single partition recovery, I/O throttling, parallel processing, encryption, compression, streaming, and more.
A basic recovery procedure document was needed, so I wrote one. Here it is.
The pre-existing backup procedure in play has a few quirks:
- The deprecated `innobackupex` binary v2.4.15 is used to create the backups. `innobackupex` is a wrapper for `xtrabackup` with some additional features that were rolled into `xtrabackup` in later versions.
- The non-default `--compress` and `--stream` options are used. This has the effect of creating a single compressed file with multiple compressed files within it.
Here’s the procedure:
Summary
See also Advanced Usage for faster, more disk-efficient usage
Procedure
1. Retrieve backups from S3 Storage
The current backup routine uses `innobackupex` to create full snapshot hot backups nightly and incremental backups every half hour during the day. These are all stored in an Amazon S3 bucket.
S3 Bucket Locations
In this particular instance, the complete snapshot backup files and the incremental backups are in S3. For example:
- Full Compressed Snapshots
- s3://my-db-backup/xtrabackups/
- Incremental Backups
- s3://my-db-backup/xtrabackups/incrs
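If you're unsure which snapshot or incrementals you need, listing the bucket first is a quick way to check. A small sketch using the bucket paths above (the paths are this setup's; adjust for yours):

```shell
# List the available full snapshots in the backup bucket
aws s3 ls s3://my-db-backup/xtrabackups/

# List the incremental backups, which live in a subdirectory
aws s3 ls s3://my-db-backup/xtrabackups/incrs/
```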
Create a clean working directory and download the snapshot
Create a clean directory on the server where you will work with the backup files:
mkdir /restore_tmp
Download the full snapshot:
aws s3 cp s3://my-db-backup/xtrabackups/mysqlbackup.qp.xbs /restore_tmp/
NOTE: We’re using the AWS CLI here. Authentication is accomplished with stored credentials for the current user. More about AWS CLI Configuration.
2. Decompress Backup File
The following command decompresses the file into the same (current) directory:
xbstream -x < mysqlbackup.qp.xbs
There are still many compressed files in the result! The next step is to decompress every compressed file in the current directory, which can be done like so (note that `--decompress` requires the `qpress` binary to be installed and in the PATH):
xtrabackup --decompress --remove-original --parallel=8 --target-dir=.
3. Prepare the Backup Files
Next we need to prepare the backup. This step applies the transaction log captured during the original backup: committed transactions are replayed and uncommitted ones are rolled back, leaving the data files consistent.
xtrabackup --prepare --target-dir=.
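The command above prepares a full snapshot only. If you also need to apply the incremental backups mentioned earlier, the prepare step changes: every prepare except the final one must use `--apply-log-only` so that uncommitted transactions are not rolled back before the incrementals are applied. A sketch, assuming a single incremental has been downloaded and decompressed to `/restore_tmp/inc1` (a hypothetical path):

```shell
# Prepare the base backup, but skip the rollback phase for now
xtrabackup --prepare --apply-log-only --target-dir=.

# Apply each incremental in order, oldest first (path is hypothetical)
xtrabackup --prepare --apply-log-only --target-dir=. --incremental-dir=/restore_tmp/inc1

# Final prepare with no --apply-log-only, so rollbacks are performed
xtrabackup --prepare --target-dir=.
```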
4. Restore the files
The MySQL server must be shut down for this step.
systemctl stop mysql
The datadir must be empty for restoration. Consider backing up what’s there if it’s not empty.
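If you'd rather keep the old contents than delete them outright, moving the directory aside is one option (the destination name here is just an example):

```shell
# Move the existing datadir aside instead of deleting it
mv /var/lib/mysql /var/lib/mysql.old

# Recreate an empty datadir for the restore
mkdir /var/lib/mysql
```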
rm -rf /var/lib/mysql/*
Now, move the backup into the datadir with the `xtrabackup` binary. Note that `--target-dir` is required and is used to indicate where the files will be moved from:
xtrabackup --move-back --target-dir=.
The ownership must change:
chown -R mysql:mysql /var/lib/mysql
Now we can start the server:
systemctl start mysql
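Once it's back up, a quick sanity check doesn't hurt. Assuming the `mysql` client and stored credentials are available on the server:

```shell
# Confirm the service started cleanly
systemctl status mysql --no-pager

# Confirm the restored databases are visible
mysql -e "SHOW DATABASES;"
```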
Advanced Usage
Do it all in one command – this is more disk-efficient since it decompresses the download without first saving the single compressed file to disk.
- Download single compressed file from S3 and send to stdin.
- Pipe to decompress the single file (streaming to stdin) into multiple files/directories at the current location. The resulting files and directories still contain compressed files.
- Decompress the resulting individual files and remove the original compressed files.
- Prepare the backup (completes open transactions and rollbacks)
- Stop the server
- Remove existing files from the datadir
- Restore the files
- Change ownership of all files to
mysql:mysql
- Start the server
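The steps above can be sketched as a script. The streaming trick is `aws s3 cp ... -`, which writes the object to stdout so it can be piped straight into `xbstream` without landing on disk; paths and filenames are the same ones used in the procedure above:

```shell
cd /restore_tmp

# Steps 1-2: stream the snapshot from S3 directly into xbstream, so the
# single compressed file is never written to disk
aws s3 cp s3://my-db-backup/xtrabackups/mysqlbackup.qp.xbs - | xbstream -x

# Decompress the individual compressed files and remove the originals
xtrabackup --decompress --remove-original --parallel=8 --target-dir=.

# Prepare the backup (completes open transactions and rollbacks)
xtrabackup --prepare --target-dir=.

# Stop the server, empty the datadir, and restore the files
systemctl stop mysql
rm -rf /var/lib/mysql/*
xtrabackup --move-back --target-dir=.

# Fix ownership and start the server
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
```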
References
- Amazon S3
- Percona XtraBackup v2.4
- Compressed Backup Recipe
- Innobackupex Option Reference
- Innobackupex Compressed Backup Recipe