Introduction
In this blog post we examine in depth how to best provision one MySQL database cluster node from another using rsync. Rsync may be used as an alternative when other provisioning options are unavailable. Since there are many possible ways to handle provisioning by rsync, we will do multiple posts on the topic, starting with the most basic: both the source and target database nodes are fully offline for the duration of the rsync.
While this is not an ideal scenario because two nodes are down at once, it does allow one replica to be provisioned from another replica when all else fails.
In future posts, we will cover how to keep the source replica up for most of the time by using two rsync commands instead of one. We will also cover how to ensure consistent rsync copies without taking the MySQL server down (though it is a good idea to offline the replicator during the copy).
Summary Steps
Since both the source and target need to be fully offline, place the cluster into maintenance mode to prevent any action by the Managers during the provisioning process. Offline the replicators and shut down the database server processes, then run the rsync. Turn it all back on in reverse order after clearing the THL on the target, then go back to automatic mode.
- Determine the final data directory path on the source node, processing symlinks and relative paths
- Determine the final data directory path on the target node, processing symlinks and relative paths
- Set the policy to maintenance
- Offline the replicator on the source node
- Offline the replicator on the target node
- Stop the database server process on the source node
- Stop the database server process on the target node
- Rsync the mysql data directory from source path to target path
- Start the database on the source node
- Online the replicator on the source node
- Start the database on the target node
- Clear the THL on the target node
- Online the replicator on the target node
- Set the policy to automatic
Detailed Procedure
In our example, we will provision node db2 from node db1.
Determine the final data directory paths
Get the path on the source and target nodes, processing symlinks and relative paths:
shell$ DATADIR=`mysql -N -B -e 'show variables like "%datadir%";' | awk '{print $2}' | sed 's/\/$//'`
shell$ echo $DATADIR; ls -ld $DATADIR
/var/lib/mysql
lrwxrwxrwx 1 mysql mysql 19 Sep 24 2018 /var/lib/mysql -> /volumes/data/mysql
In this case, the configured directory is a sym-link, so the “real” directory we are looking for is /volumes/data/mysql.
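As a shortcut, a variant using readlink -f (assuming GNU readlink is available on your hosts) will chase both symlinks and relative path components down to the final physical directory in one step:
shell$ REALDIR=`readlink -f $DATADIR`
shell$ echo $REALDIR
/volumes/data/mysql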
Set the Cluster To Maintenance Mode
db2 shell$ cctrl
cctrl> set policy maintenance
cctrl> ls
cctrl> exit
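If you prefer to script this step rather than type into the interactive prompt, cctrl will usually accept commands piped on standard input; verify the behavior on your cctrl version first, but a minimal sketch looks like:
db2 shell$ echo 'set policy maintenance' | cctrl
The same pattern works later for `ls` and for `set policy automatic`.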
Offline the replicator on the source node
db1 shell$ trepctl -all-services offline
db1 shell$ trepctl services
Offline the replicator on the target node
db2 shell$ trepctl -all-services offline
db2 shell$ trepctl services
Stop the database server process on the source and target nodes
db2 shell$ cctrl
cctrl> service db1-demo.continuent.com/mysql stop
cctrl> service db2-demo.continuent.com/mysql stop
cctrl> exit
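As an optional sanity check before copying anything, confirm that mysqld is really stopped on both hosts; a simple, init-system-agnostic way is to look for the process directly:
db1 shell$ ps -ef | grep [m]ysqld
db2 shell$ ps -ef | grep [m]ysqld
No output from either command means the database processes are down and it is safe to copy.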
Rsync the mysql data directory from source path to target path
Be sure to include the trailing slashes on both paths! You must have permission to read the source directory and write to the target directory or this step will fail, so it may need to be run as root.
db2 shell$ sudo rsync -avz --progress db1:/volumes/data/mysql/ /volumes/data/mysql/
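If you want to see what would be transferred before actually copying anything, rsync's -n (--dry-run) flag prints the file list without making any changes, for example:
db2 shell$ sudo rsync -avzn --progress db1:/volumes/data/mysql/ /volumes/data/mysql/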
If you store your binary logs in a different directory, copy them too:
db2 shell$ mysql -N -B -e 'show variables like "log_bin%";' | awk '{print $2}'
db2 shell$ sudo rsync -avz --progress --delete db1:/volumes/data/binlogs/ /volumes/data/binlogs/
Start the database on the source node
db2 shell$ cctrl
cctrl> service db1-demo.continuent.com/mysql start
cctrl> exit
db1 shell$ tpm mysql
Online the replicator on the source node
db1 shell$ trepctl -all-services online
db1 shell$ trepctl services
Start the database on the target node
db2 shell$ cctrl
cctrl> service db2-demo.continuent.com/mysql start
cctrl> exit
Clear the THL on the target node
db2 shell$ for svc in `trepctl services | grep serviceName | awk '{print $3}'`; do
> echo Purging service $svc
> thl -service $svc purge -y
> done
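To confirm the purge actually emptied the log, you can list what remains per service using the same loop pattern (assuming your Tungsten release includes the thl index sub-command); an empty listing means the old THL is gone:
db2 shell$ for svc in `trepctl services | grep serviceName | awk '{print $3}'`; do
> echo Checking service $svc
> thl -service $svc index
> done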
Online the replicator on the target node
db2 shell$ trepctl -all-services online
db2 shell$ trepctl services
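Once the target replicator reports ONLINE, you can watch it catch up to the source by checking the state, applied sequence number, and latency in the trepctl status output, for example:
db2 shell$ trepctl status | grep -E 'state|appliedLastSeqno|appliedLatency'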
Set the policy to automatic
shell$ cctrl
cctrl> set policy automatic
cctrl> ls
cctrl> exit
Wrap-Up
In this blog post we examined in depth how to best provision one database cluster node from another using rsync.