If your database is large, copying the raw data files can be more efficient than using mysqldump and
importing the file on each slave. This technique skips the overhead of updating indexes as the INSERT statements are replayed.
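The contrast can be sketched as follows. Paths and options are placeholders, assuming a default data directory: the logical-dump route re-executes every INSERT on the slave, rebuilding indexes row by row, while the raw-copy route transfers the data files, indexes included, as-is.

```shell
DATADIR=/var/lib/mysql               # assumption: default data directory

# Logical route: dump on the master, replay on each slave.
if command -v mysqldump >/dev/null 2>&1; then
    # ignore failure here if no server is running
    mysqldump --all-databases --master-data > /tmp/dump.sql 2>/dev/null || true
fi

# Raw route: one sequential copy of the files, no statement replay.
if [ -d "$DATADIR" ]; then
    tar czf /tmp/raw-snapshot.tar.gz -C "$DATADIR" .
fi
```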
Using this method with tables in storage engines that have complex caching or logging algorithms requires
extra steps to produce a perfect “point in time” snapshot: the initial copy command might leave out
cache information and logging updates, even if you have acquired a global read lock. How the storage
engine responds to this depends on its crash recovery abilities.
This method also does not work reliably if the master and slave have different values for
ft_stopword_file, ft_min_word_len, or ft_max_word_len and you are
copying tables that have full-text indexes.
If you use InnoDB tables, you can use the mysqlbackup command from the MySQL Enterprise
Backup component to produce a consistent snapshot. This command records the log name and
offset corresponding to the snapshot, for later use on the slave. MySQL Enterprise Backup is a
commercial product that is included as part of a MySQL Enterprise subscription. See Section 24.2,
“MySQL Enterprise Backup” for detailed information.
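A minimal sketch of such a backup run is shown below. mysqlbackup ships only with the commercial MySQL Enterprise Backup product, so the block no-ops if it is not installed; the user name and backup directory are placeholders.

```shell
BACKUP_DIR=/tmp/mysql-enterprise-snapshot   # placeholder backup location

if command -v mysqlbackup >/dev/null 2>&1; then
    # Take the backup and apply the log so it is immediately consistent.
    mysqlbackup --user=backup_user --password \
                --backup-dir="$BACKUP_DIR" \
                backup-and-apply-log
    # The binary log name and offset needed to start the slave are
    # recorded in the backup's metadata files.
fi
```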
Otherwise, use the cold backup technique to obtain a reliable binary snapshot of InnoDB tables: copy
all data files after doing a slow shutdown of the MySQL Server.
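The cold backup can be sketched as follows, with placeholder paths. A slow shutdown makes InnoDB flush and merge all buffered changes first, so the files left on disk form a consistent snapshot; the server-control steps are skipped if no server is reachable.

```shell
DATADIR=/var/lib/mysql               # assumption: default data directory
BACKUP=/tmp/mysql-cold-backup        # placeholder; use a real backup volume

if command -v mysqladmin >/dev/null 2>&1 && mysqladmin ping >/dev/null 2>&1; then
    # Request a full (slow) shutdown instead of the default fast one.
    mysql -e "SET GLOBAL innodb_fast_shutdown = 0"
    mysqladmin shutdown
fi

# With the server stopped, copy every data file, preserving permissions.
mkdir -p "$BACKUP"
if [ -d "$DATADIR" ]; then
    cp -rp "$DATADIR"/. "$BACKUP"/
fi
```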
To create a raw data snapshot of MyISAM tables, you can use standard copy tools such as cp or
copy, a remote copy tool such as scp or rsync, an archiving tool such as zip or tar, or a file
system snapshot tool such as dump, provided that your MySQL data files exist on a single file system.
If you are replicating only certain databases, copy only those files that relate to those tables. (For
InnoDB, all tables in all databases are stored in the system tablespace files, unless you have the
innodb_file_per_table option enabled.)
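The per-database copy above can be sketched with tar. The data directory and database name are assumptions; each MyISAM table consists of three files (.frm for the definition, .MYD for data, .MYI for indexes), and all three must be copied together. Lock the tables first with FLUSH TABLES WITH READ LOCK from a separate client session, or stop the server, before copying.

```shell
DATADIR=/var/lib/mysql               # assumption: default data directory
DBNAME=mydb                          # assumption: the one database being replicated

if [ -d "$DATADIR/$DBNAME" ]; then
    # Archive the whole database directory: .frm, .MYD, and .MYI files.
    tar czf "/tmp/$DBNAME-snapshot.tar.gz" -C "$DATADIR" "$DBNAME"
fi
```

On the slave, extract the archive into the slave's data directory with the server stopped.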