Faster logical backup of a single table in MySQL.

Mydbops
Nov 20, 2020
15 Mins to Read

Logical backups are of great use in data migration across cloud environments and in table-level recoveries. The new MySQL Shell 8.0.22 has introduced a couple of new utilities, util.dumpTables() and util.exportTable(), to export individual tables from a MySQL instance. Prior to 8.0.22 it was not possible to make a backup of a single table using MySQL Shell.

MySQL Shell’s new table dump utility util.dumpTables() lets us take a dump of specific tables from a schema. It works in the same way as the instance dump utility util.dumpInstance() and the schema dump utility util.dumpSchemas() introduced in 8.0.21, but with a different selection of suitable options. The exported items can then be imported into a MySQL Database Service DB System (a MySQL DB System, for short) or a MySQL Server instance using MySQL Shell’s dump loading utility util.loadDump().

MySQL Shell’s new table export utility util.exportTable() exports a table to a data file on the local server.
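For reference, a minimal util.exportTable() call looks like the one below; the schema, table and output paths are only examples, and the dialect option is optional (the default output is TSV).

// Export a single table to a tab-separated file on the local server.
// The schema, table and paths below are illustrative.
util.exportTable("test1.sbtest1", "/root/export/sbtest1.tsv")

// The output format can be changed if required, e.g. CSV instead of the default TSV.
util.exportTable("test1.sbtest1", "/root/export/sbtest1.csv", { dialect: "csv" })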

With util.dumpTables() we can dump a huge table faster; it takes less time compared to mysqldump, mydumper and mysqlpump.

Making a dump using util.dumpTables()

We need the latest MySQL Shell, 8.0.22.
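If you are not sure which version is installed, the Shell can report it from the command line:

[root@mydbopstest ~]# mysqlsh --version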

For our use case I have loaded 100M records into a single table using sysbench. The size of the data post loading is around 20GB.

Loading the data with 100M records using sysbench

 
[root@mydbopstest ~]# sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=3306 --mysql-user=root --mysql-password --mysql-socket=/data/mysql/mysql.sock  --mysql-db=test1  --db-driver=mysql --tables=1 --table-size=100000000  prepare
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Creating table 'sbtest1'...
Inserting 100000000 records into 'sbtest1'

Creating a secondary index on 'sbtest1'...
[root@mydbopstest ~]# 

Step 1: Connect to the MySQL server with the Shell utility

In this case my database server is MySQL 5.7.30 (Percona). The Shell utility is compatible with MySQL 5.7 and later servers.

 
[root@mydbopstest ~]# mysqlsh root@localhost --socket=/data/mysql/mysql.sock
Please provide the password for 'root@/data%2Fmysql%2Fmysql.sock': **********

MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@/data%2Fmysql%2Fmysql.sock'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 2660844
Server version: 5.7.30-33-log Percona Server (GPL), Release 33, Revision 6517692
No default schema selected; type \use  to set one.
 MySQL  localhost  JS > 

Step 2: Initiate the single-table backup with the Shell utility

Ensure that you have connected the Shell utility in JS mode. We have used the default of 4 threads for the backup.

 
MySQL  localhost  JS > util.dumpTables("test1", [ "sbtest1" ], "/root/dump_table");
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.
Global read lock has been released
Writing global DDL files
Writing DDL for table `test1`.`sbtest1`
Preparing data dump for table `test1`.`sbtest1`
Data dump for table `test1`.`sbtest1` will be chunked using column `id`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `test1`.`sbtest1` will be written to 788 files
1 thds dumping - 101% (100.00M rows / ~98.57M rows), 278.86K rows/s, 55.22 MB/s uncompressed, 24.63 MB/s compressed
Duration: 00:06:55s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 19.79 GB
Compressed data size: 8.83 GB
Compression ratio: 2.2
Rows written: 100000000
Bytes written: 8.83 GB
Average uncompressed throughput: 47.59 MB/s
Average compressed throughput: 21.23 MB/s

It took around 7 minutes (6:55) to make a backup of a single table of size 20GB. The backup is stored as compressed TSV files.
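If the defaults are not a good fit, util.dumpTables() also accepts options such as threads, bytesPerChunk and compression. A minimal sketch with illustrative values (the output path and numbers below are examples, not settings tested in this post):

// Dump the same table with more threads, larger chunks and gzip compression.
// The path and option values are illustrative only.
util.dumpTables("test1", ["sbtest1"], "/root/dump_table_opts", {
  threads: 8,
  bytesPerChunk: "128M",
  compression: "gzip"
})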

Step 3: Load the single-table backup via the Shell utility

Now let us load the single-table data back via the util.loadDump() utility. We have used 4 threads to import the data.

 
MySQL  localhost  JS > util.loadDump("/root/dump_table",{schema:'test1'})
Loading DDL and Data from '/home/root/dump_table' using 4 threads.
Opening dump...
Target is MySQL 5.7.30-33-log. A dump was produced from MySQL 5.7.30-33-log
Checking for pre-existing objects...
Executing common preamble SQL
[Worker003] Executing DDL script for `test1`.`sbtest1`
[Worker001] test1@sbtest1@0.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker000] test1@sbtest1@1.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test1@sbtest1@2.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker003] test1@sbtest1@3.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test1@sbtest1@4.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker000] test1@sbtest1@5.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker003] test1@sbtest1@7.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test1@sbtest1@6.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test1@sbtest1@8.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker000] test1@sbtest1@9.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test1@sbtest1@12.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test1@sbtest1@11.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker003] test1@sbtest1@10.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
4 thds loading | 2% (399.22 MB / 19.79 GB), 2.60 MB/s, 1 / 1 tables done[Worker000] test1@sbtest1@13.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test1@sbtest1@15.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker000] test1@sbtest1@17.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test1@sbtest1@14.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0


...
[Worker000] test1@sbtest1@784.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker003] test1@sbtest1@785.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker001] test1@sbtest1@786.tsv.zst: Records: 126903  Deleted: 0  Skipped: 0  Warnings: 0
[Worker002] test1@sbtest1@@787.tsv.zst: Records: 127339  Deleted: 0  Skipped: 0  Warnings: 0
Executing common postamble SQL

788 chunks (100.00M rows, 19.79 GB) for 1 tables in 1 schemas were loaded in 30 min 20 sec (avg throughput 7.98 MB/s)
0 warnings were reported during the load.
 MySQL  localhost  JS > 

It took around 30 minutes to load the data. This is still a huge improvement, and using the MySQL Shell utilities saves considerable time compared with other logical backup tools; I have repeated the test with other popular logical backup tools, mysqldump and mydumper, and the results are below. Further optimisation of the load is possible by improving the parallelism and, on a MySQL 8 target, by disabling the redo log during the import, as sketched below.
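As a rough sketch of those optimisations (the option values are illustrative, and disabling the redo log requires a MySQL 8.0.21 or later target, so it does not apply to the 5.7 server used in this test):

// On a MySQL 8.0.21+ target only: switch off redo logging for the bulk load.
// session.runSql("ALTER INSTANCE DISABLE INNODB REDO LOG")

util.loadDump("/root/dump_table", {
  schema: "test1",
  threads: 8,               // more parallel load threads
  deferTableIndexes: "all"  // create secondary indexes after the data is loaded
})

// Re-enable redo logging once the import has finished (MySQL 8.0.21+ only).
// session.runSql("ALTER INSTANCE ENABLE INNODB REDO LOG")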

 Unleash MySQL Shell's Power: Blazing-Fast Single Table Backups with util.dumpTables()

The MySQL Shell utility seems to be faster for single-table backup and loading too. This can help database engineers with table rebuilds for partitions and migrations across cloud infrastructure, and it can replace the regular logical backup too.

Experience Peak MySQL Performance! Mydbops offers expert MySQL database management services to revolutionize your backups. We help you leverage util.dumpTables and other cutting-edge tools for optimal efficiency. Contact us today to unlock the full potential of your MySQL deployments!



About the Author

Mydbops
