Database Objects Migration to RDS/Aurora (AWS)

Mydbops
Aug 3, 2018
9 Mins to Read

Moving your MySQL databases to the cloud requires careful planning. This guide explores best practices for migrating database objects to RDS/Aurora, ensuring a smooth transition with minimal downtime.

Applications and their related services are increasingly migrating to the cloud because of availability, elasticity, manageability, and more. While moving the entire stack, we need to be especially cautious when migrating the database tier.

Migrating database servers is not a simple lift-and-shift operation. Rather, it requires proper planning and care to maintain data consistency between the existing database server and the cloud server, by means of native replication or third-party tools.

The best way to migrate an existing MySQL database to RDS, in my opinion, is by using a "logical backup". Some of the logical backup tools are listed below:

  • mysqldump – single-threaded (widely used)
  • mysqlpump – multi-threaded
  • mydumper – multi-threaded
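As an illustration, a multi-threaded dump with mydumper might look like the sketch below. The hostname, user, password, and output directory are placeholders, not values from this migration.

```shell
# Hypothetical mydumper invocation: dump the `testdb` schema with 4 threads.
# Host, user, password, and paths are placeholders; adjust to your environment.
mydumper \
  --host db01.example.com \
  --user backup_user \
  --password 'secret' \
  --database testdb \
  --threads 4 \
  --compress \
  --outputdir /backups/testdb
```

Each table is written to its own file, which is what later allows a parallel restore with myloader.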

In this blog, we will look at a simple workaround and best practices for migrating DB objects such as procedures, triggers, etc. from an existing on-premises database server to Amazon RDS (MySQL), a fully managed relational database service provided by AWS.

In order to provide a managed service, RDS restricts certain privileges at the user level. Below is the list of restricted privileges in RDS.

  • SUPER – Enables use of other administrative operations such as CHANGE MASTER TO, KILL (any connection), PURGE BINARY LOGS, SET GLOBAL, and the mysqladmin debug command. Level: Global.
  • SHUTDOWN – Enables use of mysqladmin shutdown. Level: Global.
  • FILE – Enables the user to cause the server to read or write files. Level: Global.
  • CREATE TABLESPACE – Enables tablespaces and log file groups to be created, altered, or dropped. Level: Global.

All stored programs (procedures, functions, triggers, and events) and views can have a DEFINER attribute that names a MySQL account, as shown below.

DELIMITER ;;
CREATE DEFINER=`xxxxx`@`localhost` PROCEDURE `prc_hcsct_try`(IN `contactId` INT, IN `section` VARCHAR(255))
BEGIN
IF NOT EXISTS (SELECT 1 FROM contacts_details WHERE contact_id = contactId) THEN
INSERT INTO contacts_details (contact_id, last_touch_source, last_touch_time) VALUES (contactId, section, NOW());
ELSE
UPDATE contacts_details SET last_touch_source = section, last_touch_time = NOW() WHERE contact_id = contactId;
END IF;
END ;;
DELIMITER ;

While restoring the same dump onto the RDS server, the restore fails with the error below, because RDS does not grant the SUPER privilege to its users:

ERROR 1227 (42000) at line 15316: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

This is particularly annoying, since the restore fails only at the very end.

To overcome this, pipe mysqldump through the simple one-liner below, which strips the "DEFINER=`xxxxx`@`localhost`" clause. When you restore the resulting dump file, the definer defaults to the user performing the restore.

mysqldump -u user -p -h 'testdb.xcvadshkgfd..us-east-1.rds.amazonaws.com' --single-transaction --quick --triggers --routines --no-create-info --no-data --events testdb | perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' > test_dump.sql

Below is the content of the dump file after stripping the default DEFINER; the same can also be done via awk or sed.

DELIMITER ;;
CREATE PROCEDURE `prc_contact_touch`(IN `contactId` INT, IN `section` VARCHAR(255))
BEGIN
IF NOT EXISTS (SELECT 1 FROM contacts_details WHERE contact_id = contactId) THEN
INSERT INTO contacts_details (contact_id, last_touch_source, last_touch_time) VALUES (contactId, section, NOW());
ELSE
UPDATE contacts_details SET last_touch_source = section, last_touch_time = NOW() WHERE contact_id = contactId;
END IF;
END ;;
DELIMITER ;

As you can see above, the DEFINER clause has been completely removed.
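For the sed variant mentioned above, the filter below applies the same pattern as the perl one-liner to a sample line; the procedure name here is only illustrative.

```shell
# Strip the DEFINER clause with sed (equivalent to the perl one-liner).
echo 'CREATE DEFINER=`xxxxx`@`localhost` PROCEDURE `prc_contact_touch`()' \
  | sed -E 's/[[:space:]]DEFINER=`[^`]+`@`[^`]+`//'
# Prints: CREATE PROCEDURE `prc_contact_touch`()
```

In practice you would pipe the mysqldump output through the same sed filter instead of echoing a single line.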

Best practices for RDS migration

  • Restore dump files from an EC2 instance in the same VPC as the RDS instance, to keep network latency minimal.
  • Increase max_allowed_packet to its maximum of 1 GB to accommodate bigger packets.
  • Dump data in parallel, based on the instance capacity.
  • Bigger redo log files can enhance write performance.
  • Set innodb_flush_log_at_trx_commit=2 for faster writes, with a small compromise on durability.
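Because RDS blocks SET GLOBAL, parameters such as max_allowed_packet and innodb_flush_log_at_trx_commit are changed through a DB parameter group instead. Below is a sketch using the AWS CLI, where the parameter group name "migration-params" is a placeholder.

```shell
# Hypothetical parameter group "migration-params"; attach it to your RDS instance.
aws rds modify-db-parameter-group \
  --db-parameter-group-name migration-params \
  --parameters \
    "ParameterName=max_allowed_packet,ParameterValue=1073741824,ApplyMethod=immediate" \
    "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"
```

Both parameters are dynamic, so ApplyMethod=immediate applies them without a reboot.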

Need expert help streamlining your MySQL cloud migration? Mydbops, our open-source database management service, offers comprehensive MySQL support, including secure object migration and performance optimization. Let our team of database specialists ensure a seamless transition to the cloud. Contact Mydbops today!

