This is the fifth blog in our MaxScale series. Below is the list of our previous blogs, which provide deep insight into MaxScale and its use cases across different architectures.
- Scale with MaxScale, Part 1 (Intro & Galera Cluster)
- Scale with MaxScale, Part 2 (Administration, Installation & Operation)
- Scale with MaxScale, Part 3 (Replication M-S)
- Scale with MaxScale, Part 4 (Amazon Aurora)
Here we are going to discuss using MaxScale in a Multi-Master (M-M) environment, in both Active-Active and Active-Passive modes.
Test Environment:
Below are the details of the environment used for testing:
OS: Debian 8 (Jessie)
MySQL version: 5.7.21-20-log Percona Server (GPL)
MaxScale version: maxscale-1.4.5-1.debian.jessie.x86_64 (GPL)
Master 1: 10.85.106.105
Master 2: 10.85.162.47
Setting up master-master replication is beyond the scope of this exercise, so I will jump directly to configuring MaxScale for a Multi-Master setup.
Monitor Module for M-M
MaxScale comes with a special monitor module named "mmmon" (the MySQL Multi-Master Monitor). It monitors the health of the servers and sets their status flags, based on which the router module (the Read-Write Split router) routes connections.
Configuration:
Below is a basic configuration for a Multi-Master setup using MaxScale.
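A minimal maxscale.cnf sketch for this topology follows. The section names, monitor credentials, and listener port are assumptions for illustration; only the server addresses come from the test environment above.

```ini
[maxscale]
threads=4

# Both backends from the test environment above
[Master1]
type=server
address=10.85.106.105
port=3306
protocol=MySQLBackend

[Master2]
type=server
address=10.85.162.47
port=3306
protocol=MySQLBackend

# Multi-Master monitor: flags each node Master or Slave based on read_only
[MM-Monitor]
type=monitor
module=mmmon
servers=Master1,Master2
user=maxuser            # hypothetical monitor user
passwd=maxpwd           # hypothetical password
monitor_interval=5000

# Read-Write Split router: writes go to the master, reads are distributed
[RW-Split-Service]
type=service
router=readwritesplit
servers=Master1,Master2
user=maxuser
passwd=maxpwd

[RW-Split-Listener]
type=listener
service=RW-Split-Service
protocol=MySQLClient
port=4006
```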
Active-Active Setup:
An Active-Active setup is one where reads and writes are completely balanced between the servers. With this setup, I would strongly recommend configuring auto_increment_increment and auto_increment_offset on both masters to avoid conflicting writes (duplicate auto-increment values); a sketch follows below.
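A minimal sketch of those variables for this two-master pair (the values assume exactly two nodes; set them in my.cnf as well so they survive a restart):

```sql
-- On Master1: generate odd auto-increment values (1, 3, 5, ...)
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;

-- On Master2: generate even auto-increment values (2, 4, 6, ...)
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;
```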
Below is how it looks from MaxScale.
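With both nodes writable, mmmon flags both as masters. An illustrative `maxadmin list servers` view (connection counts and exact formatting are approximate, not captured output) could look like this:

```
$ maxadmin list servers
Servers.
-------------------+-----------------+-------+-------------+------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+------------------
Master1            | 10.85.106.105   | 3306  | 0           | Master, Running
Master2            | 10.85.162.47    | 3306  | 0           | Master, Running
-------------------+-----------------+-------+-------------+------------------
```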
Active Passive Setup:
An Active-Passive setup is one where writes happen on one node and reads are distributed among the servers. To achieve this, just enable read_only=1 on one of the nodes; MaxScale identifies this flag and starts routing only read connections to it, as shown below.
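A minimal sketch, run on the node that should become passive (a privileged MySQL account is assumed):

```sql
-- Make this node read-only; MaxScale's mmmon will now flag it as Slave
SET GLOBAL read_only = 1;

-- To persist across restarts, also add under [mysqld] in my.cnf:
--   read_only = 1
```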
The next question that arises immediately: what happens when the writer (Active) node fails?
The answer is pretty simple: just disable read-only on the passive node. You can do this manually by logging in to the node, or automate it with MaxScale by integrating it into your failover script, which is invoked during planned or unplanned maintenance; a sketch follows.
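A hypothetical failover hook along those lines (the host, user, and ADMIN_PWD variable are placeholders, not part of MaxScale):

```bash
#!/bin/bash
# Promote the passive node by clearing read_only, so mmmon
# flags it as Master and MaxScale starts routing writes to it.
PASSIVE_HOST=10.85.162.47
mysql -h "$PASSIVE_HOST" -u admin -p"$ADMIN_PWD" \
      -e "SET GLOBAL read_only = 0;"
```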
Here I have enabled read_only on Master2, and you can see its status has changed to 'Slave', as below.
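The same illustrative maxadmin view after the change (formatting approximate):

```
$ maxadmin list servers
Servers.
-------------------+-----------------+-------+-------------+------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+------------------
Master1            | 10.85.106.105   | 3306  | 0           | Master, Running
Master2            | 10.85.162.47    | 3306  | 0           | Slave, Running
-------------------+-----------------+-------+-------------+------------------
```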
By building the above Multi-Master setup, we have ensured that we have one more DB node available for failover. This leaves MaxScale itself as a single point of failure for the application. We can build an HA setup for MaxScale using Keepalived, with a virtual IP switching between the MaxScale nodes, or, if you are on AWS, you can go with a Network ELB on TCP ports, balancing connections between the MaxScale nodes; a Keepalived sketch follows.
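A minimal Keepalived sketch for two MaxScale nodes using a VRRP virtual IP; the interface name, VIP, and priorities are assumptions to adapt to your network:

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_script chk_maxscale {
    script "pidof maxscale"   # consider the node healthy while maxscale runs
    interval 2
}

vrrp_instance MAXSCALE_VIP {
    state MASTER              # use BACKUP with a lower priority on the peer
    interface eth0            # assumed interface name
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        10.85.100.10/24       # hypothetical VIP the application connects to
    }
    track_script {
        chk_maxscale
    }
}
```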
Image courtesy: Photo by Vincent van Zalinge on Unsplash