ProxySQL is one of the most widely used load balancers for MySQL workloads. Thanks to René Cannaò and team for the great tool, and kudos on the recent release of ProxySQL 1.4.10 with a lot of bug fixes. ProxySQL is simple in design, lightweight, highly efficient and feature rich. We have been working with ProxySQL in production for our clients for quite some time, and we have shared some of our experiences and use cases in the blogs below.
In this blog, we will see how to set up ProxySQL for Percona XtraDB Cluster with read-write split and failover handling for the various Galera node states. PXC/MariaDB clusters really work better with writes on a single node than with multi-node writes, which makes read/write splitting ideal for PXC. Applications benefit a lot from a PXC/MariaDB cluster, as they get synchronous replication and high availability.
For the purpose of this blog, I have set up a three-node cluster as below.
Setting up the cluster is beyond the scope of this post; I will move directly to the ProxySQL setup for a cluster in single-writer mode, which is the recommended configuration to avoid write conflicts and split-brain scenarios.
Installing ProxySQL:
The easiest and most robust way to install ProxySQL is from the Percona repo, because it ships with tools/scripts such as proxysql-admin and proxysql_galera_checker, which help with easy configuration, monitoring of the cluster and failover.
#yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
#yum install proxysql-1.4.8-1.1.el7.x86_64 -y
#chkconfig proxysql on (enabling service at startup)
# service proxysql start
Now ProxySQL is up and running on node 192.168.33.11, listening by default on port 6032 for the ProxySQL admin interface and on port 6033 for the MySQL interface. Both can be changed if needed.
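Before configuring routing, the cluster nodes need to be registered in the mysql_servers table. A minimal sketch, assuming hostgroup 10 as the writer group and hostgroup 11 as the reader group (the node IPs are the cluster nodes from above):

```sql
-- On the ProxySQL admin interface (port 6032)
-- hostgroup 10 = writer, hostgroup 11 = readers (assumed convention)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10,'192.168.33.12',3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (11,'192.168.33.13',3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (11,'192.168.33.14',3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```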
Query rules are important for query routing with ProxySQL: each incoming query is matched against the rule patterns and routed to the corresponding hostgroup. Here I will be using the default, basic query rules for read-write splitting as below: SELECT ... FOR UPDATE goes to writer hostgroup 10, and all other SELECTs go to reader hostgroup 11.
INSERT INTO mysql_query_rules (rule_id,active,match_pattern,destination_hostgroup,apply) VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1), (2,1,'^SELECT',11,1);
Query OK, 2 rows affected (0.00 sec)
mysql> load mysql query rules to runtime;
Query OK, 0 rows affected (0.00 sec)
mysql> save mysql query rules to disk;
Query OK, 0 rows affected (0.01 sec)
Now the query rules are added.
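The two rules above are evaluated in rule_id order, and the first active match with apply=1 decides the hostgroup; anything that matches neither falls through to the user's default hostgroup. Here is a minimal Python sketch of that evaluation logic (an illustration only, not ProxySQL's actual code):

```python
import re

# (rule_id, pattern, destination_hostgroup) -- mirrors the two rules above;
# ProxySQL matches case-insensitively by default (re_modifiers = CASELESS)
RULES = [
    (1, r'^SELECT.*FOR UPDATE$', 10),
    (2, r'^SELECT', 11),
]
DEFAULT_HOSTGROUP = 10  # the app user's default_hostgroup

def route(query: str) -> int:
    """Return the hostgroup a query would be routed to."""
    for _rule_id, pattern, hostgroup in RULES:
        if re.search(pattern, query, re.IGNORECASE):
            return hostgroup
    return DEFAULT_HOSTGROUP

print(route("SELECT * FROM t WHERE id = 1"))             # reader group
print(route("SELECT * FROM t WHERE id = 1 FOR UPDATE"))  # writer group
print(route("UPDATE t SET a = 1"))                       # default (writer) group
```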
Integrating Application user:
Now it’s time to add the application user into ProxySQL, through which the application connects. Here I have assigned hostgroup 10 (the writer group) as the default hostgroup for the user: when an incoming query does not match any query rule, it is routed automatically to the default hostgroup, i.e., 10.
INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('app_user','application_user',10);
Query OK, 0 rows affected (0.00 sec)
mysql> load mysql users to runtime;
Query OK, 0 rows affected (0.00 sec)
mysql> save mysql users to disk;
Query OK, 0 rows affected (0.01 sec)
The application user is now loaded and ready.
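At this point a quick sanity check can be done by connecting through ProxySQL's MySQL interface (port 6033) with the user just created; the hostname returned should be one of the cluster nodes. For example (the password is the one inserted above):

```
# mysql -u app_user -papplication_user -h 192.168.33.11 -P 6033 -e "SELECT @@hostname;"
```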
ProxySQL Monitoring:
ProxySQL monitors the nodes added under it by running checks at regular intervals. You can configure the monitoring credentials as below:
mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_username";
Query OK, 1 row affected (0.00 sec)
mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_password";
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.00 sec)
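The monitor module starts its connect and ping checks right away; the results can be inspected from the monitor schema on the admin interface. For example (this assumes a `monitor` user with the same password also exists on the PXC nodes):

```sql
SELECT hostname, port, time_start_us, connect_error
FROM monitor.mysql_server_connect_log
ORDER BY time_start_us DESC
LIMIT 3;
```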
The checks logged in the monitor schema show that ProxySQL is able to connect to all the nodes.
Scheduler:
Here we come to an important aspect of the configuration. A Galera node can be in one of several states (wsrep_local_state 1-4: Joining, Donor/Desynced, Joined, Synced); in any state other than SYNCED, a node should generally not receive traffic. These state switches are tracked by ProxySQL's health checks (Galera monitoring), and traffic is routed accordingly with the help of scheduler scripts.
The scheduler, with the help of the proxysql_galera_checker script, checks the node states at a configurable interval (in milliseconds). When a node state change is detected, it adjusts the hostgroup membership and takes the node out of traffic accordingly. An advanced scheduler script written by Marco Tusa is available on his GitHub (proxy_sql_tools).
For instance, if the writer node goes down, the scheduler script promotes a node from the reader group to accept writes, and when the failed node comes back online it is added back to its hostgroup. The switch is quick enough that the application doesn't notice any interruption.
INSERT INTO scheduler(id,active,interval_ms,filename,arg1,arg2,arg3,arg4,arg5,comment) VALUES (1,'1','3000','/usr/bin/proxysql_galera_checker','10','11','0','1','/var/lib/proxysql/mycluster_proxysql_galera_check.log','mycluster');
Query OK, 1 row affected (0.00 sec)
mysql> LOAD SCHEDULER TO RUNTIME;SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from scheduler\G
*************************** 1. row ***************************
id: 1
active: 1
interval_ms: 3000 # check interval in ms
filename: /usr/bin/proxysql_galera_checker # check script
arg1: 10 # writer hostgroup
arg2: 11 # reader hostgroup
arg3: 0 # number of writers
arg4: 1 # writers are also readers
arg5: /var/lib/proxysql/mycluster_proxysql_galera_check.log # log file
comment: mycluster
1 row in set (0.00 sec)
The scheduler starts checking immediately and writes the status to the log as below:
Logs:
[2018-07-15 08:24:11] log file /var/lib/proxysql/mycluster_proxysql_galera_check.log
[2018-07-15 08:24:11] ###### HANDLE WRITER NODES ######
[2018-07-15 08:24:12] --> Checking WRITE server 10:192.168.33.12:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] ###### HANDLE READER NODES ######
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.13:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.13:3306 is already ONLINE
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.14:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.14:3306 is already ONLINE
[2018-07-15 08:24:12] ###### SUMMARY ######
[2018-07-15 08:24:12] --> Number of writers that are 'ONLINE': 1 : hostgroup: 10
[2018-07-15 08:24:12] --> Number of readers that are 'ONLINE': 2 : hostgroup: 11
Now the setup is complete with read/write split and failover handling. Next, we will proceed to test the read-write split.
I will be using the below command snippet to watch the connections during failover scenarios.
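For example, the per-hostgroup connection distribution can be watched by polling ProxySQL's stats tables every second (a sketch; adjust the admin credentials to your setup):

```
# watch -n 1 'mysql -u admin -padmin -h 127.0.0.1 -P 6032 -t -e "SELECT hostgroup, srv_host, status, ConnUsed, Queries FROM stats.stats_mysql_connection_pool;"'
```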