ProxySQL Series : Percona Cluster/MariaDB Cluster (Galera) Read-write Split

Mydbops | Aug 21, 2018 | 10 Mins to Read

ProxySQL is the most preferred and widely used tool for load-balancing MySQL workloads. Thanks to René Cannaò and team for the great tool, and kudos on the recent release of ProxySQL 1.4.10 with a lot of bug fixes. ProxySQL is simple in design, lightweight, highly efficient and feature rich. We have been working with ProxySQL in production for our clients for quite some time, and we have shared some of our experiences and use cases in our earlier blogs.

In this blog, we will see how to set up ProxySQL for Percona XtraDB Cluster with read-write split and failover handling for the various Galera node states. PXC/MariaDB clusters really work better with writes on a single node than with multi-node writes, which makes the read/write split ideal for PXC. Applications benefit a lot from PXC/MariaDB Cluster, as they get virtually synchronous reads and high availability with these clusters.

For the purpose of this blog, I have set up a three-node cluster as below:

Operating System : CentOS 7.4
Cluster version  : Percona XtraDB Cluster 5.7.21
ProxySQL version : proxysql-1.4.8-1.1.el7.x86_64
ProxySQL Node    : 192.168.33.11
Node 1           : 192.168.33.12
Node 2           : 192.168.33.13
Node 3           : 192.168.33.14


Setting up the cluster itself is beyond the scope of this blog, so I will move directly to the ProxySQL setup for a cluster in single-writer mode, which is the recommended configuration to avoid write conflicts and split-brain scenarios.

Installing ProxySQL:

The easy and robust way to install ProxySQL is via the Percona repository, because it ships with tools/scripts such as proxysql-admin and proxysql_galera_checker, which help with easy configuration, cluster monitoring and failover.

# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
# yum install proxysql-1.4.8-1.1.el7.x86_64 -y
# chkconfig proxysql on    # enable the service at startup
# service proxysql start

ProxySQL is now up and running on the node 192.168.33.11, listening by default on port 6032 for the ProxySQL admin interface and port 6033 for the MySQL interface. These ports can be changed if needed.
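For instance, the MySQL listener can be moved by updating the mysql-interfaces admin variable; the address and port below are only illustrative, and note that this particular variable takes effect only after a ProxySQL restart:

mysql> SET mysql-interfaces='0.0.0.0:6034';
mysql> SAVE MYSQL VARIABLES TO DISK;
# service proxysql restart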

Adding Cluster Nodes to ProxySQL:

Now connect to ProxySQL as below:

$ mysql -u admin -padmin -h 127.0.0.1 -P6032

In this setup we will be using two hostgroups:

Hostgroup 10 (writer group)
Hostgroup 11 (reader group)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (10,'192.168.33.12',3306,1000000,'WRITE');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (11,'192.168.33.13',3306,1000,'READ');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (11,'192.168.33.14',3306,1000,'READ');
Query OK, 1 row affected (0.00 sec)

mysql> load mysql servers to runtime;
Query OK, 0 rows affected (0.00 sec)

mysql> save mysql servers to disk;
Query OK, 0 rows affected (0.03 sec)

This looks as below:

mysql> SELECT * FROM mysql_servers;
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status | weight  | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.33.12 | 3306 | ONLINE | 1000000 | 0           | 1000            | 0                   | 0       | 0              | WRITE   |
| 11           | 192.168.33.13 | 3306 | ONLINE | 1000    | 0           | 1000            | 0                   | 0       | 0              | READ    |
| 11           | 192.168.33.14 | 3306 | ONLINE | 1000    | 0           | 1000            | 0                   | 0       | 0              | READ    |
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+

Query Rules:

Query rules are the heart of query routing in ProxySQL: each incoming query is matched against the rule patterns and routed to the corresponding hostgroup. Here I will use the default, basic query rules for read-write splitting, as below:

mysql> INSERT INTO mysql_query_rules (rule_id,active,match_pattern,destination_hostgroup,apply) VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1), (2,1,'^SELECT',11,1);
Query OK, 2 rows affected (0.00 sec)

mysql> load mysql query rules to runtime;
Query OK, 0 rows affected (0.00 sec)

mysql> save mysql query rules to disk;
Query OK, 0 rows affected (0.01 sec)

Now the query rules are added.
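Once traffic starts flowing, you can verify that these rules route queries as intended by checking the hostgroup recorded against each query digest (this only shows data after some queries have passed through):

mysql> SELECT hostgroup, digest_text, count_star FROM stats_mysql_query_digest ORDER BY count_star DESC LIMIT 5;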

Integrating the Application User:

Now it is time to add the application user into ProxySQL, through which the application connects. Here I have assigned hostgroup 10 (the writer group) as the default hostgroup for this user: when an incoming query does not match any query rule pattern, it is routed automatically to the default hostgroup, i.e., 10.

mysql> INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('app_user','application_user',10);
Query OK, 0 rows affected (0.00 sec)

mysql> load mysql users to runtime;
Query OK, 0 rows affected (0.00 sec)

mysql> save mysql users to disk;
Query OK, 0 rows affected (0.01 sec)

The application user is now loaded and ready.
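Note that the same credentials must also exist on the cluster nodes themselves, since ProxySQL forwards them to the backends. A minimal sketch, run on any one node (the DDL replicates across the cluster); the '%' host and the broad grant on sbtest.* are just for this demo, so scope them down in production:

mysql> CREATE USER 'app_user'@'%' IDENTIFIED BY 'application_user';
mysql> GRANT ALL PRIVILEGES ON sbtest.* TO 'app_user'@'%';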

ProxySQL Monitoring:

ProxySQL monitors the nodes added under it by running health checks at regular intervals. You can configure the monitoring credentials as below:

mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_username";
Query OK, 1 row affected (0.00 sec)

mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_password";
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.00 sec)

Monitor check:

mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 5;
+---------------+------+------------------+-------------------------+---------------+
| hostname      | port | time_start_us    | connect_success_time_us | connect_error |
+---------------+------+------------------+-------------------------+---------------+
| 192.168.33.14 | 3306 | 1534068625859885 | 1452                    | NULL          |
| 192.168.33.13 | 3306 | 1534068625849598 | 1392                    | NULL          |
| 192.168.33.12 | 3306 | 1534068625838742 | 1505                    | NULL          |
| 192.168.33.14 | 3306 | 1534068565850089 | 1389                    | NULL          |
| 192.168.33.13 | 3306 | 1534068565839890 | 1713                    | NULL          |
+---------------+------+------------------+-------------------------+---------------+

It shows that ProxySQL is able to connect to all the nodes.
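The ping checks can be reviewed in the same way:

mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 3;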

Scheduler:

Here we come to an important aspect of the configuration. A Galera node moves through several states, reported in wsrep_local_state (1 = Joining, 2 = Donor/Desynced, 3 = Joined, 4 = Synced); any state other than Synced means the node should not receive traffic. These state switches are tracked by ProxySQL health checks (monitoring Galera), and traffic is routed accordingly with the help of scheduler scripts.

The scheduler, with the help of the proxysql_galera_checker script, checks the node states at a configurable interval in milliseconds. When a node state change is detected, it adjusts the hostgroups and takes the affected node out of traffic accordingly. An advanced checker script written by Marco Tusa is available on his GitHub (proxy_sql_tools).
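You can inspect the state the checker relies on directly on any node:

mysql> SHOW STATUS LIKE 'wsrep_local_state%';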

For instance, if the writer node goes down, the scheduler script promotes a node from the reader group to accept writes, and when the failed node comes back online it is added back to its hostgroup. The switch is quick enough that the application does not notice any interruption.

mysql> INSERT INTO scheduler(id,active,interval_ms,filename,arg1,arg2,arg3,arg4,arg5,comment) VALUES (1,'1','3000','/usr/bin/proxysql_galera_checker','10','11','0','1','/var/lib/proxysql/mycluster_proxysql_galera_check.log','mycluster');
Query OK, 1 row affected (0.00 sec)

mysql> LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from scheduler\G
*************************** 1. row ***************************
         id: 1
     active: 1
interval_ms: 3000                              # check interval in ms
   filename: /usr/bin/proxysql_galera_checker  # check script
       arg1: 10                                # writer hostgroup
       arg2: 11                                # reader hostgroup
       arg3: 0                                 # number of writers
       arg4: 1                                 # writers are readers
       arg5: /var/lib/proxysql/mycluster_proxysql_galera_check.log  # log file
    comment: mycluster
1 row in set (0.00 sec)

The scheduler starts checking immediately and writes its status to the log as below.

Logs:

[2018-07-15 08:24:11] log file /var/lib/proxysql/mycluster_proxysql_galera_check.log
[2018-07-15 08:24:11] ###### HANDLE WRITER NODES ######
[2018-07-15 08:24:12] --> Checking WRITE server 10:192.168.33.12:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] ###### HANDLE READER NODES ######
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.13:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.13:3306 is already ONLINE
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.14:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.14:3306 is already ONLINE
[2018-07-15 08:24:12] ###### SUMMARY ######
[2018-07-15 08:24:12] --> Number of writers that are 'ONLINE': 1 : hostgroup: 10
[2018-07-15 08:24:12] --> Number of readers that are 'ONLINE': 2 : hostgroup: 11
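You can follow the checker in real time while testing:

# tail -f /var/lib/proxysql/mycluster_proxysql_galera_check.log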

Now the setup is complete with read/write split and failover. Next, we will test the read-write split.

I will be using the below command snippet to watch the connections and failover scenarios:

# watch -n 1 'mysql -h 127.0.0.1 -P 6032 -uadmin -padmin -t -e "select * from stats_mysql_connection_pool where hostgroup in (10,11) order by hostgroup,srv_host ;" -e "select hostgroup_id,hostname,status,weight,comment from mysql_servers where hostgroup_id in (10,11) order by hostgroup_id,hostname ;"'

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 604        |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 509        |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 601        |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

+--------------+---------------+--------+---------+---------+
| hostgroup_id | hostname      | status | weight  | comment |
+--------------+---------------+--------+---------+---------+
| 10           | 192.168.33.12 | ONLINE | 1000000 | WRITE   |
| 11           | 192.168.33.13 | ONLINE | 1000    | READ    |
| 11           | 192.168.33.14 | ONLINE | 1000    | READ    |
+--------------+---------------+--------+---------+---------+

Read/Write testing:

First, a write-only test with sysbench.
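If the sbtest schema is not yet populated, prepare it first; this is a one-time step, and it assumes the sbtest database already exists on the cluster (the prepare phase routes to the writer hostgroup):

# sysbench --test='/usr/share/sysbench/oltp_update_index.lua' --table-size=1000000 --mysql-host=192.168.33.11 --mysql-db='sbtest' --mysql-user=app_user --mysql-password='application_user' --mysql-port=6033 --threads=4 --db-driver=mysql prepare

Then run the update-index workload: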

# sysbench --test='/usr/share/sysbench/oltp_update_index.lua' --table-size=1000000 --mysql-host=192.168.33.11 --mysql-db='sbtest' --mysql-user=app_user --mysql-password='application_user' --mysql-port=6033 --time=60 --threads=4 --db-driver=mysql run

ProxySQL stats:

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 4        | 4      | 0       | 29326   | 703876          | 0               | 0          |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

As you can see, the query counters change for the writer group while the readers do not receive any writes.

Read (SELECT) test:

Simulating a read-only, primary-key-based SELECT workload on the cluster with sysbench:

# sysbench --test='/usr/share/sysbench/oltp_point_select.lua' --table-size=1000000 --mysql-host=192.168.33.11 --mysql-db='sbtest' --mysql-user=app_user --mysql-password='application_user' --mysql-port=6033 --time=60 --threads=4 --db-driver=mysql run

ProxySQL stats:

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 4        | 4      | 0       | 29326   | 703876          | 0               | 0          |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 3        | 3      | 0       | 95848   | 2300380         | 5731992         | 0          |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 3        | 3      | 0       | 96929   | 2326324         | 5818176         | 0          |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

Now you can see the change in query counts for the reader hostgroup compared with the previous result.
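You can also confirm which query rule matched the reads by checking the per-rule hit counters:

mysql> SELECT rule_id, hits FROM stats_mysql_query_rules;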

In the next blog, we will look at switchover and failover with the cluster using ProxySQL.

Image Courtesy: Photo by Tim Gouw on Unsplash


About the Author

Mydbops
