ProxySQL Series : ProxySQL Native Cluster

Mydbops
Feb 28, 2020
15 Mins to Read

ProxySQL supports clustering from version 1.4.2. While planning a cluster setup, ensure you run a version greater than 1.4.2 (the latest is better). Clustering is still an experimental feature, but it works well. This blog is a continuation of our ProxySQL series of blogs.

ProxySQL sits between the application servers and the database servers as a middleware layer. When a ProxySQL node goes down due to a hardware issue or any other failure, the lost node has to be replaced without losing its configuration. Of course, backups (see the previous blog) can help, but they are not an HA solution.

What does clustering solve?

  • Provides HA (Avoids SPOF)
  • Redundancy in major ProxySQL configurations

Below are the only four tables whose configuration changes get propagated across the nodes under clustering:

  • mysql_query_rules
  • mysql_servers
  • mysql_users
  • proxysql_servers

How does clustering work?

A ProxySQL cluster has two major modules that maintain the synchronisation of these four tables:

  • Monitoring
  • Re-configuration

Whenever you make a change to these tables (INSERT/UPDATE/DELETE) and run a 'LOAD ... TO RUNTIME', ProxySQL creates a new checksum of the table's data and increments the version number in the table runtime_checksums_values. All nodes in the cluster MONITOR these changes over the admin port 6032, fetch a copy of the changes, and RECONFIGURE, i.e., apply the new configuration to RUNTIME and then to DISK.
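For reference, the view shown below can be pulled from the admin interface with a query along these lines (a minimal sketch, run against the admin port 6032):

-- Current checksum/version tracked locally for each configuration table.
SELECT name, version, epoch, checksum
FROM runtime_checksums_values
ORDER BY name;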

Below is a view of the checksum table:

 
+-------------------+---------+------------+--------------------+
| name              | version | epoch      | checksum           |
+-------------------+---------+------------+--------------------+
| admin_variables   | 0       | 0          |                    |
| mysql_query_rules | 1       | 1579594539 | 0x0000000000000000 |
| mysql_servers     | 4       | 1579684153 | 0xBE519BB22FCB91EE |
| mysql_users       | 1       | 1579594539 | 0x0000000000000000 |
| mysql_variables   | 0       | 0          |                    |
| proxysql_servers  | 2       | 1579609539 | 0x7639253F7A999320 |
+-------------------+---------+------------+--------------------+
	

Tables that are not active have a version of '0', tables with no changes have a checksum value of '0x0000000000000000', and the epoch is a Unix timestamp.
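Each node also tracks the checksums it observes on its peers. A hedged sketch for inspecting this, assuming the stats_proxysql_servers_checksums table that ships with the clustering feature:

-- Run on any node: per-peer version/epoch per table and the number of
-- mismatches (diff_check) observed before a sync is triggered.
SELECT hostname, name, version, epoch, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY hostname, name;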

How to configure a ProxySQL cluster?

Here I have three servers to bring under a cluster setup:

Server 1: 172.50.243.145
Server 2: 172.51.131.121
Server 3: 172.52.227.71

Step 1: Update the below cluster admin variables across all the nodes.

 
update global_variables set variable_value='admin:admin;proxycluster:secret1pass' where variable_name='admin-admin_credentials';
update global_variables set variable_value='proxycluster' where variable_name='admin-cluster_username';
update global_variables set variable_value='secret1pass' where variable_name='admin-cluster_password';
update global_variables set variable_value=200 where variable_name='admin-cluster_check_interval_ms';
update global_variables set variable_value=100 where variable_name='admin-cluster_check_status_frequency';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_query_rules_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_servers_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_users_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_proxysql_servers_save_to_disk';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_query_rules_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_servers_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_users_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_proxysql_servers_diffs_before_sync';
	

Load and save the variables

 
load admin variables to RUNTIME;
save admin variables to disk;
	
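To confirm the variables are active, they can be read back from the runtime copy; a quick sketch:

-- Verify the cluster-related admin variables now active at runtime.
SELECT variable_name, variable_value
FROM runtime_global_variables
WHERE variable_name LIKE 'admin-cluster%';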

Step 2: Proceed to update the proxysql_servers table. Run these inserts on all the nodes of the cluster.

 
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.50.243.145',6032,100,'PRIMARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.51.131.121',6032,99,'SECONDARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.52.227.71',6032,98,'TERTIARY');
	

Load and save the proxysql servers

 
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
	
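The runtime copy of the table can be checked to confirm the peers are registered; a small sketch:

-- Peer list active at runtime on this node.
SELECT hostname, port, weight, comment
FROM runtime_proxysql_servers;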

To have the same settings in the configuration file (proxysql.cnf), use the below:

 
admin_variables =
{
    admin_credentials="admin:admin;proxycluster:secret1pass"
    mysql_ifaces="0.0.0.0:6032"
    cluster_username="proxycluster"
    cluster_password="secret1pass"
    cluster_check_interval_ms=200
    cluster_check_status_frequency=100
    cluster_mysql_query_rules_save_to_disk=true
    cluster_mysql_servers_save_to_disk=true
    cluster_mysql_users_save_to_disk=true
    cluster_proxysql_servers_save_to_disk=true
    cluster_mysql_query_rules_diffs_before_sync=3
    cluster_mysql_servers_diffs_before_sync=3
    cluster_mysql_users_diffs_before_sync=3
    cluster_proxysql_servers_diffs_before_sync=3
}
proxysql_servers =
(
    {
        hostname="172.50.243.145"
        port=6032
        weight=100
        comment="PRIMARY"
    },
    {
        hostname="172.51.131.121"
        port=6032
        weight=99
        comment="SECONDARY"
    },
    {
        hostname="172.52.227.71"
        port=6032
        weight=98
        comment="TERTIARY"
    }
)
	

From the ProxySQL log, we can see when the peers join the cluster:

 
2020-01-21 17:55:41 [INFO] Received LOAD PROXYSQL SERVERS TO RUNTIME command
2020-01-21 17:55:41 [INFO] Created new Cluster Node Entry for host 172.50.243.145:6032
2020-01-21 17:55:41 [INFO] Created new Cluster Node Entry for host 172.51.131.121:6032
2020-01-21 17:55:41 [INFO] Created new Cluster Node Entry for host 172.52.227.71:6032
2020-01-21 17:55:41 [INFO] Cluster: starting thread for peer 172.50.243.145:6032
2020-01-21 17:55:41 [INFO] Cluster: starting thread for peer 172.51.131.121:6032
2020-01-21 17:55:41 [INFO] Cluster: starting thread for peer 172.52.227.71:6032
2020-01-21 17:55:41 [INFO] Cluster: detected a new checksum for mysql_query_rules from peer 172.51.131.121:6032, version 1, epoch 1579604766, checksum 0x0000000000000000 . Not syncing yet ...
2020-01-21 17:55:41 [INFO] Cluster: checksum for mysql_query_rules from peer 172.51.131.121:6032 matches with local checksum 0x0000000000000000 , we won't sync.
	

Now I proceed to make a change to the "mysql_servers" table on server '172.50.243.145'. Below is the log from server '172.51.131.121'.

The peer monitors the checksum table and detects the change:

 
2020-01-22 14:30:58 [INFO] Cluster: detected peer 172.50.243.145:6032 with mysql_servers version 2, epoch 1579683657
2020-01-22 14:30:58 [INFO] Cluster: Fetching MySQL Servers from peer 172.50.243.145:6032 started. Expected checksum 0x637EDB7A708CB4A5
	

Next, it fetches the change:

 
2020-01-22 14:30:58 [INFO] Cluster: Fetching MySQL Servers from peer 172.50.243.145:6032 completed
2020-01-22 14:30:58 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.50.243.145:6032 before processing
2020-01-22 14:30:58 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.50.243.145:6032 successful. Checksum: 0x637EDB7A708CB4A5
	

Now it proceeds with the reconfiguration, i.e., applies the changes:

 
2020-01-22 14:30:58 [INFO] Cluster: Writing mysql_servers table
2020-01-22 14:30:58 [INFO] Cluster: Writing mysql_replication_hostgroups table
2020-01-22 14:30:58 [INFO] Cluster: Loading to runtime MySQL Servers from peer 172.50.243.145:6032
	
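On the node that pulled the change, the synced backend list can be cross-checked against its runtime table; a hedged example:

-- On the syncing node (here 172.51.131.121), confirm the backend servers
-- now match what was configured on 172.50.243.145.
SELECT hostgroup_id, hostname, port, status
FROM runtime_mysql_servers;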

Adding a node to the cluster:

Now let's see how a new node can be added to the cluster. I am adding the node "172.50.227.58" to the cluster.

Make the admin variable updates described in "Step 1" on this new proxy node.

Also populate the "proxysql_servers" table on the new node as below:

 
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.50.243.145',6032,100,'PRIMARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.51.131.121',6032,99,'SECONDARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.52.227.71',6032,98,'TERTIARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.50.227.58',6032,98,'newnode');
	

Then proceed to load and save the ProxySQL server changes:

 
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
	

On the existing nodes in the cluster, add the new node entry as below:

 
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('172.50.227.58',6032,98,'newnode');
	
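For the entry to take effect on those nodes too, load and save it following the same pattern used earlier:

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;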

Now the new node joins the cluster, detects the checksum differences, and syncs them to its local configuration.

Logs:

 
2020-01-25 15:40:59 [INFO] Cluster: detected a peer 172.52.227.71:6032 with mysql_servers version 5, epoch 1579930914, diff_check 4. Own version: 1, epoch: 1579946323. Proceeding with remote sync
2020-01-25 15:40:59 [INFO] Cluster: detected peer 172.52.227.71:6032 with mysql_servers version 5, epoch 1579930914
2020-01-25 15:40:59 [INFO] Cluster: Fetching MySQL Servers from peer 172.52.227.71:6032 started. Expected checksum 0x682F24A4978A8A40
2020-01-25 15:40:59 [INFO] Cluster: Fetching MySQL Servers from peer 172.52.227.71:6032 completed
2020-01-25 15:40:59 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.52.227.71:6032 before processing
2020-01-25 15:40:59 [INFO] Cluster: Fetching checksum for MySQL Servers from peer 172.52.227.71:6032 successful. Checksum: 0x682F24A4978A8A40
2020-01-25 15:40:59 [INFO] Cluster: Writing mysql_servers table
2020-01-25 15:40:59 [INFO] Cluster: Writing mysql_replication_hostgroups table
	
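Once the sync completes, the new node's view of its peers can be checked; a hedged sketch, assuming the stats_proxysql_servers_checksums and stats_proxysql_servers_metrics tables:

-- On the new node: confirm its mysql_servers checksum now matches its peers'.
SELECT hostname, name, checksum
FROM stats_proxysql_servers_checksums
WHERE name = 'mysql_servers';
-- Basic health metrics for each monitored peer.
SELECT * FROM stats_proxysql_servers_metrics;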

Limitations:

  • Convergence time is not predictable
  • There is no protection against split-brain (network partition)
  • Variables (mysql/admin) are not synced between nodes (see the sketch after this list)
  • Client traffic still needs a floating IP (VIP), e.g., managed with Keepalived
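For instance, a MySQL variable changed on one node stays local to that node; a minimal illustration under the versions discussed here:

-- This change is NOT propagated to peers; repeat it on every node.
UPDATE global_variables SET variable_value='2048'
WHERE variable_name='mysql-max_connections';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;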

Ready to elevate your MySQL HA strategy with ProxySQL clustering? Mydbops offers expert guidance and comprehensive MySQL services to ensure optimal database uptime. Contact us today to explore our HA solutions!


Featured Image credits : Ryan Stone on Unsplash


About the Author

Mydbops
