As part of Mydbops Consulting, we received the below problem statement from one of our clients.
"We have a high powered server for reporting which in turn powers our internal dashboard for viewing the logistics status.Even with a high end hardware, we had a heavy CPU usage and which in turn triggers spikes in replication lag and slowness. Below is the hardware configuration."
Below is the graph of CPU utilisation from Grafana.
Since the workload is purely reporting (OLAP), we could observe similar types of queries with different ranges. Below is the execution plan of the query. It is a join query over 6 tables.
Explain Plan
On initial screening it looks normal: being a reporting (OLAP) query, it is bound to run longer. However, it had started to bite into our system resources (CPU), and the replication lag caused stale or obsolete data in the internal dashboard.
As the execution plan depicts, the query is using indexes and the columns used are properly indexed. Only with the table "SCF" could we see a scan of 5820 rows; the index used here has low cardinality.
Now we should tweak the optimizer to choose the right index. The optimizer chooses an index based on the statistics collected for the table, which are stored in mysql.innodb_table_stats and mysql.innodb_index_stats.
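These stats tables can be queried directly. A quick check, using the "SCF" table from the execution plan above (in a real check you would also filter on database_name):

```sql
-- Inspect the statistics the optimizer relies on for a given table.
SELECT * FROM mysql.innodb_table_stats WHERE table_name = 'SCF';

-- Per-index cardinality estimates and the number of pages sampled.
SELECT index_name, stat_name, stat_value, sample_size
  FROM mysql.innodb_index_stats
 WHERE table_name = 'SCF';
```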
The default values of innodb_stats_persistent_sample_pages and innodb_stats_transient_sample_pages are 20 and 8 respectively, which is too low for our dataset. These defaults work well for smaller tables, but in our case the tables run to a few hundred GB. Since both are dynamic variables, we increased them globally by roughly 10X, as sketched below.
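A sketch of the change, assuming a clean 10X over the defaults; the exact values to settle on depend on the dataset, so the numbers here are illustrative:

```sql
-- Roughly 10X the defaults of 20 and 8; both variables are dynamic,
-- so the change takes effect without a restart.
SET GLOBAL innodb_stats_persistent_sample_pages = 200;
SET GLOBAL innodb_stats_transient_sample_pages = 80;
```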
Below is the definition of these variables from the official documentation:
“The number of index pages to sample when estimating cardinality and other statistics for an indexed column, such as those calculated by ANALYZE TABLE. Increasing the value improves the accuracy of index statistics, which can improve the query execution plan, at the expense of increased I/O during the execution of ANALYZE TABLE for an InnoDB table.”
Now we have to force index statistics recalculation by running ANALYZE TABLE table_name on all the tables involved in the query. Alternatively, you can persist the variables and restart the database to recalculate stats for all tables; we chose the first method since it is less disruptive.
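A sketch of the first method; apart from "SCF", the other table names in the six-table join were not published, so they are placeholders here:

```sql
-- Recalculate persistent statistics using the new sample-page settings.
ANALYZE TABLE SCF;
-- Repeat for the remaining five tables in the join, e.g.:
-- ANALYZE TABLE table_2, table_3, table_4, table_5, table_6;
```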
Let us review the execution plan now. We can see reduced row scans and better index usage by the optimiser, as below:
Note that the new plan applies only to newly incoming queries. Within a short span of time the CPU usage dropped drastically and there was a huge boost in performance; please find the graph below. The dashboards are now faster than ever before.
Key Takeaways
Below are points to note while setting these variables.
- Too high a value can result in longer stats calculation times.
- Too low a value can produce inaccurate stats and lead to the situation discussed above.
- As a table grows, InnoDB automatically recalculates stats after roughly 10% of its rows change (with innodb_stats_auto_recalc enabled), so no manual action is needed.
- Most tables, particularly smaller ones, get decent stats from sampling the default 20 pages.
- Tables with uneven data distribution won't benefit from increasing the sample pages; to tackle that, MySQL 8.0 introduced histograms (see the sketch below).
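As a minimal sketch of the histogram approach from the last point (MySQL 8.0+; the table and column names are hypothetical):

```sql
-- Build a 32-bucket histogram on a skewed column so the optimizer
-- sees the actual value distribution instead of assuming uniformity.
ANALYZE TABLE shipments UPDATE HISTOGRAM ON delivery_status WITH 32 BUCKETS;
```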
Optimize performance, enhance reliability, and ensure data security with Mydbops' MySQL InnoDB Cluster Consulting and Support Services. Our team can help you fine-tune InnoDB configuration, troubleshoot performance issues, and implement best practices. Contact us today to schedule a free consultation.
{{cta}}