Category Archives: MySQL MariaDB

Only on Linux. Windows is way too messy for MySQL.

MySQL Performance Tuning

I have used the mysqltuner script on and off for a couple of years. While doing some MySQL Performance Tuning research this week I came across the MySQL Tuning Primer, which I also found useful for clarifying configuration settings that could be readily updated. However, it has not been updated for some time: it reports the slow_query_log as not running when I know it is, and states that there is no InnoDB support, which is odd as that is the default storage engine. Looking further, I found a more up-to-date version at https://github.com/BMDan/tuning-primer.sh. That said, even that one is not fully current, as it offers this counter-intuitive advice:

You are using MySQL 8.0.27, no query cache is supported.
I recommend an upgrade to MySQL 4.1 or better

That suggests it could use some work, but it still provides some good pointers, and as always YMMV, aka 'your mileage may vary'.

So back to my server and the mysqltuner script, covering some of the issues raised on the test server.

Using mysqltuner gave me the following commentary on the first pass:

General recommendations:
Check warning line(s) in /var/log/mysqld.log file
Check error line(s) in /var/log/mysqld.log file
MySQL was started within the last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Dedicate this server to your database for highest performance.
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
Performance schema should be activated for better diagnostics
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
join_buffer_size (> 256.0K, or always use indexes with JOINs)
table_definition_cache(4000) > 7166 or -1 (autosizing if supported)
performance_schema = ON enable PFS
innodb_buffer_pool_size (>= 1.5G) if possible.
innodb_log_file_size should be (=16M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
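Several of these recommendations map directly onto my.cnf settings. A minimal sketch, with illustrative values only (the buffer sizes here simply echo the tuner's suggestions; size them to your own RAM and workload):

```ini
[mysqld]
# Skip reverse DNS lookups on connections
# (accounts must then be defined with IPs or subnets)
skip-name-resolve

# Enable the Performance Schema for better diagnostics
performance_schema = ON

# Illustrative sizes only - tune to your own server
innodb_buffer_pool_size = 1536M
join_buffer_size        = 512K
```

A restart of the MySQL service is needed for these to take effect.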

I also grabbed a suggested config for my.cnf from https://gist.github.com/fevangelou/fb72f36bbe333e059b66 and used that as a basis for my next test run.

After a few tweaks, mysqltuner reports a better starting position:

General recommendations:
    Check warning line(s) in /var/lib/mysql/mysql_error.log file
    MySQL was started within the last 24 hours - recommendations may be inaccurate
    Reduce your overall MySQL memory footprint for system stability
    Dedicate this server to your database for highest performance.
    Reduce or eliminate unclosed connections and network issues

Given that there are no warnings relating to memory size settings, other than the recommendation to reduce the overall footprint, I will wait for 24 hours to see what activity occurs and get a comparable report.

Just in case you are looking at your mysqltuner output and wondering what these lines refer to:

-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine not available.

-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.

-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.

-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.

The answer is relatively simple, unless you want to delve really deeply into it.

Galera is a clustering solution for MySQL and only applies if you are setting up a high-availability solution. My guess is, if you are reading this, then like me, you do not need it!

The other three are all alternative database storage engines, like MyISAM and InnoDB.

Aria is an alternative to MyISAM. TokuDB is deprecated, while XtraDB is Percona's InnoDB alternative for MySQL and MariaDB, although MariaDB has stopped using it and reverted to InnoDB as its default storage engine.

For the moment I am sticking with the default InnoDB on MySQL 8.x for this server as I am not looking to ‘buy trouble’.

The main issue that I spent time reviewing, understanding (somewhat) and addressing was the Max MySQL Memory setting.

If Max Memory is greater than physical memory, it may only be an issue IF MySQL actually tries to use it all. A more detailed reference is at https://dev.mysql.com/doc/refman/5.7/en/memory-use.html

How to calculate MySQL Max Memory? Using my initial test examples, start with server_buffers, which are buffers that apply at the server level, i.e. only one buffer instance per server.

server_buffers = 
key_buffer_size   24M
+ innodb_buffer_pool_size   4G
+ innodb_log_buffer_size 16M
+ query_cache_size - Not Applicable with MySQL v8.x

As a side note, you may read other calculators which include “innodb_additional_mem_pool_size” in this calculation but that parameter was deprecated in MySQL 5.6 and removed in v5.7. See https://dev.mysql.com/doc/refman/5.7/en/innodb-performance-use_sys_malloc.html for more information.

server_buffers = 24M +4096M (4G) + 16M = 4136M

Then look at the total_per_thread_buffers which are the buffers used per connection. The total of these buffers is then multiplied by the max_connections value to get the total potential memory allocated that might be used.

total_per_thread_buffers = 
read_buffer_size   3M
+ read_rnd_buffer_size   4M 
+ sort_buffer_size   4M
+ thread_stack   192K
+ max_allowed_packet  256M 
+ join_buffer_size   4M

total_per_thread_buffers = 3M + 4M + 4M + 192K + 256M + 4M = 271M (+192K) use 272M.

Then multiply total_per_thread_buffers by the max_connections value and finally add the server_buffers.

4136M + (272M * 100) = 4136M + 27,200M = 31,336M
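That arithmetic is easy to sanity-check in the shell. A minimal sketch using the figures above (all values in MB, with the per-thread total rounded up to 272M as in the worked example):

```shell
# Server-level buffers (one instance per server), in MB:
# key_buffer_size + innodb_buffer_pool_size + innodb_log_buffer_size
server_buffers=$(( 24 + 4096 + 16 ))

# Per-connection buffers, rounded up to whole MB (271M + 192K -> 272M)
per_thread=272
max_connections=100

# Worst-case memory if every connection used its full buffer allocation
max_memory=$(( server_buffers + per_thread * max_connections ))
echo "server_buffers=${server_buffers}M max_memory=${max_memory}M"
# prints: server_buffers=4136M max_memory=31336M
```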

Which is only marginally inconsistent with the mysqltuner calculation at 34,600M (~34G). Note that I also tweaked the server buffers to 8G during the various tests, which accounts for most of the difference.

[--] Physical Memory : 31.4G
[--] Max MySQL memory : 34.6G
[--] Other process memory: 0B
[--] Total buffers: 8.2G global + 271.2M per thread (100 max threads)

So I reduced the max_allowed_packet parameter from 256M to 128M to try to resolve that max memory issue, but cPanel/WHM overwrites that value back to 256M when it restarts. I then dropped max_connections from 100 to 60, as the usage over 24 hours indicated a peak of only 27 connections.
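Before settling on a max_connections value, it is worth asking the server what it has actually seen. A sketch using standard status and variable queries (nothing here is server-specific):

```sql
-- Peak number of simultaneous connections since the server started
SHOW GLOBAL STATUS WHERE Variable_name = 'Max_used_connections';

-- Current configured limits, for comparison
SHOW GLOBAL VARIABLES
  WHERE Variable_name IN ('max_connections', 'max_allowed_packet');
```

If Max_used_connections sits well below max_connections over a representative period, the limit (and with it the worst-case memory calculation) can usually be trimmed.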

Both of these values need more research, acknowledging that I have no idea at this time how frequently there are queries approaching that packet size or exceeding it, or how many peak connections may be needed, but monitoring will address that in time.

What I do know now, is that I can more effectively monitor what should be a more efficient MySQL server than it was 6 hours ago.

MySQL TLS Connections on Localhost

I had an error being reported consistently in a mysqld.log file that relates to MySQL TLS Connections on Localhost.

A deprecated TLS version TLSv1 is enabled for channel mysql_main

A deprecated TLS version TLSv1.1 is enabled for channel mysql_main

Why does an up-to-date CloudLinux server have older / insecure TLS versions enabled at all? And in any case, why would localhost connections need TLS?

Related discussions that are, at least to me, inconclusive.

https://security.stackexchange.com/questions/48892/is-there-a-benefit-to-having-ssl-connections-on-localhost

https://stackoverflow.com/questions/66372339/is-ssl-certificate-necessary-for-mariadb-or-mysql-to-ensure-the-security

https://serverfault.com/questions/1012073/mariadb-tls-and-localhost-connection

The upshot of these threads is that the focus is on using TLS to prevent man-in-the-middle attacks or network stream monitoring. The question as it applies to an enclosed system on a single server without remote access gets redirected into why you should have SSL for remote connections.

The underlying thought here is that TLS (SSL) security on MySQL traffic that only ever stays on localhost can only be compromised if the server is already compromised, and if the server is already compromised, then the attacker can read the DB files directly and has no need to bother decrypting a SQL data stream. Which is also somewhat moot, as a local connection will typically go over the Unix socket rather than TCP to port 3306 (the MySQL default), so there is no network stream to intercept anyway.

That said, some of the commentary in the threads seems to support my thinking that TLS on a localhost-only configuration is a waste of performance overhead and adds little or nothing to security.

How much of a performance hit is the next question, and I think it is summed up in Ernie Souhrada’s article at Percona on SSL Performance Overhead in MySQL where he says “However, I definitely didn’t think the encryption overhead would be so high.”

Daniel van Eeden presents some more data in MySQL and SSL/TLS Performance that supports the performance hit that security creates, but provides some ideas for mitigation.

The basic Security Guidelines provided at https://dev.mysql.com/doc/refman/5.7/en/security-guidelines.html are a good place to start.

Coming back to the actual error lines being logged, a check of the config for mysqld shows:

# mysqld --help --verbose | grep TLS
                      TLS v1.3 ciphersuite to use (implies --ssl) for
                      TLS version for --admin-port, permitted values are TLSv1,
                      TLSv1.1, TLSv1.2, TLSv1.3
                      are SSL/TLS, Unix socket or Shared Memory (on Windows).
                      TLS v1.3 ciphersuite to use (implies --ssl)
  --tls-version=name  TLS version, permitted values are TLSv1, TLSv1.1,
                      TLSv1.2, TLSv1.3
admin-tls-version                                            TLSv1,TLSv1.1,TLSv1.2,TLSv1.3
tls-version                                                  TLSv1,TLSv1.1,TLSv1.2,TLSv1.3

Which indicates that the config is loading all the TLS versions as acceptable. A quick read over at dev.mysql.com TLS documentation provided more information to review and reset the options:

mysql> SHOW GLOBAL VARIABLES LIKE 'tls_version';
+---------------+-------------------------------+
| Variable_name | Value                         |
+---------------+-------------------------------+
| tls_version   | TLSv1,TLSv1.1,TLSv1.2,TLSv1.3 |
+---------------+-------------------------------+
1 row in set (0.01 sec)

mysql> SELECT * FROM performance_schema.session_status
    ->        WHERE VARIABLE_NAME IN ('Ssl_version','Ssl_cipher');
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| Ssl_cipher    |                |
| Ssl_version   |                |
+---------------+----------------+
2 rows in set (0.00 sec)

These two queries confirm that the default selection is ALL TLS versions and that none of the current client sessions are using SSL (hark back to the earlier point about local connections not going over TCP).

This is addressed by updating the /etc/my.cnf file with a single line:

tls_version=TLSv1.2,TLSv1.3

and a subsequent restart of the mysql service.

So even though we are not using SSL / TLS as such, having only the currently recommended TLS versions enabled will stop MySQL complaining in the log file.

MySQL 8.0 Incorrect format parameter

Lots of issues have been cropping up on one of my cPanel servers, most of which point to problems with MySQL.

While trying to test, I copied a database for a WordPress website from the live site to a dev site, or at least I attempted to do so.

MySQL 8.0 Incorrect format parameter error message in phpMyAdmin

which I think arises from the 128M file-upload limit in the PHP settings; this import file is 190M, which is just too big. So I started to cut the file up, but then decided I should probably clean up the database on the live site first, which would both reduce the export file size and, hopefully, improve live site performance as well.

Which led me down a rabbit-hole of database optimisation tools, plugins, and SQL configuration.

Having run some WordPress plugins to remove rubbish, I exported again to find that all that effort removed only 2MB of data and was essentially a waste of time. Obviously the system is full of cruft, but identifying it takes more effort than WP database cleaner plugins can manage. I'll come back to this later.

In any case, the export .sql file was too big, so I cut it into slices. Even grabbing a fixed number of lines per slice, there was a bulky section of data in one area, and I had to mess around with the slice sizes to get the largest file under the 128M limit; even then it was still 129M on disk, but presumably under the limit thanks to the 128MiB vs 128MB distinction.
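For the mechanical slicing, the standard split utility will do the job; this is a self-contained sketch (dump.sql is a placeholder name, generated here so the example runs anywhere). Note that a purely line-based split can land mid-statement, so check the boundary of each piece before importing:

```shell
# Create a small stand-in dump so the example is self-contained
seq 1 100 | sed 's/^/INSERT INTO t VALUES (/; s/$/);/' > dump.sql

# Slice the dump into pieces of at most 40 lines each,
# named slice_aa, slice_ab, slice_ac ...
split -l 40 dump.sql slice_

ls slice_*        # three pieces: slice_aa slice_ab slice_ac
wc -l slice_aa    # first piece holds 40 lines
```

In real use you would pick a line count that keeps each piece safely under the upload limit, then eyeball the first and last line of each piece to make sure no statement has been cut in half.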

After looking at the wrong end of the puzzle for an hour, what I eventually found was that I should have checked the disk space consumed by the tables and/or the number of rows in them. In this case a single table accounted for 130MB of the database's 190MB total. It was a log table with a default 30-day retention, which was consuming a large chunk of space. I reset it to 14 days' retention, trimmed the table to clear out the older records, and tried again.
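Checking table sizes up front is a single query against information_schema. A sketch (replace 'my_database' with the actual schema name; note that table_rows is only an estimate for InnoDB tables):

```sql
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb,
       table_rows
FROM   information_schema.tables
WHERE  table_schema = 'my_database'
ORDER  BY size_mb DESC;
```

One oversized table at the top of that list would have pointed straight at the log table and saved the hour of puzzling.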

This time, with the file size reduced by 40MB, I exported the structure first and then the data in two chunks: the large log table in one, and everything else in the other. Importing the three files into the new database went smoothly.
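The same structure-first, data-in-chunks export can also be done from the command line with mysqldump rather than phpMyAdmin. A sketch, where my_database and big_log_table are placeholder names:

```shell
# Structure only, no row data
mysqldump --no-data my_database > structure.sql

# Data for the one large table on its own
mysqldump --no-create-info my_database big_log_table > big_table_data.sql

# Data for everything else
mysqldump --no-create-info --ignore-table=my_database.big_log_table \
    my_database > rest_data.sql
```

Importing the three files in that order (structure, then data) recreates the database without ever producing one oversized file.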

Lesson learnt, before exporting SQL, review what is being exported, and make an assessment of expected output file sizes. It should save a lot of effort.

PHP mySQL osCmax and converting databases

This week I started looking at the conversion of some older osCommerce databases and moving to osCmax. I’m almost on top of the templating but need to look at getting live data into the system to work with the templates.

This post is not about that but about the bits of code that I’ve worked on to get my converter working.

Connecting to two databases

Stack Overflow came to my aid again! The post on PHP connecting to two databases seemed fine, but I did not read it completely the first (or second) time; only after a proper read did I realise there is an additional parameter to request a new connection. Setting that worked fine.

There was also interesting reading on the logic and best practice around opening and closing database connections.

Over at http://net.tutsplus.com/ there are some best practices for PHP and MySQL coding.

Altering a column in a table

I picked up a script from EdmondsCommerce (nice play on eCommerce!) and it worked fine. I then liked the concept of the script posted by Arnan in the comments on the post, but that one did not work. I also noted the comment suggesting 'IF NOT EXISTS', except that is only valid for tables, not for columns, at least in my reading.

Adding a new column in a table

Following on from the alter script I created a new function and modified Arnan’s code to work and dealt with some potential issues with columns existing or not. Which led me to……
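Since (in my reading) IF NOT EXISTS does not apply to columns here, the usual workaround for "columns existing or not" is to ask information_schema first. A sketch with placeholder schema, table, and column names:

```sql
-- Returns 1 if the column already exists, 0 if it is safe to ADD COLUMN
SELECT COUNT(*)
FROM   information_schema.columns
WHERE  table_schema = 'my_database'
  AND  table_name   = 'products'
  AND  column_name  = 'new_column';
```

The converter script can run this check, and only issue the ALTER TABLE ... ADD COLUMN when the count comes back 0.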

Logging What's Happening

Using ‘echo’ to the screen is the common debug process for PHP code. But in this case I am building something a bit more robust and potentially big once I get to grips with over 100 tables in the databases.

I think an interactive script presenting information to the screen is good but not as a long scrolling list. I’d prefer to see a screen that sits and presents the current processing. In addition I want a log file to report on all actions and a separate log for the data line items just in case.

So I am now on the hunt for error logging and reporting and a great start is this post about a Logging class from http://www.redips.net/php/write-to-log-file/ that I have left as is and it works a treat.

Next I will need to come back to progress indicators, like http://valums.com/ajax-upload/ and maybe http://www.tutorialized.com/view/tutorial/PHP-GD-Progress-Bar-Demo/779 or http://davidwalsh.name/progress-bar-animated-mootools. The issue here is how to update the client side with progress occurring server side, when a database table pass is in theory either 0% or 100% with no feedback in between, unless the query is split into segments with a client-side update after each. Needs more thought.