Channel: Planet MySQL

Moving from MySQL 5.7 to MySQL 8.0 - What You Should Know


April 2018 was not just another date for the MySQL world: it is when MySQL 8.0 was released, and more than a year later, it's probably time to consider migrating to this new version.

MySQL 8.0 has important performance and security improvements, and, as with any migration to a new database version, there are several things to take into account before going into production to avoid serious issues like data loss, excessive downtime, or even a rollback during the migration task.

In this blog, we’ll mention some of the new MySQL 8.0 features, some deprecated stuff, and what you need to keep in mind before migrating.

What’s New in MySQL 8.0?

Let’s now summarize some of the most important features mentioned in the official documentation for this new MySQL version.

  • MySQL incorporates a transactional data dictionary that stores information about database objects.
  • An atomic DDL statement combines the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic transaction.
  • The MySQL server automatically performs all necessary upgrade tasks at the next startup to upgrade the system tables in the mysql schema, as well as objects in other schemas such as the sys schema and user schemas. It is not necessary for the DBA to invoke mysql_upgrade.
  • It supports the creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group. 
  • Table encryption can now be managed globally by defining and enforcing encryption defaults. The default_table_encryption variable defines an encryption default for newly created schemas and general tablespace. Encryption defaults are enforced by enabling the table_encryption_privilege_check variable. 
  • The default character set has changed from latin1 to utf8mb4.
  • It supports the use of expressions as default values in data type specifications. This includes the use of expressions as default values for the BLOB, TEXT, GEOMETRY, and JSON data types.
  • Error logging was rewritten to use the MySQL component architecture. Traditional error logging is implemented using built-in components, and logging using the system log is implemented as a loadable component.
  • A new type of backup lock permits DML during an online backup while preventing operations that could result in an inconsistent snapshot. The new backup lock is supported by LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE syntax. The BACKUP_ADMIN privilege is required to use these statements.
  • MySQL Server now permits a TCP/IP port to be configured specifically for administrative connections. This provides an alternative to the single administrative connection that is permitted on the network interfaces used for ordinary connections even when max_connections connections are already established.
  • It supports invisible indexes. This index is not used by the optimizer and makes it possible to test the effect of removing an index on query performance, without removing it.
  • Document Store for developing both SQL and NoSQL document applications using a single database.
  • MySQL 8.0 makes it possible to persist global, dynamic server variables using the SET PERSIST command instead of the usual SET GLOBAL one. 
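Two of these features are easy to try out. As a minimal sketch (the table name, index name, and variable value here are hypothetical), an index can be made invisible to test the effect of dropping it, and a dynamic variable can be persisted across restarts:

```sql
-- The optimizer ignores an invisible index, but InnoDB still maintains it,
-- so reverting is instantaneous:
ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;
-- ...observe query plans and performance, then revert if needed:
ALTER TABLE orders ALTER INDEX idx_customer VISIBLE;

-- SET PERSIST writes the setting to mysqld-auto.cnf in the data directory,
-- so it survives a restart (unlike SET GLOBAL):
SET PERSIST max_connections = 500;
```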

MySQL Security and Account Management

As there are many improvements related to security and user management, we'll list them in a separate section.

  • The grant tables in the mysql system database are now InnoDB tables. 
  • The new caching_sha2_password authentication plugin is now the default authentication method in MySQL 8.0. It implements SHA-256 password hashing, but uses caching to address latency issues at connect time. It provides more secure password encryption than the mysql_native_password plugin, and provides better performance than sha256_password.
  • MySQL now supports roles, which are named collections of privileges. Roles can have privileges granted to and revoked from them, and they can be granted to and revoked from user accounts. 
  • MySQL now maintains information about password history, enabling restrictions on reuse of previous passwords. 
  • It enables administrators to configure user accounts such that too many consecutive login failures due to incorrect passwords cause temporary account locking. 
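The password-history and failed-login features above are configured per account. A hedged sketch (the account name and thresholds are made up; the account-locking clauses require a recent 8.0 release):

```sql
-- Disallow reuse of the last 5 passwords or any password used in the last
-- 90 days; lock the account for 2 days after 3 consecutive failed logins:
CREATE USER 'app'@'%' IDENTIFIED BY 'S3cretPass!'
  PASSWORD HISTORY 5
  PASSWORD REUSE INTERVAL 90 DAY
  FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 2;
```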

InnoDB enhancements

As with the previous section, there are many improvements related to this topic, so we'll list them separately too.

  • The current maximum auto-increment counter value is written to the redo log each time the value changes, and saved to an engine-private system table on each checkpoint. These changes make the current maximum auto-increment counter value persistent across server restarts.
  • When encountering index tree corruption, InnoDB writes a corruption flag to the redo log, which makes the corruption flag crash-safe. InnoDB also writes in-memory corruption flag data to an engine-private system table on each checkpoint. During recovery, InnoDB reads corruption flags from both locations and merges results before marking in-memory table and index objects as corrupt.
  • A new dynamic variable, innodb_deadlock_detect, may be used to disable deadlock detection. On high concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. At times, it may be more efficient to disable deadlock detection and rely on the innodb_lock_wait_timeout setting for transaction rollback when a deadlock occurs.
  • InnoDB temporary tables are now created in the shared temporary tablespace, ibtmp1.
  • mysql system tables and data dictionary tables are now created in a single InnoDB tablespace file named mysql.ibd in the MySQL data directory. Previously, these tables were created in individual InnoDB tablespace files in the mysql database directory.
  • By default, undo logs now reside in two undo tablespaces that are created when the MySQL instance is initialized. Undo logs are no longer created in the system tablespace.
  • The new innodb_dedicated_server variable, which is disabled by default, can be used to have InnoDB automatically configure the following options according to the amount of memory detected on the server: innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method. This option is intended for MySQL server instances that run on a dedicated server. 
  • Tablespace files can be moved or restored to a new location while the server is offline using the innodb_directories option. 
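For a box that runs only MySQL, the dedicated-server option above boils down to a single line in the configuration file (a sketch; the derived settings are computed automatically, so none need to be listed explicitly):

```ini
[mysqld]
# Auto-size innodb_buffer_pool_size, innodb_log_file_size, and
# innodb_flush_method based on the detected server memory:
innodb_dedicated_server=ON
```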

Now, let’s take a look at some of the features that you shouldn’t use anymore in this new MySQL version.

What is Deprecated in MySQL 8.0?

The following features are deprecated and will be removed in a future version.

  • The utf8mb3 character set is deprecated. Please use utf8mb4 instead.
  • Because caching_sha2_password is the default authentication plugin in MySQL 8.0 and provides a superset of the capabilities of the sha256_password authentication plugin, sha256_password is deprecated.
  • The validate_password plugin has been reimplemented to use the server component infrastructure. The plugin form of validate_password is still available but is deprecated.
  • The ENGINE clause for the ALTER TABLESPACE and DROP TABLESPACE statements.
  • The PAD_CHAR_TO_FULL_LENGTH SQL mode.
  • AUTO_INCREMENT support is deprecated for columns of type FLOAT and DOUBLE (and any synonyms). Consider removing the AUTO_INCREMENT attribute from such columns, or convert them to an integer type.
  • The UNSIGNED attribute is deprecated for columns of type FLOAT, DOUBLE, and DECIMAL (and any synonyms). Consider using a simple CHECK constraint instead for such columns.
  • FLOAT(M,D) and DOUBLE(M,D) syntax to specify the number of digits for columns of type FLOAT and DOUBLE (and any synonyms) is a nonstandard MySQL extension. This syntax is deprecated.
  • The nonstandard C-style &&, ||, and ! operators that are synonyms for the standard SQL AND, OR, and NOT operators, respectively, are deprecated. Applications that use the nonstandard operators should be adjusted to use the standard operators.
  • The mysql_upgrade client is deprecated because its capabilities for upgrading the system tables in the mysql system schema and objects in other schemas have been moved into the MySQL server.
  • The mysql_upgrade_info file, which is created in the data directory and used to store the MySQL version number.
  • The relay_log_info_file system variable and --master-info-file option are deprecated. Previously, these were used to specify the name of the relay log info log and master info log when relay_log_info_repository=FILE and master_info_repository=FILE were set, but those settings have been deprecated. The use of files for the relay log info log and master info log has been superseded by crash-safe slave tables, which are the default in MySQL 8.0.
  • The use of the MYSQL_PWD environment variable to specify a MySQL password is deprecated.

And now, let’s take a look at some of the features that you must stop using in this MySQL version.

What Was Removed in MySQL 8.0?

The following features have been removed in MySQL 8.0.

  • The innodb_locks_unsafe_for_binlog system variable was removed. The READ COMMITTED isolation level provides similar functionality.
  • Using GRANT to create users. Instead, use CREATE USER. Following this practice makes the NO_AUTO_CREATE_USER SQL mode immaterial for GRANT statements, so it too is removed, and an error now is written to the server log when the presence of this value for the sql_mode option in the options file prevents mysqld from starting.
  • Using GRANT to modify account properties other than privilege assignments. This includes authentication, SSL, and resource-limit properties. Instead, establish such properties at account-creation time with CREATE USER or modify them afterward with ALTER USER.
  • IDENTIFIED BY PASSWORD 'auth_string' syntax for CREATE USER and GRANT. Instead, use IDENTIFIED WITH auth_plugin AS 'auth_string' for CREATE USER and ALTER USER, where the 'auth_string' value is in a format compatible with the named plugin. 
  • The PASSWORD() function. Additionally, PASSWORD() removal means that SET PASSWORD ... = PASSWORD('auth_string') syntax is no longer available.
  • The old_passwords system variable.
  • The FLUSH QUERY CACHE and RESET QUERY CACHE statements.
  • These system variables: query_cache_limit, query_cache_min_res_unit, query_cache_size, query_cache_type, query_cache_wlock_invalidate.
  • These status variables: Qcache_free_blocks, Qcache_free_memory, Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, Qcache_not_cached, Qcache_queries_in_cache, Qcache_total_blocks.
  • These thread states: checking privileges on cached query, checking query cache for a query, invalidating query cache entries, sending cached result to the client, storing result in the query cache, Waiting for query cache lock.
  • The tx_isolation and tx_read_only system variables have been removed. Use transaction_isolation and transaction_read_only instead.
  • The sync_frm system variable has been removed because .frm files have become obsolete.
  • The secure_auth system variable and --secure-auth client option have been removed. The MYSQL_SECURE_AUTH option for the mysql_options() C API function was removed.
  • The log_warnings system variable and --log-warnings server option have been removed. Use the log_error_verbosity system variable instead.
  • The global scope for the sql_log_bin system variable was removed. sql_log_bin has session scope only, and applications that rely on accessing @@GLOBAL.sql_log_bin should be adjusted.
  • The unused date_format, datetime_format, time_format, and max_tmp_tables system variables are removed.
  • The deprecated ASC or DESC qualifiers for GROUP BY clauses are removed. Queries that previously relied on GROUP BY sorting may produce results that differ from previous MySQL versions. To produce a given sort order, provide an ORDER BY clause.
  • The parser no longer treats \N as a synonym for NULL in SQL statements. Use NULL instead. This change does not affect text file import or export operations performed with LOAD DATA or SELECT ... INTO OUTFILE, for which NULL continues to be represented by \N. 
  • The client-side --ssl and --ssl-verify-server-cert options have been removed. Use --ssl-mode=REQUIRED instead of --ssl=1 or --enable-ssl. Use --ssl-mode=DISABLED instead of --ssl=0, --skip-ssl, or --disable-ssl. Use --ssl-mode=VERIFY_IDENTITY instead of --ssl-verify-server-cert options.
  • The mysql_install_db program has been removed from MySQL distributions. Data directory initialization should be performed by invoking mysqld with the --initialize or --initialize-insecure option instead. In addition, the --bootstrap option for mysqld that was used by mysql_install_db was removed, and the INSTALL_SCRIPTDIR CMake option that controlled the installation location for mysql_install_db was removed.
  • The mysql_plugin utility was removed. Alternatives include loading plugins at server startup using the --plugin-load or --plugin-load-add option, or at runtime using the INSTALL PLUGIN statement.
  • The resolveip utility is removed. nslookup, host, or dig can be used instead.
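For the most common removals, the replacements are straightforward. A sketch (the account and schema names are hypothetical):

```sql
-- GRANT can no longer create accounts; create them explicitly first:
CREATE USER 'app'@'%' IDENTIFIED BY 'S3cretPass!';
GRANT SELECT ON mydb.* TO 'app'@'%';

-- With PASSWORD() gone, change passwords via ALTER USER:
ALTER USER 'app'@'%' IDENTIFIED BY 'NewS3cretPass!';
-- or the remaining SET PASSWORD form, which takes the cleartext password:
SET PASSWORD FOR 'app'@'%' = 'NewS3cretPass!';
```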

There are a lot of new, deprecated, and removed features. You can check the official website for more detailed information.

Considerations Before Migrating to MySQL 8.0

Let’s mention now some of the most important things to consider before migrating to this MySQL version.

Authentication Method

As we mentioned, caching_sha2_password is now the default authentication method, so you should check whether your application/connector supports it. If not, let's see how you can change the default authentication method and the user authentication plugin back to mysql_native_password.

To change the default authentication method, edit the my.cnf configuration file and add/edit the following line:

$ vi /etc/my.cnf

[mysqld]
default_authentication_plugin=mysql_native_password

To change the user authentication plugin, run the following command with a privileged user:

$ mysql -p

ALTER USER 'username'@'hostname' IDENTIFIED WITH 'mysql_native_password' BY 'password';

Anyway, these changes aren't a permanent solution, as the old authentication plugin could be deprecated soon, so you should take this into account for a future database upgrade.
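Before changing anything, it helps to see which accounts already use which plugin. A quick check, run as a privileged user:

```sql
SELECT user, host, plugin FROM mysql.user;
```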

Roles are also an important feature here. You can reduce individual privilege management by assigning privileges to a role and then granting that role to the corresponding users.

For example, you can create new roles for the marketing and developer teams:

$ mysql -p

CREATE ROLE 'marketing', 'developers';

Assign privileges to these new roles:

GRANT SELECT ON *.* TO 'marketing';

GRANT ALL PRIVILEGES ON *.* TO 'developers';

And then, assign the role to the users:

GRANT 'marketing' TO 'marketing1'@'%';

GRANT 'marketing' TO 'marketing2'@'%';

GRANT 'developers' TO 'developer1'@'%';

And that’s it. You’ll have the following privileges:

SHOW GRANTS FOR 'marketing1'@'%';
+-------------------------------------------+
| Grants for marketing1@%                   |
+-------------------------------------------+
| GRANT USAGE ON *.* TO `marketing1`@`%`    |
| GRANT `marketing`@`%` TO `marketing1`@`%` |
+-------------------------------------------+
2 rows in set (0.00 sec)

SHOW GRANTS FOR 'marketing';
+----------------------------------------+
| Grants for marketing@%                 |
+----------------------------------------+
| GRANT SELECT ON *.* TO `marketing`@`%` |
+----------------------------------------+
1 row in set (0.00 sec)

Character Sets

As the new default character set is utf8mb4, if you rely on the server default you should make sure your application handles the change, or explicitly keep the previous character set.

To avoid some issues, you should specify the character_set_server and the collation_server variables in the my.cnf configuration file.

$ vi /etc/my.cnf

[mysqld]
character_set_server=latin1
collation_server=latin1_swedish_ci
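After restarting, you can verify the effective defaults and see which schemas would be affected. A quick sketch:

```sql
SHOW VARIABLES LIKE 'character_set_server';
SHOW VARIABLES LIKE 'collation_server';
-- Schemas created with an explicit character set keep it; check them with:
SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME
  FROM INFORMATION_SCHEMA.SCHEMATA;
```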

MyISAM Engine

The MySQL privilege tables in the mysql schema have been moved to InnoDB. You can still create a table with engine=MyISAM and it will work as before, but copying a MyISAM table into a running MySQL server will not work, because it will not be discovered.

Partitioning

There must be no partitioned tables that use a storage engine that does not have native partitioning support. You can run the following query to verify this point.

$ mysql -p

SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE NOT IN ('innodb', 'ndbcluster') AND CREATE_OPTIONS LIKE '%partitioned%';

If you need to change the engine of a table, you can run:

ALTER TABLE table_name ENGINE = INNODB;

Upgrade Check

As a last step, you can run the mysqlcheck command with the --check-upgrade flag to confirm that everything looks fine:

$ mysqlcheck -uroot -p --all-databases --check-upgrade
Enter password:
mysql.columns_priv                                 OK
mysql.component                                    OK
mysql.db                                           OK
mysql.default_roles                                OK
mysql.engine_cost                                  OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.global_grants                                OK
mysql.gtid_executed                                OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.password_history                             OK
mysql.plugin                                       OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.role_edges                                   OK
mysql.server_cost                                  OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
sys.sys_config                                     OK
world_x.city                                       OK
world_x.country                                    OK
world_x.countryinfo                                OK
world_x.countrylanguage                            OK

There are several things to check before performing the upgrade. You can check the official MySQL documentation for more detailed information.

Upgrade Methods

There are different ways to upgrade MySQL 5.7 to 8.0. You can perform an in-place upgrade, or even create a replication slave on the new version so you can promote it later.

But before upgrading, step zero must be backing up your data. The backup should include all databases, including the system databases, so if there is any issue, you can roll back as soon as possible.

Another option, depending on the available resources, is to create a cascading replication chain MySQL 5.7 -> MySQL 8.0 -> MySQL 5.7, so that after promoting the new version, if something goes wrong, you can promote the slave node running the old version back. But this could be dangerous if there is an issue with the data itself, so a backup is a must beforehand.

Whichever method you use, a test environment is necessary to verify that the application works without issues on the new MySQL 8.0 version.

Conclusion

More than a year after the MySQL 8.0 release, it is time to start thinking about migrating from your old MySQL version. Luckily, as the end of support for MySQL 5.7 is in 2023, you have time to create a migration plan and test the application behavior without rushing. Spending some time on that testing step is necessary to avoid issues after migrating.


Troubleshooting an OLAP system on InnoDB


As a part of Mydbops Consulting, we received the following problem statement from one of our clients:

We have a high-powered server for reporting, which in turn powers our internal dashboard for viewing the logistics status. Even with high-end hardware, we had heavy CPU usage, which in turn triggered spikes in replication lag and slowness. Below is the hardware configuration.

OS : Debian 9 (Stretch)
CPU : 40
RAM : 220G (Usable)
Disk : 3T SSD with 80K sustained IOPS.
MySQL : 5.6.43-84.3-log Percona Server (GPL)
Datasize : 2.2TB

Below is the graph on CPU utilisation from Grafana.

Since the workload is purely reporting (OLAP), we could observe similar types of queries with different ranges. Below is the execution plan of the query; it is a join query over six tables.

Explain Plan:

+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| 1 | SIMPLE | cf | ref | PRIMARY,name_x | name_x | 103 | const | 1 | Using where; Using index |
| 1 | SIMPLE | scf | ref | sip_idx,active_idx,flag_idx | flag_idx | 8 | logis.cf.flagId | 5820 | Using where |
| 1 | SIMPLE | sre | ref | staId,sre_idx1,shelfId_hubId,updateDate_statusId_statId_idx,statId_statusId_x | sre_idx1 | 18 | logis.scf.sipId | 1 | Using index condition; Using where |
| 1 | SIMPLE | scam | eq_ref | mappingId | mappingId | 8 | logis.scf.mapId | 1 | NULL |
| 1 | SIMPLE | ssdm | ref | sipIdIdx,SDetailIdIdx | shipmentIdIdx | 17 | logis.sre.sipId | 1 | NULL |
| 1 | SIMPLE | sd | eq_ref | PRIMARY,mrchIdIdx | PRIMARY | 8 | logis.ssdm.SDId | 1 | Using where |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+

At initial screening this looks normal: it is a reporting (OLAP) query and is bound to run longer. But it has started to bite our system resources (CPU), and the replication lag causes stale or obsolete data in the internal dashboard.

As the execution plan depicts, the query is using indexes and the columns used are properly indexed. Only on table "scf" do we see a scan of 5820 rows; the index used there has low cardinality.

Now we should tweak the optimizer to choose the right index. The optimizer chooses the index based on the statistics collected for the table, which are stored under mysql.innodb_table_stats and mysql.innodb_index_stats.

The default values of innodb_stats_persistent_sample_pages and innodb_stats_transient_sample_pages are 20 and 8 respectively, which is too low for our dataset. This works well for smaller tables, but in our case tables run to a few hundred GB. Since these are dynamic variables, we increased the values globally by roughly 10x:

mysql> set global innodb_stats_persistent_sample_pages=200;
Query OK, 0 rows affected (0.00 sec)

mysql> set global innodb_stats_transient_sample_pages=100;
Query OK, 0 rows affected (0.00 sec)

Below is the definition from the official documentation on these variables,

The number of index pages to sample when estimating cardinality and other statistics for an indexed column, such as those calculated by ANALYZE TABLE. Increasing the value improves the accuracy of index statistics, which can improve the query execution plan, at the expense of increased I/O during the execution of ANALYZE TABLE for an InnoDB table.

Now we have to force index stats recalculation by running ANALYZE TABLE table_name on all the tables involved in the query. Alternatively, you can persist the variables and restart the server to recalculate stats for all tables; we chose the first method since it is less intrusive.
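ANALYZE TABLE accepts several tables at once, so the recalculation for this query can be a single statement. A sketch, using the aliases from the plan above as stand-ins for the real table names:

```sql
ANALYZE TABLE logis.cf, logis.scf, logis.sre, logis.scam, logis.ssdm, logis.sd;
```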

Let us review the execution plan now. We can see reduced row scans and better index usage by the optimizer, as below:

+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| 1 | SIMPLE | cf | ref | PRIMARY,name_x | name_x | 103 | const | 1 | Using where; Using index |
| 1 | SIMPLE | sre | ref | staId,sre_idx1,shelfId_hubId,updateDate_statusId_statId_idx,statId_statusId_x | shelfId_hubId | 9 | const | 2936 | Using where |
| 1 | SIMPLE | scf | ref | sip_idx,active_idx,flag_idx | sip_idx | 22 | logis.sre.shipmentId | 1 | Using index condition; Using where |
| 1 | SIMPLE | scam | eq_ref | mappingId | mappingId | 8 | logis.scf.mappingId | 1 | NULL |
| 1 | SIMPLE | ssdm | ref | shipmentIdIdx,sellerDetailIdIdx | sipIdIdx | 17 | logis.sre.shipmentId | 1 | NULL |
| 1 | SIMPLE | sd | eq_ref | PRIMARY,merchantIdIdx | PRIMARY | 8 | logis.ssdm.sellerDetailId | 1 | Using where |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+

Note that the new index plan applies only to new incoming queries. Within a short span of time the CPU usage dropped drastically and there was a huge boost in performance; please find the graph below. Now the dashboards are faster than ever before.

Key Takeaways:

Below are points to note while setting this variable.

  • Too high a value can result in longer stats calculation time.
  • Too low a value can produce inaccurate stats and lead to the situation discussed above.
  • As the table grows, InnoDB allegedly re-analyzes, i.e., re-calculates stats after roughly 10% growth, so no manual action is needed.
  • Most (smaller) tables get decent stats from sampling the default 20 pages.
  • Tables with uneven data distribution won't benefit from simply raising the default of 20; to tackle that, MySQL 8.0 provides histograms.
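Histograms are collected with ANALYZE TABLE in MySQL 8.0. A minimal sketch (the table and column names follow the example above and are illustrative):

```sql
-- Build a 32-bucket histogram on a skewed column:
ANALYZE TABLE scf UPDATE HISTOGRAM ON flagId WITH 32 BUCKETS;
-- Inspect the collected histogram:
SELECT * FROM INFORMATION_SCHEMA.COLUMN_STATISTICS
 WHERE TABLE_NAME = 'scf' AND COLUMN_NAME = 'flagId';
```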

Automatic Schema Synchronization in NDB Cluster 8.0: Part 1


Data nodes are the distributed, sharded storage core of MySQL NDB Cluster. Its data is usually accessed by MySQL Servers (also called SQL nodes in NDB parlance). The MySQL servers each have their own transactional Data Dictionary (DD) where all the metadata describing tables, databases, tablespaces, logfile groups, foreign keys, and other objects are stored for use by MySQL server.…

Automatic Schema Synchronization in NDB Cluster 8.0: Part 2


In part 1, we took a brief, high-level look at the various protocols and mechanisms used to keep the Data Dictionary (DD) of MySQL servers connected to a MySQL Cluster in synchronization with each other and with the NDB Dictionary.…

InnoDB Flushing in Action for Percona Server for MySQL


As the second part of the earlier post Give Love to Your SSDs – Reduce innodb_io_capacity_max!, we wanted to put together some concepts on how InnoDB flushing works in recent Percona Server for MySQL versions (8.0.x prior to 8.0.19, or 5.7.x). It is important to understand this aspect of InnoDB in order to tune it correctly. This post is a bit long and complex, as it goes very deep into some InnoDB internals.

InnoDB internally handles flush operations in the background to remove dirty pages from the buffer pool. A dirty page is a page that is modified in memory but not yet flushed to disk. This is done to lower the write load and the latency of the transactions. Let’s explore the various sources of flushing inside InnoDB.

Idle Flushing

We already discussed the idle flushing in the previous post mentioned above. When there are no write operations, which means the LSN isn’t moving, InnoDB flushes dirty pages at the innodb_io_capacity rate.

Dirty Pages Percentage Flushing

This source of flushing is a slightly modified version of the old InnoDB flushing algorithm used years ago. If you have been around MySQL for a while, you probably don’t like this algorithm. The algorithm is controlled by these variables:

innodb_io_capacity (default value of 200) 
innodb_max_dirty_pages_pct (default value of 75) 
innodb_max_dirty_pages_pct_lwm (default value of 0 in 5.7 and 10 above 8.0.3)

If the ratio of dirty pages to the total number of pages in the buffer pool is higher than the low water mark (lwm), InnoDB flushes pages at a rate proportional to the actual percentage of dirty pages divided by innodb_max_dirty_pages_pct, multiplied by innodb_io_capacity. If the actual dirty page percentage is higher than innodb_max_dirty_pages_pct, the flushing rate is capped at innodb_io_capacity.
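The proportional rule above can be sketched in a few lines of Python. This is an illustration of the description, not the actual InnoDB source, and it assumes the default variable values:

```python
def legacy_flush_rate(dirty_pct, max_dirty_pct=75.0, lwm=10.0, io_capacity=200):
    """Old dirty-page-percentage flushing heuristic, as described above."""
    if dirty_pct < lwm:
        return 0  # below the low water mark: no percentage-based flushing
    # Flush rate grows proportionally to the dirty-page percentage...
    rate = io_capacity * dirty_pct / max_dirty_pct
    # ...but is capped at innodb_io_capacity once dirty_pct exceeds the max.
    return min(int(rate), io_capacity)

print(legacy_flush_rate(30))  # 80 pages/s  (200 * 30 / 75)
print(legacy_flush_rate(90))  # 200 pages/s (capped at innodb_io_capacity)
```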

 

The main issue with this algorithm is that it is not looking at the right thing. As a result, transaction processing may often freeze in a flush storm because the max checkpoint age is reached. Here's an example from a post written by Vadim Tkachenko back in 2011: InnoDB Flushing: a lot of memory and slow disk.

TPCC New order transaction over time

We can see a sharp drop in NOTP (New Order Transaction per second) at time = 265, and that’s because InnoDB reached the maximum checkpoint age and had to furiously flush pages. This is often called a flush storm. These storms freeze the write operations and are extremely bad for the database operation. For more details about flush storms, please see InnoDB Flushing: Theory and solutions.

Free List Flushing

To speed up reads and page creation, InnoDB tries to always have a certain number of free pages in each buffer pool instance. Without some free pages, InnoDB could need to flush a page to disk before it can load a new one.

This behavior is controlled by another poorly understood variable: innodb_lru_scan_depth. This is a pretty bad name for what the variable controls. Although the name makes sense if you look at the code, for a normal DBA a name like innodb_free_pages_per_pool_target would be clearer. At regular intervals, the oldest pages in the LRU list of each buffer pool instance are scanned (hence the name), and pages are freed until the target value of the variable is reached. If one of these pages is dirty, it is flushed before being freed.

Adaptive Flushing

The adaptive flushing algorithm was a major improvement to InnoDB and it allowed MySQL to handle much heavier write load in a decent manner. Instead of looking at the number of dirty pages like the old algorithm, the adaptive flushing looks at what matters: the checkpoint age. The first adaptive algorithm that we are aware of came from Yasufumi Kinoshita in 2008 while he was working at Percona. The InnoDB plugin 1.0.4 integrated similar concepts, and eventually Percona removed its flushing algorithm because the upstream one was doing a good job.

The following description is valid for Percona Server for MySQL up to 8.0.18. While we were busy writing this post, Oracle released 8.0.19, which introduces significant changes to the adaptive flushing code. That looks like a good opportunity for a follow-up post in the near future…

Some Background

But let’s first take a small step back to put together some concepts. InnoDB stores rows in pages of normally 16KB. These pages are either on disk, in the data files, or in memory in the InnoDB buffer pool. InnoDB only modifies pages in the buffer pool.

Pages in the buffer pool may be modified by queries, and then they become dirty. At commit, the page modifications are written to the redo log, the InnoDB log files. After the write, the LSN (log sequence number) is increased. The dirty pages are not flushed back to disk immediately and are kept dirty for some time. This delayed page flushing is a common performance optimization. The way InnoDB flushes its dirty pages is the focus of this post.

Let’s now consider the structure of the InnoDB Redo Log.

The InnoDB redo log files form a ring buffer

The InnoDB log files form a ring buffer containing unflushed modifications. The above figure shows a crude representation of the ring buffer. The Head points to where InnoDB is currently writing transactional data. The Tail points to the oldest unflushed data modification. The distance between the Head and Tail is the checkpoint age. The checkpoint age is expressed in bytes. The size and the number of log files determine the max checkpoint age, and the max checkpoint age is approximately 80% of the combined size of the log files.

Write transactions are moving the Head forward while page flushing is moving the Tail. If the Head moves too fast and there is less than 12.5% available before the Tail, transactions can no longer commit until some space is freed in the log files. InnoDB reacts by flushing at a high rate, an event called a Flush storm. Needless to say, flush storms should be avoided.
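A quick numeric sketch of these thresholds, assuming example log settings (the variable values below are illustrative, not defaults):

```python
# Sketch: max checkpoint age from the redo log configuration (~80% of the
# combined size of the log files, per the description above).
innodb_log_file_size = 512 * 1024 ** 2   # 512MB per log file (example value)
innodb_log_files_in_group = 2

combined_log_size = innodb_log_file_size * innodb_log_files_in_group
max_checkpoint_age = 0.8 * combined_log_size

# Commits stall when less than 12.5% of the log space remains ahead of the tail.
async_flush_threshold = combined_log_size * (1 - 0.125)

print(int(max_checkpoint_age), int(async_flush_threshold))
```

With 2 x 512MB log files, the max checkpoint age is roughly 820MB of unflushed modifications before InnoDB starts flushing furiously.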

How the Adaptive Flushing Algorithm Works

The adaptive flushing algorithm is controlled by the following variables:

innodb_adaptive_flushing  (default ON)
innodb_adaptive_flushing_lwm  (default 10)
innodb_flush_sync (default ON)
innodb_flushing_avg_loops (default 30)
innodb_io_capacity (default value of 200)
innodb_io_capacity_max (default value of at least 2000)
innodb_cleaner_lsn_age_factor (Percona server only, default high_checkpoint)

The goal of the algorithm is to adapt the flushing rate (speed of the Tail) to the evolution of the checkpoint age (speed of the Head). It starts when the checkpoint age is above the adaptive flushing low water mark, by the default 10% of the max checkpoint age. Percona Server for MySQL offers two algorithms: Legacy and High Checkpoint.

The Legacy algorithm is given below. Notice the power of 3/2 on the age factor and the 7.5 denominator.

Legacy age factor

 

It also offers the High Checkpoint algorithm shown below:

Percona High-checkpoint age factor

This time the age power factor is 5/2 and the denominator is 700.5. It is important to note that in both equations, innodb_io_capacity (ioCap) appears as the denominator in a ratio with innodb_io_capacity_max (ioCapMax). If we plot both equations together, we have:

Flushing pressure for the legacy and high-checkpoint algorithm

The graph has been generated for a ratio ioCapMax/ioCap of 10 like with the default values. The Percona High Checkpoint starts slowly but then it increases rapidly. This allows for more dirty pages (see the post we previously discussed) and it is normally good for performance. The returned percentage value can be much higher than 100.
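Since the equation figures above may not survive every feed, here is a small sketch of the two curves, assuming the shapes described in the text (age factor to the power 3/2 over 7.5 for Legacy, 5/2 over 700.5 for High Checkpoint, both scaled by the ioCapMax/ioCap ratio; lsnAgeFactor is taken as the checkpoint age as a percentage of the max checkpoint age):

```python
# Sketch of the two adaptive-flushing pressure curves described above.
# Assumption: lsn_age_factor is the checkpoint age as a percentage (0-100)
# of the max checkpoint age.

def legacy_pct(lsn_age_factor, io_cap=200, io_cap_max=2000):
    """Legacy algorithm: age factor to the power 3/2, denominator 7.5."""
    return (io_cap_max / io_cap) * lsn_age_factor ** 1.5 / 7.5

def high_checkpoint_pct(lsn_age_factor, io_cap=200, io_cap_max=2000):
    """Percona High Checkpoint: age factor to the power 5/2, denominator 700.5."""
    return (io_cap_max / io_cap) * lsn_age_factor ** 2.5 / 700.5

for age in (10, 50, 90, 100):
    print(age, round(legacy_pct(age)), round(high_checkpoint_pct(age)))
```

With the default ioCapMax/ioCap ratio of 10, High Checkpoint stays well below Legacy at low checkpoint ages (allowing more dirty pages) and overtakes it only as the age approaches the maximum; both can return far more than 100%.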

Average Over Time

So far, only the checkpoint age was used. While this is good, the goal of the algorithm is to flush pages in a way that the tail of the redo log ring buffer moves at about the same speed as the head. Approximately every innodb_flushing_avg_loops seconds, the rate of pages flushed and the progression of the head of the redo log are measured, and the new values are averaged with the previous ones. The goal here is to give some inertia to the algorithm, to damp the changes. A higher value of innodb_flushing_avg_loops makes the algorithm slower to react, while a smaller value makes it more reactive. Let’s call these quantities avgPagesFlushed and avgLsnRate.

Pages to Flush for the avgLsnRate

Based on the avgLsnRate value, InnoDB scans the oldest dirty pages in the buffer pool and calculates the number of pages whose oldest modification is within avgLsnRate of the tail. Since this is calculated every second, the number of pages returned is what needs to be flushed to maintain the correct rate. Let’s call this number pagesForLsnRate.

Finally…

We now have all the parts we need. The actual number of pages that will be flushed is given by:

pagesToFlush = (pctOfIoCapToUse/100 × ioCap + avgPagesFlushed + pagesForLsnRate) / 3
This quantity is then capped to ioCapMax. As you can see, pctOfIoCapToUse is multiplied by ioCap. If you look back at the equations giving pctOfIoCapToUse, they have ioCap at the denominator. The ioCap cancels out and the adaptive flushing algorithm is thus independent of innodb_io_capacity, as only innodb_io_capacity_max matters. There could also be more pages flushed if innodb_flush_neighbors is set.
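A small sketch makes the cancellation visible numerically. The three-way average below is an assumption modeled on the InnoDB source; the point is only that changing ioCap leaves the result unchanged:

```python
# Sketch: the flushing pressure pct has ioCap in its denominator, so once it
# is multiplied back by ioCap, only ioCapMax remains.  Assumed shapes.

def pct_of_io_cap(lsn_age_factor, io_cap, io_cap_max):
    # High Checkpoint shape: (ioCapMax/ioCap) * age^(5/2) / 700.5
    return (io_cap_max / io_cap) * lsn_age_factor ** 2.5 / 700.5

def pages_to_flush(lsn_age_factor, avg_pages_flushed, pages_for_lsn_rate,
                   io_cap, io_cap_max):
    pct = pct_of_io_cap(lsn_age_factor, io_cap, io_cap_max)
    n = (pct / 100 * io_cap + avg_pages_flushed + pages_for_lsn_rate) / 3
    return min(n, io_cap_max)  # capped at ioCapMax

# Same checkpoint age, same averages, very different innodb_io_capacity values:
a = pages_to_flush(50, 300, 400, io_cap=200, io_cap_max=2000)
b = pages_to_flush(50, 300, 400, io_cap=1000, io_cap_max=2000)
print(a, b)  # effectively identical: ioCap cancels out
```

This is why, under adaptive flushing, tuning innodb_io_capacity has essentially no effect while innodb_io_capacity_max does.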

Can InnoDB Flush Pages at a Rate Higher Than innodb_io_capacity_max?

Yes, if innodb_flush_sync is ON, InnoDB is authorized to go beyond if the max checkpoint age is reached or almost reached. If set to OFF, you’ll never go beyond innodb_io_capacity_max. If the latency of your read queries is critical, disabling innodb_flush_sync will prevent an IO storm, but at the expense of stalling the writes.

InnoDB page_cleaner Error Message

Error messages like

[Note] InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.)

are rather frequent. They basically mean the hardware wasn’t able to flush innodb_io_capacity_max pages per second. In the above example, InnoDB tried to flush 1,438 pages but the spinning disk is able to perform only 360 per second. Thus, the flushing operation which was supposed to take 1 second ended up taking 4 seconds. If you really think the storage is able to deliver the number of write iops stated by innodb_io_capacity_max, then it may be one of these possibilities:

  • innodb_io_capacity_max represents a number of pages to be flushed, flushing a page may require more than one IO, especially when the number of tablespaces is large.
  • A spike of read IOPs has competed with the flushing.
  • The device had a write latency spike. Garbage collection on SSDs can cause that, especially if the SSD is rather full.
  • The doublewrite buffer had contention. Try Percona Server for MySQL with the parallel doublewrite buffer feature.
  • There may not be enough page cleaner threads to fully utilize your hardware.
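As a quick sanity check, the flush rate implied by the sample note above can be recomputed from the numbers in the message itself:

```python
# Recompute the effective flush rate reported by the page_cleaner note above.
flushed_pages = 1438   # "flushed=1438"
loop_time_ms = 4013    # "1000ms intended loop took 4013ms"

rate = flushed_pages / (loop_time_ms / 1000)  # pages per second actually achieved
print(round(rate))  # roughly the ~360 pages/s the spinning disk could sustain
```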

Tuning InnoDB

Now that we understand how InnoDB flushes dirty pages to disk, the next obvious step is to tune it. InnoDB tuning will be covered in a follow-up post, so stay tuned.

How to Install MySQL Enterprise Edition on Docker and Monitor it with MySQL Enterprise Monitor?


Introduction


Before we talk about installing MySQL inside Docker, it's important to know:

What is Docker?
- Docker is a tool designed to create, deploy, and run an application anywhere.
- It allows us to package up an application with all of its requirements, such as libraries and other dependencies, and ship it all as one package.

Who uses Docker?
- Developers: Docker enables developers to build applications without spending much time on IT infrastructure.
- Sysadmins: Docker enables sysadmins to streamline software delivery, such as developing and deploying bug fixes and new features without roadblocks.
- Enterprises: Docker works in the cloud and on premises, and supports both traditional and microservices deployments.

Why Docker?
- Easily adapts to your working environment.
- Simple to use.
- Eliminates friction in the development life cycle.

More info at: https://docs.docker.com/engine/docker-overview/

Let's Install MySQL Enterprise Edition 8.0.19 with Docker

Warning: The MySQL Docker images maintained by the MySQL team are built specifically for Linux platforms. Other platforms are not supported, and users running these MySQL Docker images on them do so at their own risk. See the discussion here for some known limitations of running these containers on non-Linux operating systems.

More about MySQL Enterprise Edition:- https://www.mysql.com/products/enterprise/

Step 1:- Download the MySQL EE binaries from the My Oracle Support (MOS) portal, under Patches & Updates.
Step 2:- Download the binaries by clicking on the product name, and unzip to obtain the tarball inside.

              (See step 1, highlighted in yellow.)
Step 3:- Load the image by running the command below:
             # docker load -i mysql-enterprise-server-8.0.19.tar

Suppose you have downloaded the mysql-enterprise-server-8.0.19.tar file onto a Windows laptop: push it to the Linux machine where Docker is running, then go to that directory and load the
mysql-enterprise-server-8.0.19.tar file.

Step 4:- To verify:
#docker images

Step 5:- Starting MySQL Server Instance
#docker run --name MySQLEnterpriseContainer -d  -p 3306:3306  mysql/enterprise-server:8.0


The --name option, for supplying a custom name for your server container, is optional; if no container name is supplied, a random one is generated.

mysql/enterprise-server:8.0 :- image_name:tag

Note:- Don’t give the full version 8.0.19; the tag has to be 8.0, or else the error below will be generated.



Step 6:- To verify
#docker ps


Step 7:- Get the generated random password by typing the command below:
#docker logs MySQLEnterpriseContainer 2>&1 | grep GENERATED


Step 8:-Connecting to MySQL Server from within the Container
docker exec -it MySQLEnterpriseContainer mysql -uroot -p

Enter Generated Password : HOpnuMIxibVYMijv3syRYK4KjEc

Step 9:-Change Temporary Password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MySQL8.0';

Step 10:- Create a table and insert records to test...

Create database test;

Create table test.sales(Empname Varchar(20), CountryName Varchar(20));

Insert into test.sales select 'Ram','Delhi';

Insert into test.sales select 'Radha','Delhi';

Insert into test.sales select 'Rakesh','Mumbai';

Insert into test.sales select 'Rajesh','Mumbai';


Extra Commands to play...


To get into the Docker container (to find the data directory and base directory, run Linux bash commands, etc.):

shell> docker exec -it MySQLEnterpriseContainer bash

bash-4.2#

ls /var/lib/mysql

To stop the MySQL Server container we have created, use this command:

docker stop MySQLEnterpriseContainer

docker start MySQLEnterpriseContainer

docker restart MySQLEnterpriseContainer

To delete the MySQL container, stop it first, and then use the docker rm command:

docker stop MySQLEnterpriseContainer

docker rm MySQLEnterpriseContainer

Accessing Docker MySQL Database from Physical Host through MySQL Workbench Tool

Assume MySQL Enterprise Workbench is installed on Windows Machine : 192.168.0.3
Docker MySQL is Installed on Docker Host : 192.168.227.128 
Step 1:- Make sure MySQL is able to communicate with the remote host.
update mysql.user set host='%' where user='root';


flush privileges;

Step 2:- Make sure the Docker MySQL port is published outside the host.
                (See Step 5: -p 3306:3306)

Step 3:-Connect MySQL Enterprise Workbench to Docker MySQL DB.
             

Monitoring Docker MySQL Database from MySQL Enterprise Monitor

Assume MySQL Enterprise Monitor is installed on a Windows machine and is already monitoring many on-premises DB and cloud instances, and NOW we want to also monitor the MySQL DB which is running in Docker.
Assume MySQL Enterprise Workbench is installed on Windows Machine : 192.168.0.3

Docker MySQL is Installed on Docker Host : 192.168.227.128 

Login to MySQL Enterprise Monitor-->Add Instances.

To Monitor SQL Statements:-

Know More about MySQL Enterprise Monitor:-

Conclusion:-


Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

Running MySQL in Docker is quite easy, and whenever the environment demands running multiple MySQL instances on a single server, Docker is the best fit.

Making Sense of MySQL Group Replication Consistency Levels

MySQL Group Replication Consistency

From the initial release, one of the biggest complaints I had about Group Replication was that it allowed “stale” reads and there was no way to prevent them or to even know that you read “stale” data. That was a huge limitation. Thankfully, Oracle released features to control the consistency levels, and it was exactly a year ago! I don’t know about you, but I personally was confused by the naming: group_replication_consistency=’AFTER’ or ‘BEFORE’.

So now I want to try to make sense of it and share my understanding (even if it is one year later).

Setup:

We will start with the default group_replication_consistency=’EVENTUAL’ and work from there. So let’s consider a very simple table:

CREATE TABLE `t1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `cnt` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
)

With over 10 million rows:

select count(*) from t1;
+----------+
| count(*) |
+----------+
| 10485760 |
+----------+

And we will do a very simple action. On Node 1 we will update the table and on Node 2 we will read data.

Node 1:

UPDATE t1 SET cnt=11;
Query OK, 10485760 rows affected (1 min 20.23 sec)
Rows matched: 10485760  Changed: 10485760  Warnings: 0

And immediately after that on the Node 2:

select cnt from t1 where id=10001;
+------+
| cnt  |
+------+
|   10 |
+------+
1 row in set (0.00 sec)

There are a few points to note:

  • On Node 1 it took 1min 20 sec for the transaction to execute
  • On Node 2 we got the result immediately, but it was essentially a “stale” read: the data had already been updated, but we got an old version of it.

How can we get a better outcome:

group_replication_consistency=’BEFORE’;

Let’s look into the consistency level ‘BEFORE’. It says that a transaction on Node 2 will wait until the transactions previously committed on Node 1 are also committed on Node 2. So let’s execute set session group_replication_consistency=’BEFORE’; on Node 2 and run the test again.

Node 1 (still group_replication_consistency=’EVENTUAL’;):

UPDATE t1 SET cnt=12;
Query OK, 10485760 rows affected (1 min 18.66 sec)
Rows matched: 10485760  Changed: 10485760  Warnings: 0

And after this, on Node 2 (with set session group_replication_consistency=’BEFORE’;):

select cnt from t1 where id=10001;
+------+
| cnt  |
+------+
|   12 |
+------+
1 row in set (1 min 11.45 sec)

So there are very notable changes:

  • The transaction on Node 2 returned the correct result now.
  • But now it took 1 min 11 sec to return it (instead of 0 sec as in the previous case). Basically, the transaction waited for the transaction from Node 1 to be applied on Node 2.
  • The execution time on Node 1 did not change.

This mode allowed us to do exactly what we wanted – prevent stale reads. Great outcome! So what about group_replication_consistency=’AFTER’ ?

group_replication_consistency=’AFTER’;

This mode says that a transaction on Node 1 will not return OK until it makes sure that other nodes applied the transaction. To see it in action, we will put Node 1 into group_replication_consistency=’AFTER’ and Node 2 into group_replication_consistency=’EVENTUAL’;.

Node 1 (in group_replication_consistency=’AFTER’):

UPDATE t1 SET cnt=13;
Query OK, 10485760 rows affected (3 min 0.46 sec)
Rows matched: 10485760  Changed: 10485760  Warnings: 0

After that, Node 2 (in group_replication_consistency=’EVENTUAL’):

select cnt from t1 where id=10001;
+------+
| cnt  |
+------+
|   13 |
+------+
1 row in set (0.00 sec)

Here the situation is different. Now the execution time on Node 1 doubled, as the transaction waits to be committed on all nodes and after that on Node 2 the execution is immediate.

This mode still avoids “stale” reads, but in this case, we shifted the wait time from Node 2 to Node 1, and this is how we can view the difference between ‘BEFORE’ and ‘AFTER’ consistency modes in Group Replication.

Both modes provide a consistent view, but:

  • In ‘BEFORE’ mode, the readers on secondary nodes are blocked, waiting for the moment when the consistent view is available, and
  • In ‘AFTER’ mode, the writers are blocked until the other nodes get a consistent view.

So which mode should you choose? Actually, I think it is good to have an option here. You can choose to put the wait time on your readers or on your writers; the decision depends on how your application is designed.

Upgrade MySQL InnoDB Cluster 8.0.18 to 8.0.19 (With MySQL InnoDB Cluster MetaData V2 Change)

MySQL InnoDB Cluster 8.0.19 has a new version of the MySQL InnoDB Cluster metadata.
MySQL Shell 8.0.19 requires V2 metadata for full functionality.

This is a tutorial for a sample upgrade from MySQL 8.0.18 to MySQL 8.0.19.

Reference :
https://mysqlserverteam.com/upgrading-mysql-innodb-cluster-metadata/


The following InnoDB Cluster is running with MySQL 8.0.18.


Running MySQL Shell 8.0.19 against MySQL InnoDB Cluster 8.0.18 (whose metadata version is 1.0.1), the command dba.getCluster() results in a warning:

WARNING: No cluster change operations can be executed because the installed metadata version 1.0.1 is lower than the version required by Shell which is version 2.0.0. Upgrade the metadata to remove this restriction. See \? dba.upgradeMetadata for additional details.





<cluster>.listRouters() lists the routers registered with the InnoDB Cluster.


To illustrate upgrading MySQL InnoDB Cluster from 8.0.18 to 8.0.19, here are the steps:
1. Upgrade ALL Routers to New Version (8.0.19)
2. Upgrade the MySQL InnoDB Cluster Metadata to V2
3. Upgrade individual Server from 8.0.18 to 8.0.19

What would happen if we upgraded the MySQL InnoDB Cluster metadata without upgrading the routers?
1. Connect with the cluster admin user using MySQL Shell
(where the user was possibly created using dba.configureInstance("<server URL>", {clusterAdmin:'<admin user>', clusterAdminPassword:'<the password>'}))

2. Execute dba.upgradeMetadata({dryRun:true})

The user created with MySQL 8.0.18 might not have enough privileges to work with MySQL Shell 8.0.19.

Log in with a super user (e.g. root) via the normal mysql client (or MySQL Shell) to the primary node (e.g. primary:3310), and execute the GRANT statements shown in the report notes.


Re-run dba.upgradeMetadata({dryRun:true})


This shows the list of routers which need to be upgraded before the metadata can be upgraded.


Upgrading MySQL Router 8.0.18 to 8.0.19
For the purposes of this tutorial, the 'upgrade' of MySQL Router to 8.0.19 is simply done by re-bootstrapping with MySQL Router 8.0.19. This recreates the configuration and creates a NEW router account.

Once all routers are upgraded, the metadata can be upgraded. Running in dryRun mode shows:

The privileges for the MySQL Router account are missing. This is because the MySQL Router 8.0.19 bootstrap created a new user. The OLD router account is still valid.



With MySQL Router 8.0.19, there is a new option, --account, to define which user should be reused. All routers can then share the same account, without an individual account being created for each.


UPGRADE Metadata to V2
Using MySQL Shell 8.0.19, perform :

MySQL [primary ssl] JS> dba.upgradeMetadata()



The metadata has been changed to V2.


Upgrade MySQL Server from 8.0.18 to MySQL 8.0.19
1. The upgrade should start with the secondary servers, and the primary server should be upgraded last.
2. Upgrading to MySQL 8.0.19 is simply a matter of starting the server with the MySQL 8.0.19 binary (mysqld). mysql_upgrade has been deprecated since MySQL 8.0.16; when mysqld is started against an older database version, it automatically upgrades the database to its own version.
3. By default, a MySQL server configured as a member node with MySQL Shell (<cluster>.addInstance()...) is started with group_replication_start_on_boot=true. This tells the server to join the InnoDB Cluster when it starts. During the upgrade process, it may be worth changing this setting (group_replication_start_on_boot) to false so that we can validate the upgrade before the server rejoins the InnoDB Cluster.

Connect to the first secondary server:
mysql> select * from performance_schema.persisted_variables where variable_name like 'group_replication_start_on_boot%';


Change the setting to FALSE
mysql> set persist group_replication_start_on_boot=false;




Shutdown the Server

Change the configuration to use the new MySQL 8.0.19 basedir (if needed),
e.g. basedir=/usr/local/mysql8019



Start up the MySQL server with the new MySQL 8.0.19 binary (mysqld).

Check the error log for any failure, and double-check that the server is running 8.0.19:
mysql> select @@version;
+-------------------+
| @@version         |
+-------------------+
| 8.0.19-commercial |
+-------------------+
1 row in set (0.00 sec)


Persist group_replication_start_on_boot=true so that the server, now running the new version 8.0.19, can rejoin on the next startup. The InnoDB Cluster will therefore temporarily be a mix of versions 8.0.18 and 8.0.19. While the versions are mixed, the higher-version MySQL servers can only act as secondary servers.




Upon restart of the MySQL server, the status of the MySQL InnoDB Cluster shows as follows:



Repeat the Process for the next Secondary Server and finally the Primary Server.

When the primary server is shut down, a new primary server is elected from the two servers already running MySQL 8.0.19. When the former primary is upgraded and rejoins, it takes the Secondary member role.



Finally, the status goes back to ALL online servers with MySQL 8.0.19.

Done!!!








MySQL Document Store Tutorial

When I tell people that they can use MySQL without SQL they tend to be skeptical. But with the MySQL Document Store you can do just that with a new NoSQL API, and in this case there is no structured query language. At the pre-FOSDEM MySQL Days (which is sold out and there is a waiting list) I will present my tutorial on using the Document Store. Those in my session will be able to see how to use the MySQL Shell (mysqlsh) to connect to a MySQL server and save data without having to do the many things a DBA used to have to do in the past, such as normalizing data, setting up relations, and several other tasks. Plus the schema-less Document Store means you can alter your data needs without having to do an endless series of ALTER TABLEs.
MySQL without SQL
The MySQL Document Store lets you save and retrieve data without needing to use structured query language (SQL).
Part of the tutorial is a workbook and slides that I should be able to publish if they are well received.  And maybe a video for those who will not be able to make it to Brussels.

Monitoring MySQL using MySQL Shell ( \show & \watch )


We know that MySQL Shell is the advanced client tool for communicating with the MySQL server. MySQL Shell has a lot of features, like InnoDB Cluster control, InnoDB ReplicaSet, MySQL Shell utilities, MySQL server management, etc. Today I learned that MySQL Shell helps a lot with monitoring as well (queries, threads, resource consumption, locking).

In this blog I am going to explain how to use MySQL Shell to monitor your server.

MySQL Shell provides two handy commands, \show and \watch, for monitoring the MySQL server and generating reports.

\show : Executes a report with the provided options

\watch : Executes a report in a loop with the provided options

\show with thread example :

\show with query example :

You can execute any query within the double quotes.

\show with threads example :

As shown in the screenshot, there are two types of threads:

  • --foreground
  • --background

Similarly, you can use the \watch command to execute the reports in a loop.

All good, now I am going to show some examples.

  1. How to find the top three MySQL threads consuming the most memory for a particular user?

tid : thread id
cid : connection id
memory : the number of bytes allocated by the thread
started : time when thread started to be in its current state
user : the user who issued the statement, or NULL for a background thread

cmd : \show threads --foreground -o tid,cid,user,memory,started --order-by=memory --desc --where "user = 'app_user'" --limit=3

2. How to find the blocking and blocked threads ?

Consider that I started the transaction below in terminal 1 but did not commit it.

In terminal 2, I am trying to update the same value,

root@localhost:sakthi>update sakthi_j set id=10;

Now, let’s execute \show with the required options,

tidle : the time the thread has been idle
nblocked : the number of other threads blocked by the thread
nblocking : the number of other threads blocking the thread

After committing the transaction, there are no more blocking transactions. Makes sense?

cmd : \show threads --foreground -o tid,cid,tidle,nblocked,nblocking,digest,digesttxt --where "nblocked=1 or nblocking=1"

3. How to find the top 10 threads with the most I/O events?

ioavgltncy : the average wait time per timed I/O event for the thread
ioltncy : the total wait time of timed I/O events for the thread
iomaxltncy : the maximum single wait time of timed I/O events for the thread
iominltncy : the minimum single wait time of timed I/O events for the thread
nio : the total number of I/O events for the thread

cmd : \show threads --foreground -o tid,ioavgltncy,ioltncy,iomaxltncy,iominltncy,nio --order-by=nio --desc --limit=10

In this way, you can find more details about query statistics, JOIN information, system resource utilization, etc.

I hope this blog helps someone who is looking at MySQL Shell to effectively manage the MySQL server. I will come up with my next blog soon.

Thanks !!!

Securing MySQL Binary logs at Rest in MySQL 8.0


We will have a look at a new feature in MySQL 8.0 called binlog encryption. This feature is available from MySQL version 8.0.14 onwards.

Our previous blogs discussed tablespace encryption in MySQL and Percona Server. At Mydbops, we give high importance to achieving security compliance.

The binary log records changes made to the databases so that it can be used to replicate the same to the slaves, and also for point-in-time recovery (PITR). This means that if someone has access to the binary logs, they can reproduce our entire database in many forms. As DBAs, we need to make sure that the binary log files are protected from users who have access to the file system, and the log files may also need to be encrypted to follow the security compliance requirements of some clients.

This new feature comes to the rescue in satisfying those requirements. Now we will have a look at how to use and maintain it.

Note: To explore this feature, I have installed MySQL 8.0.18 in our testing environment.

Below are the topics that we are going to discuss in this blog:

  1. Enabling Encryption
  2. Rotating Key Manually
  3. High Level Architecture
  4. To access the encrypted binary log
  5. To decrypt the encrypted binary log

Enabling Encryption:

Let’s start by enabling binlog encryption. First, let us list the current binary logs and their status.

mysql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000001 |       474 | No        |
+---------------+-----------+-----------+
1 row in set (0.02 sec)

MySQL Server has one binary log, and it is not encrypted by default.

To proceed further, make sure your user has the SUPER or BINLOG_ENCRYPTION_ADMIN privilege so you can enable the encryption online.

mysql> show global variables like 'binlog_encryption';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_encryption | OFF   |
+-------------------+-------+
1 row in set (0.00 sec)
mysql> set global binlog_encryption=on;
 Query OK, 0 rows affected (0.18 sec)

mysql> show global variables like 'binlog_encryption';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_encryption | ON    |
+-------------------+-------+
1 row in set (0.17 sec)

mysql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000001 |       497 | No        |
| binlog.000002 |       199 | No        |
| binlog.000003 |       667 | Yes       |
+---------------+-----------+-----------+
3 rows in set (0.03 sec)

Now, we can observe that the latest binary log is encrypted. Once the encrypted binary logs are the only ones we need to keep on the server, we can safely purge (remove) the older, unencrypted ones.

mysql> purge binary logs to 'binlog.000003';
Query OK, 0 rows affected (0.07 sec)

mysql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000003 |       667 | Yes       |
+---------------+-----------+-----------+
1 row in set (0.00 sec)

Currently, the server has only encrypted binary logs. Also, we can observe the increase in size of the (empty) encrypted binary log compared with unencrypted ones. This is because of the 512-byte encryption header; the header is never replicated.

Rotating Encryption Key:

Binary log encryption is in place, and now it is time to rotate the encryption key. This must be done periodically to comply with security requirements. We can rotate the encryption key manually.

mysql> alter instance rotate binlog master key;
Query OK, 0 rows affected (0.36 sec)

mysql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000003 |       711 | Yes       |
| binlog.000004 |       667 | Yes       |
+---------------+-----------+-----------+
2 rows in set (0.00 sec)

The below process is performed by the server while rotating the key.

  1. The binary and relay log files are flushed (rotated).
  2. A new binary log encryption key is generated with the next available sequence number, stored on the keyring, and used as the new binary log master key.
  3. Binary log encryption keys that are no longer in use for any files after the re-encryption process are removed from the keyring.

We also have one more variable, binlog_rotate_encryption_master_key_at_startup, which ensures that the binary log encryption key is rotated at server startup.
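For reference, a minimal configuration sketch to make these settings persistent across restarts (the keyring_file plugin and the path shown here are assumptions; binlog encryption requires some keyring plugin or component to be loaded early):

```ini
[mysqld]
# A keyring plugin must be available before binlog encryption can work
early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

binlog_encryption = ON
# Optionally rotate the binary log master key on every server startup
binlog_rotate_encryption_master_key_at_startup = ON
```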

High-Level Architecture:

Binlog Encryption is designed to use two-tier encryption. 

  1. File Passwords
  2. Replication Encryption key 
Binlog Encryption

The File Password is used to encrypt/decrypt binary log content and Replication Encryption Key is used to encrypt/decrypt the file password in the encrypted binary log header.

A single replication encryption key may be used to encrypt/decrypt many binary and relay log file passwords, while a file password is intended to encrypt/decrypt a single binary or relay log file.

Screenshot 2019-12-23 at 8.54.19 AM

Multiple file passwords are protected by a single replication encryption key, whereas each file password is used only for its own log file.

Okay, what happens if the encryption key REK1 is rotated?

Screenshot 2019-12-23 at 8.54.40 AM

The server generates a new replication encryption key REKj and iterates over all encrypted log files to re-encrypt their passwords (iterating from the last file to the first), overwriting each encrypted file header with a new one. The new encryption key can then decrypt all the existing file passwords.
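To make the two-tier design concrete, here is a toy sketch (the XOR "cipher" below is purely illustrative; the real implementation uses AES): rotating the master key rewrites only the small encrypted headers, never the bulk log content.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# One file password per log file, all protected by a single master key.
master_key = os.urandom(16)
logs = {}
for name in ("binlog.000003", "binlog.000004"):
    file_password = os.urandom(16)
    logs[name] = {
        "header": xor(file_password, master_key),  # encrypted file password
        "body": xor(b"log content of " + name.encode(), file_password),
    }

# Key rotation: re-encrypt only the headers with the new master key.
new_master_key = os.urandom(16)
for entry in logs.values():
    file_password = xor(entry["header"], master_key)       # decrypt with old key
    entry["header"] = xor(file_password, new_master_key)   # re-encrypt header

# The bulk content is still readable through the new master key.
for name, entry in logs.items():
    pw = xor(entry["header"], new_master_key)
    assert xor(entry["body"], pw) == b"log content of " + name.encode()
```

This is why rotating the master key is cheap even with many large log files: only the 512-byte headers are rewritten.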

Comparing an encrypted log file with an unencrypted one:

Screenshot 2019-12-23 at 8.44.10 AM

The only difference is the binlog header. 

Here is the detailed header of the encrypted binlog:

Screenshot 2019-12-23 at 9.48.46 AM

Magic Number :

It is needed to distinguish an encrypted binary log file from an unencrypted one.

An encrypted binlog file starts with 0xFD62696E; an unencrypted binlog file starts with 0xFE62696E.

From an encrypted log file:

[root@testing mysql]# hexdump binlog.000003 -n 4 -e '/1 "%02X"'
FD62696E

From an unencrypted log file:

[root@gr1 mysql]# hexdump binlog.000001 -n 4 -e '/1 "%02X"'
FE62696E
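The magic-number check above can be scripted. Here is a small sketch (the helper name and demo file paths are our own; the two magic numbers are the documented ones):

```shell
# Classify a binlog file as encrypted or unencrypted by its 4-byte magic number.
binlog_encryption_status() {
  magic=$(hexdump -n 4 -e '/1 "%02X"' "$1")
  case "$magic" in
    FD62696E) echo "encrypted" ;;
    FE62696E) echo "unencrypted" ;;
    *)        echo "not a binlog" ;;
  esac
}

# Demo files carrying the two documented magic numbers (0xFD/0xFE + "bin").
printf '\375bin' > /tmp/enc.bin
printf '\376bin' > /tmp/plain.bin

binlog_encryption_status /tmp/enc.bin    # encrypted
binlog_encryption_status /tmp/plain.bin  # unencrypted
```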

Replication logs encryption version:

The current value is 1. It may change in the future.

Replication encryption key ID:

The key ID of the replication master key that encrypted the file password. It is a combination of MySQLReplicationKey, UUID, and SEQ_NO, in the following format:

MySQLReplicationKey_{UUID}_{SEQ_NO}

where MySQLReplicationKey is the prefix, UUID is the UUID of the MySQL server that generated the key, and SEQ_NO is the global replication master key sequence number.

Here is an example:

                                      |<-----Keyword------|<---------------UUID--------------->|SEQ_NO|                        
Keyring key ID for 'binlog.000003' is 'MySQLReplicationKey_d1deace2-24cc-11ea-a1db-0800270a0142_2'
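Since the format is fixed, a key ID can be split into its components with plain shell parameter expansion. A small sketch, using the key ID from the example above:

```shell
# Split a replication key ID of the form MySQLReplicationKey_{UUID}_{SEQ_NO}.
key_id='MySQLReplicationKey_d1deace2-24cc-11ea-a1db-0800270a0142_2'

prefix=${key_id%%_*}        # everything before the first underscore
seq_no=${key_id##*_}        # everything after the last underscore
uuid=${key_id#*_}           # drop the prefix...
uuid=${uuid%_*}             # ...and the sequence number

echo "prefix=$prefix"
echo "uuid=$uuid"
echo "seq_no=$seq_no"
```

This works because the server UUID itself contains no underscores.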

Now let us try rotating the encryption key.

mysql> alter instance rotate binlog master key;
Query OK, 0 rows affected (0.12 sec)

After successful rotation, the encryption key is 

Keyring key ID for 'binlog.000003' is 'MySQLReplicationKey_d1deace2-24cc-11ea-a1db-0800270a0142_3'

We can observe that the SEQ_NO was incremented by 1. It keeps incrementing on every encryption key rotation.

Encrypted File Password:

It is used to generate the key for encrypting/decrypting the binary log content.

IV for encrypting file password:

It is used together with the replication master key to encrypt the file password.

Padding:

Unused header space is filled with 0.

How to access the encrypted binary log using the mysqlbinlog utility?

Sometimes it is necessary to decode the binary log, for PITR or to understand the write pattern. mysqlbinlog cannot directly decode an encrypted binlog, as it does not have access to the keyring file.

For example:

[root@testing mysql]# mysqlbinlog binlog.000003
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
ERROR: Reading encrypted log files directly is not supported.
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
[root@testing mysql]#

But it is possible to read it by making a request to the MySQL server:

[root@testing mysql]# mysqlbinlog -R --host localhost --user root binlog.000003
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#191222 15:28:57 server id 1  end_log_pos 124 CRC32 0xcb35f1c9  Start: binlog v 4, server v 8.0.18 created 191222 15:28:57
BINLOG '
uYv/XQ8BAAAAeAAAAHwAAAAAAAQAOC4wLjE4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEwANAAgAAAAABAAEAAAAYAAEGggAAAAICAgCAAAACgoKKioAEjQA
CgHJ8TXL
'/*!*/;
# at 124
#191222 15:28:57 server id 1  end_log_pos 155 CRC32 0xb7909c71  Previous-GTIDs
# [empty]
# at 155
#191223  1:55:30 server id 1  end_log_pos 199 CRC32 0x46148c5f  Rotate to binlog.000004  pos: 4
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

How to decrypt an encrypted binary log file?

Manual decryption of an encrypted binary log is possible when the value of the key that encrypted its file password is known. We can use this excellent post https://mysql.wisborg.dk/2019/01/28/automatic-decryption-of-mysql-binary-logs-using-python/ to decrypt the binary log. To use that script, we only need access to the keyring file.

Note: Even if the binary log is not enabled on the server, binlog encryption can still be used to encrypt the relay log files on replica servers.


A Legacy Behavior of MySQL Corrupting Restored Backups (replicate-same-server-id = OFF)

In my previous post (Puzzled by MySQL Replication), I described a weird, but completely documented, behavior of replication that had me scratching my head for hours because it was causing data corruption. I did not give too many details then, as I also wanted to let you scratch your head if you wished. In this post, I describe this behavior in more detail. But first I need to apologize to

Deletes are fast and slow in an LSM

In an LSM, deletes are fast for the deleter but can make the queries that follow slower. The problem is that too many tombstones can get in the way of a query, especially a range query.

A tombstone must remain in the LSM tree until any keys it deletes have been removed from the LSM. For example, if there is a tombstone for key "ABC" on level 1 of the LSM tree and a live value for that key on level 3 then that tombstone cannot be removed. It is hard to make the check (does live key exist below me) efficient.

I haven't read much about optimizing for tombstones in an LSM not named RocksDB. Perhaps I have not tried hard to find such details. Maybe this is something that LSM engine developers should explain more in public.

Confirming whether a tombstone can be dropped

This is based on code that I read ~2 years ago. Maybe RocksDB has changed today.

Tombstones are dropped during compaction. The question is how much work (CPU and IO) you are willing to spend to determine whether a tombstone can be dropped. LevelDB and RocksDB favor performance over exactness. By this I mean they spend less CPU and no IO on the "can I drop this tombstone" check. The check is simple today. If there is an LSM level (or sorted run) below (older) that has a key range which overlaps the tombstone key then the tombstone won't be dropped. And in most workloads this means that tombstones aren't dropped until they reach the max LSM level -- because there usually is overlap.
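A toy sketch of that overlap check, with made-up keys and ranges (real engines compare encoded keys with a comparator, not shell strings):

```shell
# Lexicographic "less than or equal" via sort, standing in for a key comparator.
str_le() { [ "$(printf '%s\n%s\n' "$1" "$2" | LC_ALL=C sort | head -n 1)" = "$1" ]; }

# A tombstone can be dropped only if no older level's key range overlaps its key.
# Each argument after the key is an older level's range, given as "min max".
can_drop_tombstone() {
  key=$1; shift
  for range in "$@"; do
    min=${range%% *}; max=${range##* }
    if str_le "$min" "$key" && str_le "$key" "$max"; then
      echo no; return 0
    fi
  done
  echo yes
}

can_drop_tombstone "ABC" "AAA AZZ" "MAA MZZ"   # no: the first range overlaps
can_drop_tombstone "ZZZ" "AAA AZZ" "MAA MZZ"   # yes: no older range overlaps
```

Note how the check never asks whether the key actually exists below, only whether a range *could* contain it, which is why tombstones usually survive until the max level.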

An LSM could spend a bit more CPU and check bloom filters for the levels that overlap with the tombstone. That might allow tombstones to be dropped earlier. An even more expensive check would be to use more CPU and possibly IO to confirm whether the level that overlaps with the tombstone really has that key. This can make compaction much slower.

Fast path

The SingleDelete API in RocksDB makes it easier to drop tombstones. If you respect what the API requires then tombstones can be dropped quickly -- without spending IO or CPU. SingleDelete makes it easy to drop tombstones for a given key when those tombstones meet during compaction. They don't do anything for the case above where the tombstone is on one level and the live key might be on a lower level.

MyRocks magic

MyRocks has some clever code that does extra work to remove tombstones from an SST when the SST has too many tombstones. Configuration options for this are mentioned in this post. Percona has some docs on rocksdb_compaction_sequential_deletes. And I am sure there is a slide deck from my favorite MyRocks expert. Maybe he will share that with me.

How to clone a MySQL test or development instance from InnoDB Cluster?


Introduction to InnoDB Cluster

If you have not heard about MySQL InnoDB Cluster, InnoDB Cluster is a built-in high-availability solution for MySQL. The key benefit over old high-availability solutions is that InnoDB Cluster is built into MySQL and supported on all platforms where MySQL is supported.

The key components of MySQL InnoDB Cluster:
- MySQL Group Replication
- MySQL Shell
- MySQL Router

MySQL Group Replication is a plugin that makes sure that data is distributed to all nodes, conflicts are handled, and recovery is performed.
MySQL Shell makes it easy to configure and administer your InnoDB Cluster.
MySQL Router is the last part of InnoDB Cluster; it's a lightweight middleware that provides transparent routing between the application and the back-end MySQL servers that are part of group replication.

If you want to get started with InnoDB Cluster:
- https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-userguide.html
- https://github.com/wwwted/MySQL-InnoDB-Cluster-3VM-Setup
- https://lefred.be/content/category/mysql/innodb-cluster/

Clone a test/development environment from InnoDB Cluster

Most users need some way to periodically refresh test and development environments from production. In the past, many users used traditional replication, setting up a MySQL slave on a filesystem on top of LVM to leverage snapshots for creating new test/developer environments.

For InnoDB Cluster we can use MySQL Clone (available in MySQL 8.0.17 or later) to create a test or development environment much more easily.

Steps needed on InnoDB Cluster (create a dedicated clone user for the donor):
CREATE USER clone_user@'%' IDENTIFIED BY 'clone_password';
GRANT BACKUP_ADMIN ON *.* TO clone_user@'%';
GRANT SELECT ON performance_schema.* TO clone_user@'%';
GRANT EXECUTE ON *.* TO clone_user@'%';
You might limit the clone user to only be able to connect from a specific subnet rather than from any host ('%'). The InnoDB Cluster must run MySQL 8.0.17 or later.

Next we need to create the test or development server.
1) Start a new MySQL instance (using MySQL 8.0.17 or later).
2) Provision data using clone:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = '10.0.0.17:3306';
CREATE USER clone_user@localhost IDENTIFIED BY 'clone_password';
GRANT CLONE_ADMIN ON *.* TO clone_user@localhost;
SET GLOBAL log_error_verbosity = 3;
CLONE INSTANCE FROM clone_user@10.0.0.17:3306 IDENTIFIED BY 'clone_password';
-- after the restart, disable the group replication plugin (it generates lots of errors in the error log)
UNINSTALL PLUGIN group_replication;
There are some controls in the MySQL Server that force us to load the group replication plugin prior to executing the clone command. If you do not load the group replication plugin you will get an error like: ERROR 3870 (HY000): Clone Donor plugin group replication is not active in Recipient
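The recipient-side commands can be wrapped in a small helper script. This is a sketch only; the donor address and credentials are the placeholder values from this example, not anything you must use:

```shell
#!/bin/sh
# Emit the SQL that provisions a fresh instance from a clone donor.
# DONOR, CLONE_USER and CLONE_PASS are example values -- adjust for your setup.
DONOR="10.0.0.17:3306"
CLONE_USER="clone_user"
CLONE_PASS="clone_password"

clone_sql() {
  cat <<EOF
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = '$DONOR';
CLONE INSTANCE FROM $CLONE_USER@${DONOR%:*}:${DONOR#*:} IDENTIFIED BY '$CLONE_PASS';
EOF
}

# In practice you would pipe this into the new instance:
#   clone_sql | mysql -uroot -p
clone_sql
```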

Of course, the final step will be to clean/wash the data before handing over the MySQL instance for test or development, this step is not covered by this blog but using our masking and De-Identification can be used to solve this step ;)

I have tested above procedures using MySQL 8.0.18.

Happy Clustering!

A Comparison Between the MySQL Clone Plugin and Xtrabackup


In one of our previous blogs we explained how the Clone Plugin, one of the new features introduced in MySQL 8.0.17, can be used to rebuild a replication slave. Currently the go-to tool for that, as well as for backups, is Xtrabackup. We thought it would be interesting to compare how those tools work and behave.

Comparing Performance

The first thing we decided to test is how both perform when it comes to storing a copy of the data locally. We used an AWS m5d.metal instance with two NVMe SSDs, and we ran the clone to a local copy:

mysql> CLONE LOCAL DATA DIRECTORY='/mnt/clone/';

Query OK, 0 rows affected (2 min 39.77 sec)

Then we tested Xtrabackup and made the local copy:

rm -rf /mnt/backup/ ; time xtrabackup --backup --target-dir=/mnt/backup/ --innodb-file-io-threads=8 --innodb-read-io-threads=8  --innodb-write-io-threads=8 --innodb-io-capacity=20000 --parallel=16

200120 13:12:28 completed OK!

real 2m38.407s

user 0m45.181s

sys 4m18.642s

As you can see, the time required to copy the data was basically the same. In both cases the limitation was the hardware, not the software.

Transferring data to another server will be the most common use case for both tools. It can be a slave you want to provision or rebuild. In the future it may be a backup; the Clone Plugin doesn't have such functionality as of now, but we are pretty sure someone will eventually make it possible to use it as a backup tool. Given that hardware is the limitation for a local backup in both cases, hardware will also be a limitation for transferring the data across the network. Depending on your setup, it could be either the network, disk I/O, or CPU.

In I/O-intensive operations the CPU is the least common bottleneck. This makes it quite common to trade some CPU utilization for a reduction in the data set size. You can accomplish that through compression. If it is done on the fly, you still have to read the same amount of data, but you send less of it (as it is compressed) over the network. Then you will have to decompress it and write it down. It is also possible that the files themselves are compressed. In that case you reduce the amount of data read, transferred, and written to disk.

The Clone Plugin doesn't come with any sort of on-the-fly compression. It can clone compressed InnoDB tables, but this doesn't help much compared to Xtrabackup, as Xtrabackup will likewise copy the reduced data set. On the other hand, Xtrabackup can be used along with on-the-fly compression, so it will complete faster if the network is the limiting factor. Other than that, we would expect to see similar results in both cases.
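The on-the-fly compression trade-off can be illustrated with plain shell tools, no MySQL involved: the sender compresses the stream, the pipe stands in for the network, and the receiver decompresses. The paths below are scratch locations; in a real Xtrabackup setup, gzip would sit between the --stream=xbstream output and ssh/netcat.

```shell
# Simulate streaming a data directory with on-the-fly compression.
mkdir -p /tmp/datadir /tmp/restore
printf 'some table data\n' > /tmp/datadir/t1.ibd

# sender: archive + compress | "network" | receiver: decompress + extract
tar -C /tmp/datadir -cf - . | gzip | gunzip | tar -C /tmp/restore -xf -

cat /tmp/restore/t1.ibd   # some table data
```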

Comparing Usability

Performance is just one thing to compare; there are many others, like how easy the tools are to use. In both cases there are several steps you have to perform. For the Clone Plugin it is:

  1. Install the plugin on all nodes
  2. Create users on both donor and receiver nodes
  3. Set up the donor list on the receiver

Those three steps have to be performed once. When they are set, you can use the Clone Plugin to copy the data. Depending on the init system, you may need to start the MySQL node after the clone process has completed. This is not required if, as in the case of systemd, MySQL is automatically restarted.

Xtrabackup requires a couple more steps to get things done.

  1. Install the software on all nodes
  2. Create user on the donor

Those two steps have to be executed once. For every backup you have to execute the following steps:

  1. Configure network streaming. A simple and secure way would be to use SSH, something like:
xtrabackup --backup --innodb-file-io-threads=8 --innodb-read-io-threads=8  --innodb-write-io-threads=8 --innodb-io-capacity=20000 --parallel=8 --stream=xbstream --target-dir=/mnt/backup/ | ssh root@172.32.4.70 "xbstream -x -C /mnt/backup/"

We found, though, that for faster hard drives, with single-threaded SSH, the CPU becomes a bottleneck. Setting up netcat requires an additional step on the receiver to ensure netcat is up, listening, and redirecting the traffic to the proper software (xbstream).

  1. Stop MySQL on the receiver node

  2. Run the Xtrabackup

  3. Apply InnoDB logs

  4. Copy back the data

  5. Start MySQL on the receiver node

As you can see, Xtrabackup requires more steps to be taken.

Security Considerations

The Clone Plugin can be configured to use SSL for data transfer, even though by default it uses plain text. Cloning of encrypted tablespaces is possible, but there is no option to encrypt, for example, the local clone. The user would have to do that separately, after the clone process is completed.

Xtrabackup itself doesn’t provide any security. Security is determined by how you stream the data. If you use SSH for streaming, data in transit will be encrypted. If you decide to use netcat, it will be sent as a plain text. Of course, if the data is encrypted in tablespaces, it is already secured, just like in the case of the Clone Plugin. Xtrabackup can also be used along with on-the-fly encryption to ensure your data is encrypted also at rest.

Plugin Features

The Clone Plugin is a new product, still in its infancy. Its primary task is to provide a way of provisioning nodes in InnoDB Cluster, and it does that just fine. For other tasks, like backups or provisioning of replication slaves, it can be used to some extent, but it suffers from several limitations. We covered some of them in our previous blog, so we won't repeat them here, but the most serious one, when talking about provisioning and backups, is that only InnoDB tables are cloned. If you happen to use any other storage engine, you cannot really use the Clone Plugin. On the other hand, Xtrabackup will happily back up and transfer the most commonly used storage engines: InnoDB, MyISAM (unfortunately, it's still used in many places), and CSV. Xtrabackup also comes with a set of tools intended to help with streaming the data from node to node, or even streaming backups to S3 buckets.

To sum it up, when it comes to backing up data and provisioning replication slaves, Xtrabackup is, and most likely will remain, the most popular pick. On the other hand, the Clone Plugin will most likely improve and evolve. We will see what the future holds and how things will look in a year's time.

Let us know if you have any thoughts on the Clone Plugin; we are very interested in your opinion of this new tool.

 

SQL EXISTS and NOT EXISTS


Introduction In this article, we are going to see how the SQL EXISTS operator works and when you should use it. Although the EXISTS operator has been available since SQL:86, the very first edition of the SQL Standard, I found that there are still many application developers who don’t realize how powerful SQL subquery expressions really are when it comes to filtering a given table based on a condition evaluated on a different table. Database table model Let’s assume we have the following two tables in our database, that form a one-to-many... Read More

The post SQL EXISTS and NOT EXISTS appeared first on Vlad Mihalcea.

Watch the New Webinar: An Introduction to Database Proxies (for MySQL)


As hinted at earlier this month, we’re happy to announce our latest on-demand webinar:
An Introduction to Database Proxies (for MySQL)

In this webinar, Gilles Rayrat, our VP of Engineering and database proxies guru, shares some of his knowledge on the world of database proxies, how they work, why they’re important and what to use them for.

Starting with a simple database connectivity scenario, Gilles builds up the content by discussing clustered databases and what happens in the case of a failure through to explaining the important role database proxies play; including a more in-depth look into some advanced database connectivity setups and proxies functionalities.

More specifically, Gilles covers the following:

  • A simple database connectivity scenario
  • The concept of a clustered database
  • Failure in a clustered database: the nightmare scenario
  • The solution: use a proxy! Preferably a smart one …
  • Advanced database connectivity setups
  • Advanced proxy functionalities
  • Recap

Watch the database proxies webinar

While this webinar discusses database proxy concepts in general, we do call ourselves the MySQL Availability Company for a reason … and hence the content of this webinar is geared at MySQL (or MariaDB and Percona Server) predominantly.

This topic is not entirely innocent of course, as we’ve been involved with MySQL proxy technology for quite some time … namely, Tungsten Proxy, known to Continuent customers as Tungsten Connector. It is in fact a MySQL proxy; and at Continuent, we like to refer to it as an ‘Intelligent Proxy for MySQL’.

Tungsten Proxy – The Intelligent MySQL Proxy – in a nutshell

  • Provides intelligent traffic routing to valid MySQL master(s), locally and globally
  • Scales read queries to valid slaves via query inspection and other methods
  • Application and active users do not disconnect during MySQL master failover events
  • Combined with another intelligent layer of Tungsten Clustering called Tungsten Manager, it provides automatic, rapid master failover for MySQL High Availability and automated cross-site level failover for Disaster Recovery

Tungsten Connector (Tungsten Proxy) has been an important part of the Continuent Tungsten Clustering solution since 2006 and we’re gearing up to make it more widely known in the coming months.

Stay tuned! And watch our webinar on database proxies in the meantime!

What to Monitor in MySQL 8.0


Monitoring is a must in all environments, and databases aren't the exception. Once you have your database infrastructure up and running, you'll need to keep tabs on what's happening. Monitoring is a must if you want to be sure everything is going fine, and it also lets you make the necessary adjustments as your system grows and evolves. That will enable you to identify trends, plan for upgrades or improvements, or react adequately to any problems or errors that may arise with new versions, different purposes, and so on.

For each database technology, there are different things to monitor. Some of these are specific to the database engine, vendor, or even the particular version that you're using. Database clusters heavily depend on the underlying infrastructure, so network and operating system stats are of interest to database administrators too.

When running multiple database systems, the monitoring of these systems can become quite a chore. 

In this blog, we'll take a look at what you need to monitor in a MySQL 8.0 environment. We will also take a look at ClusterControl monitoring features, which may help you track the health of your databases for free.

OS and Database System Monitoring

When observing a database cluster or node, there are two main areas to take into account: the operating system and the MySQL instance itself. You will need to define which metrics you are going to monitor from both sides and how you are going to do it. You need to track each parameter over time to learn what is normal for your system, and look for deviations from that baseline.

Keep in mind that when one of your parameters is affected, it can also affect others, making troubleshooting of the issue more complicated. Having a proper monitoring and alerting system is essential to make this task as simple as possible.

In most cases, you will need to use some tools, as it is difficult to find one to cover all the wanted metrics. 

OS System Monitoring

One major thing (which is common to all database engines and even to all systems) is to monitor the operating system's behavior. Below you can find the top system resources to watch on a database server; they are also the very first things to check.

CPU Usage

High CPU usage is not a bad thing as long as you don't reach the limit. An excessive percentage of CPU usage could be a problem if it's not the usual behavior. In that case, it is essential to identify the process or processes generating the issue. If the problem is the database process, you will need to check what is happening inside the database.

RAM Memory or SWAP Usage

Ideally, your entire database should fit in memory, but this is not always possible. Give MySQL as much as you can afford, but leave enough for other processes to function.

If you see a high value for this metric and nothing has changed in your system, you probably need to check your database configuration. Parameters like innodb_buffer_pool_size affect this directly, as they define how much memory the MySQL server can use. Swap is for emergencies only and should not be used; make sure your operating system is also configured to avoid swapping MySQL memory (for example, by keeping vm.swappiness low).

Disk Usage 

Disk usage is one of the key metrics to monitor and alert. Make sure you always have free space for new data, temporary files, snapshots, or backups.

Monitoring hard metric values is not good enough. An abnormal increase in disk space usage or excessive disk access are essential things to watch, as you could have a high number of errors logged in the MySQL log file, or a poor cache configuration that generates heavy disk access instead of serving queries from memory. Make sure you are able to catch abnormal behaviors even if your warning and critical thresholds have not been reached yet.
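As a minimal sketch of such a check (the threshold is arbitrary, and "/" is used only so the demo runs anywhere; point it at your real datadir):

```shell
# Alert when the filesystem holding the MySQL datadir crosses a usage threshold.
# "/" is used for the demo; set DATADIR to your real datadir (e.g. /var/lib/mysql).
DATADIR=/
THRESHOLD=90

# Percent-used of the filesystem containing the given path.
disk_usage_pct() { df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'; }

usage=$(disk_usage_pct "$DATADIR")
if [ "$usage" -gt "$THRESHOLD" ]; then
  echo "ALERT: $DATADIR is ${usage}% full"
else
  echo "OK: $DATADIR is ${usage}% full"
fi
```

A real alerting setup would also track the rate of change, per the point above about abnormal increases.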

Along with monitoring space, we should also monitor disk activity. The top values to monitor are:

  • Read/Write requests
  • IO Queue length
  • Average IO wait
  • Average Read/Write time
  • Read/Write bandwidth

You can use iostat or pt-diskstats from Percona to see all these details. 

Things that can affect your disk performance are often related to data transfer from and to your disk, so also monitor abnormal processes that can be started by other users.

Load Average

An all-in-one performance metric. Understanding Linux load is key to monitoring OS- and database-dependent systems.

Load average is related to the three points mentioned above: a high load average can be caused by excessive CPU, RAM, or disk usage.

Network

Unless you are doing backups or transferring vast amounts of data, the network shouldn't be the bottleneck.

A network issue can affect all systems, as the application can't connect (or connects but loses packets) to the database, so this is an important metric to monitor. You can monitor latency or packet loss; the root cause could be network saturation, a hardware issue, or just a poor network configuration.

Database Monitoring

While monitoring is a must, it's not typically free. There is always a cost to database performance, depending on how much you are monitoring, so you should avoid monitoring things that you won't use.

In general, there are two ways to monitor your databases, from the logs or from the database side by querying.

In the case of logs, to be able to use them you need a high logging level, which generates high disk access and can affect the performance of your database.

For the querying mode, each connection to the database uses resources, so depending on the activity of your database and the assigned resources, it may affect the performance too.

Of course, there are many metrics in MySQL. Here we will focus on the most important ones.

Monitoring Active Sessions

You should also track the number of active sessions and the database up/down status. Often, to understand a problem, you need to know how long the database has been running, so uptime can be used to detect restarts.

The next thing is the number of sessions. If you are near the limit, you need to check whether something is wrong or whether you just need to increase the max_connections value. Watch for changes in the number of connections, whether increases or decreases; improper use of connection pooling, locking, or network issues are the most common problems related to the number of connections.

The key values here are

  • Uptime
  • Threads_connected
  • Max_used_connections
  • Aborted_connects

Database Locks

If you have a query waiting for another query, you need to check whether that other query is a normal process or something new. In some cases, if somebody is making an update on a big table, for example, this action can affect the normal behavior of your database, generating a high number of locks.

Monitoring Replication

The key metrics to monitor for replication are the lag and the replication state: not only the up/down status, but also the lag, because a continuous increase in this value is not a good sign, as it means the slave is not able to catch up with its master.

The most common issues are networking issues, hardware resource issues, or undersizing issues. If you are facing a replication issue, you will need to know about it as soon as possible, as you will need to fix it to keep the environment highly available.

Replication is best monitored by checking SHOW SLAVE STATUS and the following fields (the combination of Slave_IO_Running and Slave_SQL_Running tells you whether the slave is running at all):

  • Slave_IO_Running
  • Slave_SQL_Running
  • Last_SQL_Errno
  • Seconds_Behind_Master
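A tiny sketch of pulling those fields out of the \G output. The status text is canned here so the snippet stands alone; in practice you would pipe in `mysql -e 'SHOW SLAVE STATUS\G'`:

```shell
# Canned SHOW SLAVE STATUS\G excerpt standing in for live server output.
slave_status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_SQL_Errno: 0
Seconds_Behind_Master: 42'

# Extract the value of one field by exact name match.
get_field() { printf '%s\n' "$slave_status" | awk -F': ' -v f="$1" '$1 == f { print $2 }'; }

lag=$(get_field Seconds_Behind_Master)
io=$(get_field Slave_IO_Running)
echo "io_thread=$io lag=${lag}s"   # io_thread=Yes lag=42s
```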

Backups

Unfortunately, the vanilla community edition doesn't come with a backup manager. You should know whether the backup completed and whether it's usable. Usually this last point is not taken into account, but it's probably the most critical check in a backup process. Here we would have to use external tools like Percona XtraBackup or ClusterControl.

Database Logs

You should monitor your database log for errors like FATAL or deadlock messages, and for common errors like authentication issues or long-running queries. Most errors are written to the log file with detailed, useful information to fix them. Common failure points to keep an eye on are the errors themselves and the log file sizes. The location of the error log can be found in the log_error variable.

External Tools

Last but not least, below is a list of useful tools to monitor your database activity.

Percona Toolkit - a set of Linux tools from Percona to analyze MySQL and OS activity. It supports the most popular 64-bit Linux distributions, like Debian, Ubuntu, and Red Hat.

mysqladmin - an administration program for the MySQL daemon. It can be used to check server health (ping), list processes, and see the values of variables, but also to do some administrative work: create/drop databases, flush (reset) logs, statistics, and tables, kill running queries, stop the server, and control replication.

innotop - offers an extended view of the SHOW statements. It's very powerful and can significantly reduce investigation time. Besides vanilla MySQL support, you can see the Galera view and master-slave replication details.

mtop - monitors a MySQL server showing the queries which are taking the most amount of time to complete. Features include 'zooming' in on a process to show the complete query, 'explaining' the query optimizer information for a query and 'killing' queries. In addition, server performance statistics, configuration information, and tuning tips are provided.

Mytop - runs in a terminal and displays statistics about threads, queries, slow queries, uptime, load, etc. in a tabular format, much like the Linux top utility.

Conclusion

This blog is not intended to be an exhaustive guide on how to enhance database monitoring, but hopefully it gives a clearer picture of what can become essential and of some of the basic parameters to watch. Do not hesitate to let us know if we've missed any important ones in the comments below.

 

Preserving commit order on replicas with binary log disabled


MySQL 8.0.19 introduces binlogless replicas with commit ordering, which means you can deploy asynchronous replicas without binary logs enabled and commit transactions in the same order they are replicated in. Yes, you can disable the binlog (skip-log-bin) and the logging of changes done by the applier (log-slave-updates=FALSE) while at the same time preserving commit order (slave-preserve-commit-order=TRUE).…

Upgrading MySQL InnoDB Cluster with MySQL5.7.25 to MySQL 8.0.19


This tutorial serves as a sample ONLY. Every environment can be different. It is a trial run of upgrading a MySQL InnoDB Cluster from MySQL 5.7.25 to MySQL 8.0.19, where the MySQL InnoDB Cluster metadata changed from V1 to V2 in 8.0.19.
1.      Assuming the following SETUP
           i.              MySQL Server 5.7.25
          ii.              MySQL Shell : 8.0.15
        iii.              MySQL Router : 8.0.15
2.      3 nodes are running on the same machine for this tutorial
           i.              Port : 3310, 3320 and 3330
          ii.              Hostname : primary   (or node1)
3.      MySQL InnoDB Cluster Status as follows
MySQL [primary ssl] JS> x.status({extended:2})
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "groupName": "8561210d-4278-11ea-907d-0800277b31d3",
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "memberId": "9cdacfc4-4277-11ea-8435-0800277b31d3",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "memberId": "a18eb74f-4277-11ea-8595-0800277b31d3",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "memberId": "98f0e702-4277-11ea-81f1-0800277b31d3",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "transactions": {
                    "checkedCount": 0,
                    "committedAllMembers": "8561210d-4278-11ea-907d-0800277b31d3:1-16,
98f0e702-4277-11ea-81f1-0800277b31d3:1-10",
                    "conflictsDetectedCount": 0,
                    "inQueueCount": 0,
                    "lastConflictFree": ""
                }
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}


Configuration File (MySQL) – my1.cnf / my2.cnf / my3.cnf
The initial configuration is shown in the first part of each row.  The second part was appended by dba.configureLocalInstance(…) after the MySQL InnoDB Cluster creation.
my1.cnf
my2.cnf
my3.cnf
[mysqld]
datadir=/home/mysql/data/3310
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3310/my.error
port=3310
socket=/home/mysql/data/3310/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=101

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3310

[mysqld]
datadir=/home/mysql/data/3320
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3320/my.error
port=3320
socket=/home/mysql/data/3320/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=102

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3320
[mysqld]
datadir=/home/mysql/data/3330
basedir=/usr/local/mysql5725
log-error=/home/mysql/data/3330/my.error
port=3330
socket=/home/mysql/data/3330/my.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=103

# enable gtid
gtid-mode=on
enforce-gtid-consistency = ON
log-slave-updates = ON

# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

report-host=primary
report_port = 3330
INSERTED after dba.configureLocalInstance( … )
INSERTED after dba.configureLocalInstance( … )
INSERTED after dba.configureLocalInstance( … )
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 8561210d-4278-11ea-907d-0800277b31d3
group_replication_group_seeds = node1:13320,node1:13330
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13310
group_replication_member_weight = 80
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 8561210d-4278-11ea-907d-0800277b31d3
group_replication_group_seeds = node1:13310,node1:13330
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13320
group_replication_member_weight = 70
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0
auto_increment_increment = 1
auto_increment_offset = 2
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_exit_state_action = READ_ONLY
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members =
group_replication_group_name = 575f951a-427c-11ea-83d7-0800277b31d3
group_replication_group_seeds = node1:13310,node1:13320
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = 192.168.56.0/24
group_replication_local_address = node1:13330
group_replication_member_weight = 60
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca =
group_replication_recovery_ssl_capath =
group_replication_recovery_ssl_cert =
group_replication_recovery_ssl_cipher =
group_replication_recovery_ssl_crl =
group_replication_recovery_ssl_crlpath =
group_replication_recovery_ssl_key =
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = ON
group_replication_single_primary_mode = ON
group_replication_ssl_mode = REQUIRED
group_replication_start_on_boot = ON
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0


MySQL Router (8.0.15)
mysql> select * from mysql_innodb_cluster_metadata.routers;
+-----------+-------------+---------+------------+
| router_id | router_name | host_id | attributes |
+-----------+-------------+---------+------------+
|         1 |             |       3 | NULL       |
+-----------+-------------+---------+------------+
1 row in set (0.00 sec)

mysql> select * from mysql_innodb_cluster_metadata.hosts;
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
| host_id | host_name            | ip_address | public_ip_address | location | attributes                         | admin_user_account |
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
|       1 | primary              |            | NULL              |          | NULL                               | NULL               |
|       2 | node1                |            | NULL              |          | NULL                               | NULL               |
|       3 | virtual-41.localhost | NULL       | NULL              |          | {"registeredFrom": "mysql-router"} | NULL               |
+---------+----------------------+------------+-------------------+----------+------------------------------------+--------------------+
3 rows in set (0.00 sec)

Upgrade Steps
1.      Starting from the running MySQL InnoDB Cluster 5.7.25 (MySQL Shell 8.0.15 and MySQL Router 8.0.15)
           i.              Upgrade the MySQL Router first to MySQL 8.0.19
          ii.              Upgrade the MySQL Server to MySQL 8.0.19 (but still using MySQL Shell 8.0.15)
        iii.              With MySQL Server 8.0.19 running on ALL servers in the MySQL InnoDB Cluster (still using MySQL Shell 8.0.15)
        iv.              Upgrade the metadata to V2 using MySQL Shell 8.0.19


MySQL Router 8.0.19 Upgrade
For simplicity in this tutorial, the upgrade simply re-runs the bootstrap process with the new MySQL Router 8.0.19 binary.   A new router configuration folder is created and the router is started.
After the MySQL Router 8.0.19 upgrade, check the mysql_innodb_cluster_metadata.routers table:
mysql> select * from mysql_innodb_cluster_metadata.routers;
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| router_id | router_name | host_id | attributes                                                                                                                                                      |
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
|         1 |             |       3 | NULL                                                                                                                                                            |
|         2 |             |       4 | {"version": "8.0.19", "ROEndpoint": "6447", "RWEndpoint": "6446", "ROXEndpoint": "64470", "RWXEndpoint": "64460", "MetadataUser": "mysql_router2_z7jvrcps73jm"} |
+-----------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

An extra row is created with the 8.0.19 attribute information, and the mysql.user table has a new user as a result of the MySQL Router bootstrapping.   This tutorial does not cover re-using the existing router account or managing/removing the OLD router account(s).
mysql> select user,host from mysql.user;
+----------------------------------+----------------------------+
| user                             | host                       |
+----------------------------------+----------------------------+
| gradmin                          | %                          |
| mysql_router1_7d0dklmkh3n8       | %                          |
| mysql_router2_z7jvrcps73jm       | %                          |
| mysql_innodb_cluster_r0431831317 | 192.168.56.0/255.255.255.0 |
| mysql_innodb_cluster_r0431832209 | 192.168.56.0/255.255.255.0 |
| mysql_innodb_cluster_r0431833234 | 192.168.56.0/255.255.255.0 |
| mysql.session                    | localhost                  |
| mysql.sys                        | localhost                  |
| root                             | localhost                  |
+----------------------------------+----------------------------+
9 rows in set (0.00 sec)
1          Upgrade MySQL 5.7.25 to MySQL 8.0.19 for the Secondary Server
1.1         Shut down the MySQL Secondary Server
[mysql@virtual-41 mysql57]$ mysql -uroot -h127.0.0.1 -P3330
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 20
Server version: 5.7.25-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> shutdown;
Query OK, 0 rows affected (0.00 sec)

mysql>
1.2         Backup the my[1|2|3].cnf
[mysql@virtual-41 config]$ mkdir backup
[mysql@virtual-41 config]$ cp my3.cnf backup
[mysql@virtual-41 config]$ cp my2.cnf backup
[mysql@virtual-41 config]$ cp my1.cnf backup

1.3         Change the my?.cnf for the Secondary Server
1.3.1    COMMENT out all variables with the group_replication% prefix in the configuration file
Those variables should be converted into persisted variables.

[mysql@virtual-41 config]$ mkdir 8.0
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my3.cnf > 8.0/my3.cnf
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my2.cnf > 8.0/my2.cnf
[mysql@virtual-41 config]$ sed   "s/^group_replication/#group_replication/g" my1.cnf > 8.0/my1.cnf
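The effect of the sed substitution can be checked on a single sample line (the value is taken from the configuration files above):

```shell
# Comment out a group_replication% line, as the sed commands above do per file
echo 'group_replication_start_on_boot = ON' \
  | sed 's/^group_replication/#group_replication/'
# prints: #group_replication_start_on_boot = ON
```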

1.4         Change 8.0/my?.cnf so that basedir points to the MySQL 8.0.19 home directory, and adjust mysqlx-port and mysqlx-socket for MySQL 8.0.  The mysqlx port/socket have default values, and since this tutorial runs all servers on a single machine, the servers cannot share the same port/socket.
1.4.1    For 8.0/my3.cnf as an example:
basedir=/usr/local/mysql8019
mysqlx-port=33300
mysqlx-socket=/home/mysql/data/3330/myx.sock

1.5         Convert the group_replication% variables into SET PERSIST statements
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my3.cnf > 8.0/my3.sql
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my2.cnf > 8.0/my2.sql
[mysql@virtual-41 config]$ awk '/^group_replication/   { if ($NF != "=") print "set persist " $0 ";"}' my1.cnf > 8.0/my1.sql
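To see what the awk filter above produces, run it on two sample lines; lines whose value is empty (a trailing "=", such as group_replication_force_members) are skipped by the `$NF != "="` test:

```shell
# Sample input: one variable with a value, one with an empty value
printf '%s\n' 'group_replication_member_weight = 60' \
              'group_replication_force_members =' > /tmp/gr_demo.cnf
awk '/^group_replication/ { if ($NF != "=") print "set persist " $0 ";"}' /tmp/gr_demo.cnf
# prints only: set persist group_replication_member_weight = 60;
```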

1.6         Modify the SQL files as follows:
1.6.1    Comment out the deprecated (or unused) variables (group_replication_allow_local_disjoint_gtids_join)
#set persist group_replication_allow_local_disjoint_gtids_join = OFF;

1.6.2    Change group_replication_exit_state_action = OFFLINE_MODE only if needed.  OFFLINE_MODE is available in the new MySQL 8.0.19.
1.6.3    Change group_replication_auto_increment_increment = 1 (for SINGLE PRIMARY MODE)
1.6.4    Change the SQL file so that string values are enclosed in quotation marks
E.g
set persist group_replication_group_name = "575f951a-427c-11ea-83d7-0800277b31d3";
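A hypothetical one-liner (not part of the original tutorial) can add those quotation marks to a generated line; it wraps everything between "= " and the trailing semicolon in double quotes:

```shell
# Hypothetical helper: quote the value in a generated SET PERSIST line
echo 'set persist group_replication_group_name = 575f951a-427c-11ea-83d7-0800277b31d3;' \
  | sed -E 's/= ([^";]+);$/= "\1";/'
# prints: set persist group_replication_group_name = "575f951a-427c-11ea-83d7-0800277b31d3";
```

Apply it only to the string-typed settings; numeric and ON/OFF values do not need quoting.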

1.6.5    Modify or append any variables you may want.  The following variables serve as an example ONLY; you can also use the Shell to modify them after the upgrade.
(e.g.
group_replication_unreachable_majority_timeout =,
group_replication_member_expel_timeout=
group_replication_autorejoin_tries
set persist group_replication_consistency = "BEFORE_ON_PRIMARY_FAILOVER"
)


1.7         Start Up MySQL Secondary Server as Standalone Server using MySQL 8.0.19 Binary
1.7.1    Make sure the PATH includes the new MySQL 8.0.19 bin directory so that the new binary is used to start the database.
1.7.2    The startup process should automatically upgrade MySQL 5.7 to MySQL 8.0.19.  Because ALL group_replication% prefixed variables are commented out, the startup will not make the server rejoin the MySQL InnoDB Cluster. 
1.7.3    Check the error log for any errors.   (This is supposed to be handled by the MySQL 5.7 to MySQL 8.0 upgrade process.)   This tutorial assumes the upgrade process finishes without any errors.  
1.7.3.1   If there is any error, RESTORE the MySQL database as MySQL 5.7.  
1.7.3.2   The MySQL 5.7 to MySQL 8.0 upgrade should be validated in a separate process: back up the original MySQL 5.7 database, clone it as a separate database, upgrade the clone to MySQL 8.0, and check it carefully.   Only once all errors are fixed should the MySQL 5.7.x InnoDB Cluster to MySQL 8.0.x InnoDB Cluster upgrade process be rolled out. 
1.7.3.3   Sample error log file for my3.cnf after the startup and upgrade to 8.0.19.  The Group Replication plugin is started, but there are no group_replication% prefixed variables, so the server does not rejoin the InnoDB Cluster.
2020-01-30T03:15:20.657436Z 0 [System] [MY-010116] [Server] /usr/local/mysql8019/bin/mysqld (mysqld 8.0.19-commercial) starting as process 15406
2020-01-30T03:15:20.705770Z 1 [System] [MY-011012] [Server] Starting upgrade of data directory.
2020-01-30T03:15:24.115723Z 0 [ERROR] [MY-011685] [Repl] Plugin group_replication reported: 'The group name option is mandatory'
2020-01-30T03:15:24.116032Z 0 [ERROR] [MY-011660] [Repl] Plugin group_replication reported: 'Unable to start Group Replication on boot'
2020-01-30T03:15:29.025526Z 2 [System] [MY-011003] [Server] Finished populating Data Dictionary tables with data.
2020-01-30T03:15:31.721829Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80019' started.
2020-01-30T03:15:54.090860Z 5 [System] [MY-013381] [Server] Server upgrade from '50700' to '80019' completed.
2020-01-30T03:15:54.422628Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-01-30T03:15:54.604614Z 0 [System] [MY-010931] [Server] /usr/local/mysql8019/bin/mysqld: ready for connections. Version: '8.0.19-commercial'  socket: '/home/mysql/data/3330/my.sock'  port: 3330  MySQL Enterprise Server - Commercial.

1.7.3.4   Execute the script file for persisted variables, which includes those group_replication variables.
[mysql@virtual-41 config]$ mysql -uroot -h127.0.0.1 -P3330 < my3.sql
1.7.3.5   Check that the persisted variables were imported
1.7.3.6   As an example, for the my3 Secondary Server (at this moment the server is still a standalone server):
[mysql@virtual-41 config]$ mysql -uroot -h127.0.0.1 -P3330 -e "select * from performance_schema.persisted_variables;"
+----------------------------------------------------+--------------------------------------+
| VARIABLE_NAME                                      | VARIABLE_VALUE                       |
+----------------------------------------------------+--------------------------------------+
| group_replication_consistency                      | BEFORE_ON_PRIMARY_FAILOVER           |
| group_replication_allow_local_lower_version_join   | OFF                                  |
| group_replication_auto_increment_increment         | 1                                    |
| group_replication_autorejoin_tries                 | 288                                  |
| group_replication_bootstrap_group                  | OFF                                  |
| group_replication_components_stop_timeout          | 31536000                             |
| group_replication_compression_threshold            | 1000000                              |
| group_replication_enforce_update_everywhere_checks | OFF                                  |
| group_replication_exit_state_action                | OFFLINE_MODE                         |
| group_replication_flow_control_applier_threshold   | 25000                                |
| group_replication_flow_control_certifier_threshold | 25000                                |
| group_replication_flow_control_mode                | QUOTA                                |
| group_replication_group_name                       | 575f951a-427c-11ea-83d7-0800277b31d3 |
| group_replication_group_seeds                      | node1:13310,node1:13320              |
| group_replication_gtid_assignment_block_size       | 1000000                              |
| group_replication_ip_whitelist                     | 192.168.56.0/24                      |
| group_replication_local_address                    | node1:13330                          |
| group_replication_member_expel_timeout             | 120                                  |
| group_replication_member_weight                    | 60                                   |
| group_replication_poll_spin_loops                  | 0                                    |
| group_replication_recovery_complete_at             | TRANSACTIONS_APPLIED                 |
| group_replication_recovery_reconnect_interval      | 60                                   |
| group_replication_recovery_retry_count             | 10                                   |
| group_replication_recovery_ssl_verify_server_cert  | OFF                                  |
| group_replication_recovery_use_ssl                 | ON                                   |
| group_replication_single_primary_mode              | ON                                   |
| group_replication_ssl_mode                         | REQUIRED                             |
| group_replication_start_on_boot                    | ON                                   |
| group_replication_transaction_size_limit           | 0                                    |
| group_replication_unreachable_majority_timeout     | 120                                  |
+----------------------------------------------------+--------------------------------------+
[mysql@virtual-41 config]$

1.7.3.7   Restart the server, which should rejoin the MySQL InnoDB Cluster.
1.7.3.8   Check the MySQL Server error log for any errors.
1.7.3.9   Connect with MySQL Shell 8.0.15 (at this time, the shell has not yet been upgraded) and check the Cluster status.
[mysql@virtual-41 config]$ mysqlsh
MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS> \connect gradmin:grpass@primary:3310
Creating a session to 'gradmin@primary:3310'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 112498
Server version: 5.7.25-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)
No default schema selected; type \use <schema> to set one.

MySQL [primary ssl] JS> var x = dba.getCluster()
MySQL [primary ssl] JS> x.status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "primary:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3310"
}

MySQL [primary ssl] JS>
1.7.3.10            The Secondary Server (port 3330) is ONLINE and has rejoined.
1.7.4    Repeat the process for the 2nd Secondary Server and then finally the Primary Server.
1.7.5    After ALL servers are upgraded, check the status again.  The R/W Primary Server role should have switched to another node.  For this tutorial, with member weights set, the Primary Server should be on port 3320, as follows:
[mysql@virtual-41 mysql80]$ mysqlsh
MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS> \connect gradmin:grpass@primary:3320
Creating a session to 'gradmin@primary:3320'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 236
Server version: 8.0.19-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.

MySQL [primary ssl] JS> var x = dba.getCluster()
MySQL [primary ssl] JS> x.status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node1:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3320": {
                "address": "node1:3320",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node1:3330": {
                "address": "node1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "primary:3310": {
                "address": "primary:3310",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "primary:3320"
}

MySQL [primary ssl] JS>

2          Upgrade the MySQL InnoDB Cluster Metadata to V2 Using MySQL Shell 8.0.19
2.2         Execute the GRANT statements for the clusterAdmin user as reported by dba.upgradeMetadata({dryRun:true}) in MySQL Shell 8.0.19
MySQL [primary ssl] JS> dba.upgradeMetadata({dryRun:true})
ERROR: The account 'gradmin'@'%' is missing privileges required to manage an InnoDB cluster:
GRANT EXECUTE, SELECT ON *.* TO 'gradmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'gradmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'gradmin'@'%' WITH GRANT OPTION;
Dba.upgradeMetadata: The account 'gradmin'@'%' is missing privileges required for this operation. (RuntimeError)

2.2.1    Execute the GRANT statements following the steps above
2.2.2    You need to RUN an extra GRANT statement:
mysql> grant SELECT  on performance_schema.global_variables to gradmin@'%' with grant option;
2.2.2.1   Otherwise you may get an error during the metadata upgrade.
Step 1 of 1: upgrading from 1.0.1 to 2.0.0...
ERROR: SELECT command denied to user 'gradmin'@'primary.localhost' for column 'variable_value' in table 'global_variables'
Dba.upgradeMetadata: Unable to detect Metadata version. Please check account privileges. (RuntimeError)

Once all of this is completed, the servers are upgraded to the new version 8.0.19 together with metadata V2.


