
Using strace for MySQL Troubleshooting

I'd say that the strace utility is even more useful for MySQL DBAs than lsof. Basically, it is a general-purpose diagnostic and debugging tool for tracing the system calls a process makes and the signals it receives. The name of each system call, its arguments and its return value are printed to stderr or to the file specified with the -o option.

In the context of MySQL, strace is usually used to find out what files the mysqld process accesses, and to check the details of any I/O errors. For example, if I really wanted to verify Bug #88479 - "Unable to start mysqld using a single config file (and avoiding reading defaults)" by Simon Mudd, I'd just run mysqld from some 5.7.x version as a command argument for strace. On my Ubuntu 14.04 netbook I have the following files:
openxs@ao756:~/dbs/5.7$ ls -l /usr/my.cnf
-rw-r--r-- 1 root root 943 Jul 19  2013 /usr/my.cnf
openxs@ao756:~/dbs/5.7$ ls -l /etc/my.cnf
-rw-r--r-- 1 root root 260 Jun 24 20:40 /etc/my.cnf
openxs@ao756:~/dbs/5.7$ ls -l /etc/mysql/my.cnf
-rw-r--r-- 1 root root 116 Feb 26  2016 /etc/mysql/my.cnf
openxs@ao756:~/dbs/5.7$ bin/mysqld --version
bin/mysqld  Ver 5.7.18 for Linux on x86_64 (MySQL Community Server (GPL))
So, what happens if I try to run mysqld --defaults-file=/etc/mysql/my.cnf, like this:
openxs@ao756:~/dbs/5.7$ strace bin/mysqld --defaults-file=/etc/mysql/my.cnf --print-defaults 2>&1 | grep 'my.cnf'
stat("/etc/mysql/my.cnf", {st_mode=S_IFREG|0644, st_size=116, ...}) = 0
open("/etc/mysql/my.cnf", O_RDONLY)     = 3
openxs@ao756:~/dbs/5.7$
It seems we proved that only the file passed as --defaults-file is read (if it exists). By default, other locations are also checked in a specific order (note that return codes are mapped to symbolic error names when possible):
openxs@ao756:~/dbs/5.7$ strace bin/mysqld --print-defaults 2>&1 | grep 'my.cnf'
stat("/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=260, ...}) = 0
open("/etc/my.cnf", O_RDONLY)           = 3
stat("/etc/mysql/my.cnf", {st_mode=S_IFREG|0644, st_size=116, ...}) = 0
open("/etc/mysql/my.cnf", O_RDONLY)     = 3
stat("/home/openxs/dbs/5.7/etc/my.cnf", 0x7ffd68f0e020) = -1 ENOENT (No such file or directory)
stat("/home/openxs/.my.cnf", 0x7ffd68f0e020) = -1 ENOENT (No such file or directory)
openxs@ao756:~/dbs/5.7$
If we think that --print-defaults may matter, we can try without it:
openxs@ao756:~/dbs/5.7$ strace bin/mysqld --defaults-file=/etc/mysql/my.cnf 2>&1 | grep 'my.cnf'
stat("/etc/mysql/my.cnf", {st_mode=S_IFREG|0644, st_size=116, ...}) = 0
open("/etc/mysql/my.cnf", O_RDONLY)     = 3
^C
openxs@ao756:~/dbs/5.7$
The last example also shows how one can terminate tracing with Ctrl-C.

Now, let me illustrate typical use cases with (surprise!) some public MySQL bug reports where strace was important to find or verify the bug:
  • Bug #20748 - "Configuration files should not be read more than once". In this old bug report Domas Mituzas proved the point by running mysqld as a command via strace. He filtered out the lines related to opening the my.cnf file with egrep and made it obvious that one file may be read more than once. The bug was closed a long time ago, but based on further comments it is not obvious that all possible cases are covered...
  • Bug #49336 - "mysqlbinlog does not accept input from stdin when stdin is a pipe". Here the bug reporter showed how to run mysqlbinlog as a command under strace and redirect the trace output to a file with the -o option.
  • Bug #62224 - "Setting open-files-limit above the hard limit won't be logged in errorlog". This minor bug in all versions before old 5.6.x (the report still remains "Verified", probably forgotten) was reported by Daniël van Eeden. strace allowed him to show what limits are really set by the setrlimit() calls. The fact that the arguments of system calls are also shown matters sometimes.
  • Bug #62578 - "mysql client aborts connection on terminal resize". This bug report by Jervin Real from Percona is one of my all-time favorites! I clearly remember how a desperate customer tried to dump data and load them on a remote server with a nice command line, and after waiting for many days (mysqldump was run for a data set of 5TB+, don't ask, not my idea) complained that they got the "Lost connection to MySQL server during query" error message and the loading failed. Surely, the command was run from the terminal window on his Mac, and in the process of moving it here and there he just resized the window by chance... You can see a nice example of strace usage with MySQL Sandbox in the bug report, as well as some outputs for the SIGWINCH signal. Note that the bug is NOT fixed by Oracle in MySQL 5.5.x (and never happened in 5.6+), while Percona fixed it since 5.5.31.
  • Bug #65956 - "client and libmysqlclient VIO drops connection if signal received during read()". In this yet another 5.5-only regression bug that was NOT fixed in 5.5.x by Oracle (until, in the frame of Bug #82019 - "Is client library supposed to retry EINTR indefinitely or not", a patch was contributed for a special case of it by Laurynas Biveinis from Percona, which in a somewhat changed form allowed, hopefully, to get some fix into 5.5.52+), the problem was noted while trying to attach strace to a running client program (with the -p option).
  • Bug #72340 - "my_print_defaults fails to parse include directive if there is no new line". Here the bug reporter used strace (with the -f option to trace child/forked processes and the -v option to get verbose output) to show that the file name is NOT properly recognized by my_print_defaults. The bug is still "Verified".
  • Bug #72701 - "mysqlbinlog uses localtime() to print events, causes kernel mutex contention". Yet another nice bug report by Domas Mituzas, who used the -e stat option of strace to trace only stat() calls and show how many times they are applied to /etc/localtime. The fix in 5.5.41, 5.6.22 and 5.7.6 removed the related kernel mutex contention.
  • Bug #76627 - "MySQL/InnoDB mix buffered and direct IO". It was demonstrated with strace that InnoDB was opening each .ibd file twice (this is expected, as explained by Marko Mäkelä) in different modes (and this was not any good). The bug is fixed in recent enough MySQL server versions.
  • Bug #80020 - "mysqlfrm doesn't work with 5.7". In this bug report Aleksandr Kuzminsky had to use strace to find out why the mysqlfrm utility failed mostly silently, even with the --verbose option.
  • Bug #80319 - ""flush tables" semantics deviate from manual". Here Jörg Brühe proved with strace that the close() system call is not used for an individual InnoDB table t when FLUSH TABLES t is executed. The manual had to be clarified.
  • Bug #80889 - "PURGE BINARY LOGS TO is reading the whole binlog file and causing MySql to Stall". By attaching strace to a running mysqld process and running the command it was shown that when GTID_MODE=OFF the entire file we purge to was read. No reason to do this, really. Note how the return value of open(), a file descriptor, was further used to track reads from this specific file.
  • Bug #81443 - "mysqld --initialize-insecure silently fails with --user flag". As a MySQL DBA, you should be ready to use strace when requested by support or developers. Here the bug reporter was asked to use it, and this revealed a permission problem (that was a result of the bug). No need to guess what may go wrong with permissions - just use strace to find out!
  • Bug #84708 - "mysqld fills up error logs with [ERROR] Error in accept: Bad file descriptor". Here strace was used to find out that "When mysqld is secured with tcp_wrappers it will close a socket from an unauthorized ip address and then immediately start polling that socket"... The bug is fixed in MySQL 5.7.19+ and 8.0.2+, so take care!
  • Bug #86117 - "reconnect is very slow". I do not care much about MySQL Shell, but in this bug report Daniël van Eeden used the -r option of strace to print a relative timestamp for every system call, and with this it was clear how much time was spent during reconnect, and where it was spent. Very useful!
  • Bug #86462 - "mysql_upgrade: improve handling of upgrade errors". In this case strace allowed Simon Mudd to find out which exact SQL statement generated by mysql_upgrade failed.
Last but not least, note that you may need root or sudo privileges to use strace. On my Ubuntu 14.04 this kind of message may appear even if I am the user that owns the mysqld process (same with gdb):
openxs@ao756:~/dbs/maria10.2$ strace -p 2083
strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
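
If you hit this, you can either run strace via sudo or relax the Yama ptrace restriction. A minimal sketch, assuming the Yama LSM is what blocks the attach (as the message above suggests):

# allow same-uid ptrace attach until the next reboot
sudo sysctl -w kernel.yama.ptrace_scope=0
# or simply attach as root
sudo strace -p 2083
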
To summarize, strace may help MySQL DBA to find out:

  • what files are accessed by the mysqld process or related utilities, and in what order
  • why some MySQL-related command (silently) fails or hangs
  • why some commands end up with permission denied or other errors
  • what signals MySQL server and tools get
  • which system calls take a lot of time when something works slowly
  • when files are opened and closed, and how much data is read from them
  • where the error log and other logs are really located (we can look for system calls related to writing to stderr, for example)
  • how MySQL really works with files, ports and sockets
It also helps to find and verify MySQL bugs, and to clarify missing details in the MySQL manual.
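
For quick reference, here is a hedged cheat sheet that combines the strace options mentioned in the bug reports above (the PID and file names are placeholders):

# attach to a running server, follow forked children/threads, write the trace to a file
strace -f -o /tmp/mysqld.trace -p $(pidof mysqld)
# trace only open() and stat() calls, with a relative timestamp printed for each call
strace -r -e trace=open,stat bin/mysqld --print-defaults
# print a per-system-call time summary when tracing stops
strace -c -p $(pidof mysqld)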

There are other similar tools for tracing system calls (maybe among other things) on Linux that I am going to review on this blog some day. The performance impact of running MySQL server under this kind of tracing is also a topic to study.

MariaDB MaxScale Setup with Binlog Server and SQL Query Routing

$
0
0
Posted by massimiliano_pinto_g on Mon, 12/04/2017 - 03:26

A Binlog server is a MariaDB MaxScale replication proxy setup which involves one Master server and several Slave servers using the MariaDB replication protocol.

Up to MariaDB MaxScale version 2.1, due to the lack of some SQL variables needed by the monitor for MariaDB instances, it was not possible to use the binlog server in conjunction with SQL routing operations, such as Read/Write split routing.

With MariaDB MaxScale 2.2 (currently in beta) this is no longer a limitation as the monitor can detect a Binlog server setup and SQL statements can be properly routed among Master and Slave servers.

Depending on the configuration value of the optional variable "master_id", the binlog server can be seen either as a 'Relay Master' with its own slaves or just as a 'Running' server, without its slaves being listed.

MariaDB MaxScale configuration:

# binlog server details
[binlog_server]
type=server
address=127.0.0.1
port=8808
protocol=MySQLBackend

# Mysql monitor
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1,server2,...,binlog_server
user=mon_user
passwd=some_pass
monitor_interval=10000
detect_replication_lag=true

# R/W split service
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1,server2,...

# Binlog server configuration
[Replication_Service]
type=service
router=binlogrouter
version_string=10.1.17-log
router_options=server_id=93

# Binlog server listener
[BinlogServer_Listener]
type=listener
service=Replication_Service
protocol=MySQLClient
port=8808
address=0.0.0.0


Note: the 'binlog_server' is not needed in the server list of the R/W split service; if it is set there, it doesn't harm MariaDB MaxScale, as it doesn't have Slave or Master states.

The Binlog Server identity post describes which parameters affect the way MaxScale is seen by the Slave servers and by the MaxScale monitor.


Scenario A: only server_id is given in configuration.

MySQL [(none)]> select @@server_id; // The server_id of master, query from slaves.
+-------------+
| @@server_id |
+-------------+
|       10124 |
+-------------+

MySQL [(none)]> select @@server_id, @@read_only; // Maxscale server_id, query from MySQL monitor only.

+-------------+-------------+
| @@server_id | @@read_only |
+-------------+-------------+
|          93 |           0 |
+-------------+-------------+

MySQL [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
              Slave_IO_State: Binlog Dump
                 Master_Host: 192.168.100.11  // Master server IP
                 Master_User: repo
                 Master_Port: 3306
                 ...
            Master_Server_Id: 10124 // Master Server_ID
                 ...

MaxAdmin> show servers
Server 0x1f353b0 (server1)
    Server:                              127.0.0.1
    Status:                              Slave, Running
    Protocol:                            MySQLBackend
    Port:                                25231
    Server Version:                      10.0.21-MariaDB-log
    Node Id:                             101
    Master Id:                           10124
    Slave Ids:
    Repl Depth:                          1
    ...

Server 0x1f31af0 (server2)
    Server:                              192.168.122.1
    Status:                              Master, Running
    Protocol:                            MySQLBackend
    Port:                                10124
    Server Version:                      10.1.24-MariaDB
    Node Id:                             10124
    Master Id:                           -1
    Slave Ids:                           101, 93
    Repl Depth:                          0
    ...

Server 0x1f32d90 (binlog_server)
    Server:                              127.0.0.1
    Status:                              Running
    Protocol:                            MySQLBackend
    Port:                                8808
    Server Version:                      10.1.17-log
    Node Id:                             93
    Master Id:                           10124
    Slave Ids:
    Repl Depth:                          1
    ...

Scenario B: server_id and common_identity (master_id)

[BinlogServer]
type=service
router=binlogrouter
version_string=10.1.17-log
router_options=server-id=93, master_id=1111

MySQL [(none)]> select @@server_id; // Maxscale common identity
+-------------+
| @@server_id |
+-------------+
|        1111 |
+-------------+
1 row in set (0.00 sec)

MySQL [(none)]> select @@server_id, @@read_only; // Maxscale common identity
+-------------+-------------+
| @@server_id | @@read_only |
+-------------+-------------+
|        1111 |           0 |
+-------------+-------------+
1 row in set (0.00 sec)

MySQL [(none)]> show slave status\G
*************************** 1. row ***************************
              Slave_IO_State: Binlog Dump
                 Master_Host: 192.168.100.11  // Master server IP
                 Master_User: repl
                 Master_Port: 3306
                 ...
            Master_Server_Id: 10124 // Master Server_ID
                 ...

MaxAdmin> show servers
Server 0x24103b0 (server1)
    Server:                              127.0.0.1
    Status:                              Slave, Running
    Protocol:                            MySQLBackend
    Port:                                25231
    Server Version:                      10.0.21-MariaDB-log
    Node Id:                             101
    Master Id:                           1111
    Slave Ids:                           
    Repl Depth:                          2
    ...
Server 0x240dd90 (binlog_server)
    Server:                              127.0.0.1
    Status:                              Relay Master, Running
    Protocol:                            MySQLBackend
    Port:                                8808
    Server Version:                      10.1.17-log
    Node Id:                             1111
    Master Id:                           10124
    Slave Ids:                           101
    Repl Depth:                          1
    ...

Server 0x240caf0 (server2)
    Server:                              192.168.122.1
    Status:                              Master, Running
    Protocol:                            MySQLBackend
    Port:                                10124
    Server Version:                      10.1.24-MariaDB
    Node Id:                             10124
    Master Id:                           -1
    Slave Ids:                           1111
    Repl Depth:                          0
    ...

The latter configuration, with the extra master_id option, is clearly the one that best represents the setup with the Binlog server as a replication proxy: the user can see that immediately.

The picture shows the setup and makes it clear that MariaDB MaxScale handles both the replication protocol between Master and Slaves and the routing of read and write application traffic.

complete_setup.jpg

Conclusion

This post shows how easy it is for any user to improve a MariaDB replication setup with MariaDB MaxScale, combining the benefits of a replication proxy with query routing scalability.

MariaDB MaxScale 2.2 is in beta and we do not recommend using it in production environments. However, we do encourage you to download it, test it and share your successes!

 


Internal Temporary Tables in MySQL 5.7

$
0
0

In this blog post, I investigate a case of spiking InnoDB Rows inserted in the absence of a write query, and find internal temporary tables to be the culprit.

Recently I was investigating an interesting case for a customer. We could see regular spikes on a graph depicting the "InnoDB rows inserted" metric (jumping from 1K/sec to 6K/sec); however, we were not able to correlate those spikes with other activity. The innodb_row_inserted graph (picture from the PMM demo) looked similar to this (but on a much larger scale):

InnoDB row operations graph from PMM

Other graphs (Com_*, Handler_*) did not show any spikes like that. I’ve examined the logs (we were not able to enable general log or change the threshold of the slow log), performance_schema, triggers, stored procedures, prepared statements and even reviewed the binary logs. However, I was not able to find any single write query which could have caused the spike to 6K rows inserted.

Finally, I figured out that I was focusing on the wrong queries. I was trying to correlate the spikes on the InnoDB Rows inserted graph to the DML queries (writes). However, the spike was caused by SELECT queries! But why would SELECT queries cause the massive InnoDB insert operation? How is this even possible?

It turned out that this is related to temporary tables on disk. In MySQL 5.7 the default setting for internal_tmp_disk_storage_engine is InnoDB. That means that if a SELECT needs to create a temporary table on disk (e.g., for GROUP BY) it will use the InnoDB storage engine.
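
A quick way to confirm this on your own server (a minimal sketch; t1 and col1 are hypothetical names used only for illustration):

mysql> SELECT @@internal_tmp_disk_storage_engine;            -- InnoDB by default in 5.7
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';         -- in-memory vs. on-disk temp table counters
mysql> EXPLAIN SELECT col1, COUNT(*) FROM t1 GROUP BY col1;  -- "Using temporary" marks candidate queries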

Is that bad? Not necessarily. Krunal Bauskar published a blog post about InnoDB intrinsic table performance in MySQL 5.7. The InnoDB internal temporary tables are not redo/undo logged, so in general performance is better. However, here is what we need to watch out for:

  1. Change of the place where MySQL stores temporary tables. InnoDB temporary tables are stored in the ibtmp1 tablespace file. There are a number of challenges with that:
    • Location of the ibtmp1 file. By default it is located inside the InnoDB datadir. Originally MyISAM temporary tables were stored in tmpdir. We can configure the size of the file, but the location is always relative to the InnoDB datadir, so to move it to tmpdir we need something like this: 
      innodb_temp_data_file_path=../../../tmp/ibtmp1:12M:autoextend
    • Like other tablespaces, it never shrinks back (though it is truncated on restart). A huge temporary table can fill the disk and hang MySQL (a bug is open on this). One way to fix that is to set the maximum size of the ibtmp1 file: 
      innodb_temp_data_file_path=ibtmp1:12M:autoextend:max:1G
    • Like other InnoDB tables it has all the InnoDB limitations, i.e., InnoDB row or column limits. If it exceeds these, it will return “Row size too large” or “Too many columns” errors. The workaround is to set internal_tmp_disk_storage_engine to MYISAM.
  2. When all temp tables go to InnoDB, it may increase the total engine load as well as affect other queries. For example, if originally all datasets fit into the buffer pool and temporary tables were created outside of InnoDB, they did not affect the InnoDB memory footprint. Now, if a huge temporary table is created as an InnoDB table it will use the innodb_buffer_pool and may "evict" existing pages, so other queries may perform slower.

Conclusion

Beware of this change in MySQL 5.7: the internal temporary tables (those that are created for SELECTs when a temporary table is needed) are stored in the InnoDB ibtmp1 file. In most cases this is faster. However, it can change the original behavior. If needed, you can switch the creation of internal temp tables back to MyISAM: 

set global internal_tmp_disk_storage_engine=MYISAM

MariaDB Connector/C 2.3.4 now available

$
0
0
Posted by dbart on Mon, 12/04/2017 - 12:37

The MariaDB project is pleased to announce the immediate availability of MariaDB Connector/C 2.3.4. See the release notes and changelogs for details and visit mariadb.com/downloads/connector to download.

Download MariaDB Connector/C 2.3.4

Release Notes | Changelog | About MariaDB Connector/C


Linkbench: in-memory, small server, MyRocks and InnoDB

$
0
0
This post explains MySQL performance for Linkbench on a small server. This used a low-concurrency and in-memory workload to measure response time, IO and CPU efficiency. Tests were run for MyRocks and InnoDB on an Intel NUC. I wrote a similar report a few months ago. The difference here is that I used an updated compiler toolchain, included results for MyRocks and used an updated version of Linkbench.
tl;dr - for in-memory linkbench
  • InnoDB from MySQL 5.6.35 has ~20% better insert and load rates than MyRocks.
  • MyRocks is less CPU efficient. MyRocks uses 28% more CPU than InnoDB 5.6.35 for the load and 14% more for transactions.
  • MyRocks is more write efficient. Modern InnoDB writes ~3X more to storage per insert and ~25X more per transaction.
  • MyRocks is more space efficient. Uncompressed InnoDB uses ~1.5X more space than uncompressed MyRocks after 12 hours of transactions.
  • There is a CPU regression from MySQL 5.6.35 to 8.0.2. The CPU overhead is 20% larger for 8.0.2 on the load and 8% larger on transactions. The insert and load rates are 20% smaller for 8.0.2 than for 5.6.35. 

Configuration

I used my Linkbench repo and helper scripts to run linkbench with maxid1=10M, loaders=1 and requestors=1, so there will be 2 concurrent connections doing the load and 1 connection running transactions after the load finishes. My linkbench repo has a recent commit that changes the Linkbench workload, and this test included that commit. The test pattern is 1) load and 2) transactions. The transactions were run in 12 1-hour loops and I share results from the last hour. The test server is the i5 NUC described here, with 4 HW threads, an SSD and 16gb of RAM.
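
For reference, here is a hedged sketch of what an equivalent run looks like with the stock LinkBench driver (this post used the helper scripts from my repo instead; the config file name and property spellings below are from upstream LinkBench and are assumptions for your setup):

# load phase: ~10M ids, 1 loader
./bin/linkbench -c config/LinkConfigMysql.properties -D maxid1=10000001 -D loaders=1 -l
# request (transaction) phase: 1 requester, run for 1 hour per loop
./bin/linkbench -c config/LinkConfigMysql.properties -D requesters=1 -D maxtime=3600 -r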

Tests were run for MyRocks from FB MySQL 5.6.35 and InnoDB from upstream MySQL. The binlog was enabled but sync on commit was disabled for the binlog and database log. All engines used jemalloc. Mostly accurate my.cnf files are here but the database cache was made large enough to cache the ~10gb database.
  • MyRocks was compiled on October 16 with git hash 1d0132. Compression was not used.
  • Upstream 5.6.35, 5.7.17 and 8.0.2 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases. I also set innodb_purge_threads=1 to reduce mutex contention.
The performance schema was enabled for upstream InnoDB. It was disabled at compile time for MyRocks because FB MySQL still has user & table statistics for monitoring.

Graphs

All of the data is here. I adjusted iostat metrics for MyRocks because iostat currently counts bytes trimmed as bytes written which is an issue for RocksDB but my adjustment is not exact. The data also includes p99 response time metrics for the most frequent transaction types in Linkbench and that was similar for MyRocks and modern InnoDB.

The first two graphs show the load and transaction rates relative to InnoDB from MySQL 5.6.35. InnoDB from 5.6.35 does better than MyRocks on both load and transaction rates. Load and transaction rates drop by ~20% from MySQL 5.6.35 to 8.0.2.
Next is the CPU overhead per insert during the load and per transaction. This is measured by the us and sy columns from vmstat. The values on the graph are relative to InnoDB from MySQL 5.6.35. The CPU overheads explain the load and insert rates above. MyRocks uses ~28% more CPU than InnoDB 5.6.35 for the load and ~14% more for transactions. But there is also a CPU regression for InnoDB from MySQL 5.6 to 8.0 because InnoDB with 8.0.2 uses ~20% more CPU on the load and 8% more for transactions.

Next is KB written to storage per insert during the load and per transaction. The values on the graphs are relative to InnoDB from MySQL 5.6.35. Modern InnoDB writes ~3X more to storage per insert and ~25X more per transaction. An SSD will last longer with MyRocks.
Uncompressed MyRocks uses more space than uncompressed InnoDB after the load. But modern InnoDB uses ~1.5X more space than MyRocks after 12 hours of transactions. One problem for InnoDB is B-Tree fragmentation. Leveled compaction in MyRocks wastes less space.

Moving data in real-time into Amazon Redshift – The power of heterogeneous Tungsten Replication

$
0
0

Amazon Redshift has been providing scalable, quick-to-access analytics platforms for many years, but the question remains: how do you get the data from your existing datastore into Redshift for processing? Tungsten Replicator provides real-time movement of data from Oracle and MySQL into Amazon Redshift, including flexible data handling, translation and long-term change data capture.

In our webinar, Wednesday, December 13th, we will review:

  • How Amazon Redshift replication works
  • Deployment from MySQL or Oracle
  • Deployment from Amazon RDS
  • Provisioning/seeding the original information
  • Filtering and accounting for data differences
  • Data concentration/aggregation with single-schema targets

Linkbench: IO-bound, small server, MyRocks and InnoDB

$
0
0
This post explains MySQL performance for Linkbench on a small server. This used a low-concurrency IO-bound workload to measure response time, IO and CPU efficiency. Tests were run for MyRocks and InnoDB on an Intel NUC. The previous post used an in-memory workload.
tl;dr - for IO-bound linkbench
  • 99th percentile response times are more than 2X better with MyRocks than InnoDB
  • Throughput results are mixed -- InnoDB 5.6 loads faster, MyRocks does transactions faster. The relative load rate for MyRocks is ~0.84 the InnoDB rate. The relative transaction rate for MyRocks is ~1.14X the InnoDB rate.
  • CPU efficiency is mixed. MyRocks uses more CPU than InnoDB 5.6 on the load and less on transactions.
  • MyRocks is more write efficient. Modern InnoDB writes ~1.6X more to storage per insert and ~25X more per transaction compared to MyRocks.
  • MyRocks is more space efficient. Uncompressed InnoDB uses ~1.5X more space than uncompressed MyRocks and ~3X more space than compressed MyRocks.
  • There is a regression from MySQL 5.6.35 to 8.0.2. InnoDB 8.0.2 gets 0.75X the load rate and 0.90X the transaction rate compared to InnoDB 5.6.35. New CPU overhead is the problem as InnoDB 8.0.2 uses 1.27X more CPU for the load and 1.15X more CPU for transactions compared to InnoDB 5.6.35.

Configuration

I used my Linkbench repo and helper scripts to run linkbench with maxid1=80M, loaders=1 and requestors=1, so there will be 2 concurrent connections doing the load and 1 connection running transactions after the load finishes. My linkbench repo has a recent commit that changes the Linkbench workload, and this test included that commit. The test pattern is 1) load and 2) transactions. The transactions were run in 12 1-hour loops and I share results from the last hour. The test server is the i5 NUC described here, with 4 HW threads, an SSD and 16gb of RAM. The database is larger than RAM after the load.

Tests were run for MyRocks from FB MySQL 5.6.35 and InnoDB from upstream MySQL. The binlog was enabled but sync on commit was disabled for the binlog and database log. All engines used jemalloc. Mostly accurate my.cnf files are here.
  • MyRocks was compiled on October 16 with git hash 1d0132. Tests were repeated without and with compression. The configuration without compression is called MySQL.none in the rest of this post. The configuration with compression is called MySQL.zstd and used zstandard for the max level, no compression for L0/L1/L2 and lz4 for the other levels.
  • Upstream 5.6.35, 5.7.17 and 8.0.2 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases. I also set innodb_purge_threads=1 to reduce mutex contention.
The performance schema was enabled for upstream InnoDB. It was disabled at compile time for MyRocks because FB MySQL still has user & table statistics for monitoring.
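
For illustration, a compression setup like the MySQL.zstd configuration described above might look roughly like this in my.cnf. This is a hedged sketch: the option string syntax and level counts are assumptions, so check the RocksDB/MyRocks documentation for your build before using it.

# no compression for the first levels, lz4 for the middle levels, zstandard for the bottommost level
rocksdb_default_cf_options=compression_per_level=kNoCompression:kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kLZ4Compression;bottommost_compression=kZSTD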

Graphs

All of the data is here. I adjusted iostat metrics for MyRocks because it currently counts bytes trimmed as bytes written which is an issue for RocksDB but my adjustment is not exact. The first two graphs show the load and transaction rates relative to the rate for InnoDB from MySQL 5.6.35.
  • p99 response times are more than 2X better for MyRocks than InnoDB 5.6
  • MyRocks gets ~0.84X the load rate and ~1.14X the transaction rate compared to InnoDB 5.6
  • InnoDB 8.0 gets 0.75X the load rate and 0.90X the transaction rate compared to InnoDB 5.6
CPU efficiency is mixed. These graphs have CPU overhead per insert during the load and per transaction. This is measured by the us and sy columns from vmstat. The values on the graph are relative to InnoDB from MySQL 5.6.35.

MyRocks uses more CPU than InnoDB 5.6 on the load and less for transactions. Some of the CPU overhead for the load is from compaction, and that is decoupled from the user connections doing inserts.

There is a CPU regression from MySQL 5.6.35 to 8.0.2 for InnoDB. InnoDB 8.0.2 uses 1.27X more CPU than InnoDB 5.6.35 for the load and 1.15X more for transactions. I assume this is from new code.
MyRocks is more write efficient. These graphs have the KB written to storage per insert during the load and per transaction. The values on the graphs are relative to InnoDB from MySQL 5.6.35. Modern InnoDB writes ~1.6X more to storage per insert and ~25X more per transaction compared to MyRocks. An SSD will last longer with MyRocks.
MyRocks is more space efficient. Uncompressed InnoDB uses more space than MyRocks after both the load and transaction tests. It uses ~1.5X more than uncompressed MyRocks and ~3X more than compressed MyRocks. One problem for InnoDB is B-Tree fragmentation. Leveled compaction in MyRocks wastes less space. The graph has the database size in GB (not using relative values here).

JSON_TABLE

$
0
0
JSON data is a wonderful way to store data without needing a schema, but what about when you have to yank that data out of the database and apply some sort of formatting to it? Well, then you need JSON_TABLE.

JSON_TABLE takes free-form JSON data and applies some formatting to it. For this example we will use the world_x sample database's countryinfo table. What is desired is the name of the country and the year of independence, but only for the years after 1992. Sounds like a SQL query against JSON data, right? Well, that is exactly what we are doing.

We tell the MySQL server that we are going to take the $.Name and $.IndepYear keys' values from the JSON-formatted doc column in the table, format them into a string and an integer respectively, and alias the key values' names to table column names that we can use for qualifiers in an SQL statement.


mysql> select country_name, IndyYear from countryinfo,
json_table(doc,"$" columns (country_name char(20) path "$.Name",
IndyYear int path "$.IndepYear")) as stuff
where IndyYear > 1992;

+----------------+----------+
| country_name   | IndyYear |
+----------------+----------+
| Czech Republic |     1993 |
| Eritrea        |     1993 |
| Palau          |     1994 |
| Slovakia       |     1993 |
+----------------+----------+
4 rows in set, 67 warnings (0.00 sec)



mysql>

So what else can JSON_TABLE do? How about default values for missing values? Or checking that a key exists in a document? More on that next time. For now, if you want to try MySQL 8.0.3 with JSON_TABLE, you need to head to Labs.MySQL.COM to test this experimental feature.
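
As a hedged preview of those two features, here is roughly what the syntax looks like in the labs build; treat the details (DEFAULT ... ON EMPTY and EXISTS PATH) as assumptions that may change before GA:

mysql> select country_name, IndyYear, has_indep from countryinfo,
json_table(doc, "$" columns (country_name char(20) path "$.Name",
IndyYear int path "$.IndepYear" default '0' on empty,
has_indep int exists path "$.IndepYear")) as stuff
limit 5;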

SQLyog helped Steven Manage MySQL Databases for over 15 Years

$
0
0

The story of Steven Mapes, an experienced software developer who has been using the SQLyog MySQL GUI since 2002.

We were so thrilled to speak with Steven Mapes for two simple reasons. One, he is an ardent user of SQLyog. Two, he has been using the tool since 2002 (we had released the GA version of SQLyog in 2002). And it is users like Steven, with their constant support and feedback, who make the product successful and keep us going.

We go down memory lane and learn some interesting facts about SQLyog that made Steven stick with the tool for more than a decade.

Steven Mapes is a self-employed software solutions provider who has been developing web-based polyglot solutions for clients, often hosted within the cloud, since 2012. Before that, he was the Head of IT for Moko Social Media in the UK. When asked about how he came across SQLyog and the need to use the tool, Steven says, "My first exposure to SQLyog was back in 2002 when the company I worked for at the time introduced a MySQL database to run their new web applications alongside their existing MSSQL server."

Finding a MySQL client tool

As a software engineer and DBA, Steven wanted to use a tool that would help him manage the MySQL database in a way that felt familiar to the MSSQL Query Analyzer tool. He wanted the tool to include syntax checking, data export, multiple queries, connections to multiple databases, along with GUI tools to manage users to ease the task for his colleagues. For the same reason, he evaluated numerous other IDEs such as the MySQL CLI client, SQL Workbench, Zend Studios, PHPEdit, to name a few.

Why SQLyog?

He found SQLyog to be the most natural fit to his requirements. Steven’s primary purpose of using SQLyog was to develop and test SQL queries, create and manage stored procedures and compare database schemas across different connections. As he puts it,

“SQLyog provides easy to use backup features, schema comparisons and data transfer which meant that I could easily compare the development schemas to the live schema (this was pre-framework code) while using an interface to query the database that felt familiar and did not have the memory leaks of SQL Workbench.”

He finds backup and data/schema export extremely useful, along with the schema comparison tools. Steven has clients who do not use framework-based ORMs, so SQLyog's schema comparison is perfect for rapidly comparing live and development schemas. SQLyog made any migrations rapid and obvious, especially when it comes to collation and charset differences.

Customer happiness

SQLyog decreased the management time and at the same time accelerated the development process. According to Steven,

‘It is a tool that I have introduced to various companies and individuals, and it is still my go-to choice as a GUI tool to connect to MySQL whether it’s from a Windows or Linux client. It is an excellent product.’

The only improvement Steven would like to see in SQLyog is a native Linux client, which would help with some rendering bugs that occur when you run the product over WINE. We hear you, Steven, and we aim to make SQLyog as perfect a tool as we can to help users like you.


SQLyog delivers more robust operation than free database administration tools and comes with useful features that help you get the most out of your DBA tasks. Download SQLyog free trial.

The post SQLyog helped Steven Manage MySQL Databases for over 15 Years appeared first on SQLyog Blog.

This Week in Data with Colin Charles 17: AWS Re:Invent, a New Book on MySQL Cluster and Another Call Out for Percona Live 2018

$
0
0

Colin Charles, Open Source Database evangelist for Percona

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The CFP for Percona Live Santa Clara 2018 closes December 22, 2017: please consider submitting as soon as possible. We want to make an early announcement of talks, so we’ll definitely do a first pass even before the CFP date closes. Keep in mind the expanded view of what we are after: it’s more than just MySQL and MongoDB. And don’t forget that with one day less, there will be intense competition to fit all the content in.

A new book on MySQL Cluster is out: Pro MySQL NDB Cluster by Jesper Wisborg Krogh and Mikiya Okuno. At 690 pages, it is a weighty tome, and something I fully plan on reading, considering I haven’t played with NDBCLUSTER for quite some time.

Did you know that since MySQL 5.7.17, connection control plugins are included? They help DBAs introduce an increasing delay in server response to clients after a certain number of consecutive failed connection attempts. Read more at the connection control plugins.

While there are a tonne of announcements coming out from the Amazon re:Invent 2017 event, I highly recommend also reading Some data of interest as AWS reinvent 2017 ramps up by James Governor. Telemetry data from sumologic’s 1,500 largest customers suggest that NoSQL database usage has overtaken relational database workloads! Read The State of Modern Applications in the Cloud. Page 8 tells us that MySQL is the #1 database on AWS (I don’t see MariaDB Server being mentioned which is odd; did they lump it in together?), and MySQL, Redis & MongoDB account for 40% of database adoption on AWS. In other news, Andy Jassy also mentions that less than 1.5 months after hitting 40,000 database migrations, they’ve gone past 45,000 over the Thanksgiving holiday last week. Have you started using AWS Database Migration Service?

Releases

Link List

Upcoming appearances

  • ACMUG 2017 gathering – Beijing, China, December 9-10 2017 – it was very exciting being there in 2016, I can only imagine it’s going to be bigger and better in 2017, since it is now two days long!

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Percona Monitoring and Management 1.5: QAN in Grafana Interface

$
0
0

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

For Percona Monitoring and Management users, you might notice that QAN appears a little differently in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN.  Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

Percona Monitoring and Management 1.5 QAN 1

Starting from the PMM landing page, you still see two buttons – one for Metrics Monitor and another for Query Analytics (this hasn’t changed):

Percona Monitoring and Management 1.5 QAN 2

Once you select Query Analytics on the left, you see the new Metrics Monitor dashboard page for PMM Query Analytics. It is now hosted as a Metrics Monitor dashboard, and notice the URL is no longer /qan:

Percona Monitoring and Management 1.5 QAN 3

Another advantage of the Metrics Monitor dashboard integration is that the QAN inherits the host selector from Grafana, which supports partial string matching. This makes it simpler to find the host you’re searching for if you have more than a handful of instances:

Percona Monitoring and Management 1.5 QAN 4

The last feature enhancement worth mentioning is the native Grafana time selector, which lets you select down to the minute resolution time frames. This was a frequent source of feature requests — previously PMM limited you to our pre-defined default ranges. Keep in mind that QAN has an internal archiving job that caps QAN history at eight days.

Percona Monitoring and Management 1.5 QAN 5

Last but not least is the ability to toggle between the default dark interface and the optional white. Look for the small lightbulb icon at the bottom left of any QAN screen and give it a try!

Percona Monitoring and Management 1.5 QAN 7

We hope you enjoy the new interface, and we look forward to your feedback on these improvements!

Monitoring RDS MySQL Performance Metrics

$
0
0

Amazon Web Services (AWS) is a cloud platform that offers a wide variety of services including computing power, database storage, content delivery and other functionality that targets businesses of all sizes. One of their database solutions includes the Amazon Relational Database Service. Amazon RDS includes a number of popular RDBMSes, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server, as well as tools to manage your databases and monitor their performance.

Despite the wide range of metrics available within the Amazon RDS console, there are some very good reasons for using your own monitoring tool(s) instead of, or in addition to, those offered by Amazon RDS. For example, familiarity with your own tool(s) or access to features that Amazon RDS does not provide would constitute two persuasive reasons for employing a local tool.

With traditional software monitoring platforms such as Monyog still enjoying widespread usage on cloud-based databases, vendors have been quick to accommodate cloud DBaaS platforms by adding new support and features. Case in point: Monyog version 8.1.0 introduced the RDS/Aurora OS Monitoring feature, which makes use of Amazon's CloudWatch API and utilizes the different OS metrics available through the API to fetch and display data.

In this blog, we will explore how to monitor MySQL Performance Metrics using both the Amazon monitoring tools as well as the latest version of Monyog in order to compare the strengths of each.

Amazon RDS Metrics

Amazon’s RDS database platform provides statistics and advice about many types of metrics including:

  • High CPU or RAM consumption
  • Disk space consumption
  • Network traffic
  • Database connections
  • IOPS (input output operations per second) metrics

AWS provides both automated and manual tools that you can use to monitor your RDS database(s). This setup allows you to automate monitoring tasks as much as possible while leaving some for you to manage as you see fit.

These tools are spread across several dashboards, including Amazon RDS, CloudWatch, AWS Trusted Advisor and other AWS console dashboards. Each targets different types of metrics about resource utilization and performance, as well as services like alarms and troubleshooting. We'll be focusing on CloudWatch and the Amazon RDS dashboard here.

Monitoring with Amazon CloudWatch

Amazon CloudWatch collects and processes raw data from Amazon RDS into readable, near real-time metrics. Historical information is stored for a period of two weeks, so that you can gain a better perspective on how your RDS instance is performing over time.

The CloudWatch console is located at https://console.aws.amazon.com/cloudwatch/. From there, you can change the region and view RDS metrics by choosing Metrics in the navigation pane, followed by the RDS metric namespace. RDS metrics are divided by:

  • Per-Database Metrics
  • By Database Engine
  • By Database Class
  • Across All Databases

RDS Cloud Metrics

Clicking on a metric type shows all of the applicable metrics for that type.

Each metric has a checkbox on the left-hand side that, when checked, adds that metric to the displayed graphs:

RDS CloudWatch WriteThroughput

CloudWatch metrics may also be viewed using the AWS Command Line Interface (CLI), a unified tool for managing your AWS services from one place. It can be used to download, configure and control multiple AWS services from the command line and to automate them through scripts.

The AWS CLI is built on top of the AWS SDK for Python. Once configured, you can start using all of the functionality provided by the AWS Management Console from your favorite terminal program, including Linux shells, the Windows command line, the Amazon EC2 Systems Manager, and even remotely through a terminal such as PuTTY or SSH.

The following command lists all of the metrics for the AWS/RDS namespace:

aws cloudwatch list-metrics --namespace AWS/RDS
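
Building on that, here is a hedged example of pulling a single metric over a time range with the CLI (the instance identifier and time range are placeholders):

aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --start-time 2017-12-04T00:00:00Z --end-time 2017-12-04T01:00:00Z \
    --period 300 --statistics Average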

The Amazon RDS Dashboard

Amazon RDS provides metrics that pertain directly to the functioning of your DB instances and clusters. From the Amazon RDS Dashboard, you can monitor both database and operating system (OS) metrics.

Viewing DB and OS metrics for a DB instance using the Console

After a successful sign-in to the AWS Management Console, you can access the Amazon RDS console at https://console.aws.amazon.com/rds/. From there:

  1. In the navigation pane, choose DB Instances.
  2. Select the check box to the left of the DB cluster you need information about. Under Show Monitoring, choose the option for how you want to view your metrics from the following four choices:
    • Show Multi-Graph View – Shows a summary of DB instance metrics available from Amazon CloudWatch. Each metric includes a graph showing the metric monitored over a specific time span.
    • Show Single Graph View – Shows a single metric at a time with more detail. Each metric includes a graph showing the metric monitored over a specific time span.
    • Show Latest Metrics View – Shows a summary of DB instance metrics without graphs.
    • Enhanced Monitoring – Shows a summary of OS metrics available for a DB instance with Enhanced Monitoring enabled. Each metric includes a graph showing the metric monitored over a specific time span. (Enhanced Monitoring will be covered in more detail in the next section.)

RDS Show Monitoring Views


The time range of the metrics represented by the graphs may be selected via the Time Range dropdown. It’s available for every Dashboard except Enhanced Monitoring.

Graphs are divided into many pages that can be accessed via the page buttons. Moreover, the Show All button displays all graphs on one page.

RDS Multi-graph View

Clicking on any graph brings up a more detailed view of that graph:

RDS detailed graph view

You can apply metric-specific filters on this screen as well as create applicable alarms.

Enhanced Monitoring

To gather and analyze metrics for your DB instance and its underlying operating system (OS) in real time, you can view Enhanced Monitoring metrics in the console, or consume JSON output from CloudWatch Logs in a monitoring system like Monyog.

Enhanced Monitoring is a free service until usage exceeds the free tier provided by Amazon CloudWatch Logs. After that Enhanced Monitoring is priced according to several factors, including:

  • Monitoring interval: A smaller monitoring interval results in more frequent reporting of OS metrics and increases your monitoring cost.
  • The number of DB instances: Usage costs for Enhanced Monitoring are applied for each DB instance that Enhanced Monitoring is enabled for. Hence, monitoring a large number of DB instances is more expensive than monitoring just a few.
  • Workload: DB instances whose workload is more compute-intensive have more OS process activity to report and higher costs for Enhanced Monitoring.

Visit the Amazon CloudWatch Pricing page to obtain more information on Enhanced Monitoring prices.

Viewing DB Metrics by Using the CloudWatch CLI

Amazon RDS integrates with CloudWatch metrics to provide a variety of DB instance metrics. In addition to the RDS console, you can also view RDS metrics using the AWS CLI or API.

For example, invoking the mon-get-stats CloudWatch command with the following parameters displays usage and performance statistics for a DB instance:

PROMPT> mon-get-stats FreeStorageSpace
        --dimensions="DBInstanceIdentifier=mydbinstance"
        --statistics=Average
        --namespace="AWS/RDS"
        --start-time 2015-09-29T00:00:00
        --end-time 2015-09-29T00:04:00

Results:

Time                 Average  Unit
2015-09-29 00:00:00  33.09    Percent
2015-09-29 00:01:00  32.17    Percent
2015-09-29 00:02:00  34.67    Percent
2015-09-29 00:03:00  32.33    Percent
2015-09-29 00:04:00  31.45    Percent

Here’s how to fetch the same statistics as above via the GetMetricStatistics CloudWatch API:

http://monitoring.amazonaws.com/
    ?SignatureVersion=2
    &Action=GetMetricStatistics
    &Version=2009-05-15
    &StartTime=2015-09-29T00:00:00
    &EndTime=2015-09-29T00:04:00
    &Period=60
    &Statistics.member.1=Average
    &Dimensions.member.1="DBInstanceIdentifier=mydbinstance"
    &Namespace=AWS/RDS
    &MeasureName=FreeStorageSpace
    &Timestamp=2009-10-15T17%3A48%3A21.746Z
    &AWSAccessKeyId=
    &Signature=

Integrating the Monyog Monitoring Tool with CloudWatch

Many people are unaware that you can collect metrics for database instances that reside on the Cloud using your own monitoring tools, much like you would for databases that reside on your own company infrastructure. These may offer extended monitoring functionality. For instance, you can correlate metrics from your cloud database with other parts of your infrastructure, such as applications that interact with that database. It may also be possible to massage and/or filter your metrics for specific uses. Monyog makes it easy to seamlessly integrate with the CloudWatch API in order to collect metrics from across your infrastructure and AWS.

Connecting to your RDS instance

You cannot access your RDS instances directly as you would a local database. That being said, you can connect to your MySQL instance remotely using standard database tools, provided that you’ve configured the security group for your MySQL instance to allow connections from the device you are using to establish a connection.

Newly created DB security groups don’t provide access to a DB instance by default. You must specify a range of IP addresses or an Amazon EC2 security group that can have access to the DB instance.

To add inbound rules to the security group

  1. Determine the IP address that you will use to connect to your DB instance. To determine your public IP address, you can use the service at http://checkip.amazonaws.com. If you are connecting through an Internet service provider (ISP) or from behind your firewall without a static IP address, you need to find out the range of IP addresses used by client computers.

    Use 0.0.0.0/0 to enable all IP addresses to access your public instances. This approach is perfectly acceptable for a short time in a test environment but is unsafe for production environments. In production, always authorize only the specific IP address or range of addresses that will be accessing your instances.
  2. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  3. Click on the security group that you’d like to grant users access to (highlighted in red above).
  4. Choose the Inbound Rules tab, and then click Edit.
  5. Set the protocol Type and IP Source for your new inbound rule. For instance:
    • Type: HTTP (80)
    • Source: 0.0.0.0/0.
  6. To save your settings, choose Save.

edit security group inbound rules


Once your access rules are in place, you can connect to your MySQL Instance using the mysql command line tool:

mysql -h instance-name.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u yourusername -p

The instance endpoint (ending in rds.amazonaws.com) can be found in the list of instances on the RDS console.
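
If you prefer the CLI, a hedged equivalent for looking up the endpoint is shown below (the instance identifier is a placeholder):

aws rds describe-db-instances --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].Endpoint.Address' --output text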

Establishing a Connection to your RDS instance from Monyog

In Monyog, connecting to an RDS DB is no different than one that resides on your own infrastructure. All configured servers are located on the Servers screen, which is accessible via the Servers button, usually the top icon on the left-hand side.

  • Click the ADD NEW SERVER button to bring up the Server Details dialog.

add new server button

  • On the Server Details dialog, enter the following details:
    1. Your DB Instance endpoint.
    2. The MySQL port.
    3. Your USERNAME and PASSWORD.
    4. The CONNECTION TYPE.

Amazon RDS connection screen

  • When completed, click SAVE to create the connection and close the dialog.

Once the connection has been established, Monyog will immediately begin gathering metrics about your DB instance. You will be able to see some of them on the Dashboard:

Monyog Amazon RDS Performance Metrics Dashboard

Viewing RDS Instance Metrics

The RDS/Aurora OS Monitoring feature introduced with Monyog 8.1.0 employs the CloudWatch API in order to fetch and display OS metrics that are exposed via the API. RDS/Aurora OS monitors are shown under the MONITOR GROUP as “RDS/Aurora Instance Metrics” on the Monitors page while the corresponding charts are available on the Dashboard page.

In order to be able to see the OS data, you should first verify that the RDS/Aurora Instance Metrics are enabled. To do that, click the plus (+) sign beside the MONITOR GROUP header on the MONITORS page and confirm that the slider is to the right and blue in color (and not gray):

Monyog RDS-Aurora Instance Metrics

Editing RDS Instance Metrics

Clicking the “RDS/Aurora Instance Metrics” item under the MONITOR GROUP on the Monitors page shows the configured monitors. Each item in the MONITORS table is fully editable by clicking it:

Monyog RDS-Aurora Instance Metrics – Edit monitor – CPU Utilization

Configure Custom RDS/Aurora Monitors

In addition to the default RDS/Aurora monitors, you can add new monitors via the MONITORS header dropdown list. These may be any available CloudWatch metric. Refer to the AWS site for the complete list of CloudWatch metrics available for RDS.

  1. Select Manage RDS/AURORA custom objects from the list:

monitors menu

  2. On the ADD/EDIT RDS/AURORA CUSTOM OBJECTS screen, click the plus (+) icon. Doing so will present a blank RDS/Aurora Object configuration screen. Here it is with a new Object defined:

Adding a new custom monitor

Adding the Monitor

Once you’ve configured your new RDS/Aurora Custom Object you’re ready to add it to the list of the Monyog default metrics.

  1. Go to Monitors -> RDS/Aurora Instance Metrics, click on the plus (+) icon, then select Add new monitor.
  2. Enter the Monitor name and the Monitor group name that you want to add the new monitor to. For instance, you would enter "RDS/Aurora Instance metrics" if you wanted to add it to this group.
  3. Select the “Type of counter” appropriate to your monitor:
    • MySQL: database-related information.
    • System: OS-related information.
    • Custom SQL: for a Custom SQL object.
  4. Provide a Formula name.
  5. Enter a JavaScript function in the Value field using the CloudWatch metric, like:
function() {

 	 return GetAWSMetricVal('NetworkReceiveThroughput');

}

add new monitor screen
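
As a further illustration, a custom monitor can also combine several CloudWatch metrics in one Value function using the same GetAWSMetricVal() helper shown above. The derived metric below is a hypothetical example (ReadIOPS and WriteIOPS are standard RDS CloudWatch counters):

function() {
    // hypothetical derived metric: total IOPS as the sum of read and write IOPS
    return GetAWSMetricVal('ReadIOPS') + GetAWSMetricVal('WriteIOPS');
}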

Adding New RDS/Aurora Dashboard Charts

The Monyog Dashboard page allows us to customize a dashboard with a particular set of charts. With respect to AWS RDS, you could create a dashboard with only RDS metrics charts for ease of monitoring.

In order to add an RDS/Aurora chart to the dashboard page, the corresponding RDS/Aurora custom object should be defined and enabled first. Then, follow the steps below to add the chart:

  1. From the Dashboard page, click the DASHBOARDS dropdown and select the pencil icon to the right of the Performance Metrics item:

edit dashboard

    On the Performance Metrics screen:

  2. Locate your custom object in the list or enter the name in the search box to find it:

Performance Metrics charts with new chart added

  3. Click the pencil icon to the right of the item.

In the Editor:

  1. Select the Type of Counter.
  2. Provide the Series Caption and Series Values:

     Add new chart

  3. Click the SAVE button to add the chart and close the dialog.
  4. Don’t forget to activate your new chart on the Performance Metrics screen!

Conclusion

In this blog, we explored how to monitor MySQL Performance Metrics using both the Amazon monitoring tools as well as the latest version of Monyog in order to compare the strengths of each.

We learned that:

  1. Amazon’s RDS database platform provides statistics and advice about many types of metrics. These may be configured and consumed using a variety of online tools as well as via the CLI or API.
  2. The RDS Dashboard provides monitoring of both database and operating system (OS) metrics. A number of views are available including single and multi-graph layouts.
  3. Enhanced Monitoring may be utilized to gather and analyze metrics for your DB instance and its underlying operating system (OS) in real time but at a potential additional cost.
  4. Monyog version 8.1.0 introduced the RDS/Aurora OS Monitoring feature, which uses Amazon’s CloudWatch API to fetch and display the OS metrics available through it.
  5. Monyog features a number of default RDS/Aurora monitors and provides the ability to add new monitors. These may be any available CloudWatch metric.
  6. RDS charts may be added to the Monyog Dashboard page in order to create customized views.

Moving database operations to RDS need not demand a whole new set of monitoring tools and interfaces. With Monyog’s Amazon RDS/Aurora integration, you can continue to use the monitoring tool that you already know and trust.


Monyog is a MySQL monitoring tool that improves the database performance of your MySQL powered systems. Download your free trial.

The post Monitoring RDS MySQL Performance Metrics appeared first on Monyog Blog.

tpcc-mysql, IO-bound, high-concurrency: MyRocks, InnoDB and TokuDB

This post has results for tpcc-mysql with a high-concurrency workload when the database is larger than memory. Tests were run for MyRocks, InnoDB and TokuDB. I previously shared results for an in-memory workload. While the database is larger than RAM, there are few reads from storage per transaction, unlike many of the other IO-bound benchmarks I run.

tl;dr:
  • InnoDB from upstream 5.7 has the best throughput and gets 1.18X more TPMC than upstream InnoDB 5.6. MyRocks with and without compression does slightly better than upstream InnoDB 5.6.
  • InnoDB is more CPU efficient. MyRocks uses 1.1X to 1.2X more CPU/transaction than upstream InnoDB 5.6.
  • MyRocks is more write efficient. InnoDB writes ~3X more to storage per transaction.
  • MyRocks is more space efficient. Uncompressed InnoDB uses 1.8X more space than uncompressed MyRocks and 2.9X more space than compressed MyRocks.
  • InnoDB has a regression from 5.7 to 8.0 - less throughput, more CPU overhead. I assume the problem is from new code above the storage engine.
  • InnoDB from FB MySQL 5.6 is almost as fast as upstream InnoDB 5.7. I assume several changes account for that result.

Disclaimer

For several reasons this isn't TPC-C. But this is tpcc-mysql. Read committed was used for MyRocks while the official benchmark requires repeatable read for some of the transaction types. When MyRocks supports gap locks this test can be repeated using repeatable read for it. Repeatable read was used for InnoDB and TokuDB.

Configuration

I used tpcc-mysql from Percona with my helper scripts. The test was run with 1000 warehouses and 20 customers. The database is larger than memory. The test server has 48 HW threads, fast SSD and 50gb of RAM. The database block cache (buffer pool) was 10g for MyRocks, 10g for TokuDB and 35g for InnoDB.

The test pattern is load and then run 12 hours of transactions in 1 hour loops. Results are reported for the last hour of transactions. I used 1000 milliseconds as the response time SLA.

Tests were run for MyRocks, InnoDB from upstream MySQL, InnoDB from FB MySQL and TokuDB. The binlog was enabled but sync on commit was disabled for the binlog and database log. All engines used jemalloc. Mostly accurate my.cnf files are here.
  • MyRocks was compiled on October 16 with git hash 1d0132. Tests were repeated with and without compression. The configuration without compression is called MySQL.none in the rest of this post. The configuration with compression is called MySQL.zstd and used zstandard for the max level, no compression for L0/L1/L2 and lz4 for the other levels. 
  • Upstream 5.6.35, 5.7.17, 8.0.1, 8.0.2 and 8.0.3 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases.
  • InnoDB from FB MySQL 5.6.35 was compiled on June 16 with git hash 52e058.
  • TokuDB was from Percona Server 5.7.17. Tests were done without compression and then with zlib compression.
The performance schema was enabled for upstream InnoDB and TokuDB. It was disabled at compile time for MyRocks and InnoDB from FB MySQL because FB MySQL 5.6 has user & table statistics for monitoring.
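
The "sync on commit disabled" setting mentioned above typically maps to something like the following for the InnoDB configurations. This is only a sketch with assumed values; the linked my.cnf files show what was actually used:

-- Sketch of "sync on commit disabled" for the binlog and the InnoDB redo log
-- (assumed values; see the my.cnf files linked above for the real settings):
SET GLOBAL sync_binlog = 0;                      -- do not fsync the binlog on every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;   -- write redo on commit, fsync roughly once per second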

Graphs

The graphs are from the 12th hour of the transaction test. All of the data is here. I adjusted the iostat bytes-written metrics for MyRocks because it currently counts bytes trimmed as bytes written, which is an issue for RocksDB, but my adjustment is not exact.

The first graph has the TPMC for an engine and the values are relative to the TPMC for InnoDB from upstream MySQL 5.6. TPMC is transaction throughput.
  • InnoDB from 5.7.17 does the best
  • MyRocks is slightly better than InnoDB from upstream 5.6.35.
  • MyRocks doesn't lose much throughput from using compression.
  • InnoDB from FB MySQL 5.6.35 has the second best throughput which is 1.16X more than upstream 5.6.35. I am not sure which changes explain that.
  • There is a regression from 5.7 to 8.x for upstream MySQL. I assume the cause is new code above the storage engine.

The next graph has the CPU overhead per TPMC and the values are relative to upstream InnoDB 5.6. A value > 1 means that the engine uses more CPU/transaction than the base case.
  • MyRocks uses 1.12X to 1.22X more CPU than InnoDB 5.6
  • InnoDB from FB MySQL uses less CPU than InnoDB from upstream 5.6
  • There is a CPU regression from upstream 5.7 to 8.x

This graph has the KB written to storage per TPMC and the values are relative to upstream InnoDB 5.6. MyRocks is more write efficient as InnoDB writes ~3X more to storage per transaction.
The next graph has iostat read operations per transaction. The values are relative to upstream InnoDB 5.6. The absolute values are small; most are less than 0.10. While the test database is larger than memory, the working set almost fits in memory. I tried using 2X more warehouses, but that takes longer to load and didn't change the read/transaction rate enough to justify the extra time.

This graph shows the size of the database after the load and after 12 hours of transactions. Rows are added during the transaction test, so the database is expected to grow, and the growth is a function of the transaction rate. Fortunately MyRocks and InnoDB have similar transaction rates, so it is mostly fair to compare database size.

MyRocks is more space efficient than InnoDB. Uncompressed InnoDB uses ~1.8X more space than uncompressed MyRocks and 2.9X more space than compressed MyRocks.

MySQL 8.0 Window Functions: A Quick Taste

Window Functions

In this post, we’ll briefly look at window functions in MySQL 8.0.

One of the major features coming in MySQL 8.0 is support for window functions. The detailed documentation is already available in the manual (Window functions). I wanted to take a quick look at the cases where window functions help.

Probably one of the most frequent limitations of MySQL’s SQL syntax was analyzing a dataset. I tried to find the answer to the following question: “Find the Top N entries for each group in a grouped result.”

To give an example, I will refer to this request on Stackoverflow. While there is a solution, it is hardly intuitive and portable.
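
To see why, here is a rough sketch of the kind of correlated-subquery workaround that was needed before 8.0. The table t(grp, val) is hypothetical; the query keeps the top 10 values per group, but it is neither obvious nor cheap:

-- Pre-8.0 "top N per group" workaround (sketch; table t with columns grp and val is hypothetical)
SELECT grp, val
FROM t AS outer_t
WHERE (SELECT COUNT(*)
       FROM t AS inner_t
       WHERE inner_t.grp = outer_t.grp
         AND inner_t.val > outer_t.val) < 10   -- fewer than 10 rows rank above this one
ORDER BY grp, val DESC;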

This is a popular problem, so databases without window support try to solve it in different ways. For example, ClickHouse introduced a special extension for LIMIT. You can use LIMIT n BY m to find “m” entries per group.

This is a case where window functions come in handy.

As an example, I will take the IMDB database and find the TOP 10 movies per century (well, the previous 20th and the current 21st). To download the IMDB dataset, you need to have an AWS account and download the data from S3 storage (the details are provided on the IMDB page).

I will use the following query with MySQL 8.0.3:

SELECT primaryTitle, century*100, rating, genres, rn AS `Rank`
FROM (
  SELECT primaryTitle, startYear div 100 AS century, rating, genres,
         RANK() OVER (PARTITION BY startYear div 100 ORDER BY rating desc) rn
  FROM title, ratings
  WHERE title.tconst = ratings.tconst
    AND titleType = 'movie'
    AND numVotes > 100000
) t1
WHERE rn <= 10
ORDER BY century, rating desc;

The main part of this query is RANK() OVER (PARTITION BY startYear div 100 ORDER BY rating desc), which is the mentioned window function. PARTITION BY divides rows into groups, ORDER BY specifies the order and RANK() calculates the rank using the order in the specific group.

The result is:

+---------------------------------------------------+-------------+--------+----------------------------+------+
| primaryTitle                                      | century*100 | rating | genres                     | Rank |
+---------------------------------------------------+-------------+--------+----------------------------+------+
| The Shawshank Redemption                          |        1900 |    9.3 | Crime,Drama                |    1 |
| The Godfather                                     |        1900 |    9.2 | Crime,Drama                |    2 |
| The Godfather: Part II                            |        1900 |      9 | Crime,Drama                |    3 |
| 12 Angry Men                                      |        1900 |    8.9 | Crime,Drama                |    4 |
| The Good, the Bad and the Ugly                    |        1900 |    8.9 | Western                    |    4 |
| Schindler's List                                  |        1900 |    8.9 | Biography,Drama,History    |    4 |
| Pulp Fiction                                      |        1900 |    8.9 | Crime,Drama                |    4 |
| Star Wars: Episode V - The Empire Strikes Back    |        1900 |    8.8 | Action,Adventure,Fantasy   |    8 |
| Forrest Gump                                      |        1900 |    8.8 | Comedy,Drama,Romance       |    8 |
| Fight Club                                        |        1900 |    8.8 | Drama                      |    8 |
| The Dark Knight                                   |        2000 |      9 | Action,Crime,Drama         |    1 |
| The Lord of the Rings: The Return of the King     |        2000 |    8.9 | Adventure,Drama,Fantasy    |    2 |
| The Lord of the Rings: The Fellowship of the Ring |        2000 |    8.8 | Adventure,Drama,Fantasy    |    3 |
| Inception                                         |        2000 |    8.8 | Action,Adventure,Sci-Fi    |    3 |
| The Lord of the Rings: The Two Towers             |        2000 |    8.7 | Action,Adventure,Drama     |    5 |
| City of God                                       |        2000 |    8.7 | Crime,Drama                |    5 |
| Spirited Away                                     |        2000 |    8.6 | Adventure,Animation,Family |    7 |
| Interstellar                                      |        2000 |    8.6 | Adventure,Drama,Sci-Fi     |    7 |
| The Intouchables                                  |        2000 |    8.6 | Biography,Comedy,Drama     |    7 |
| Gladiator                                         |        2000 |    8.5 | Action,Adventure,Drama     |   10 |
| Memento                                           |        2000 |    8.5 | Mystery,Thriller           |   10 |
| The Pianist                                       |        2000 |    8.5 | Biography,Drama,Music      |   10 |
| The Lives of Others                               |        2000 |    8.5 | Drama,Thriller             |   10 |
| The Departed                                      |        2000 |    8.5 | Crime,Drama,Thriller       |   10 |
| The Prestige                                      |        2000 |    8.5 | Drama,Mystery,Sci-Fi       |   10 |
| Like Stars on Earth                               |        2000 |    8.5 | Drama,Family               |   10 |
| Whiplash                                          |        2000 |    8.5 | Drama,Music                |   10 |
+---------------------------------------------------+-------------+--------+----------------------------+------+
27 rows in set (0.19 sec)

The previous century was dominated by “The Godfather” and the current one by “The Lord of the Rings”. While we may or may not agree with the results, this is what the IMDB rating tells us.
If we look at the result set, we can see that there are actually more than ten movies per century, but this is how the RANK() function works: it gives the same rank to rows with an identical rating, and if there are multiple rows with the same rating, all of them are included in the result set.
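
If you want exactly ten rows per group regardless of ties, ROW_NUMBER() can be used instead of RANK(). Here is a variation of the query above, as a sketch:

-- Variation of the query above using ROW_NUMBER(), which breaks ties arbitrarily
-- and therefore returns exactly 10 movies per century
SELECT primaryTitle, century*100, rating, genres, rn AS `Rank`
FROM (
  SELECT primaryTitle, startYear div 100 AS century, rating, genres,
         ROW_NUMBER() OVER (PARTITION BY startYear div 100 ORDER BY rating desc) rn
  FROM title, ratings
  WHERE title.tconst = ratings.tconst
    AND titleType = 'movie'
    AND numVotes > 100000
) t1
WHERE rn <= 10
ORDER BY century, rating desc;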

I welcome the addition of window functions to MySQL 8.0. It definitely simplifies some complex analytical queries. Unfortunately, complex queries will still be single-threaded, which is a performance-limiting factor. Hopefully, we will see multi-threaded query execution in future MySQL releases.

MySQL Crashes on DDL statement: A Lesson on purge threads


Recently there have been several issues reported to me about DDL activity causing MySQL crash scenarios. In one case it stemmed from dropping multiple databases, one after the other in rapid succession. But in the case I was recently dealing with directly, where we were upgrading to MySQL 5.7, it was the result of mysql_upgrade running an ALTER TABLE FORCE on a 2.2 TB table in order to convert it to the new data format that supports microsecond precision.

The issue occurred after the intermediate table had been completely filled with all the necessary data, right when MySQL was about to swap out the existing table for the intermediate one. After a period of time MySQL crashed, and the following InnoDB monitor output was found in the error log.

 


2017-11-19T00:22:44.070363Z 7 [ERROR] InnoDB: The age of the last checkpoint is 379157140, which exceeds the log group capacity 377483674.
InnoDB: ###### Diagnostic info printed to the standard error stream
2017-11-19T00:22:57.447115Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 140690671367936 has waited at srv0srv.cc line 1982 for 921.00 seconds the semaphore:
X-lock on RW-latch at 0x750a368 created in file dict0dict.cc line 1184
a writer (thread id 140690645923584) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 862
Last time write locked in file /build/mysql-5.7-RrA758/mysql-5.7-5.7.20/storage/innobase/row/row0mysql.cc line 4305
2017-11-19T00:22:57.447181Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 140690535433984 has waited at buf0flu.cc line 1209 for 660.00 seconds the semaphore:
SX-lock on RW-latch at 0x7ff5d4a5eaa0 created in file buf0buf.cc line 1460
a writer (thread id 140690645923584) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0sel.cc line 1335
Last time write locked in file /build/mysql-5.7-RrA758/mysql-5.7-5.7.20/storage/innobase/row/row0upd.cc line 2856
2017-11-19T00:22:57.447210Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 140690654582528 has waited at row0purge.cc line 862 for 923.00 seconds the semaphore:
S-lock on RW-latch at 0x750a368 created in file dict0dict.cc line 1184
a writer (thread id 140690645923584) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 862
Last time write locked in file /build/mysql-5.7-RrA758/mysql-5.7-5.7.20/storage/innobase/row/row0mysql.cc line 4305
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending preads 0, pwrites 0
.....
----------------------------
END OF INNODB MONITOR OUTPUT
============================
InnoDB: ###### Diagnostic info printed to the standard error stream
2017-11-19T00:23:27.448900Z 0 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
2017-11-18 19:23:27 0x7ff51a7d9700 InnoDB: Assertion failure in thread 140690688153344 in file ut0ut.cc line 916


 

The key thing to note here is that there was a purge thread (row0purge.cc) waiting on a latch in the data dictionary (dict0dict.cc), and that semaphore wait lasted longer than 600 seconds. When a semaphore wait lasts longer than 600 seconds, MySQL intentionally crashes itself.

There were a few other things that we noticed in the output, specifically a warning that the last checkpoint age was beyond the capacity of the redo log files, indicating that redo was accumulating faster than InnoDB could flush changes to persistent storage, and this was definitely causing a bottleneck.
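
If you want to watch this yourself, the checkpoint age can be tracked from the LOG section of the InnoDB monitor output; it is roughly "Log sequence number" minus "Last checkpoint at":

-- Print the InnoDB monitor output; the LOG section shows the current LSN
-- and the last checkpoint LSN, whose difference is the checkpoint age.
SHOW ENGINE INNODB STATUS\G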

In order to get past this issue, we increased the redo log space to 8G, as some testing indicated that this is how much data would be written to the redo log during the ALTER TABLE FORCE process on a table that was 2.2 TB in size. The amount of memory and the buffer pool size were also increased in order to speed up the ALTER TABLE FORCE process.

One other key change, and the change that we believe resolved the issue, was setting the variable innodb_purge_threads from the 5.7 default value of 4 down to 1. If you check the MySQL reference guide, you’ll see a note stating that this setting should be kept low so that the purge threads do not contend with each other for access to the busy tables. With a little googling you can also find bug reports like this one where setting this variable to 1 was suggested for similar issues.
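
As a rough sketch, the relevant settings can be checked with a query like the one below. None of these variables are dynamic in 5.7, so changes go into my.cnf and require a restart:

-- Check the purge thread count and redo log sizing discussed above
-- (not dynamic in 5.7; change them in my.cnf and restart):
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_purge_threads', 'innodb_log_file_size', 'innodb_log_files_in_group');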

Conclusion

The takeaway from this is simply to suggest that if you are running MySQL 5.7, you will want to consider changing the variable innodb_purge_threads from its new default value of 4 to the 5.6 default value of 1 to avoid purge thread contention. That is, unless you have a strong need for multiple purge threads running. This also begs the question of why the default was changed from 1 to 4 in 5.7, as it could be considered an unsafe value.


MySQL Enterprise Monitor 4.0.2 has been released


We are pleased to announce that MySQL Enterprise Monitor 4.0.2 is now available for download on the My Oracle Support (MOS) web site. It will also be available for download via the Oracle Software Delivery Cloud in a few days. MySQL Enterprise Monitor is the best-in-class tool for monitoring and management of your MySQL assets and is included with your MySQL Enterprise Edition and MySQL Enterprise Carrier Grade subscriptions.

You can find more information on the contents of this release in the change log.

Highlights of MySQL Enterprise Monitor 4.0 include:

  • Modern look and feel: a redesigned user interface delivers a vastly improved overall user experience. The visible changes--the layout, the icons, and the overall aesthetics--provide a more natural and intuitive experience. Views dynamically change and adjust to your current context and the assets you've selected, everything from individual MySQL instances or hosts to your custom Groups, to your complex replication and clustered topologies. Additional enhancements include a much more responsive UI and a greater capacity to scale, allowing you to more effectively manage thousands of MySQL related assets.
  • MySQL Cluster monitoring: we now auto-discover your MySQL Cluster installations and give you visibility into the performance, availability, and health of each MySQL instance and NDB process, as well as the health of the MySQL Cluster instance as a single logical system. The Overview dashboard displays detailed instrumentation available for each MySQL Cluster and the Topology dashboard displays the current configuration of your MySQL Cluster enabling you to quickly see the status of the MySQL Cluster instance as a whole and each individual process. The Topology dashboard allows you to easily see how your MySQL Cluster installations are currently functioning.
  • A User Statistics report, which provides an easy way to monitor MySQL resource utilization broken down by user.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

You will also find the binaries on the Oracle Software Delivery Cloud soon.  Type "MySQL Enterprise Monitor" in the search box, or enter a license name to find Enterprise Monitor along with other MySQL products: "MySQL Enterprise Edition" or "MySQL Cluster Carrier Edition".  Then select your platform.

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact.

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team


JSON_TABLE Part 2

The JSON_TABLE function has some very interesting uses. JSON data is great for schemaless jobs, but what about when you need to pretend you have a schema and/or need to create tables based on values in a JSON document?

Existence and Defaults

Let's start with some simple data:

mysql> SELECT * FROM t1;
+-----+--------------------------+
| _id | doc                      |
+-----+--------------------------+
|   1 | {"x": 0, "name": "Bill"} |
|   2 | {"x": 1, "name": "Mary"} |
|   3 | {"name": "Pete"}         |
+-----+--------------------------+
3 rows in set (0.00 sec)
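
If you want to follow along, a minimal setup like this reproduces the data (the table definition is my assumption, inferred from the output above):

-- Assumed table definition, inferred from the output above
CREATE TABLE t1 (_id INT PRIMARY KEY, doc JSON);
INSERT INTO t1 VALUES
  (1, '{"x": 0, "name": "Bill"}'),
  (2, '{"x": 1, "name": "Mary"}'),
  (3, '{"name": "Pete"}');
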
We have three documents, and you will notice that the third record is missing an 'x' key/value pair. We can use JSON_TABLE to provide a value when that value is missing. For this example, a missing 'x' is given a value of 999.

mysql> select * from t1, json_table(doc,"$" columns
(xHasValue int path "$.x" default '999' on empty,
hasname char(5) exists path "$.name",
mojo char(5) exists path "$.mojo")) as t2;

+-----+--------------------------+-----------+---------+------+
| _id | doc                      | xHasValue | hasname | mojo |
+-----+--------------------------+-----------+---------+------+
|   1 | {"x": 0, "name": "Bill"} |         0 | 1       | 0    |
|   2 | {"x": 1, "name": "Mary"} |         1 | 1       | 0    |
|   3 | {"name": "Pete"}         |       999 | 1       | 0    |
+-----+--------------------------+-----------+---------+------+
3 rows in set (0.00 sec)

mysql>

Do we have that data?

We can also use the exists qualifier to test for the existence of a key. The last two lines in the query check for the existence of keys: name, which does exist, reports a '1', while mojo, which does not exist, reports a '0'. We can of course use these binary fields in our query.

mysql> select * from t1, json_table(doc,"$" columns
(xHasValue int path "$.x" default '999' on empty,
hasname char(5) exists path "$.name",
mojo char(5) exists path "$.mojo")) as t2
WHERE hasname=1 and xHasValue=1
;
+-----+--------------------------+-----------+---------+------+
| _id | doc                      | xHasValue | hasname | mojo |
+-----+--------------------------+-----------+---------+------+
|   2 | {"x": 1, "name": "Mary"} |         1 | 1       | 0    |
+-----+--------------------------+-----------+---------+------+
1 row in set (0.01 sec)

mysql>
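
Since the exists columns are just 0/1 flags, they can also be aggregated. Here is a small sketch that counts how many documents carry each key:

-- Sketch: count how many documents contain each key, using the 0/1 exists flags
SELECT SUM(hasname) AS docs_with_name, SUM(mojo) AS docs_with_mojo
FROM t1, JSON_TABLE(doc, "$" COLUMNS (
           hasname CHAR(5) EXISTS PATH "$.name",
           mojo    CHAR(5) EXISTS PATH "$.mojo")) AS t2;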

Current projects


Occasionally I like to list or describe what I’m working on in my free time. Here are the past two:

Changes since February

A bunch of the stuff I worked on in February evolved into something else. lm2, my key-value storage library, is “done.” It’s the primary storage option in all of my projects. Epsilon is gone but almost all of that work is now in Cistern. Alpha was an experiment I just got rid of but I think I want to explore web analytics again some day. I haven’t been working on the CSV stuff.

Current focus

  • Transverse, my goal forecasting app. It’s in open beta. I’m so glad I got this done! I use it everyday and I’m crushing my goals.
    • Transverse depends on lm2 and the Rig so I’m obviously working on both when I have to.
  • MySQL Explain Analyzer. I need to redesign this.
  • A rating tracking app. Still in the idea phase. I like to rate stuff like food, coffee, movies, books, music, etc. I’m using a spreadsheet right now but I want to make an app that’s slightly better than a spreadsheet. That’s what I did with Transverse and I want to keep that going.
  • Another app to store my guitar improv noodles. I used to do a Tuesday night improv thing a few years ago where I would sit down with my guitar and come up with a 2-3 minute recording and post it. It would be good for me to start doing that again but I don’t want to save that stuff on Tumblr or SoundCloud. I’m using DigitalOcean Spaces (their S3-compatible object store service) and it has a $5/mo 250 GB minimum and there’s no way I can fill that with just Transverse.

Non-programming stuff

  • Guitar. I got a Gibson Les Paul Classic recently and it’s super fun. I’m trying to get into jazz instead of just playing AC/DC or Joe Satriani covers all night.
  • Reading. Thanks to Transverse I've been reading roughly two books a month. I did some napkin math recently and learned that I'll probably get to read only about 1200 more books in my lifetime, so… better read as much as I can!
  • Design. I’m really liking Sketch to create designs and mockups for my app ideas. Most days I spend more time thinking about design than actual code.

Yes, I’m very busy. My social media hiatus is helping. This is all very exciting stuff! So exciting that I’m up early and up late and forget about sleep!

Deploying MySQL, MariaDB, Percona Server, MongoDB or PostgreSQL - Made Easy with ClusterControl


Helping users securely automate and manage their open source databases has been at the core of our efforts from the inception of Severalnines.

And ever since the first release of our flagship product, ClusterControl, it’s always been about making it as easy and secure as possible for users to deploy complex, open source database cluster technologies in any environment.

Since our first steps with deployment, automation and management we’ve perfected the art of securely deploying highly available open source database infrastructures by developing ClusterControl from a deployment and monitoring tool to a full-blown automation and management system adopted by thousands of users worldwide.

As a result, ClusterControl can be used today to deploy, monitor, and manage over a dozen versions of the most popular open source database technologies - on premise or in the cloud.

Whether you’re looking to deploy MySQL standalone, MySQL replication, MySQL Cluster, Galera Cluster, MariaDB, MariaDB Cluster, Percona XtraDB and Percona Server for MongoDB, MongoDB itself and PostgreSQL - ClusterControl has you covered.

In addition to the database stores, users can also deploy and manage load balancing technologies such as HAProxy, ProxySQL, MaxScale and Keepalived.

“Very easy to deploy a cluster, also it facilitates administration and monitoring.”

Michel Berger, IT Applications Manager, European Broadcasting Union (EBU)

Using ClusterControl, database clusters can either be deployed from scratch or imported from an existing setup.

A deployment wizard makes it easy and secure to deploy production-ready database clusters with a point and click interface that walks the users through the deployment process step by step.

Select Deploy or Import Cluster


Walk through of the Deploy Wizard

View your cluster list

“ClusterControl is great for deploying and managing a high availability infrastructure. Also find the interface very easy to manage.”

Paul Masterson, Infrastructure Architect, Dunnes

Deploying with the ClusterControl CLI

Users can also choose to work with our CLI, which allows for easy integration with infrastructure orchestration tools such as Ansible.

s9s cluster \
  --create \
  --cluster-type=galera \
  --nodes='10.10.10.26;10.10.10.27;10.10.10.28' \
  --vendor=percona \
  --cluster-name=PXC_CENTOS7 \
  --provider-version=5.7 \
  --os-user=vagrant \
  --wait

The ClusterControl deployment supports multiple NICs and templated configurations.

In short, ClusterControl provides:

  • Topology-aware deployment jobs for MySQL, MariaDB, Percona, MongoDB and PostgreSQL
  • Self-service and on-demand
  • From standalone nodes to load-balanced clusters
  • Your choice of barebone servers, private/public cloud and containers

To see for yourself, download ClusterControl today and give us your feedback.

Percona Monitoring and Management 1.5.2 Is Now Available

Percona Monitoring and Management

Percona announces the release of Percona Monitoring and Management 1.5.2. This release contains fixes to bugs found after Percona Monitoring and Management 1.5.1 was released.

Bug fixes

  • PMM-1790: QAN displayed query metrics even for a host that was not configured for mysql:queries or mongodb:queries. We have fixed the behaviour to display an appropriate warning message when there are no query metrics for the selected host.
  • PMM-1826: If PMM Server 1.5.0 is deployed via Docker, the Update button would not upgrade the instance to a later release.
  • PMM-1830: If PMM Server 1.5.0 is deployed via AMI (Amazon Machine Image) instance, the Update button would not upgrade the instance to a later release.