Channel: Planet MySQL

We've Moved!


I want to take a moment to thank you for reading this blog. We are working very hard on cool tools for you to use with MySQL and we really enjoy spreading the news of these tools to you.  With this in mind I wanted to update you on something that is changing. We are moving to a new blog home at http://insidemysql.com/.  From this point on all new content will be posted on our new blog and we encourage each of you to update your bookmarks accordingly.  Our aggregator at http://planet.mysql.com/ has already been updated.

Don't worry! The old posts will still be here so your old bookmarks will still work.  You can find our new Windows focused category at

http://insidemysql.com/category/mysql-development/mysql-on-windows/.  Thank you for your continued support and we'll see you over in our new place!



The Top 10 Most Inappropriately Misappropriated Lyrics For Monitoring


As I flipped the channels in my car, Josh Groban’s soulful voice floated over my speakers and caressed my very soul, singing about our shared timeless yearning to solve the monitoring problems that plague us all:

I can’t cage you in my arms
When my heart is jumping forward
To avoid your false alarms

“These lyrics are so deeply meaningful to me!” I thought, reaching for the box of Kleenex. SING IT, Josh! And he did, his soaring voice speaking of anomaly detection algorithms (I understand the following lyrics to be addressed to a system metric such as CPU, don’t you agree?).

And you can’t tell me not to stay
When I opened up your window
And I watched you fly away

Now there’s a song I can relate to! If you need to stop reading and go listen to Josh sing his sensible, logical lyrics that always clearly mean something, we’ll be here when you return.

His Grobosity

But seriously, in the spirit of a little #monitoringfun, how about some lyrics that address the topics that are really near and dear to our hearts? Here are our top 10 picks for misappropriated (and sometimes redacted) lyrics that aren’t really about monitoring.

10: Pager Duty: We Own It (Mike Posner)

If I had my on-call time back, I wouldn’t change a thing, either. Yo, yo, what what?

I’m growing up I gotta buzz call it motivation
Blowing up like iPhone push notifications
Priceless if I had it back I wouldn’t change a thing
It’s far from over so I tell the fat lady sing

9: Nagios: System Check (Roni Size)

Make sure you set those thresholds right!

System Check, Make sure everything is operational
You can’t hold down what can’t be held down
New formulas forcing you to move around
Pushing to the center of your mind, we’ll make connection
Start off from the top then work it to your mid section
We use ready formulas and many tactics
First we’re doing somersaults, then we’re busting back flips

7: Blameless Postmortems: Where Did We Go Wrong (Toni Braxton/Babyface)

With much love to @mindweather.

Where did we go wrong?
Is it all my fault?
Where did we go wrong?
Is it all my fault?

6: Desperately Mitigating: I Will Fix You (Coldplay)

Ah, the feeling of staying up all night and all day for two days in a row, trying to get things back online.

When you try your best, but you don’t succeed
When you get what you want, but not what you need
When you feel so tired, but you can’t sleep
Lights will guide you home
And ignite your bones
I will try to fix you

(and another blameless postmortem reference?)

Tears stream down your face
When you lose something you cannot replace
I promise you I will learn from my mistakes
Tears stream down your face and I
… And I will try to fix you

5: Resilience in Complex Adaptive Systems: Discovery Channel (The Bloodhound Gang)

A single line is enough for this one… but you might wanna watch and learn.

You want it rough, you’re out of bounds

4: Uh-Oh, We Crashed: Downtime, Jo Dee Messina

We tell ourselves this, too.

I tell myself that everything will be just fine
I’m just going through a little downtime

3: How We Got Here: Speed The Collapse, Metric

Besides being impossible to stop listening to, Metric is obviously a mandatory band for any post on monitoring lyrics. Because, well, metrics. You get it.

Every warning we ignored
Drifting in from distant shores

2: MonitoringLove To The Rescue: I.G.Y. (Steely Dan)

For some people, this actually rings true; for others it’s more complicated. But anyway, we have to end on a high note, and we have to mention Graphite at some point, no?

You’ve got to admit it
At this point in time that it’s clear
The future looks bright
On that train all graphite

And one more random Graphite reference for good measure, this time from someone named Juventa:

We are, all that remains,
Of a world in chaos broken by change,
We are light in the dark,
Calling out for something to spark..

And we’ll hide in the graphite,
Deep inside the earth,
And wait for the fires to start,
In your eyes…

To Sum Up

And there you have it, the top ten songs that aren’t about monitoring. (Yes, Josh Groban is #1, and he’ll always be #1. You got that right.)

Please, I beg of you, don’t leave any more in the comments! Unless you absolutely must, in which case, do. And speaking of music, check out our post on music streaming and databases.



Technical Difficulties (My Blog Has Them)


It has been a while since I posted something. I’m running into some technical difficulties with the series of posts I have been working on. Mainly that WordPress thinks it needs to make my example code invisible.

Obviously that isn’t very helpful, and I’m working on it.

What are the posts about, you ask (or maybe you didn’t, but I’ll tell you anyway)? I’m putting a few sessions together on parsing XML in MySQL and making it perform well.

Hopefully everything is worked out within a week.



Performance gains with MariaDB and IBM POWER8

Thu, 2015-08-06 09:55
maria-luisaraviol

MariaDB and Foedus paths crossed less than a year ago when I met Paolo Messina at a two-day IBM event in Tuscany. IBM invited me, as the MariaDB Italian representative, to introduce MariaDB as an Open Source Solution for the POWER8 platform.

Foedus was there as one of the top Italian IBM Partners and as an Italian ISV company that provides an ERP solution platform named “Octobus”. Since 2008, Foedus has developed innovative solutions and management tools using the LAMP stack as the foundation of their technology. Foedus is at the same time open to other platforms: Octobus, which has been developed in Java, also runs on the Windows platform.

When we met in October 2014, Foedus was about to test their solution on a POWER8 machine. They wanted to consider POWER8 as an option because of the great performance of POWER8 machines, and because it was a great opportunity to run a native Linux-based solution on an IBM Linux native system. One of their best clients agreed to host a POC in their live environment--actually based on a WAMP stack--and use their data.

The plan was to install Octobus in a POWER8 LAMP stack test environment that could run in parallel with the production system and test the performance with static data. At a certain point in time, the plan was to stop the production environment, move all of the data to the POWER8 machine, switch production to the new environment for at least two days, and then monitor the performance.

At the beginning, MariaDB was not part of this plan. However, IBM suggested to Foedus that they add MariaDB to the configuration and experience the fantastic performance of MariaDB version 10 optimized for POWER8. They agreed. They also decided to use RedHat version 7 for the operating system.

At that time, Foedus was not very familiar with MariaDB, so they were worried this change would require changes to the source code, the configuration, or the parameter settings. None of those were necessary. They just installed Octobus, installed MariaDB 10, did a backup of the MySQL database, moved the data to the POWER8 machine, started the service and the job was done!

They ran tests for two full days on the new environment and in production. The results were so impressive that the final customer is now considering IBM POWER8 with MariaDB and Linux as an option for next year. The most impressive result they observed was when they tested some well known, long running queries: the MariaDB optimizer and the POWER8 specific binaries allowed Foedus to cut query execution time by more than a factor of ten.

If you want to know more about Foedus and their experience, you can find a video that introduces them on this page: https://mariadb.com/blog/mariadb-galera-available-ibm-power8-platform.

About the Author


Maria-Luisa Raviol is a Senior Sales Engineer with over 20 years industry experience.




#DBHangOps 08/06/15 -- Orchestrator and Binlog Servers and more!


#DBHangOps 08/06/15 -- Orchestrator and Binlog Servers and more!

Hello everybody!

Join in #DBHangOps this Thursday, August 6, 2015 at 11:00am Pacific (18:00 GMT), to participate in the discussion about:

  • Orchestrator and Binlog Servers from Shlomi Noach
  • Configuration vs. Orchestration

You can check out the event page at https://plus.google.com/events/ci32euumljnmivfo8kkh9j8kum8 on Thursday to participate.

As always, you can still watch the #DBHangOps twitter search, the @DBHangOps twitter feed, or this blog post to get a link for the google hangout on Thursday!

See all of you on Thursday!

You can catch a livestream at:

Show Notes

Binlog Servers

  • Binlog servers appear as a mysql daemon to any mysql masters or replicas, but merely proxy binary log events to downstream replicas
    • Binlog servers are also meant to act as if they're the same as their upstream master
    • you do a "CHANGE MASTER TO..." pointing at the hostname of an individual binlog server, but when statements are relayed from a binlog server, they have the server_id of their masters
  • "binlog servers: Ctrl-C Ctrl-V as a service!"
  • binlog servers help provide "master fanout" to allow more read replicas for a given set of data
  • At Booking.com, replicas connect to a VIP in order to access the next available binlog server. As a result, if a binlog server fails, a replica will connect to the next available one.
  • A binlog server will download a list of credentials from its upstream master to service data under the same authorization flow
  • Any interrogation related to replication (e.g. SHOW SLAVE STATUS or SHOW VARIABLES) will work, but data inspection statements like SHOW TABLES won't return anything
  • Using binlog servers in place of intermediate masters has interesting implications with parallel replication
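Because a binlog server impersonates its upstream master, repointing a replica at one looks exactly like pointing it at any master. A minimal sketch of the "CHANGE MASTER TO..." step described above — the host name, replication user, and binary log coordinates here are all hypothetical:

```sql
-- On the replica; 'binlog-server-1.example.com' and the log coordinates
-- are placeholders for your own topology.
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'binlog-server-1.example.com',
  MASTER_PORT = 3306,
  MASTER_USER = 'repl',
  MASTER_LOG_FILE = 'binlog.000042',
  MASTER_LOG_POS = 4;
START SLAVE;
```

The replica sees the binlog server's hostname, but the relayed events still carry the real master's server_id, which is what makes the "Ctrl-C Ctrl-V as a service" trick transparent.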

Orchestrator and Binlog Servers

  • Binlog servers show up in orchestrator as "INHERIT" for their statement types
  • Orchestrator is now able to support interacting with binlog servers and making intelligent decisions with binlog servers in a replication topology
  • since binlog servers serve the same server-id as an upstream master, orchestrator knows it can simply update "MASTER_HOST" on downstream replicas
  • It seems like orchestrator with binlog servers has a lot of overlap with GTIDs in MySQL. Why use one over the other?
    • Using binlog servers allows for faster healing of large topologies. If you have to repoint 10 replicas, you need to scan the binary logs on each replica; binlog servers allow you to just change the host you point at.
    • using binlog servers doesn't really add additional lag into the topology like you would see with an intermediate master
      • Unless you do something extreme like delete 1mil records using Row-Based Replication
  • How does semi-sync play into a topology with Binlog servers?
    • Support for this is being worked on. This allows for stronger guarantees of statement durability across multiple nodes. Current idea is that binlog servers will be semi-sync and downstream replicas may still be asynchronous
  • Other Binlog servers ideas/use cases
    • Could binlog servers also service backup architecture work?
      • Binlog servers could compress binary logs and relay them elsewhere for backup/recovery purposes
  • Orchestrator and Binlog servers are both solutions to help solve self-healing replication
    • Even though these are effectively competing solutions, they work together very well for addressing read-scaling and gradual migration of topologies from stock replication to a binlog server topology
      • Booking.com is presently running a hybrid solution to help with the transition to binlog servers

Example of reconverging a topology after a master failure with Binlog servers

  • If you have a master fail that has binlog servers directly below it, there's a chance those binlog servers might not be 100% in sync. The solution is to make any lagged binlog servers replicate from a more up-to-date one
  • This makes the notion of automated convergence even faster/easier, since you can simply point all binlog servers to the most up-to-date one. All other downstream replicas don't notice a difference.
    • This will make it easy to promote a downstream RO replica to master for all nodes.
    • until MySQL Bug#77482 is addressed, you effectively issue FLUSH LOGS on the replica to promote until its binary log file and position match what its old master used to have
    • If you have an RO replica that has a binary log file with a higher number than the master, you could issue a RESET MASTER on the replica to set it back to binary log file 1 (e.g. bin.000001) and then issue FLUSH LOGS until it has caught up to the binary log file of the master
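The promotion trick from the last two bullets can be sketched as a short statement sequence. This is only an illustration of the workaround described above; the file names are hypothetical:

```sql
-- On the replica being promoted. Suppose the old master died at
-- bin.000007; the replica's own binary log must be made to match.
RESET MASTER;        -- only if the replica's file number is already higher:
                     -- resets its binary log back to bin.000001
FLUSH LOGS;          -- rotates to the next binary log file; repeat until the
                     -- current file number matches the old master's (bin.000007)
SHOW MASTER STATUS;  -- verify file and position before repointing the
                     -- binlog servers at this node
```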




Architecting for Failure - Disaster Recovery of MySQL/MariaDB Galera Cluster


Failure is a fact of life, and cannot be avoided. No IT vendor in their right mind will claim 100% system availability, although some might claim several nines :-) We might have armies of ops soldiers doing everything they possibly can to avoid failure, yet we still need to prepare for it. What do you do when the sh*t hits the fan?

Whether you use unbreakable private datacenters or public cloud platforms, Disaster Recovery (DR) is indeed a key issue. This is not about copying your data to a backup site and being able to restore it, this is about business continuity and how fast you can recover services when disaster strikes. 

In this blog post, we will look into different ways of designing your Galera Clusters for fault tolerance, including failover and failback strategies. 

Disaster Recovery (DR)

Disaster recovery involves a set of policies and procedures to enable the recovery or continuation of infrastructure following a natural or human-induced disaster. A DR site is a backup site in another location where an organization can relocate following a disaster, such as fire, flood, terrorist threat or other disruptive event. A DR site is an integral part of a Disaster Recovery/Business Continuity plan.

Most large IT organizations have some sort of DR plan; smaller organisations often do not, because of the high cost versus the relative risk. Thanks to the economics of public clouds, this is changing: smaller organisations with tighter budgets are also able to have something in place. 

Setting up Galera Cluster for DR Site

A good DR strategy will try to minimize downtime, so that in the event of a major failure, a backup site can instantly take over to continue operations. One key requirement is to have the latest data available. Galera is a great technology for that, as it can be deployed in different ways - one cluster stretched across multiple sites, multiple clusters kept in sync via asynchronous replication, mixture of synchronous and asynchronous replication, and so on. The actual solution will be dictated by factors like WAN latency, eventual vs strong data consistency and budget.
 
Let’s have a look at the different options to deploy Galera and how this affects the database part of your DR strategy. 

Active Passive Master-Master Cluster

This consists of 6 Galera nodes on two sites, forming a Galera cluster across WAN. You would need a third site to act as an arbitrator, voting for quorum and preserving the “primary component” if the primary site is unreachable. The DR site should be available immediately, without any intervention.

Failover strategy:

  1. Redirect your traffic to the DR site (e.g. update DNS records, etc.). The assumption here is that the DR site’s application instances are configured to access the local database nodes.

Failback strategy:

  1. Redirect your traffic back to primary site.

Advantages:

  • Failover and failback without downtime. Applications can switch between sites back and forth.
  • Easier to switch sides without extra steps for re-bootstrapping and reprovisioning the cluster. Both sites can receive reads/writes at any moment, provided the cluster is in quorum.
  • SST (or IST) during failback won’t be painful as a set of nodes is available to serve the joiner on each site.

Disadvantages:

  • Highest cost. You need to have at least three sites with minimum of 7 nodes (including garbd). 
  • With the disaster recovery site mostly inactive, this would not be the best utilisation of your resources.
  • Requires low and reliable latency between sites, or else there is a risk of lag - especially for large transactions (even with different segment IDs assigned)
  • Risk of performance degradation is higher with more nodes in the same cluster, as they are synchronous copies. Nodes with uniform specs are required.
  • Tightly coupled cluster across both sites. This means there is a high level of communication between the two sets of nodes, and e.g., a cluster failure will affect both sites. (on the other hand, a loosely coupled system means that the two databases would be largely independent, but with occasional synchronisation points)

Active Passive Master-Master Node

Two nodes are located in the primary site while the third node is located in the disaster recovery site. If the primary site is down, the cluster will fail as it is out of quorum. galera3 will need to be bootstrapped manually as a single node primary component. Once the primary site comes back up, galera1 and galera2 need to rejoin galera3 to get synced. Having a pretty large gcache should help to reduce the risk of SST over WAN.

Failover strategy:

  1. Bootstrap galera3 as primary component running as “single node cluster”.
  2. Point database traffic to the DR site.
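Step 1 can be done with a single statement on galera3 — forcing a non-primary node to form a new primary component is a documented Galera operation, though you should double-check the exact option name against your Galera version:

```sql
-- On galera3, after confirming the primary site is really gone:
SET GLOBAL wsrep_provider_options = 'pc.bootstrap=YES';

-- Verify before redirecting traffic:
SHOW STATUS LIKE 'wsrep_cluster_status';       -- should report: Primary
SHOW STATUS LIKE 'wsrep_local_state_comment';  -- should report: Synced
```

Be careful to run this on only one node; bootstrapping two nodes independently creates a split-brain with two diverging primary components.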

Failback strategy:

  1. Let galera1 and galera2 join the cluster.
  2. Once synced, point database traffic to the primary site.

Advantages:

  • Reads/writes to any node.
  • Easy failover with single command to promote the disaster recovery site as primary component.
  • Simple architecture setup and easy to administer.
  • Low cost (only 3 nodes required)

Disadvantages:

  • Failover is manual, as the administrator needs to promote the single node as primary component. There would be downtime in the meantime.
  • Performance might be an issue, as the DR site will be running with a single node to run all the load. It may be possible to scale out with more nodes after switching to the DR site, but beware of the additional load.
  • Failback will be harder if SST happens over WAN.
  • Increased risk, due to tightly coupled system between the primary and DR sites

Active Passive Master-Slave Cluster via Async Replication

This setup will make the primary and DR site independent of each other, loosely connected with asynchronous replication. One of the Galera nodes in the DR site will be a slave that replicates from one of the Galera nodes (master) in the primary site. Ensure that both sites are producing binary logs with GTID and that log_slave_updates is enabled - the updates that come from the asynchronous replication stream will then be applied to the other nodes in the cluster.

By having two separate clusters, they will be loosely coupled and not affect each other. E.g. a cluster failure on the primary site will not affect the backup site. Performance-wise, WAN latency will not impact updates on the active cluster. These are shipped asynchronously to the backup site. The DR cluster could potentially run on smaller instances in a public cloud environment, as long as they can keep up with the primary cluster. The instances can be upgraded if needed. 

It’s also possible to have a dedicated slave instance as replication relay, instead of using one of the Galera nodes as slave.
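The inter-cluster link described above can be sketched as follows. The settings in the comment are the standard MySQL GTID prerequisites; the host and user names are hypothetical:

```sql
-- Prerequisites in my.cnf on the nodes of both clusters:
--   log_bin, server_id (unique per node), gtid_mode=ON,
--   enforce_gtid_consistency=ON, log_slave_updates=ON
--
-- On the DR-site Galera node acting as the asynchronous slave:
CHANGE MASTER TO
  MASTER_HOST = 'galera1.primary.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_AUTO_POSITION = 1;   -- GTID-based positioning, no file/pos needed
START SLAVE;
```

With MASTER_AUTO_POSITION, the failback re-slaving steps become a matter of repeating CHANGE MASTER TO with the other site's host, since GTIDs identify exactly which transactions are still missing.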

Failover strategy:

  1. Ensure the slave in the DR site is up-to-date (up until the point when the primary site was down).
  2. Stop replication slave between slave and primary site. Make sure all replication events are applied.
  3. Direct database traffic to the DR site.

Failback strategy:

  1. On primary site, setup one of the Galera nodes (e.g., galera3) as slave to replicate from a (master) node in the DR site (galera2).
  2. Once the primary site catches up, switch database traffic back to the primary cluster.
  3. Stop the replication between primary site and DR site.
  4. Re-slave one of the Galera nodes on DR site to replicate from the primary site.

Advantages:

  • No downtime required during failover/failback.
  • No performance impact on the primary site since it is independent from the backup site.
  • Disaster recovery site can be used for other purposes like data backup, binary logs backup and reporting or large analytical queries (OLAP).

Disadvantages:

  • There is a chance of missing some data during failover if the slave was behind.
  • Pretty costly, as you have to setup a similar number of nodes on the disaster recovery site.
  • The failback strategy can be risky, it requires some expertise on switching master/slave role.

Active Passive Master-Slave Replication Node

The Galera cluster on the primary site replicates to a single-instance slave in the DR site using asynchronous MySQL replication with GTID. Note that MariaDB has a different GTID implementation, so it requires slightly different instructions. 
Take extra precaution to ensure the slave is replicating without replication lag, to avoid any data loss during failover. From the slave point-of-view, switching to another master should be easy with GTID.

Failover to DR site:

  1. Ensure the slave has caught up with the master. If it has not and the primary site is already down, you might miss some data. This will make things harder.
  2. If the slave has READ_ONLY=on, disable it so it can receive writes.
  3. Redirect database traffic to the DR site
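Steps 1 and 2 above, as a sketch of the checks and statements involved (the super_read_only variable exists only in MySQL 5.7.8 and later, so treat that line as optional):

```sql
-- On the DR slave, before redirecting traffic:
SHOW SLAVE STATUS\G
-- Check: Seconds_Behind_Master = 0 and Retrieved_Gtid_Set fully
-- contained in Executed_Gtid_Set (nothing left in the relay log).

STOP SLAVE;                        -- freeze at the last applied position
SET GLOBAL read_only = OFF;        -- allow application writes
SET GLOBAL super_read_only = OFF;  -- only if set; MySQL 5.7.8+
```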

Failback to primary site:

  1. Use xtrabackup to move the data from the DR site to a Galera node on the primary site - this is an online process which may cause some performance drops, but it’s non-blocking for InnoDB-only databases
  2. Once data is in place, slave the Galera node off the DR host using the data from xtrabackup
  3. At the same time, slave the DR site off the Galera node - to form master-master replication
  4. Rebuild the rest of the Galera cluster using either xtrabackup or SST
  5. Wait until primary site catches up on replication with DR site
  6. Perform the failback by stopping writes to DR, ensuring that replication is in sync, and finally repointing writes to production
  7. Set the slave as read-only, stop the replication from DR to prod leaving only prod -> DR replication

Advantages:

  • Replication slave should not cause performance impact to the Galera cluster.
  • If you are using MariaDB, you can utilize multi-source replication, where the slave in the DR site is able to replicate from multiple masters.
  • Lowest cost and relatively faster to deploy.
  • Slave on disaster recovery site can be used for other purposes like data backup, binary logs backup and running huge analytical queries (OLAP).

Disadvantages:

  • There is a chance of missing data during failover if the slave was behind.
  • More hassle in failover/failback procedures.
  • Downtime during failover.

The above are a few options for your disaster recovery plan. You can design your own, make sure you perform failover/failback tests and document all procedures. Trust us - when disaster strikes, you won’t be as cool as when you’re reading this post.

Blog category:



FromDual.en: FromDual Performance Monitor for MySQL and MariaDB 0.10.5 has been released


FromDual has the pleasure to announce the release of the new version 0.10.5 of its popular Database Performance Monitor for MySQL, MariaDB, Galera Cluster and Percona Server fpmmm.

You can download fpmmm from here.

In the inconceivable case that you find a bug in fpmmm please report it to our Bug-tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

This release contains various bug fixes and improvements. The previous release had some major bugs, so we recommend upgrading...

New installation of fpmmm v0.10.5

Please follow our mpm installation guide. A specific fpmmm installation guide will follow with the next version.

Prerequisites

CentOS 6

# yum install php-cli php-process php-mysqli

# cat << _EOF >/etc/php.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF

rpm -Uvh http://repo.zabbix.com/zabbix/2.2/rhel/6/x86_64/zabbix-release-2.2-1.el6.noarch.rpm
yum update
yum install zabbix-sender

CentOS 7

# yum install php-cli php-process php-mysqli

# cat << _EOF >/etc/php.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF

rpm -Uvh http://repo.zabbix.com/zabbix/2.2/rhel/7/x86_64/zabbix-release-2.2-1.el7.noarch.rpm
yum update
yum install zabbix-sender

Ubuntu 14.04

# apt-get install php5-cli php5-mysqlnd php5-curl

# cat << _EOF >/etc/php5/cli/conf.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF

apt-get install zabbix-agent

OpenSuSE 13.1

# zypper install php5 php5-posix php5-mysql php5-pcntl php5-curl

# cat << _EOF >/etc/php5/conf.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF

zypper addrepo http://download.opensuse.org/repositories/server:/monitoring/openSUSE_13.1 server_monitoring
zypper update
zypper install zabbix-agent

Upgrade from fpmmm 0.10.x to fpmmm 0.10.5

# cd /opt
# tar xf /download/fpmmm-0.10.5.tar.gz
# rm -f fpmmm
# ln -s fpmmm-0.10.5 fpmmm

The following templates in your Zabbix monitor should be replaced. Before you replace the templates it is a good idea to first delete all triggers...

  • tpl/Template_FromDual.MySQL.fpmmm.xml
  • tpl/Template_FromDual.MySQL.innodb.xml
  • tpl/Template_FromDual.MySQL.master.xml
  • tpl/Template_FromDual.MySQL.myisam.xml
  • tpl/Template_FromDual.MySQL.mysql.xml
  • tpl/Template_FromDual.MySQL.server.xml
  • tpl/Template_FromDual.MySQL.slave.xml

Changes in fpmmm v0.10.5

fpmmm agent

  • Better and more verbose error handling in various modules.
  • Directory for log file is created automatically if it does not exist yet.
  • All broken SQL queries (from 0.10.4) fixed again.
  • Added a delay for infrequently changing data.
  • Several triggers which complained after restart are fixed now.
  • Connections to the database are now reduced to the minimum.
  • Links for templates fixed.
  • Innodb_flush_log_at_trx_commit, log_queries_not_using_indexes and character_set_server triggers disabled by default.
  • sendCachedData is now also checked for a too-big cache file. Bug from swd.

Slave module

  • Slave error messages are caught and sent to the monitor.
  • Warning is written to the log file if slave module is configured without being a slave.
  • New slave status is reported correctly now.
  • Seconds_behind_master is now only sent when running.

Master module

  • New trigger for binlog_format = MIXED and replication filtering added.
  • Severity increased for STATEMENT-based replication filtering.
  • Regexp bug fixed.
  • Master without binary log fixed.

MySQL module

  • Old broken triggers fixed.

Server module

  • IOPS graph added.
  • Device sda5 removed.
  • I/O statistics calculation improved.
  • I/O r/w wait experimental items implemented.
  • CPU count added.
  • NUMA and virtualization information added.

InnoDB module

  • innodb_flush_method item added.
  • Trigger for innodb_flush_method added.
  • innodb_force_recovery trigger severity increased.
  • innodb_log_files_in_group item added for log traffic threshold.
  • InnoDB transaction log traffic trigger and graph added.

For subscriptions of commercial use of fpmmm please get in contact with us.




What Makes the MySQL Audit Plugin API Special?


Why Should I Be Reading This?

To better understand how the MySQL Server functions, how to monitor the relevant server events, and find out what’s new in MySQL 5.7.8.

What’s Special About the Audit Plugin API?

Picking the right API for your new plugin is probably the most important design decision a plugin author will need to make. Each of the plugin types exposed by the MySQL server has interesting and unique events that plugins can consume. But in addition to that, some of them provide very important additional characteristics that make these APIs stand out and become much more convenient to use.

One API in particular—the Audit Plugin API—is full of very interesting traits:

  • It can support multiple active plugins receiving the same event(s).
  • It stores plugin references in the execution context so that it doesn’t have to repeatedly search for them in the global plugin list.
  • It supports grouping of events in a number of event classes or categories and allows plugins to subscribe to only those event classes that they are interested in.

Let’s go into more detail on why each of these are important.

Multiple Active Plugin Support

Take for example the Authentication Plugin API. Each user account uses one and only one authentication plugin. This makes a lot of sense because user accounts can only authenticate in one certain way. And there’s a lot of good authentication plugins out there. But what if I want to take some extra action when users connect?

Yes, in theory I can tweak the code of the authentication plugin(s) I’m using. But maintaining my custom changes on top of the ever changing upstream(s) can quickly become tedious, especially if I’m using multiple authentication methods within MySQL.

This is where the Audit API’s ability to support multiple active plugins all receiving the same event can be extremely handy. It allows me to write my own little plugin, then subscribe to the MYSQL_AUDIT_CONNECTION_CLASS and react to the events I need. For example, MYSQL_AUDIT_CONNECTION_CONNECT and MYSQL_AUDIT_CONNECTION_CHANGE_USER. And I can do all this without even modifying the existing plugins.
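Installing two such plugins side by side is just two INSTALL PLUGIN statements — both will receive the same connection events. The plugin and library names below are hypothetical placeholders, not shipped plugins:

```sql
-- Hypothetical audit plugins, each subscribed to MYSQL_AUDIT_CONNECTION_CLASS:
INSTALL PLUGIN connection_logger    SONAME 'connection_logger.so';
INSTALL PLUGIN connection_throttler SONAME 'connection_throttler.so';

-- Both show up as active AUDIT-type plugins:
SELECT PLUGIN_NAME, PLUGIN_STATUS
  FROM INFORMATION_SCHEMA.PLUGINS
 WHERE PLUGIN_TYPE = 'AUDIT';
```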

Caching the Plugin References in the Execution Context

Plugins are dynamically loadable and unloadable. Loading and unloading is done through (possibly concurrent) SQL commands, and plugins loaded from one thread need to become visible to all of the other threads too. To implement all this, the MySQL server keeps the plugins in a global list and employs some good old fashioned multi-threaded programming techniques to protect that list from concurrent access. Namely, it uses a mutex called LOCK_plugin.

Now let’s look at what it takes to be able to safely call a plugin method. First of all we need to ensure that the plugin will not be unloaded for as long as we need it. Obviously it’s not practical to be holding LOCK_plugin for the entire duration that we’re using the plugin, as no other plugin operation can occur while we are holding the lock.

The server solves this by reference counting the loaded plugins. When we need to call a plugin we can lock LOCK_plugin, take a plugin reference (increasing the reference count as we do), then release LOCK_plugin and go on and use the plugin.

Consequently, when we are done with our plugin usage, we can lock LOCK_plugin, release the plugin reference (decreasing the reference count in the process), and then release LOCK_plugin.

Now imagine we need to do this millions of times to execute a single query (e.g. calling a function that calls a plugin from inside a subquery). The LOCK_plugin usage can cause a significant performance hit.

This is where caching the plugin references in the execution context comes in very handy. Instead of releasing the reference right after usage, the server stores it in the session’s context, so that the next time the plugin is needed no mutexes are taken and no global structures are consulted: the reference we already hold is simply reused. Of course there is a cost for that, and it comes from the fact that one can’t UNINSTALL a plugin that has active references to it. But most of the time this is a very small price to pay for much better performance.
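The take/cache/release cycle described above can be sketched in plain C. This is a simplified model with invented names (plugin, session_ctx) to show the idea, not the server’s actual code:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified model of the global plugin lock and a reference-counted plugin. */
static pthread_mutex_t LOCK_plugin = PTHREAD_MUTEX_INITIALIZER;

struct plugin {
    int ref_count;          /* while > 0, UNINSTALL PLUGIN must be refused */
};

/* Per-session execution context that caches one plugin reference. */
struct session_ctx {
    struct plugin *cached_ref;
};

/* Slow path: take a reference under LOCK_plugin. */
static struct plugin *plugin_lock(struct plugin *p) {
    pthread_mutex_lock(&LOCK_plugin);
    p->ref_count++;
    pthread_mutex_unlock(&LOCK_plugin);
    return p;
}

/* Fast path: the first call pays for the mutex; every later call in the
 * same session reuses the cached reference without touching any lock. */
static struct plugin *session_get_plugin(struct session_ctx *ctx, struct plugin *p) {
    if (ctx->cached_ref == NULL)
        ctx->cached_ref = plugin_lock(p);
    return ctx->cached_ref;
}

/* Released once per session instead of once per call. */
static void session_release(struct session_ctx *ctx) {
    if (ctx->cached_ref != NULL) {
        pthread_mutex_lock(&LOCK_plugin);
        ctx->cached_ref->ref_count--;
        pthread_mutex_unlock(&LOCK_plugin);
        ctx->cached_ref = NULL;
    }
}
```

A million plugin calls inside one query then cost a single mutex round-trip instead of a million of them, at the price of pinning the plugin (blocking UNINSTALL) until the session lets go of the reference.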

Event Pre-Filtering

The Audit API generates an event for every query that’s executed and for every connection/disconnection. In the typical scenario the latter is much rarer than the former. Why bother calling the plugins that react to connect/disconnect events for every query? Not only is there no real benefit, but there’s a cost to it (see the explanations in the previous section). This is where event pre-filtering comes in handy. Check out the Audit Plugin data structure:

struct st_mysql_audit
{
  int interface_version;
  void (*release_thd)(MYSQL_THD);
  void (*event_notify)(MYSQL_THD, unsigned int, const void *);
  unsigned long class_mask[MYSQL_AUDIT_CLASS_MASK_SIZE];
};

More specifically the class_mask property. This defines a bit-mask of all possible event classes the plugin is interested in receiving. The server aggregates this mask at INSTALL PLUGIN time so that it can consult it when it’s about to dispatch an event to the plugins, dispatching it solely to the plugins that are interested in receiving it and avoiding the dispatch altogether if there are no plugins interested in it.
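The aggregation and the dispatch test can be modeled with two small C helpers. The event-class constants below are invented for illustration; only the st_mysql_audit structure above comes from the real API:

```c
#include <assert.h>

/* Illustrative event-class bits (one bit per event class). */
#define EVENT_CLASS_GENERAL    (1UL << 0)
#define EVENT_CLASS_CONNECTION (1UL << 1)

/* Computed at INSTALL PLUGIN time: OR together every plugin's class_mask. */
static unsigned long aggregate_mask(const unsigned long *masks, int n) {
    unsigned long agg = 0UL;
    for (int i = 0; i < n; i++)
        agg |= masks[i];
    return agg;
}

/* Consulted before dispatching an event: if no installed plugin has
 * subscribed to this class, the dispatch is skipped entirely. */
static int any_subscriber(unsigned long agg, unsigned long event_class_bit) {
    return (agg & event_class_bit) != 0UL;
}
```

With a single connection-only plugin installed, every per-query event fails this cheap test and never reaches any plugin at all.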

Changes in 5.7.8

I hope you’ve seen the Query Rewrite API in the previous 5.7 DMRs (see Martin’s two part blog series for an introduction: part 1, part 2). It’s a useful API allowing one to install pre- and post-parse plugins that can alter the query being executed. That API was being called several times during each query execution but without having some of the traits described above. Instead of re-inventing the wheel for it we’ve simply transformed that separate plugin API into an audit API event class. Thus the Query Rewrite Plugins are now actually Audit plugins. This is described further in worklog 8505.

We’re also working on strengthening the good parts of the Audit API even further, so stay tuned!

That’s it for now. I hope that this has helped to shine a light on how useful the Audit Plugin API is! THANK YOU for using MySQL!



Changed defaults between MySQL 5.6 and 5.7


MySQL 5.7 comes with many changes. Some of them are better explained than others.

I wanted to see how many changes I could get by comparing SHOW VARIABLES in MySQL 5.6 and 5.7.

The most notable ones are:


  • binlog_format: the default is now ROW. This variable affects the format of the binary log, whether you use it as a backup complement or for replication. The change means bigger binary logs and possibly side effects.
  • binlog_error_action now defaults to ABORT_SERVER. If the server cannot write to the binary log, rather than continuing its work without logging, it shuts down. This could be a desirable course of action, but better be prepared for the eventuality.
  • innodb_strict_mode is enabled by default, which is probably a good thing, but it means that operations that were previously accepted with a warning will now generate an error.
  • sql_mode is now STRICT by default. While many well prepared users will be pleased with this change, which was advocated as best practice by some DBAs, the practical outcome is that several existing applications may break because of unclean input.
  • sync_binlog, which affects data safety but also server performance, is now enabled.

The table below shows the full list.


VARIABLE                                  5.6.26                  5.7.8
binlog_error_action                       IGNORE_ERROR            ABORT_SERVER
binlog_format                             STATEMENT               ROW
binlog_gtid_simple_recovery               OFF                     ON
eq_range_index_dive_limit                 10                      200
innodb_buffer_pool_dump_at_shutdown       OFF                     ON
innodb_buffer_pool_instances              8                       1
innodb_buffer_pool_load_at_startup        OFF                     ON
innodb_checksum_algorithm                 innodb                  crc32
innodb_file_format                        Antelope                Barracuda
innodb_large_prefix                       OFF                     ON
innodb_log_buffer_size                    8388608                 16777216
innodb_purge_threads                      1                       4
innodb_strict_mode                        OFF                     ON
log_warnings                              1                       2
performance_schema_accounts_size          100                     -1
performance_schema_hosts_size             100                     -1
performance_schema_max_cond_instances     1382                    -1
performance_schema_max_file_instances     2557                    -1
performance_schema_max_mutex_instances    5755                    -1
performance_schema_max_rwlock_instances   3138                    -1
performance_schema_max_socket_instances   230                     -1
performance_schema_max_statement_classes  168                     191
performance_schema_max_table_handles      616                     -1
performance_schema_max_table_instances    684                     -1
performance_schema_max_thread_instances   288                     -1
performance_schema_setup_actors_size      100                     -1
performance_schema_setup_objects_size     100                     -1
performance_schema_users_size             100                     -1
pseudo_thread_id                          5                       7
slave_net_timeout                         3600                    60
sql_mode                                  NO_ENGINE_SUBSTITUTION  ONLY_FULL_GROUP_BY,
                                                                  STRICT_TRANS_TABLES,
                                                                  NO_ZERO_IN_DATE,
                                                                  NO_ZERO_DATE,
                                                                  ERROR_FOR_DIVISION_BY_ZERO,
                                                                  NO_AUTO_CREATE_USER,
                                                                  NO_ENGINE_SUBSTITUTION
sync_binlog                               0                       1
table_open_cache_instances                1                       16
warning_count                             0                       1

I don't know whether the other changes can break existing applications. Please comment on this post if you know of possible side effects related to the above variables.



MySQL Group Replication – 0.4.0 Labs Release Plugin Packages


The multi master plugin for MySQL is here. MySQL Group Replication provides virtually synchronous updates on any member in a group of MySQL servers, with conflict handling and automatic group membership management and failure detection.

You can read about all MySQL Group Replication features following the tag MySQL Group Replication.

In this blog post we present the packages that MySQL Group Replication 0.4.0 offers and the tasks you need to perform to install the plugin on the MySQL server.

Packages

Go to http://labs.mysql.com/, choose MySQL Group Replication on the release menu. You will see two files there:

  1. mysql-group-replication-0.4.0-labs.tar.gz
  2. mysql-group-replication-0.4.0-labs-el6.x86_64.tar.gz

The first file is the tarball with plugin source, the second file is the plugin binary for Oracle Enterprise Linux x86_64 platform.

Our goal is for MySQL Group Replication releases to work with the latest MySQL server release; this time, however, the plugin requires MySQL 5.7.7, so the binary package needs to be installed on a MySQL 5.7.7 server running on an Oracle Enterprise Linux 6 x86_64 platform. Going forward we will release for more platforms and package formats.
Please continue reading for the detailed installation instructions.

Install from binary package

The plugin binary package is meant for a specific MySQL server version and platform: MySQL server 5.7.7 on Oracle Enterprise Linux 6 x86_64. Server and plugin must be built for the same platform to work together.

  1. Unpack the plugin binary package;
  2. Copy mysql-group-replication-VERSION/lib/plugin/group_replication.so to your MySQL server plugins folder, usually it is /usr/lib64/mysql/plugin/ but it will depend on how MySQL server is installed;
  3. Follow the configuration instructions at Getting started with MySQL Group Replication.

Install from source package

  1. Download the required MySQL server version, 5.7.7, tar archive from http://www.mysql.com/downloads/;
  2. Unpack the MySQL server tar;
  3. Unpack plugin source tar;
  4. Build the plugin together with MySQL server:
    $ cd mysql-group-replication-VERSION
    $ mkdir BIN
    $ cd BIN
    $ cmake .. -DMYSQL_SERVER_SRC_DIR="PATH_TO_SERVER_ON_STEP2" -DMYSQL_SERVER_CMAKE_ARGS="SERVER_BUILD_OPTIONS"
    $ make package
    

The plugin binary package will be created in the BIN folder. Please follow the binary package installation steps to install the plugin on the MySQL server.

MYSQL_SERVER_CMAKE_ARGS

Please use this option to set the MySQL server build configuration. In order to make the plugin compatible with a given MySQL server binary, the plugin must be built with the same configuration as the MySQL server.

When several cmake arguments are passed to this variable, they must be separated by semicolons (;).
Example:

-DMYSQL_SERVER_CMAKE_ARGS="-DX=1;-DY=2"

If some of those arguments contain spaces, they must be surrounded by apostrophes (').
Example:

-DMYSQL_SERVER_CMAKE_ARGS="-DX='a b';-DY='c d';-DZ=1"

Conclusion

Go to http://labs.mysql.com/ and try the new preview release of MySQL Group Replication following the instructions at Getting started with MySQL Group Replication and send us your feedback.

Note that this is not the GA yet, so don’t use it in production and expect bugs here and there. If you do experience bugs, we are happy to fix them. All you have to do is to file a bug in the bugs DB in that case.



MySQL Group Replication: Plugin Version Access Control.


Here is version 0.4.0 of the MySQL Group Replication plugin, our solution that gives you virtual synchronous updates on any member of a MySQL server group. With this new version you can expect a bunch of bug fixes and new features.

One of the new features that marks this release is access control of different plugin versions in a server group. In an evolving product like Group Replication, it is important to ensure the correct functioning of the group by automatically checking that all servers have compatible plugin versions.

Plugin versions – the basics

As you may have noticed, the Group Replication plugin has an independent life cycle from the MySQL server, characterized by frequent releases. Each one of these releases is marked with an individual version that follows the Semantic Versioning format:

MAJOR.MINOR.PATCH

MAJOR= Plugin’s major version
MINOR= Plugin’s minor version
PATCH= Plugin’s patch version

Major versions are associated with important releases that introduce adjustments to the API or some major behavior change. Minor versions, on the other hand, are associated with incremental improvements to a given major version. Finally, Patch versions are associated with minor corrections or bug fixes to the code.

Look up your version

As you may know, not only Group Replication but every other plugin in MySQL also has an associated version, which can be consulted in the INFORMATION_SCHEMA.PLUGINS table.

SELECT PLUGIN_VERSION FROM INFORMATION_SCHEMA.PLUGINS 
  WHERE PLUGIN_NAME="group_replication";

+----------------+
| PLUGIN_VERSION |
+----------------+
| 0.4            |
+----------------+

This version depicts the plugin’s major and minor versions.
To know the plugin’s full version in Group Replication, query its description.

SELECT PLUGIN_DESCRIPTION FROM INFORMATION_SCHEMA.PLUGINS
  WHERE PLUGIN_NAME="group_replication";

+---------------------------+
| PLUGIN_DESCRIPTION        |
+---------------------------+
| Group Replication (0.4.0) |
+---------------------------+

Here you can see not only the major and minor versions but also the patch version.

Plugin version rules

The basic rules

Why are versions important in Group replication?

When joining, versions are crucial when determining if a member is compatible with a group. A higher major version can indicate that the member has some messaging incompatibilities with the group.

The basic rules for version handling in Group Replication are then:

  1. When a lower version member tries to join the group, its request will be denied and the member will go offline. This is justified by concerns about possible new features that are not supported by this member’s version. This rule does not apply to patch version variations, however, as they represent minor changes or bug fixes. In short:
    Patch versions do not affect the version control mechanism: a member with a lower patch version joins with success.
    Members with lower minor or major versions cannot enter the group without the force option: a member with a lower minor version fails to join.
    A higher version member is always accepted into the group if no exception exists: a member with a higher minor version joins with success.

  2. While the above rule guarantees safety in most cases, there can be situations where, for example, a needed fix in version 1.7.0 makes it incompatible with its lower versions 1.6.N and 1.5.N. To tackle this, Group Replication also supports built-in incompatibility rules for these specific cases. Higher versioned plugins can then join the group, unless an incompatibility rule is registered against them.

    These rules are coded into the plugin; they are not visible and cannot be changed or overridden by the end user. Nevertheless, these exceptions shall be properly documented.
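The rules above can be condensed into one decision function. This is an illustrative C sketch with invented names, not the plugin’s actual code:

```c
#include <assert.h>

/* Returns 1 if a joining member may enter the group, 0 otherwise.
 * Patch versions are deliberately ignored (rule 1), and a registered
 * incompatibility rule always wins (rule 2). */
static int may_join(int join_major, int join_minor,
                    int group_major, int group_minor,
                    int allow_lower_version_join,
                    int incompatibility_rule_matches)
{
    if (incompatibility_rule_matches)
        return 0;
    /* Lower major/minor versions are refused unless explicitly forced. */
    if (join_major < group_major ||
        (join_major == group_major && join_minor < group_minor))
        return allow_lower_version_join;
    /* Equal or higher versions (including a lower patch) are accepted. */
    return 1;
}
```

The allow_lower_version_join parameter plays the role of the group_replication_allow_local_lower_version_join option described below.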

Override lower version incompatibilities

From the above described rules, there is one kind that can be overridden: the lower version incompatibility rule.

If there are no documented interoperability issues between two versions, the user can force the entry of a lower version member into the group by setting:

SET GLOBAL group_replication_allow_local_lower_version_join= ON;

This will allow the member to join a group with higher versions.

A few words on the joining process

Now diving into the details, some basics about the joining process are presented here, as they are important to understand how a member enters or is rejected by the group, and how this is seen by the end user.

First of all, the decision process is local, meaning that it is the joining member that knows if it is incompatible with the group or not. It makes sense as only a member with a higher version, when joining, knows that it is incompatible with the lower version members.

Also, at the communication layer, the member always joins the group, and all members receive a notification of this fact. This is also when all nodes, among other info, broadcast their versions so the new member can make a decision. So it is only when the plugin receives this notification that compatibility with the group is checked. Only if the member is declared incompatible will it then ask to leave.

This has 2 consequences:

  1. The START GROUP_REPLICATION command won’t fail, as it only checks whether the member entered the group. The user must check that the member is in fact recovering, or has become online, after starting, in a similar way to what is done when starting a MySQL slave.
  2. All the other members of the group will assume the node joined and will start recovery. Only moments later will they be notified of a new group change, where the member has now left. This means that, for brief moments, the user can see the joiner’s status as recovering when checking the performance_schema replication_group_members table on the other group members.

 

Conclusion

Go to labs releases and try the new preview release of MySQL Group Replication Plugin following the instructions at Getting started with MySQL Group Replication and send us your feedback.

Note that this is not the GA yet, so don’t use it in production and expect bugs here and there. If you do experience bugs, we are happy to fix them. All you have to do is to file a bug in the bugs DB in that case.



MySQL Group Replication – 0.4.0 Labs Release


Hi all, it is time again to do another preview release of MySQL Group Replication, the plugin that brings multi-master update everywhere to MySQL, like we described in Hello World post.

We are very proud to do the third preview release of MySQL Group Replication, which introduces new exciting features, please enjoy the highlights!

Introduced changes

Plugin version access control

In an evolving product like Group Replication, it is important to ensure the correct functioning of the group by automatically checking that all servers have a compatible plugin version.

This feature governs the access control of different plugin versions in a group. It dictates what action is taken when a previous or future plugin version joins a group; for example, a lower major or minor version is, by default, not allowed to join.
This mechanism is of capital importance for controlling upgrade procedures.

Please read the blog post Plugin version access control on MySQL Group Replication for the full details.

Free-Standing Plugin

We now release the plugin as its own module. This means that the plugin is no longer tied to the MySQL server release cycle; instead it has its own release cycle!
The plugin is now built independently of the MySQL server code base and instead uses the well-defined interfaces of MySQL 5.7 to connect to the MySQL server.
This is a step toward shorter release cycles that will allow us to fulfill user requests sooner.

Every MySQL Group Replication version will be based on a MySQL server version, since the plugin depends on server features and interfaces, but the plugin features can and will evolve over time.
The blog post MySQL Group Replication plugin packages explains the details.

Bug Fixes

There are a lot of bugs that have been fixed in this third preview release of MySQL Group Replication, moving quickly towards a more stable, reliable and usable plugin.

Conclusion

Go to http://labs.mysql.com/ and try the new preview release of MySQL Group Replication following the instructions at Getting started with MySQL Group Replication and send us your feedback.

Note that this is not the GA yet, so don’t use it in production and expect bugs here and there. If you do experience bugs, we are happy to fix them. All you have to do is to file a bug in the bugs DB in that case.



Proposal: Adding consistency to protocol selection


If you run multiple MySQL instances on a Linux machine, chances are good that at one time or another, you’ve ended up connected to an instance other than what you had intended. It’s certainly happened to me, and I submitted Bug#76512 to deal with the cause that affects me most commonly: the mysql client will silently ignore the --port option and connect using the default Unix socket instead when the host is “localhost” (the default). We’ve recently discussed ways we can make this behavior less surprising to users, and though we’re now past the second RC of MySQL Server 5.7, we’re contemplating making these changes in future 5.7 releases. Please let us know your thoughts!

Here are the basic principles of what we intend to change:

Explicit --protocol option rules all

If a user provides an explicit --protocol option, the client will only attempt to connect using that protocol.  If it cannot connect using that protocol, it generates an error.  When an explicit --protocol option is supplied, client options related to other protocols will be ignored.  That allows you to have a configuration file for both TCP and socket connection parameters, while choosing which to use by passing the --protocol command-line option.  For example, this will not happen:

R:\mysql-5.7.8-rc-winx64>bin\mysql -uroot --port=3310 --protocol=TCP --pipe -e"STATUS;"
--------------
bin\mysql  Ver 14.14 Distrib 5.7.8-rc, for Win64 (x86_64)

Connection id:          26
Current database:
Current user:           root@localhost
SSL:                    Not in use
Using delimiter:        ;
Server version:         5.7.8-rc MySQL Community Server (GPL)
Protocol version:       10
Connection:             Named pipe: MySQL
...
--------------

Implicit protocol options

Certain options are associated with a single protocol, and going forward, will implicitly set the protocol.  For example, using the --port option will imply TCP should be used.  This prevents the situation from my bug report where the default socket is used instead of the explicitly requested TCP port, resulting in connection to the wrong instance.  Here are the proposed rules:

  • --host option defined with a value other than localhost implies a TCP/IP connection
  • --port option defined with any value implies a TCP/IP connection
  • --socket option defined with any value implies a Unix socket connection on Linux
  • --socket option defined with any value implies a named pipe connection on Windows
  • --shared-memory-base-name defined with any value implies a shared memory connection on Windows

In addition, the current --pipe (or -W) option is equivalent to --protocol=PIPE, and we propose to deprecate it.

Reject conflicting command-line options

MySQL has historically been very lax in option processing and will generally accept conflicting options instead of producing errors.  For example, you can start the mysql client with all of the above options explicitly defined, without error:

 

R:\mysql-5.7.8-rc-winx64>bin\mysql -uroot --port=3310 --socket=MySQL 
--shared-memory-base-name=MYMEM --pipe
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 5.7.8-rc MySQL Community Server (GPL)

Which protocol is being used here? It’s using a Named Pipe.

There are some combinations which already produce appropriate errors, most notably when a connection is attempted using an intra-machine protocol (such as shared memory) while specifying a remote host:

R:\ade\mysql-5.7.8-rc-winx64>bin\mysql -h192.168.2.2 --protocol=MEMORY
ERROR 2047 (HY000): Wrong or unknown protocol

I think it makes sense to produce similar errors when conflicting implicit command-line options are given. That would prohibit users from starting mysql with both --port and --socket options defined on the command line.
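Putting the implicit-protocol rules and the conflict check together, the proposed behavior for a Linux client could look roughly like this (hypothetical C; only the option names come from the proposal):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum proto { PROTO_DEFAULT, PROTO_TCP, PROTO_SOCKET, PROTO_ERROR };

/* --port or a non-localhost --host implies TCP; --socket implies a Unix
 * socket; conflicting implications are rejected instead of being silently
 * resolved in favor of one of them. */
static enum proto pick_protocol(const char *host, int have_port, int have_socket)
{
    int want_tcp = have_port || (host != NULL && strcmp(host, "localhost") != 0);
    if (want_tcp && have_socket)
        return PROTO_ERROR;          /* e.g. --port plus --socket */
    if (want_tcp)
        return PROTO_TCP;
    if (have_socket)
        return PROTO_SOCKET;
    return PROTO_DEFAULT;            /* platform default applies */
}
```

The first case is exactly the one from the bug report: --port with the default localhost host would now select TCP rather than falling back to the socket.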

Retain platform-specific default protocols

When supplying no options that change the behavior, the standard MySQL command-line clients should continue to prefer Unix Socket connections on Linux for performance reasons, while Windows uses TCP/IP.

Conclusion

We think the proposed changes make sense to clarify protocol selection, and that this is a worthwhile addition to 5.7, even though two Release Candidate builds have already shipped.  With these changes, users (like me) will have better protection against inadvertently connecting to the wrong instance on a multi-instance host, and ambiguity around which protocol is used will be significantly reduced.

Please let us know your thoughts on this proposal!

 



The MySQL query cache: Worst enemy or best friend?


During the last couple of months I have been involved in an unusually high amount of performance audits for e-commerce applications running with Magento. And although the systems were quite different, they also had one thing in common: the MySQL query cache was very useful. That was counter-intuitive for me as I’ve always expected the query cache to be such a bottleneck that response time is better when the query cache is turned off no matter what. That lead me to run a few experiments to better understand when the query cache can be helpful.

Some context

The query cache is well known for its contentions: a global mutex has to be acquired for any read or write operation, which means that any access is serialized. This was not an issue 15 years ago, but with today’s multi-core servers, such serialization is the best way to kill performance.

However, from a performance point of view, any query cache hit is served in a few tens of microseconds, while the fastest access with InnoDB (a primary key lookup) still requires several hundred microseconds. Yes, the query cache is at least an order of magnitude faster than any query that goes to InnoDB.
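A back-of-the-envelope model makes the trade-off concrete. Assume a cache hit costs about 50µs but is fully serialized by the mutex, while an InnoDB lookup costs about 500µs but runs in parallel (rough orders of magnitude, not measurements):

```c
#include <assert.h>

/* Toy throughput model: query cache hits are serialized by the global
 * mutex, InnoDB reads scale with the number of client threads. */
static double qcache_qps(int n_threads, double hit_us) {
    (void)n_threads;                 /* more threads do not help here */
    return 1e6 / hit_us;
}

static double innodb_qps(int n_threads, double read_us) {
    return (double)n_threads * 1e6 / read_us;
}
```

In this model the crossover sits at 500/50 = 10 threads, which is in the same ballpark as the turnover between 4 and 20 threads that the benchmark below shows.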

A simple test

To better understand how good or bad the query cache can be, I set up a very simple benchmark:

  • 1M records were inserted in 16 tables.
  • A moderate write load (65 updates/s) was run with a modified version of the update_index.lua sysbench script (see the end of the post for the code).
  • The select.lua sysbench script was run, with several values for the --num-threads option.

Note that the test is designed to be unfavorable to the query cache as the whole dataset fits in the buffer pool and the SELECT statements are very simple. Also note that I configured the query cache to be large enough so that no entry was evicted from the cache due to low memory.

Results – MySQL query cache ON

First here are the results when the query cache is enabled:

[Graph: throughput vs. number of concurrent threads, query cache enabled]

This configuration scales well up to 4 concurrent threads, but then the throughput degrades very quickly. With 10 concurrent threads, SHOW PROCESSLIST is enough to show you that all threads spend all their time waiting for the query cache mutex. Okay, this is not a surprise.

Results – MySQL query cache OFF

When the query cache is disabled, this is another story:

[Graph: throughput vs. number of concurrent threads, query cache disabled]

Throughput scales well up to somewhere between 10 and 20 threads (for the record the server I was using had 16 cores). But more importantly, even at the higher concurrencies, the overall throughput continued to increase: at 20 concurrent threads, MySQL was able to serve nearly 3x more queries without the query cache.

Conclusion

With Magento, you can expect to have a light write workload, very low concurrency and also quite complex SELECT statements. Given the results of our simple benchmarks, it is finally not that surprising that the MySQL query cache is a good fit in this case.

It is also worth noting that many applications run a database workload where writes are light and concurrency is low: the query cache should then not be discarded immediately. And maybe it is time for Oracle to make plans to improve the query cache as suggested by Peter a few years ago?

Annex: sysbench commands

# Modified update_index.lua
function event(thread_id)
   local table_name
   table_name = "sbtest".. sb_rand_uniform(1, oltp_tables_count)
   rs = db_query("UPDATE ".. table_name .." SET k=k+1 WHERE id=" .. sb_rand(1, oltp_table_size))
   db_query("SELECT SLEEP(0.015)")
end

# Populate the tables
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-table-size=1000000 --oltp-tables-count=16 --num-threads=16 --test=/usr/share/doc/sysbench/tests/db/insert.lua prepare
# Write workload
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-tables-count=16 --num-threads=1 --test=/usr/share/doc/sysbench/tests/db/update_index.lua --max-requests=1000000 run
# Read workload
sysbench --mysql-socket=/data/mysql/mysql.sock --mysql-user=root --mysql-db=db1 --oltp-tables-count=16 --num-threads=1 --test=/usr/share/doc/sysbench/tests/db/select.lua --max-requests=10000000 run

The post The MySQL query cache: Worst enemy or best friend? appeared first on MySQL Performance Blog.



Baron Schwartz speaking at OSCON EU


Baron Schwartz will be speaking on Building Microservices with Go at OSCON EU on October 26th.

Below is an overview:

Go is great for building HTTP and RPC services. VividCortex’s infrastructure is Go-based, and there are a lot of lessons to learn from the experience building it. Here are some of the things we needed to do above and beyond what the standard libraries provide:

  • Using net/http effectively
  • Building an actually sane REST framework in Go
  • Dealing with garbage collection
  • Building a build system
  • Continuous integration and deployment
  • Running services (daemons)
  • Runtime inspection of all in-flight requests
  • Interacting with databases, caches, and queues
  • Building staging and development environments

Go heavily influenced all of the above, and there was additional work needed beyond just plug-and-play with the libraries. Much of this is now open-sourced on VividCortex’s GitHub repositories.

Click here for more details on the conference.



Velocity EU: Amsterdam

Cloud Computing Expo


VividCortex is sponsoring and exhibiting at the Cloud Computing Expo in Santa Clara, California November 3 - 5. Stop by our booth to get a free product demo and see how we can revolutionize your database monitoring.

Click here for more details and registration.



PG Conf Silicon Valley
