
VividCortex Press Release: Receives Award, Continues to Expand


VividCortex received the MySQL Community Award for Application of the Year. This is an acknowledgement of the work put into developing a smarter monitoring tool providing deep query insights. We continue to be extremely excited about what’s on the horizon. Read the full press release here.



How to setup MySQL incremental backup


Incremental backups in MySQL have always been a tricky exercise. Logical backup tools like mysqldump or mydumper don’t support incremental backups, although it’s possible to emulate them with binary logs. And with snapshot-based backup tools it’s close to impossible to take incremental copies.
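
For example, a minimal sketch of emulating incremental backups with mysqldump plus the binary logs could look like this (the paths and binlog file names are illustrative, not from the original post):

# Weekly full logical backup; --flush-logs rotates to a fresh binary log,
# so we know exactly where the "incremental" part starts
mysqldump --all-databases --single-transaction --flush-logs \
          --master-data=2 > full_$(date +%F).sql

# The "incremental" part is whatever accumulated in the binary logs since
# the dump; convert it to SQL so it can be archived or replayed
mysqlbinlog /var/lib/mysql/mysql-bin.000042 /var/lib/mysql/mysql-bin.000043 \
            > incremental_$(date +%F).sql

# Restore: load the full dump first, then replay the incremental SQL on top
mysql < full_2015-05-01.sql
mysql < incremental_2015-05-03.sql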

Percona’s XtraBackup does support incremental backups, but you have to understand well how it works under the hood and be familiar with its command line options. That’s not easy, and it gets worse when it comes to restoring the database from an incremental copy. Some shops even ditch incremental backups because of the complexity of scripting the backup and restore procedures.
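
To give an idea of why, here is a rough sketch of a manual incremental cycle with XtraBackup (the directory names are made up; check the XtraBackup manual for the exact options of your version):

# Full (base) backup
innobackupex /data/backups/base

# Incremental backup, containing only the pages changed since the base
innobackupex --incremental /data/backups/inc1 \
             --incremental-basedir=/data/backups/base/2015-05-03_04-00-00

# Restore: prepare the base with --redo-only, apply each incremental on top
# of it in order, then run the final prepare
innobackupex --apply-log --redo-only /data/backups/base/2015-05-03_04-00-00
innobackupex --apply-log --redo-only /data/backups/base/2015-05-03_04-00-00 \
             --incremental-dir=/data/backups/inc1/2015-05-03_05-00-00
innobackupex --apply-log /data/backups/base/2015-05-03_04-00-00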

With TwinDB incremental backups are easy. In this post I will show how to configure MySQL incremental backups for a replication cluster with three nodes – a master and two slaves.

Configure MySQL Incremental Backups in TwinDB – online backup service for MySQL

TwinDB is an online backup service for MySQL, available at https://console.twindb.com/. Once you get there you’ll see a read-only demo that shows how we back up our own TwinDB servers.


Create Account in TwinDB

A new user has to create an account so they can back up their own servers.

For now we are in an invitation-only beta; drop me an email at aleks@twindb.com for an invitation code.


Once you’re registered it’ll bring you to your environment, where you can manage MySQL servers and storage, and change the schedule and retention policy.

Install the Package Repository

The next step is to install the TwinDB agent on the MySQL servers. It’s a Python script that receives and executes commands from TwinDB. We distribute the TwinDB agent via package repositories; there are repositories for RedHat-based systems as well as for Debian-based systems.

For the demonstration we will register a cluster with one master and two slaves.

Let’s install the TwinDB RPM repository.

# yum install https://repo.twindb.com/twindb-release-latest.noarch.rpm

After the repository is configured we can install the agent:

# yum install twindb
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirror.cs.vt.edu
 * epel: mirror.dmacc.net
 * extras: mirror.cs.vt.edu
 * updates: mirrors.loosefoot.com
Resolving Dependencies
--> Running transaction check
---> Package twindb.noarch 0:0.1.35-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================
 Package      Arch         Version        Repository    Size
=============================================================
Installing:
 twindb       noarch       0.1.35-1       twindb        26 k

Transaction Summary
=============================================================
Install       1 Package(s)

Total download size: 26 k
Installed size: 85 k
Is this ok [y/N]: y
Downloading Packages:
twindb-0.1.35-1.noarch.rpm            |  26 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : twindb-0.1.35-1.noarch                    1/1
Stopping ntpd service
Shutting down ntpd: [  OK  ]
Starting ntpd service
Starting ntpd: [  OK  ]
Starting twindb client
Starting TwinDB agent ... OK
  Verifying  : twindb-0.1.35-1.noarch                    1/1

Installed:
  twindb.noarch 0:0.1.35-1

Complete!

The agent should be installed on all three servers. TwinDB discovers the replication topology and makes sure the backup is taken from a slave.

Register TwinDB Agents

Now we need to register the MySQL servers in TwinDB.


To do so we need to run this command on all three servers.

# twindb --register ea29cf2eda74bb308a6cb80a910ab19a
2015-05-03 04:12:24,588: twindb: INFO: action_handler_register():1050: Registering TwinDB agent with code ea29cf2eda74bb308a6cb80a910ab19a
2015-05-03 04:12:26,804: twindb: INFO: action_handler_register():1075: Reading SSH public key from /root/.ssh/twindb.key.pub.
2015-05-03 04:12:28,356: twindb: INFO: action_handler_register():1129: Received successful response to register an agent
2015-05-03 04:12:29,777: twindb: INFO: get_config():609: Got config:
{
    "config_id": "8",
    "mysql_password": "********",
    "mysql_user": "twindb_agent",
    "retention_policy_id": "9",
    "schedule_id": "9",
    "user_id": "9",
    "volume_id": "8"
}
2015-05-03 04:12:30,549: twindb: INFO: create_agent_user():1159: Created MySQL user twindb_agent@localhost for TwinDB agent
2015-05-03 04:12:31,084: twindb: INFO: create_agent_user():1160: Congratulations! The server is successfully registered in TwinDB.
2015-05-03 04:12:31,662: twindb: INFO: create_agent_user():1161: TwinDB will backup the server accordingly to the default config.
2015-05-03 04:12:32,187: twindb: INFO: create_agent_user():1162: You can change the schedule and retention policy on https://console.twindb.com/

When a MySQL server registers in TwinDB, a few things happen:

  • The agent generates a GPG key pair to encrypt backups and to communicate securely with the TwinDB dispatcher
  • The agent generates an SSH key pair for secure file transfers
  • TwinDB creates a schedule and retention policy for the server and allocates storage in TwinDB for backup copies.
  • The agent creates a MySQL user on the local MySQL instance.

At the registration step the agent has to connect to MySQL with root permissions. It’s preferable to set the user and password in the ~/.my.cnf file; it is also possible to specify them with the -u and -p options.
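
For example, a minimal ~/.my.cnf could look like this (the password value is a placeholder; keep the file readable by root only):

[client]
user=root
password=YourRootPassword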

After five minutes TwinDB will discover the replication topology, find a suitable MySQL server to take the backup from, and schedule a backup job.

In “Server farm” -> “All servers” we see all registered MySQL servers.


After TwinDB discovers the replication cluster nodes it starts scheduling backup jobs. By default a full copy is taken every week and an incremental copy is taken every hour. You can change the schedule by clicking on “Schedule” -> “Default“.

On the dashboard there is a list of jobs. I wrote this post over several days, so TwinDB managed to schedule a dozen jobs.


For each newly registered server TwinDB schedules a full backup job, which is why there are jobs for db01 and db02. It then picked db03, and all further backups are taken from it.

To see what backup copies are taken from the replication cluster, let’s open the db03 server details, tab “Backup copies“. Here you can see full copies from db01, db02, and db03 and further incremental copies from db03.


Restore MySQL Incremental Backup

So far, taking an incremental backup has been easy, but what about restoring a server from it?

Let’s go to the server list, right-click on a server where we want to restore a backup copy and choose “Restore server“:

Then choose an incremental copy to restore:


Then enter the directory name where the restored database will be placed:


Then press “Restore” and it should show a confirmation window:


The restore job is scheduled and it’ll start after five minutes:


When the restore job is done the database files will be restored in the directory /var/lib/mysql.restored on server db03:

[root@db03 mysql.restored]# cd /var/lib/mysql.restored/
[root@db03 mysql.restored]# ll
total 79908
-rw-r-----. 1 root root      295 May  5 03:36 backup-my.cnf
-rw-r-----. 1 root root 79691776 May  5 03:36 ibdata1
drwx------. 2 root root     4096 May  5 03:36 mysql
drwx------. 2 root root     4096 May  5 03:36 performance_schema
drwx------. 2 root root     4096 May  5 03:36 sakila
drwx------. 2 root root     4096 May  5 03:36 twindb
-rw-r-----. 1 root root       25 May  5 03:36 xtrabackup_binlog_info
-rw-r-----. 1 root root       91 May  5 03:36 xtrabackup_checkpoints
-rw-r-----. 1 root root      765 May  5 03:36 xtrabackup_info
-rw-r-----. 1 root root  2097152 May  5 03:36 xtrabackup_logfile
-rw-r-----. 1 root root       80 May  5 03:36 xtrabackup_slave_info
[root@db03 mysql.restored]#

And that’s it. /var/lib/mysql.restored/ is ready to be used as the MySQL datadir.
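
If you want to put the restored copy into service on db03 itself, the remaining steps are the usual ones for swapping a datadir (a sketch; adjust paths and the service name to your system, and note the restored files above are owned by root):

# service mysql stop
# mv /var/lib/mysql /var/lib/mysql.old
# mv /var/lib/mysql.restored /var/lib/mysql
# chown -R mysql:mysql /var/lib/mysql
# service mysql start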


The post How to setup MySQL incremental backup appeared first on Backup and Data Recovery for MySQL.



Partial table recovery from physical backup


In a previous topic we covered the “Transportable Tablespace” concept by copying a table’s tablespace to a remote server and importing it there. See -> Copying Tablespaces to Remote Server

The idea is to copy the tablespace file to the remote server. On the remote server you must manually create a database and table with identical names, then discard the new table’s tablespace and import the copied one.

To achieve this you must be running MySQL version >= 5.6 with innodb_file_per_table=1, and you must know the table’s CREATE statement.

Let’s change our test conditions. Assume that you have a MySQL server and you have taken a physical backup of it (using Percona XtraBackup or a cold backup, for example).
But one fine day somebody deleted all the table data (say -> delete from table_name).
In that case the table still exists (.frm and .ibd), so you can easily discard the table’s tablespace and import the tablespace from the backup folder.
But what if the table is dropped and you don’t know the table’s CREATE statement? Or even the whole database is dropped? Our path will differ from the previous one:
1. Create the dropped database manually.
2. Create the dropped table by extracting the table’s CREATE statement from the .frm file in the backed-up directory.
To extract the CREATE statement from an .frm file you can use the mysqlfrm tool from MySQL Utilities.
3. Discard the table’s tablespace (ALTER TABLE t DISCARD TABLESPACE;)
4. Copy the .ibd file from the backup directory to the database directory under MySQL’s datadir
5. Import the copied tablespace file (ALTER TABLE t IMPORT TABLESPACE;). A sketch of steps 3-5 follows below.
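
A minimal sketch of steps 3-5, assuming a table t1 in database db1 and a backup directory /backup/2015-05-05 (both names are made up):

# 3. Discard the existing (empty) tablespace
mysql db1 -e "ALTER TABLE t1 DISCARD TABLESPACE;"

# 4. Copy the .ibd file from the backup into the datadir
cp /backup/2015-05-05/db1/t1.ibd /var/lib/mysql/db1/
chown mysql:mysql /var/lib/mysql/db1/t1.ibd

# 5. Import the copied tablespace
mysql db1 -e "ALTER TABLE t1 IMPORT TABLESPACE;"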

You can also read about this concept in the documentation -> tablespace-copying

I have automated this process by adding table CREATE statement extraction functionality to the MySQL-AutoXtraBackup project as the --partial recovery option.
Here is a demo usage video:

If you test it and find issues, please report them to help improve this open-source project.

The post Partial table recovery from physical backup appeared first on Azerbaijan MySQL UG.



MongoDB’s flexible schema: How to fix write amplification


Being schemaless is one of the key features of MongoDB. On the bright side, this allows developers to easily modify the schema of their collections without waiting for the database to be ready to accept a new schema. However, schemaless is not free, and one of the drawbacks is write amplification. Let’s focus on that topic.

Write amplification?

The link between schema and write amplification is not obvious at first sight. So let’s first look at a table in the relational world:

mysql> SELECT * FROM user LIMIT 2;
+----+-------+------------+-----------+-----------+----------------------------------+---------+-----------------------------------+------------+------------+
| id | login | first_name | last_name | city      | country                          | zipcode | address                           | password   | birth_year |
+----+-------+------------+-----------+-----------+----------------------------------+---------+-----------------------------------+------------+------------+
|  1 | arcu  | Vernon     | Chloe     | Paulista  | Cook Islands                     | 28529   | P.O. Box 369, 1464 Ac Rd.         | SSC44GZL5R |       1970 |
|  2 | quis  | Rogan      | Lewis     | Nashville | Saint Vincent and The Grenadines | H3T 3S6 | P.O. Box 636, 5236 Elementum, Av. | TSY29YRN6R |       1983 |
+----+-------+------------+-----------+-----------+----------------------------------+---------+-----------------------------------+------------+------------+

As all records have exactly the same fields, the field names are stored once in a separate file (.frm file). So the field names are metadata, while the value of each field for each record is of course data.

Now let’s look at an equivalent collection in MongoDB:

{
        {
                "login": "arcu",
                "first_name": "Vernon",
                "last_name": "Chloe",
                "city": "Paulista",
                "country": "Cook Islands",
                "zipcode": "28529",
                "address": "P.O. Box 369, 1464 Ac Rd.",
                "password": "SSC44GZL5R",
                "birth_year": 1970
        },
        {
                "login": "quis",
                "first_name": "Rogan",
                "last_name": "Lewis",
                "city": "Nashville",
                "country": "Saint Vincent and The Grenadines",
                "zipcode": "H3T 3S6",
                "address": "P.O. Box 636, 5236 Elementum, Av.",
                "password": "TSY29YRN6R",
                "birth_year": 1983
        }
}

One difference with a table in the relational world is that MongoDB doesn’t know which fields each document will have. Therefore field names are data, not metadata and they must be stored with each document.

Then the question is: how large is the overhead in terms of disk space? To have an idea, I inserted 10M such records in an InnoDB table (adding an index on password and on birth_year to make the table look like a real table): the size on disk is around 1.4GB.

I also inserted the exact same 10M records in a MongoDB collection using the regular MMAPv1 storage engine, again adding an index on password and on birth_year, and this time the size on disk is … 2.97GB!

Of course it is not an apples-to-apples comparison as the InnoDB storage format and the MongoDB storage format are not identical. However a 100% difference is still significant.
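
If you want to reproduce this kind of comparison yourself, the on-disk sizes can be queried directly on each side (a sketch; the 'test' database and 'user' table/collection names are assumptions for this example):

# InnoDB: data + index size as reported by the data dictionary
mysql -e "SELECT (data_length + index_length) / 1024 / 1024 / 1024 AS size_gb
          FROM information_schema.tables
          WHERE table_schema = 'test' AND table_name = 'user';"

# MongoDB: storage and index sizes of the collection, in bytes
mongo test --eval "printjson(db.user.stats())"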

Compression

One way to deal with write amplification is to use compression. With MongoDB 3.0, the WiredTiger storage engine is available and one of its benefits is compression (default algorithm: snappy). Percona TokuMX also has built-in compression using zlib by default.

Rebuilding the collection with 10M documents and the 2 indexes now gives the following results:
WiredTiger: 1.14GB
TokuMX: 736MB

This is a 2.5x to 4x data size reduction, pretty good!

WiredTiger also provides zlib compression and in this case the collection is only 691MB. However CPU usage is much higher compared to snappy so zlib will not be usable in all situations.
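
For reference, WiredTiger lets you pick the block compressor per collection at creation time; a sketch (the collection name is illustrative):

mongo test --eval 'db.createCollection("user_zlib",
    { storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } } })'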

Conclusion

MongoDB schemaless design is attractive but it comes with several tradeoffs. Write amplification is one of them and using either WiredTiger with MongoDB 3.0 or Percona TokuMX is a very simple way to fix the issue.

The post MongoDB’s flexible schema: How to fix write amplification appeared first on MySQL Performance Blog.



MySQL 5.7 Labs — Inserting, Updating, and Deleting Records via HTTP


In the MySQL Labs version of MySQL version 5.7, there is a new HTTP plugin. The HTTP plugin documentation from the labs site provides this information (from MySQL Labs):

The HTTP Plugin for MySQL adds HTTP(S) interfaces to MySQL. Clients can use the HTTP respectively HTTPS (SSL) protocol to query data stored in MySQL. The query language is SQL but other, simpler interfaces exist. All data is serialized as JSON. This version of MySQL Server HTTP Plugin is a Labs release, which means it’s at an early development stage. It contains several known bugs and limitation, and is meant primarily to give you a rough idea how this plugin will look some day. Likewise, the user API is anything but finalized. Be aware it will change in many respects.

In other words, with a simple HTTP URL, you can access and modify your data stored in MySQL. Here is an overview from the documentation:


The HTTP Plugin for MySQL is a proof-of concept of a HTTP(S) interface for MySQL 5.7.

The plugin adds a new protocol to the list of protocols understood by the server. It adds the HTTP respectively HTTPS (SSL) protocol to the list of protocols that can be used to issue SQL commands. Clients can now connect to MySQL either using the MySQL Client Server protocol and programming language-dependent drivers, the MySQL Connectors, or using an arbitrary HTTP client.
Results for SQL commands are returned using the JSON format.

The server plugin is most useful in environments where protocols other than HTTP are blocked:
• JavaScript code run in a browser
• an application server behind a firewall and restricted to HTTP access
• a web services oriented environment

In such environments the plugin can be used instead of a self developed proxy which translates HTTP requests into MySQL requests. Compared to a user-developed proxy, the plugin means less latency, lower complexity and the benefit of using a MySQL product. Please note, for very large deployments an architecture using a proxy not integrated into MySQL may be a better solution to clearly separate software layers and physical hardware used for the different layers.

The HTTP plugin implements multiple HTTP interfaces, for example:
• plain SQL access including meta data
• a CRUD (Create-Read-Update-Delete) interface to relational tables
• an interface for storing JSON documents in relational tables

Some of the interfaces follow Representational State Transfer (REST) ideas, some don’t. See below for a description of the various interfaces.

The plugin maps all HTTP accesses to SQL statements internally. Using SQL greatly simplifies the development of the public HTTP interface. Please note, at this early stage of development performance is not a primary goal. For example, it is possible to develop a similar plugin that uses lower level APIs of the MySQL server to overcome SQL parsing and query planning overhead.


In this post, I will show you how to install the plugin and use HTTP commands to retrieve data. The documentation also provides other examples. We aren’t going to explain everything about the plugin, as you will need to download the documentation.

First, you will need to download the MySQL Labs 5.7 version which includes the plugin. This download is available from the MySQL Labs web site.

After MySQL 5.7 is installed, you will want to add these lines to your my.cnf/my.ini file under the [mysqld] section:

#
# Default database, if no database given in URL
#
myhttp_default_db = httptest
#
# Non-SSL default MySQL SQL user
#
myhttp_default_mysql_user_name = http_sql_user
myhttp_default_mysql_user_passwd = sql_secret
myhttp_default_mysql_user_host = 127.0.0.1

There are other options for the plugin, but we will skip them for this post.

# Change only, if need be to run the examples!
#
# General settings
# 
# myhttp_http_enabled = 1
# myhttp_http_port = 8080
# myhttp_crud_url_prefix = /crud/
# myhttp_document_url_prefix = /doc/
# myhttp_sql_url_prefix = /sql/
# 
# 
# 
# Non-SSL HTTP basic authentication
# 
# myhttp_basic_auth_user_name = basic_auth_user
# myhttp_basic_auth_user_passwd = basic_auth_passwd

# 
# SSL
# 
# myhttp_https_enabled = 1
# myhttp_https_port = 8081
# myhttp_https_ssl_key = /path/to/mysql/lib/plugin/myhttp_sslkey.pem

After modifying the my.cnf/my.ini file, restart MySQL and then install the plugin from a mysql prompt. Before proceeding, also check that the plugin is installed:

mysql> INSTALL PLUGIN myhttp SONAME 'libmyhttp.so';
Query OK, 0 rows affected (0.09 sec)


mysql> SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME='myhttp'\G
*************************** 1. row ***************************
           PLUGIN_NAME: myhttp
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: DAEMON
   PLUGIN_TYPE_VERSION: 50705.0
        PLUGIN_LIBRARY: libmyhttp.so
PLUGIN_LIBRARY_VERSION: 1.5
         PLUGIN_AUTHOR: Andrey Hristov, Ulf Wendel
    PLUGIN_DESCRIPTION: HTTP Plugin for MySQL
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
1 row in set (0.03 sec)

We will need to create the user for accessing our database, and grant permissions:

mysql> CREATE USER 'http_sql_user'@'127.0.0.1' IDENTIFIED WITH mysql_native_password;
Query OK, 0 rows affected (1.89 sec)

mysql> SET old_passwords = 0;
Query OK, 0 rows affected (0.05 sec)

mysql> SET PASSWORD FOR 'http_sql_user'@'127.0.0.1' = PASSWORD('sql_secret');
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT ALL ON myhttp.* TO
    -> 'http_sql_user'@'127.0.0.1';
Query OK, 0 rows affected (0.12 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.58 sec)

We will need to create a table for our example. The table will be a very simple table with three fields – ID, first and last names:

mysql> CREATE TABLE `names` (
    ->   `id` int(11) NOT NULL DEFAULT '1000',
    ->   `name_first` varchar(40) DEFAULT NULL,
    ->   `name_last` varchar(40) DEFAULT NULL,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.04 sec)

We need to insert some data into the table:

INSERT INTO `names` (name_first, name_last) VALUES ('Clark','Kent');
INSERT INTO `names` (name_first, name_last) VALUES ('Bruce','Wayne');
INSERT INTO `names` (name_first, name_last) VALUES ('Hal','Jordan');
INSERT INTO `names` (name_first, name_last) VALUES ('Barry','Allen');
INSERT INTO `names` (name_first, name_last) VALUES ('Diana','Prince');
INSERT INTO `names` (name_first, name_last) VALUES ('Arthur','Curry');
INSERT INTO `names` (name_first, name_last) VALUES ('Oliver','Queen');
INSERT INTO `names` (name_first, name_last) VALUES ('Ray','Palmer');
INSERT INTO `names` (name_first, name_last) VALUES ('Carter','Hall');
Query OK, 9 rows affected (0.01 sec)

Now that we have our table and table data, we can test a select statement with an HTTP URL. You may use a browser for this, but since I like to work with command line tools, I am going to use curl, a command line tool for doing all sorts of URL manipulations and transfers. Here is a simple select statement via curl. Use the plus sign (+) for spaces.

Select all of the names in the table:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"],
	["Bruce","Wayne"],
	["Hal","Jordan"],
	["Barry","Allen"],
	["Diana","Prince"],
	["Arthur","Curry"],
	["Oliver","Queen"],
	["Ray","Palmer"],
	["Carter","Hall"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

If you want to use a browser, you might have to authenticate the connection (enter the user name and password):

And here is the output from submitting the URL in a browser:

URL:  http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names

Selecting a single name:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/SELECT+name_first,+name_last+FROM+names+where+name_first+=+'Clark'"
[
{
"meta":[
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_first","org_column":"name_first","charset":33,"length":120,"flags":0,"decimals":0},
	{"type":253,"catalog":"def","database":"myhttp","table":"names","org_table":"names","column":"name_last","org_column":"name_last","charset":33,"length":120,"flags":0,"decimals":0}
],
"data":[ 
	["Clark","Kent"]
],
"status":[{"server_status":34,"warning_count":0}]
}
]

Deleting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/delete+from+names+where+name_first+=+'Hal'"
{"server_status":34,"warning_count":0,"affected_rows":1,"last_insert_id":0}

Inserting a row:

$ curl --user basic_auth_user:basic_auth_passwd --url "http://127.0.0.1:8080/sql/myhttp/INSERT+INTO+names+(name_first,+name_last)+VALUES+('Hal','Jordan');"
{"server_status":2,"warning_count":0,"affected_rows":1,"last_insert_id":1018}

In a future post, I will show you how to use Perl to connect via HTTP and then parse the results. That’s all for now. THANK YOU for using MySQL!

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.


VividCortex Receives the MySQL Community Award for Application of the Year


VividCortex, the monitoring solution for the modern data system, continues building a product that enables IT departments to manage distributed, diverse systems effectively and improve workflow.

Charlottesville, Virginia (PRWEB) May 05, 2015 - VividCortex, the monitoring solution for the modern data system, received the MySQL Community Award for Application of the Year at Percona Live 2015. Since launching in April 2014, the SaaS product for database monitoring has provided unparalleled query insight and given customers a clear view of what is happening on their production servers, allowing companies to improve application and server performance and maximize resources. This award recognizes the new standard for database monitoring, one that goes beyond graphs and charts to provide actionable data. VividCortex will continue to build a service that enables IT teams to manage distributed systems and continuous workflows effectively.

ADDITIONAL HIGHLIGHTS

  • Revenue and customer growth are more than 500% year over year
  • Customer retention and satisfaction are extremely high, and customer adoption is expanding rapidly
  • Thus far in 2015, VividCortex has expanded to two additional databases, PostgreSQL and Redis
  • Last quarter alone, VividCortex made a number of product enhancements to further improve database monitoring. View here.
  • The team has released 35+ open-source code repositories

Quotes

“It’s an honor to receive this award from the MySQL community. We aim to raise the bar for database monitoring the same way MySQL has raised the bar for open source databases. Our product would not be at this point without the dedication and help of our employees, friends, customers and investors. We thank them for their support and look forward to many years of mutually beneficial relationships. On a personal note, as a previous recipient of Community Member of the Year award, and having dedicated the last decade of my life to the MySQL community, this is deeply meaningful to me.”

About VividCortex

Database management software is at the core of IT systems, but often operates as a black box. VividCortex has created the first comprehensive tool designed specifically to provide actionable insight and a high definition window into the inner workings of databases with unprecedented detail, accuracy, and ease-of-use. Visit us at http://www.VividCortex.com and read our blog at VividCortex.com/blog/.

PRWeb Press Release



OpenStack Summit Vancouver May 18-22. How is Time Warner Cable using Galera? Come and see.


Codership will be joining OpenStack Summit Vancouver May 18-22. Visit our booth T52.

Codership will give a demo theater presentation about Galera in the cloud, Time Warner Cable will explain how they use Galera, and MariaDB will share best practices for using MariaDB Galera Cluster.

 

OpenStack Summit is the must-attend OpenStack event. The OpenStack Summit is a five-day conference for developers, users, and administrators of OpenStack Cloud Software. It’s a great place to get started with OpenStack.

 

 

Codership’s demo theater presentation at Vancouver:

 

Do More with Galera Cluster in Your OpenStack Cloud

 

Galera Cluster is already the way to achieve active/active HA for OpenStack back-end databases. Yet, it is possible to do a great deal more: Galera can provide the cloud user with a fully redundant database cluster in place of the traditional single-node MySQL, legacy replication or Amazon RDS. With the new geo-distribution features, it is also possible to create databases that span regions and availability zones. This provides protection against whole-datacenter disasters and power outages.

 

Presenter Philip Stoev:

Philip Stoev is a QA and Release Manager with Galera Cluster, responsible for the overall quality of the product. He has worked on distributed systems and databases for more than a decade at Oracle, MariaDB and NuoDB. His database testing framework, the RQG, is widely used in the MySQL ecosystem and won the Application of the Year MySQL Community Award in 2014.

 

 

Real World Experiences with Upgrading OpenStack at Time Warner Cable

Thursday, May 21 • 2:20pm – 3:00pm

How much work is it really to upgrade to newer versions of OpenStack? What problems might you run into and what should you tell your customers about it? We’ll talk about our experiences as an operator upgrading services individually and in bulk to both Kilo and Juno. Topics covered will include:

 

* Our testing approach using virtualized OpenStack environments and continuous integration

* How to know if you will need to take an outage and how to mitigate the impact.

* Upgrade automation using Ansible and Puppet

* Discussion of the pitfalls we encountered in the upgrades

* Bonus: Upgrading underlying infrastructure such as MySQL with GALERA and RabbitMQ

 

Speakers:

Matt Fisher, Principal Software Engineer – Time Warner Cable

Clayton O’Neill, Principal Software Engineer, Time Warner Cable

 

 

MariaDB Galera Cluster Best practices presentation

Thursday, May 21 • 9:50am – 10:30am

MariaDB Galera cluster is a synchronous multi-master cluster with many intriguing features like synchronous replication, active-active multi-master topology, automatic node provisioning, etc. This presentation will cover many of the best practices including pitfalls that Database Administrators and DevOps must keep in mind while managing the MariaDB Galera cluster.

 

Speaker: Nirbhay Choubey, Software Engineer at MariaDB Corporation

 

 



Getting started Galera with Docker, part 1


by Erkan Yanar

 

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

 

This is the first of a series of blog posts about using Galera with Docker.


In this post we are going to get started with Docker and Galera:

* Build a basic Docker image (which we will extend in later posts)

* Deploy on a test cluster on a local machine

 

The instructions have been tested on Ubuntu 14.04 with Docker 1.5.

 

Build a basic Docker image


In Docker, Dockerfiles are used to describe the Docker images we are going to use to start our Galera Cluster.

 

We are using the following Dockerfile:


FROM ubuntu:14.04
MAINTAINER Erkan Yanar <erkan.yanar@linsenraum.de>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 BC19DDBA
RUN add-apt-repository 'deb http://releases.galeracluster.com/ubuntu trusty main'
RUN apt-get update
RUN apt-get install -y galera-3 galera-arbitrator-3 mysql-wsrep-5.6 rsync lsof
COPY my.cnf /etc/mysql/my.cnf
ENTRYPOINT ["mysqld"]


This image builds on top of the Ubuntu 14.04 image. It simply installs Galera using the Codership repository and copies the my.cnf over.
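
If you prefer to build the image yourself rather than pulling the pre-built one mentioned below, a plain docker build from the directory containing the Dockerfile and my.cnf should be enough (the tag name here is arbitrary):

$ sudo docker build -t mylocal/galera:basic .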

 

The `my.cnf` is quite simple.


[mysqld]
user = mysql
bind-address = 0.0.0.0
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_sst_method = rsync
default_storage_engine = innodb
binlog_format = row
innodb_autoinc_lock_mode = 2
innodb_flush_log_at_trx_commit = 0
query_cache_size = 0
query_cache_type = 0


A pre-built image is available from Docker Hub. You can pull it by running:


sudo docker pull erkules/galera:basic

(All commands in this article need to run as root.)


Deploy on a test cluster on a local machine


Next, we are going to start a Galera Cluster on the local host. The instructions below are for demonstration purposes only and will not work when deploying on multiple hosts, as networking between containers needs to be set up. Configuring Docker networking across multiple hosts will be described in a following post.


Starting a cluster


There have been a number of blog posts showing how to start Galera Cluster on a single host. This post is going to show the simplest way to do that in Docker by using simple commands, which will not work for a multi-host installation.

First, if working on Ubuntu, we need to put AppArmor’s Docker profile in complain mode in advance.


$ sudo aa-complain /etc/apparmor.d/docker

Then we can start the first Galera node by instructing Docker to create a container and run mysqld in it.

 

$ sudo docker run --detach=true --name node1 -h node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://

In addition to defining the internal name and hostname, we also define the name of the cluster.


MySQL’s error log is not configured explicitly, and Docker records STDOUT and STDERR of every container. So, using `sudo docker logs node1`, the log output from the first node can be displayed without having to enter the container.


For the next two containers, we use a simple Docker trick. The `--link` option writes the name and the IP of node1 into the `/etc/hosts` file of the new container. This way, we can connect the remaining nodes to node1 without having to obtain its IP from its container.


$ sudo docker run --detach=true --name node2 -h node2 --link node1:node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://node1
$ sudo docker run --detach=true --name node3 -h node3 --link node1:node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://node1


Now we have a running Galera cluster. We can check the number of nodes in the Cluster by running the mysql client from inside one of the containers:


$ sudo docker exec -ti node1 mysql -e 'show status like "wsrep_cluster_size"'
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

What did we do?


We built a simple Galera Cluster on one host.

* Without using SSH;

* Without the need to configure any IP addresses;


Note that this setup does not support restarting the container — you should remove the container and recreate it instead.
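
For example, to replace node3 you would remove it and start a fresh container with the same options (a sketch; at least one other node must still be running for it to rejoin):

$ sudo docker rm -f node3
$ sudo docker run --detach=true --name node3 -h node3 --link node1:node1 erkules/galera:basic --wsrep-cluster-name=local-test --wsrep-cluster-address=gcomm://node1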

 

In the next blog post we will describe deploying Galera with Docker on multiple hosts.

 

 



new to pstop – vmstat style stdout interface


In November last year I announced a program I wrote called pstop. I hope that some of you have tried it and found it useful. Certainly I know that colleagues and friends use it and it has proved helpful when trying to look inside MySQL to see what it is doing.

A recent suggestion prompted me to provide a slightly different interface to pstop: rather than showing the output in a terminal-based top-like format, it can now provide a line-based summary in a similar way to vmstat(8), pt-diskstats(1p) and other similar command line tools. I have now incorporated some changes which allow this to be done. So if you want to see every few seconds which tables are generating most load, or which files have most I/O, then this tool may be useful. Example output is shown below:

$ pstop --stdout --count=3 --interval=10 --limit=4 --view=file_io_latency
pstop 0.4.0 - 18:29:38 myserver-0001 / 5.6.xx-log, up 99d 22h 42m 42s  [REL] 0 seconds
File I/O Latency (file_summary_by_instance)    6 row(s)    
   Latency      %|  Read  Write   Misc|Rd bytes Wr bytes|     Ops  R Ops  W Ops  M Ops|Table Name
  27.88 ms  43.1%|         1.6%  98.4%|          28.50 k|      40         50.0%  50.0%|<redo_log>
  27.05 ms  41.8%| 23.1%   1.7%  75.3%|356.98 k   9.79 k|  1.91 k  98.0%   1.0%   1.0%|<binlog>
   9.54 ms  14.7%|100.0%              | 32.00 k         |       2 100.0%              |db.mytable1
  70.32 us   0.1%|100.0%              | 16.00 k         |       1 100.0%              |db.mytable2
  64.66 ms 100.0%| 24.7%   1.4%  73.9%|436.98 k  38.29 k|  1.95 k  96.0%   2.0%   2.0%|Totals
pstop 0.4.0 - 18:29:48 myserver-0001 / 5.6.xx-log, up 99d 22h 42m 52s  [REL] 10 seconds
File I/O Latency (file_summary_by_instance)   38 row(s)    
   Latency      %|  Read  Write   Misc|Rd bytes Wr bytes|     Ops  R Ops  W Ops  M Ops|Table Name
    3.71 s  33.2%| 16.1%   3.8%  80.1%| 35.01 M 968.91 k|203.50 k  97.9%   1.0%   1.0%|<binlog>
    3.17 s  28.3%|         1.5%  98.5%|           3.35 M|  4.34 k         50.0%  50.0%|<redo_log>
 893.29 ms   8.0%| 94.9%   1.2%   3.9%|  1.89 M   4.41 M|     571  21.2%  49.4%  29.4%|db.mytable1
 668.57 ms   6.0%|        77.8%  22.2%|          30.28 M|     683         50.1%  49.9%|<ibdata>
   11.18 s 100.0%| 35.7%   7.0%  57.4%| 42.52 M  64.83 M|212.06 k  94.3%   3.1%   2.6%|Totals
pstop 0.4.0 - 18:29:58 myserver-0001 / 5.6.xx-log, up 99d 22h 43m 2s   [REL] 10 seconds
File I/O Latency (file_summary_by_instance)   34 row(s)    
   Latency      %|  Read  Write   Misc|Rd bytes Wr bytes|     Ops  R Ops  W Ops  M Ops|Table Name
    3.48 s  32.2%| 17.0%   3.8%  79.2%| 33.80 M 935.31 k|202.72 k  98.0%   1.0%   1.0%|<binlog>
    2.92 s  27.1%|         1.6%  98.4%|           3.37 M|  4.34 k         50.0%  50.0%|<redo_log>
 916.01 ms   8.5%| 92.7%   1.1%   6.2%|  1.98 M   4.58 M|     590  21.5%  49.7%  28.8%|db.mytable1
 898.80 ms   8.3%| 98.0%   0.4%   1.7%|  2.09 M   1.52 M|     303  44.2%  32.0%  23.8%|db.mytable3
   10.79 s 100.0%| 38.6%   6.3%  55.1%| 42.03 M  57.94 M|210.93 k  94.5%   3.0%   2.6%|Totals
$

Hopefully this gives you an idea. The --help option gives you more details. I have not yet paid much attention to the output format, and it is not currently well suited for another tool to parse, so I think it’s likely I will need to provide a more machine-readable --raw format option at a later stage. That said, feedback on what you want to see, or patches, are most welcome.



The Perfect Server - Ubuntu 15.04 (Vivid Vervet) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3

This tutorial shows how to install an Ubuntu 15.04 (Vivid Vervet) server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3, and how to install ISPConfig 3. ISPConfig 3 is a webhosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).

MySQL Binlog Events – reading and handling information from your Binary Log


MySQL replication is among the top features of MySQL. In replication, data is replicated from one MySQL Server (also known as the Master) to another MySQL Server (also known as the Slave). MySQL Binlog Events is a set of libraries which work on top of replication and open the door to a myriad of use cases, such as extracting data from binary log files, building applications to support heterogeneous replication, filtering events from binary log files, and much more.
All this in real time.
INTRODUCTION
I have already defined what MySQL Binlog Events is. To deliver on any of the above use cases, you first need to read the events from the binary log. This can be done using two types of transport:

1) TCP transport: Here your application will connect to an online MySQL Server and receive events as and when they occur.

2) File transport: As the name suggests, the application will open an already existing binary log file and read the events from it.

Irrespective of the transport, we always receive one event at a time. After the connection is established, we start reading the events from the binary log files and hold the information in a buffer.

To do something useful with an event it needs to be decoded, and then the relevant information can be stored or processed based on the requirements of the application.
Event processing can be implemented in a very sophisticated manner using the content handlers in MySQL Binlog Events, which are designed specifically for that purpose.

The libraries in MySQL Binlog Events are all C++ libraries. Here is a brief introduction to them:

– libmysqlstream -> Methods related to connecting to the transport, receiving the buffer from the connection and processing the events using content handlers.
– libbinlogevents -> Methods for decoding the events after receiving them from whichever transport the application is connected via.

GETTING AND COMPILING THE SOURCE CODE
You can get a copy of the source code from labs.mysql.com; select MySQL Binlog Events 1.0.0 from the drop-down menu. After selecting that you will get the option to download either the source code or the binaries for the respective platforms.

If you download the source code, these are the prerequisites you need on your machine in order to compile it successfully:

  • MySQL binaries of 5.7.6 or greater
    • To link to the libmysqlclient library.
    • The header files in the include directory are also used.
  • MySQL source code of the same version as the binaries
    • To get a few header files which are not part of the binaries but are used
  • CMake 2.6 or greater
  • GCC 4.6.3 or greater

Steps for compiling MySQL Binlog Events source:

  1. Run the ‘cmake’ command on the parent directory of the MySQL Binlog Events source. This will generate the necessary Makefiles. Options to set while running CMake:
    1. Make sure to set the cmake option ENABLE_DOWNLOADS=1, which will install Google Test, required to run the unit tests.
    2. The option MYSQLCLIENT_STATIC_LINKING:BOOL=TRUE will make the linking with libmysqlclient static.
    3. You need to point to the MySQL binary directory in order to make the libmysqlclient library and other header files visible; this is done by setting MYSQL_DIR={parent directory of the MySQL binaries}.
    4. As already discussed, we also need to include header files from the MySQL source code; for that you set MYSQL_SOURCE_INCLUDE_DIR={parent directory of MySQL source}/include.
       The full command for running CMake is:
       cmake . -DMYSQLCLIENT_STATIC_LINKING:BOOL=TRUE
               -DMYSQL_DIR={parent directory of MySQL binary}
               -DMYSQL_SOURCE_INCLUDE_DIR={parent directory of MySQL source}
               -DENABLE_DOWNLOADS=1
  2. Run make and make install to build and install; this will create the libraries libmysqlstream and libbinlogevents.

That’s all; you are now ready to create your own application using MySQL Binlog Events.
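
As a rough sketch (not from the original post), compiling a small application against the installed libraries could look something like the following; the include paths, library locations and any extra dependencies will depend on your installation:

g++ -std=c++11 my_binlog_app.cpp \
    -I/usr/local/include -I$MYSQL_DIR/include \
    -L/usr/local/lib -lbinlogevents -lmysqlstream -lmysqlclient \
    -o my_binlog_app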

For more details about the use cases and how to create a small application using MySQL Binlog Events refer to the  second blog in this series.



Concrete5 CMS


Today a lot of CMSes exist: WordPress, Joomla, Magento, and others. In this blog I will share my experience with Concrete5, gained through a specialized web agency based in Geneva: 8 Ways Media


What is Concrete5?


Concrete5 is not only an open-source CMS (Content Management System) coded in PHP and backed by a MySQL database, it is also a great framework for developers. Optional URL rewriting is available to improve how search engines index your site.

C5 can also be used in the development of Web applications.


C5 also makes it possible, through its content management and user rights, to create intranets for companies (on a small scale in my opinion; for a larger intranet it is better to stay on SharePoint or Alfresco).

This CMS is designed to make life easier for the end user, the handling is simple and intuitive.

It offers advanced management of dynamic websites with a modern design. Editing is done directly via the front end, and the possibilities are numerous: drag and drop, templates, etc.

INSTALLATION


Verifying Prerequisites


The following components are required for concrete5 to run correctly: http://www.concrete5.org/documentation/developers/5.7/installation/system-requirements/


For the installation tutorial, please refer to the main site: http://www.concrete5.org/documentation/developers/5.7/installation/installation/


FEATURES


The Dashboard manages the properties related to:
 


  • Rights management


 

  • Management of imported content



  • Templates


  • Block, pages


  • Features: video, forms, presentations, blog, guestbook, etc ...


The GUI is configurable; C5 gives you access to meticulous customization.
The editing of pages, text and other content is done via the front end as soon as you are logged in as an administrator or a user with write rights on the site. Editing has two modes: HTML or "Composer".

Versioning is an asset: in case of error, the previous version is easily restorable.
Updates are performed through a simple upload, followed by a single click.

CONCLUSION


Despite the dominance of the trio WordPress, Joomla and Drupal according to studies, having discovered Concrete5 I recommend it for its very intuitive look and ease of use. The developer community seems active and growing, which makes it easier to resolve small issues. Exporting all site content as HTML is also possible, which could help if you have to change web servers. However, most of the really useful plugins have a high cost, the best-looking themes are not free (a matter of taste), and the support service is paid unless you host through them.
I highly recommend this CMS! Its simplicity and its adaptability to different needs allow you to create a site with confidence and ease.



News from the third MariaDB Foundation Board Meeting this year


The MariaDB Foundation Board has been meeting monthly since February and on Monday this week had the third meeting of the year. Here is an update on a couple of things from the meeting.

We’re happy to announce that Booking.com has renewed their support of the foundation. As a major corporate sponsor, Booking.com has been offered a seat on the Foundation board, and they nominated Eric Herman. Eric has a history with MySQL dating from 2004, when he joined MySQL to work on the server and tools. In 2010, Eric joined Booking.com, where he works on database scaling challenges and Big Data. As a community member, he has contributed to the Perl MySQL client driver, the Perl interpreter, and other Free Software. To represent community and industry interests in line with the Foundation mission, Eric Herman has joined the Board.

The current Members of the Board ordered by last name are:

  • Sergei Golubchik, Chief Architect, MariaDB Corporation
  • Eric Herman, Principal Developer, Booking.com
  • Espen Håkonsen, CIO of Visma and Managing Director of Visma IT & Communications
  • Rasmus Johansson (Chair), VP Engineering, MariaDB Corporation
  • Michael Widenius, CTO, MariaDB Foundation
  • Jeremy Zawodny, Software Engineer, Craigslist

Last but not least secretary of the Board is the Foundation’s CEO Otto Kekäläinen.

The list of corporate sponsors so far this year are:

In case your company is interested in supporting the MariaDB project through the MariaDB Foundation, please contact ”foundation ‘at’ mariadb (dot) org”.

It might be of interest that the mariadb.org website is getting a facelift, both to look more appealing and to include more relevant information about the project and the Foundation. More about that later.



Spring Cleaning in the GIS Namespace


In MySQL 5.7.6 we’ve done some major spring cleaning within the GIS function namespace. We have deprecated 66 function names and added 13 new aliases. Please see the release notes for a complete list of all the changes. But why have we done this, and what impact does this have for you?

Standardization

GIS is a growing area, and to keep MySQL up to speed we have made the GIS namespace more SQL/MM compliant. SQL/MM is an ISO standard that defines a set of spatial routines and schemas for handling spatial data. As part of this effort we have adapted the standard naming convention where all spatial functions are prefixed with “ST_“. This also means that it is a lot easier to take your queries from other SQL/MM compliant database systems and run them on your MySQL databases.

Implementation Confusion

Almost all of the spatial functions in MySQL had two versions—one version prefixed with “ST_” and one prefixed with “MBR“. The “ST_” prefixed version does a precise calculation on the geometries, while the “MBR” version uses a minimum bounding rectangle (MBR) instead of the exact shape of the geometries. Because of this, the two versions would often give different answers, as seen in the example below:

mysql> SET @polygon1 = ST_GeomFromText('POLYGON((28 55,21 50,28 44,15 44,15 55,28 55))');
Query OK, 0 rows affected (0.00 sec)

mysql> SET @polygon2 = ST_GeomFromText('POLYGON((28 55,21 50,28 44,28 55))');
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT MBRContains(@polygon1, @polygon2);
+-----------------------------------+
| MBRContains(@polygon1, @polygon2) |
+-----------------------------------+
|                                 1 |
+-----------------------------------+
1 row in set (0.00 sec)

mysql> SELECT ST_Contains(@polygon1, @polygon2);
+-----------------------------------+
| ST_Contains(@polygon1, @polygon2) |
+-----------------------------------+
|                                 0 |
+-----------------------------------+
1 row in set (0.00 sec)

In addition to these two function name variants, a third name without any prefix sometimes existed as well. Which of the calculations does this function perform? Precise or with a minimum bounding rectangle? This wasn’t always clear or consistent, so in order to avoid this kind of confusion, all spatial functions without an “ST_” or “MBR” prefix are now deprecated.

However, according to and in compliance with the SQL/MM standard, exceptions to the above rule are the geometry construction functions: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and GeometryCollection. They have the same name as their corresponding column type, and will not be deprecated or receive an “ST_” prefix.

Inconsistencies in Error Reporting

Given all the function variants noted above, we also had some inconsistencies when it came to function naming and error reporting. An error within one of the Equals or MBREqual (yes, without the last s!) functions would report an error containing the string “MBREquals”, even though the function MBREquals did not exist! In addition, the GLength function did not follow any of the naming conventions we used. We have now cleaned all of this up so that the function names follow the SQL/MM standard, and the errors reported print the correct function name(s).

Okay, but What Do I Have to Do?

From MySQL 5.7.6 on all of the deprecated functions will raise a warning, and they will later be removed in a future Server release. Thus we strongly encourage you to ensure that all of your applications are using either the “ST_” prefixed function name variants or the “MBR” prefixed versions in MySQL 5.7. If you are unsure which of the prefixed function variants you should replace your current call(s) with, please take a look at the specific warnings produced in each case as they will tell you exactly which functions you should use instead.
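
For example, a quick way to spot calls that need updating is to run an old, non-prefixed name and look at the warning it now raises (a sketch reusing the polygons defined above; the exact warning text is omitted here):

mysql> SELECT Contains(@polygon1, @polygon2);
mysql> SHOW WARNINGS;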

We hope that all of this work helps to simplify your MySQL usage, bring us more into standards compliance, and further pave the way for MySQL playing a prominent part in the Open Source GIS database field.

That’s all for now. As always, THANK YOU for using MySQL!



MySQL BInlog Events Use case and Examples


This is a continuation of the previous post; if you have not read that, I would advise you to go through it before continuing.
I discussed three use cases in my last post; here I will explain them in detail.

  1.  Extracting data from a binary log file:
    The binary log file is full of information, but what if you want only selected information, for example:
    • printing the timestamp of every event in the binary log file.
    • extracting the event types of all the events occurring in the binary log file.

    Or any other such data from an event, in real time. Below is some sample code which shows how to do that step by step.

    • Connect to the available transport
      // The base class for both types of transport; the terms driver and transport are
      // used interchangeably in the code.
      binary_log::system::Binary_log_driver *drv;
      • For the TCP transport
        // An example of the URI would be mysql://neha@localhost:3306
        drv= binary_log::system::create_transport("user@host:port_no");
      • For the file transport
        // The path should be absolute, like
        // "/home/neha/binlog/masterbin.000001"
        drv= binary_log::system::create_transport("path of the binary log file");
      • Common code for both transports
        binary_log::Binary_log binlog(drv);
        // This internally calls methods from the libmysqlclient library
        binlog.connect();
    • Fetch the stream into a buffer
      // In this pair object the first value will store the stream buffer and the second
      // value will store the buffer length; only one event is received at a time.
      std::pair<unsigned char *, size_t> buffer_buflen;
      // This internally calls methods from the libmysqlclient library.
      drv->get_next_event(&buffer_buflen);
    • Decode the buffer and get an event object
      binary_log::Binary_log_event *event; // To store the event object
      // This class contains methods to decode all the binary log events
      Decoder decode;

      event= decode.decode_event((char *) buffer_buflen.first, buffer_buflen.second,
                                 NULL /* char array to store the error message */, 1);
    • Now that the event object is created, it can be used to extract all the information about that particular event, for example:
      std::cout << "Event Data" << std::endl;
      std::cout << "Event type " << event->get_event_type() << std::endl;
      std::cout << "Data written " << event->header()->data_written << std::endl;
      std::cout << "Binlog Position " << event->header()->log_pos << std::endl;

    You can see the complete code below.

    (The original post shows a screenshot of the complete example code here.)

    The above example will print just one event; if you want to continue reading events, place the code for decoding and printing in a while loop, with a breaking condition as per your requirements. If there are no more events left to read, an EOF message will appear in the case of the file transport.

  2. Filtering events from a Binary log file:
    Imagine a case where you want to have all the queries executed on your server in a printable format in one place, maybe in some file, or you want to be notified whenever a binary log file is rotated (this can be identified by the Rotate event). Similar to these, there can be many other use cases where you are only interested in some events and don’t want to load yourself with extra information. In the last use case we saw how we can get an event object; now, to filter events, we can do this:
    The steps for getting the event object are the same as in the last example.

    /*
      Filtering Query_event and printing the query for each Query_event.
      The type of the event object is Binary_log_event, which is the abstract
      class for all events; to extract any data which is specific to a
      particular event, we need the appropriate cast.
    */
    if (event->get_event_type() == QUERY_EVENT)
      std::cout << " The Query is " << static_cast<Query_event*>(event)->query << std::endl;
    And as this is real time, with a TCP transport you can just run your application and all the queries will be printed as soon as they are executed on the MySQL server.

  3. Building applications to support heterogeneous replication in real time. Heterogeneous replication is where the two datastores participating in the replication process are different. There are multiple scenarios where this is beneficial. For example, Apache Kafka captures the data from different streams and processes them; imagine a scenario where we want to establish a replication system between a MySQL server and Apache Kafka. Using MySQL Binlog Events we can create an application that builds a pipeline between the MySQL Server and Apache Kafka and replicates the data from MySQL to Apache Kafka in real time, as sketched below.
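
As a rough illustration of the last two use cases combined, here is a sketch that keeps only Query events and hands each query to a placeholder publish function. publish_to_kafka() and the topic name are purely hypothetical and stand in for whatever producer API (librdkafka, for instance) you would actually use; the event object is the one decoded in the loop of the first example, and the same MySQL Binlog Events includes are assumed.

#include <iostream>
#include <string>

// Hypothetical stand-in for a real Kafka producer call; here it just prints,
// so the sketch stays self-contained.
static void publish_to_kafka(const std::string &topic, const std::string &payload)
{
  std::cout << "[" << topic << "] " << payload << std::endl;
}

// Call this from the event loop of the first example, after decoding `event`.
static void process_event(binary_log::Binary_log_event *event)
{
  if (event->get_event_type() == QUERY_EVENT)
    publish_to_kafka("mysql-binlog-topic",
                     static_cast<Query_event*>(event)->query);
}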

These are just some of the many places where MySQL Binlog Events can be used. This is a very generic framework, so there will be many other use cases. Let us know how you are using this library; it’s always exciting to hear about real use cases. If you have any comments or questions, please get in touch.
If you happen to find a bug, please file the bug at bugs.mysql.com.
This library is available at labs.mysql.com; choose MySQL Binlog Events 1.0.0 from the drop-down menu.

Looking forward to the feedback.


MariaDB 10.0.18 now available


Download MariaDB 10.0.18

Release Notes | Changelog | What is MariaDB 10.0?

MariaDB APT and YUM Repository Configuration Generator

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.18. This is a Stable (GA) release.

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB 10.0? page in the MariaDB Knowledge Base for general information about the MariaDB 10.0 series.

Thanks, and enjoy MariaDB!


MySQL indexing 101: a challenging single-table query


We discussed in an earlier post how to design indexes for many types of queries using a single table. Here is a real-world example of the challenges you will face when trying to optimize queries: two similar queries, but one is performing a full table scan while the other one is using the index we specially created for these queries. Bug or expected behavior? Read on!

Our two similar queries

# Q1
mysql> explain select col1, col2 from t where ts >= '2015-04-30 00:00:00';
+----+-------------+---------------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table         | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+---------------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | t             | ALL  | ts            | NULL | NULL    | NULL | 4111896 | Using where |
+----+-------------+---------------+------+---------------+------+---------+------+---------+-------------+
# Q2
mysql> explain select count(*) from t where ts >='2015-04-30 00:00:00';
+----+-------------+---------------+-------+---------------+--------------+---------+------+---------+--------------------------+
| id | select_type | table         | type  | possible_keys | key          | key_len | ref  | rows    | Extra                    |
+----+-------------+---------------+-------+---------------+--------------+---------+------+---------+--------------------------+
|  1 | SIMPLE      | t             | range | ts            | ts           | 5       | NULL | 1809458 | Using where; Using index |
+----+-------------+---------------+-------+---------------+--------------+---------+------+---------+--------------------------+

Q1 runs a full-table scan while Q2 uses the index on ts, which by the way is covering (see Using index in the Extra field). Why such different execution plans?

Let’s try to understand what happens with Q1.

This is a query with a single inequality on the ts field, and we have an index on ts. The optimizer tries to see if this index is usable (possible_keys field), which is all very logical. Now if we look at the rows field for Q1 and Q2, we can see that the index would allow us to read only 45% of the records (1.8M out of 4.1M). Granted, this is not excellent, but it should be much better than a full table scan anyway, right?

If you think so, read carefully what’s next, because this assumption is simply not correct!

Estimating the cost of an execution plan (simplified)

First of all, the optimizer does not know whether data or indexes are in memory or need to be read from disk; it will simply assume everything is on disk. What it does know, however, is that sequential reads are much faster than random reads.

So let’s execute Q1 with the index on ts. Step 1 is to perform a range scan on this index to identify the 1.8M records that match the condition: this is a sequential read, so it is quite fast. However, step 2 is to get the col1 and col2 fields for each record that matches the condition. The index provides the primary key value for each matching record, so we will have to run a primary key lookup for each matching record.

Here is the issue: 1.8M primary key lookups are equivalent to 1.8M random reads, so this will take a lot of time. Much more time than sequentially reading the full table (which means doing a full scan of the primary key, because we are using InnoDB here).

Contrast that with how Q2 can be executed with the index on ts. Step 1 is the same: identify the 1.8M matching records. But the difference is: there’s no step 2! That’s why we call this index a ‘covering index’: we don’t need to run point queries on the primary key to get extra fields. So this time, using the index on ts is much more efficient than reading the full table (which again would mean doing a full-table scan of the primary key).

Now there is one more thing to understand: a full-table scan is a sequential operation when you think about it from a logical point of view; however, the InnoDB pages are certainly not stored sequentially on disk. So at the disk level, a full table scan is more like multiple random reads than a single large sequential read.

However, it is still much faster than a very large number of point queries, and it’s easy to understand why: when you read a 16KB page for a full table scan, all the records on it will be used, while when you read a 16KB page for a random read, you might use only a single record. So in the worst case, reading 1.8M records will require 1.8M random reads, while reading the full table with 4M records will only require 100K random reads: the full table scan is still an order of magnitude faster.
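
To put rough numbers on this (an illustration implied by the figures above, not a measurement): if something like 40 rows fit in a 16KB InnoDB page, the 4.1M-row table spans roughly 100K pages, so the full table scan costs on the order of 100K mostly-sequential page reads, while resolving the 1.8M matching rows through the primary key can touch up to 1.8M pages, one mostly-random read per row.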

Optimizing our query

Now that we’ve understood why the optimizer chose a full table scan for Q1, is there a way to make it run faster by using an index? If we can create a covering index, we will no longer need the expensive primary key lookups. Then the optimizer is very likely to choose this index over a full table scan. Creating such a covering index is easy:

ALTER TABLE t ADD INDEX idx_ts_col1_col2 (ts, col1, col2);

Some of you may object that because we have an inequality on ts, the other columns cannot be used. This would be true if we had conditions on col1 or col2 in the WHERE clause, but that does not apply here since we’re only adding these extra columns to get a covering index.

Conclusion

Understanding how indexes can be used to filter, sort or cover is paramount to being able to optimize queries, even simple ones. Understanding (even approximately) how a query is run according to a given execution plan is also very useful; otherwise you will sometimes be puzzled by the decisions made by the optimizer.

Also note that beginning in MySQL 5.7, the cost model can be tuned. This can help the optimizer make better decisions: for instance, random reads are far cheaper on fast storage than on regular disks.

The post MySQL indexing 101: a challenging single-table query appeared first on MySQL Performance Blog.


Log Buffer #422: A Carnival of the Vanities for DBAs


This Log Buffer Edition picks, chooses and gleans some of the top-notch blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • The standard images that come with devstack are very basic
  • Oracle is pleased to announce the release of Oracle VM VirtualBox 5.0 BETA 3
  • Monitoring Parallel Execution using Real-Time SQL Monitoring in Oracle Database 12c
  • Accessing your Cloud Integration API end point from Javascript
  • Are You Ready for The Future of Oracle Database?

SQL Server:

  • SQL Monitor Custom Metric: Plan Cache; Cache Pages Total
  • Generating A Password in SQL Server with T-SQL from Random Characters
  • This article explains how default trace can be used for auditing purposes when combined with PowerShell scripts
  • How to FTP a Dynamically Named Flat File
  • Alan Cooper helped to debug the most widely-used PC language of the late seventies and early eighties, BASIC-E, and, with Keith Parsons, developed C-BASIC. He then went on to create Tripod, which morphed eventually into Visual Basic in 1991.

MySQL:

  • There’s a new kid on the block in the NoSQL world – Azure DocumentDB
  • Spring Cleaning in the GIS Namespace
  • MySQL replication is among the top features of MySQL. In replication, data is replicated from one MySQL Server (also known as the Master) to another MySQL Server (also known as the Slave). MySQL Binlog Events is a set of libraries which work on top of replication and open the door to a myriad of use cases like extracting data from binary log files, building applications to support heterogeneous replication, filtering events from binary log files and much more.
  • New to pstop – vmstat style stdout interface
  • The Perfect Server – Ubuntu 15.04 (Vivid Vervet) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3

MongoDB with Percona TokuMXse – experimental build RC5 is available!


While our engineering team is working on finalizing the TokuMXse storage engine, I want to provide an experimental build so that you can try and test MongoDB 3.0 with our storage engine.

It is available here:
percona.com/downloads/TESTING/Percona-TokuMXse-rc5/percona-tokumxse-3.0.3pre-rc5.tar.gz

To start MongoDB with the TokuMXse storage engine use:

mongod --storageEngine=tokuft

I am looking for your feedback!

The post MongoDB with Percona TokuMXse – experimental build RC5 is available! appeared first on MySQL Performance Blog.


Percona Server 5.5.43-37.2 is now available


Percona is glad to announce the release of Percona Server 5.5.43-37.2 on May 8, 2015. Based on MySQL 5.5.43, including all the bug fixes in it, Percona Server 5.5.43-37.2 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.43-37.2 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • A server binary as distributed in binary tarballs could fail to load on different systems due to an unsatisfied libssl.so.6 dynamic library dependency. This was fixed by replacing the single binary tarball with multiple tarballs depending on the OpenSSL library available in the distribution: 1) ssl100 – for all Debian/Ubuntu versions except Squeeze/Lucid (libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f2e389a5000)); 2) ssl098 – only for Debian Squeeze and Ubuntu Lucid (libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f9b30db6000)); 3) ssl101 – for CentOS 6 and CentOS 7 (libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007facbe8c4000)); 4) ssl098e – to be used only for CentOS 5 (libssl.so.6 => /lib64/libssl.so.6 (0x00002aed5b64d000)). Bug fixed #1172916.
  • mysql_install_db would make the server produce an “Error in my_thread_global_end(): 1 threads didn't exit” error message. While this error does not prevent mysql_install_db from completing successfully, its presence might cause any mysql_install_db-calling script to return an error as well. This is a regression introduced by backporting fix for bug #1319904. Bug fixed #1402074.
  • A string literal containing an invalid UTF-8 sequence could be treated as falsely equal to a UTF-8 column value with no invalid sequences. This could cause invalid query results. Bug fixed #1247218 by a fix ported from MariaDB (MDEV-7649).
  • Percona Server .deb binaries were built without fast mutexes. Bug fixed #1433980.
  • Installing or uninstalling the Audit Log Plugin would crash the server if the audit_log_file variable was pointing to an inaccessible path. Bug fixed #1435606.
  • The audit_log_file variable would point to a random memory area if the Audit Log Plugin was not loaded into the server, and then installed with INSTALL PLUGIN, and my.cnf contained an audit_log_file setting. Bug fixed #1437505.
  • Percona Server client .deb packages were built with EditLine instead of Readline. Further, a client built with EditLine could display incorrectly in the PuTTY SSH client after a window resize. Bugs fixed #1266386 and #1332822 (upstream #63130 and #69991).

Other bugs fixed: #1436138 (upstream #76505).

(Please also note that the Percona Server 5.6 series is the latest General Availability series and the current GA release is 5.6.24-72.2.)

Release notes for Percona Server 5.5.43-37.2 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.43-37.2 is now available appeared first on MySQL Performance Blog.

