
Baffling 5.7 global/status variables issues, unclean migration path


MySQL 5.7 introduces a change in the way we query for global variables and status variables: the INFORMATION_SCHEMA.(GLOBAL|SESSION)_(VARIABLES|STATUS) tables are now deprecated and empty. Instead, we are to use the respective performance_schema.(global|session)_(variables|status) tables.
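By way of illustration, here is a minimal sketch of the equivalent queries on either side of the change (tx_isolation is just an example variable):

-- 5.6 and earlier (deprecated as of 5.7):
SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'tx_isolation';
-- 5.7:
SELECT VARIABLE_VALUE FROM performance_schema.global_variables WHERE VARIABLE_NAME = 'tx_isolation';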

But the change goes further than that; there is also a security change. Oracle has created a pitfall by bundling two changes at the same time:

  1. Variables/status moved to a different table
  2. Privileges required on said table

As an example, my non-root user gets:

mysql> show session variables like 'tx_isolation';
ERROR 1142 (42000): SELECT command denied to user 'normal_user'@'my_host' for table 'session_variables'

Who gets affected by this? Nearly everyone and everything.

  • Your Nagios will not be able to read status variables
  • Your ORM will not be able to determine session variables
  • Your replication user will fail connecting (see this post by Giuseppe)
  • And most everyone else.

The problem with the above is that it involves two unrelated changes to your setup, which are not entirely simple to coordinate:

  1. Change your app code to choose the correct schema (information_schema vs. performance_schema)
  2. GRANT the permissions on your database

Perhaps at this point you still do not consider this to be a problem. You may be thinking: well, let's first prepare by creating the GRANTs, and once that is in place, we can, at our leisure, modify the code.

Not so fast. Can you really create those GRANTs that simply?

Migration woes

How do you migrate to a new MySQL version? You do not reinstall all your servers. You want an easy migration path, and that path is: introduce one or two slaves of a newer version, see that everything works to your satisfaction, slowly upgrade all your other slaves, eventually switchover/upgrade your master.

This should not be any different for 5.7. We would like to provision a 5.7 slave in our topologies and just see that everything works. Well, we have, and things don't just work. Our Nagios stopped working for that 5.7 slave. Orchestrator started complaining (by this time I had already fixed it to be more tolerant of the 5.7 problems, so no crashes there).

I hope you see the problem by now.

You cannot issue a GRANT SELECT ON performance_schema.global_variables TO '...' on your 5.6 master.

The table simply does not exist there, which means the statement will not go to the binary logs, which means it will not replicate to your 5.7 slave, which means you will not be able to SHOW GLOBAL VARIABLES on your slave, which means everything remains broken.

Yes, you can issue this directly on your 5.7 slaves. It's doable, but undesired. It's ugly in terms of automation (and will quite possibly break some assumptions and sanity checks your automation uses) and in terms of validity testing. It's unfriendly to GTID (make sure to SET SQL_LOG_BIN=0 before issuing it).
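For clarity, this is the ugly per-slave workaround alluded to above, as a sketch (the user and host are placeholders):

mysql> SET SQL_LOG_BIN=0;
mysql> GRANT SELECT ON performance_schema.global_variables TO 'normal_user'@'my_host';
mysql> GRANT SELECT ON performance_schema.session_variables TO 'normal_user'@'my_host';
mysql> SET SQL_LOG_BIN=1;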

WHY in the first place?

It seems like a security thing. I'm not sure whether this was intended. So you prevent a SHOW GLOBAL VARIABLES for a normal user. Makes sense. And yet:

mysql> show global variables like 'hostname';
ERROR 1142 (42000): SELECT command denied to user 'normal_user'@'my_host' for table 'global_variables'

mysql> select @@global.hostname;
+---------------------+
| @@global.hostname   |
+---------------------+
| myhost.mydomain.com |
+---------------------+

mysql> select @@version;
+--------------+
| @@version    |
+--------------+
| 5.7.8-rc-log |
+--------------+

Seems like I'm allowed access to that info after all. So it's not strictly a security design decision. For status variables, I admit, I don't have a similar workaround.

Solutions?

The following are meant to be solutions, but do not really solve the problem:

  • SHOW commands. SHOW GLOBAL|SESSION VARIABLES|STATUS will work properly, and will implicitly know whether to provide the results via information_schema or performance_schema tables.
    • But, aren't we meant to be happier with SELECT queries? So that I can really do stuff that is smarter than LIKE 'variable_name%'? (see the sketch after this list)
    • And of course you cannot use SHOW in server-side cursors. Your stored routines are in a mess now.
    • This does not solve the GRANTs problem.
  • show_compatibility_56: a boolean variable introduced in 5.7. It truly is a time-travel-paradox novel in disguise, in multiple respects.
    • Documentation introduces it, and says it is deprecated.
      • time-travel-paradox :O
    • But it actually works in 5.7.8 (latest)
      • time-travel-paradox plot thickens
    • Your automation scripts do not know in advance whether your MySQL has this variable
      • Hence SELECT @@global.show_compatibility_56 will produce an error on 5.6
      • But the "safe" way of SHOW GLOBAL VARIABLES LIKE 'show_compatibility_56' will fail on a privilege error on 5.7
      • time-travel-paradox :O
    • As advised by my colleague Simon J. Mudd, show_compatibility_56 defaults to OFF. I support this line of thought; otherwise it's old_passwords=1 all over again.
    • show_compatibility_56 doesn't solve the GRANTs problem.
    • This does not provide any migration path either. It just postpones the moment when I will hit the same problem. When I flip the variable from "1" to "0", I'm back at square one.
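As for the SELECT point above: once privileges are sorted out, performance_schema does allow queries smarter than LIKE 'variable_name%'. A minimal sketch (the variable names are just examples):

SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM performance_schema.global_variables
 WHERE VARIABLE_NAME IN ('tx_isolation', 'max_connections')
 ORDER BY VARIABLE_NAME;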

Suggestion

I claim security is not the issue, as presented above. I claim that if the current solution remains unchanged, Oracle will yet again fall into the same trap as 5.6's no-easy-way-to-migrate-to-GTID. I claim that there have been too many changes at once. Therefore, I suggest one of the two alternative flows below:

  1. Flow 1: keep information_schema, later migration into performance_schema
    • In 5.7, information_schema tables should still produce the data.
    • No security constraints on information_schema
    • Generate WARNINGs on reading from information_schema ("...this will be deprecated...")
    • performance_schema also available. With security constraints, whatever.
    • In 5.8 remove information_schema tables; we are left with performance_schema only.
  2. Flow 2: easy migration into performance_schema:
    • In 5.7, performance_schema tables should not require any special privileges. Any user can read from them.
    • Keep show_compatibility_56 as it is.
    • SHOW commands choose between information_schema or performance_schema on their own -- just as things are done now.
    • In 5.8, performance_schema tables will require SELECT privileges.

As always, I love the work done by the engineers; and I love how they listen to the community.

Comments are most welcome. Have I missed the simple solution here? Are there even more complications to these features? Thoughts on my suggested two flows?



Oracle dump utility version 1.1

Today I released version 1.1 of myoradump for download from sourceforge. If you don't know what myoradump is, it is a utility for exporting data from an Oracle database in some relevant text format so that it can be imported into some other database.

The main thing in version 1.1 is that I have added a whole bunch of new output formats, making it even easier to get your data out of expensive Oracle and into something more effective. The new formats supported are:
  • MySQL - The format of this is a bunch of INSERT statements, like the ones you get when you use mysqldump, and is useful for import into MariaDB (and MySQL). INSERT arrays are supported, as are a bunch of other options.
  • JSON - This format is rather obvious, the output is a file consisting of one JSON object per row. To support binary data, which is a no-no in JSON, base64 encoding of binary data is also supported.
  • JSON Array - The format is similar to JSON, but instead of separate objects per row, this format consists of one or more JSON arrays of JSON objects.
  • HTML - This format will produce a valid HTML TABLE. This is sometimes useful when you want to view output data that includes UTF8 characters for example.
In addition, this version of myoradump includes a bunch of new features and bug fixes. I will eventually follow up this post with one that includes some specific examples of using myoradump.
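As a rough illustration of the MySQL format (the table emp and its columns are hypothetical, and the exact output may differ), the output is plain INSERT statements, optionally combined into INSERT arrays:

INSERT INTO emp (id, name) VALUES (1, 'Smith');
INSERT INTO emp (id, name) VALUES (2, 'Jones');
-- or, with INSERT arrays:
INSERT INTO emp (id, name) VALUES (1, 'Smith'),(2, 'Jones');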

So, don't touch that dial!
/Karlsson

Log Buffer #435: A Carnival of the Vanities for DBAs


The sun of database technologies is shining through the cloud. Oracle, SQL Server, MySQL and various other databases are bringing forth some nifty offerings, and this Log Buffer Edition covers some of them.

Oracle:

  • How to create your own Oracle database merge patch.
  • Finally the work of a database designer will be recognized! Oracle has announced the Oracle Database Developer Choice Awards.
  • Oracle Documents Cloud Service R4: Why You Should Seriously Consider It for Your Enterprise.
  • Mixing Servers in a Server Pool.
  • Index compression–working out the compression number
  • My initial experience upgrading database from Oracle 11g to Oracle 12c (Part -1).

SQL Server:

  • The Evolution of SQL Server BI
  • Introduction to SQL Server 2016 Temporal Tables
  • Microsoft and Database Lifecycle Management (DLM): The DacPac
  • Display SSIS package version on the Control Flow design surface
  • SSAS DSV COM error from SSDT SSAS design Data Source View

MySQL:

  • If you run multiple MySQL instances on a Linux machine, chances are good that at one time or another, you’ve ended up connected to an instance other than what you had intended.
  • MySQL Group Replication: Plugin Version Access Control.
  • MySQL 5.7 comes with many changes. Some of them are better explained than others.
  • What Makes the MySQL Audit Plugin API Special?
  • Architecting for Failure – Disaster Recovery of MySQL/MariaDB Galera Cluster

Learn more about Pythian's expertise in Oracle, SQL Server and MySQL.

The post Log Buffer #435: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.



MariaDB: InnoDB foreign key constraint errors


Introduction

A foreign key is a field (or collection of fields) in one table that uniquely identifies a row of another table. The table containing the foreign key is called the child table, and the table containing the candidate key is called the referenced or parent table. The purpose of the foreign key is to identify a particular row of the referenced table. Therefore, it is required that the foreign key equals the candidate key in some row of the referenced table, or else has no value (the NULL value). This is called a referential integrity constraint between the two tables. Because violations of these constraints can be the source of many database problems, most database management systems provide mechanisms to ensure that every non-null foreign key corresponds to a row of the referenced table. Consider the following simple example:

create table parent (
    id int not null primary key,
    name char(80)
) engine=innodb;

create table child (
    id int not null,
    name char(80),
    parent_id int, 
    foreign key(parent_id) references parent(id)
) engine=innodb;

As far as I know, the following storage engines for MariaDB and/or MySQL support foreign keys:

MariaDB foreign key syntax is documented at https://mariadb.com/kb/en/mariadb/foreign-keys/ (and MySQL at http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html). While most of the syntax is parsed and checked when the CREATE TABLE or ALTER TABLE clause is parsed, there are still several error cases that can happen inside InnoDB. Yes, InnoDB has its own internal foreign key constraint parser (in dict0dict.c function dict_create_foreign_constraints_low()).

However, the error messages shown by CREATE TABLE, ALTER TABLE and SHOW WARNINGS in versions of MariaDB prior to 5.5.45 and 10.0.21 are not very informative or clear. There are additional error messages available via SHOW ENGINE INNODB STATUS, which help, but they are not an ideal solution. In this blog I'll present a few of the most frequent error cases using MariaDB 5.5.44, and show how these error messages are improved in MariaDB 5.5.45 and 10.0.21. I will use the default InnoDB (i.e. XtraDB), but innodb_plugin works very similarly.

Constraint name not unique

Foreign key constraint names must be unique in a database. However, the error message in this case is unclear and leaves a lot unexplained:

--------------
CREATE TABLE t1 (
  id int(11) NOT NULL PRIMARY KEY,
  a int(11) NOT NULL,
  b int(11) NOT NULL,
  c int not null,
  CONSTRAINT test FOREIGN KEY (b) REFERENCES t1 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
--------------
Query OK, 0 rows affected (0.45 sec)

--------------
CREATE TABLE t2 (
id int(11) NOT NULL PRIMARY KEY,
a int(11) NOT NULL,
b int(11) NOT NULL,
c int not null,
CONSTRAINT mytest FOREIGN KEY (c) REFERENCES t1(id),
CONSTRAINT test FOREIGN KEY (b) REFERENCES t2 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
--------------

ERROR 1005 (HY000): Can't create table `test`.`t2` (errno: 121 "Duplicate key on write or update")
--------------
show warnings
--------------

+---------+------+---------------------------------------------------------------------------------+
| Level   | Code | Message                                                                         |
+---------+------+---------------------------------------------------------------------------------+
| Error   | 1005 | Can't create table `test`.`t2` (errno: 121 "Duplicate key on write or update") |
| Warning | 1022 | Can't write; duplicate key in table 't2'                                        |
+---------+------+---------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

These messages are not very helpful because there are two foreign key constraints. Looking into SHOW ENGINE INNODB STATUS we get a better message:

show engine innodb status
--------------
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 12:37:48 7f44a1111700 Error in foreign key constraint creation for table `test`.`t2`.
A foreign key constraint of name `test`.`test`
already exists. (Note that internally InnoDB adds 'databasename'
in front of the user-defined constraint name.)
Note that InnoDB's FOREIGN KEY system tables store
constraint names as case-insensitive, with the
MySQL standard latin1_swedish_ci collation. If you
create tables or databases whose names differ only in
the character case, then collisions in constraint
names can occur. Workaround: name your constraints
explicitly with unique names.

In MariaDB 5.5.45 and 10.0.21, the message is clearly improved:

CREATE TABLE t1 (
  id int(11) NOT NULL PRIMARY KEY,
  a int(11) NOT NULL,
  b int(11) NOT NULL,
  c int not null,
  CONSTRAINT test FOREIGN KEY (b) REFERENCES t1 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
--------------

Query OK, 0 rows affected (0.14 sec)

--------------
CREATE TABLE t2 (
  id int(11) NOT NULL PRIMARY KEY,
  a int(11) NOT NULL,
  b int(11) NOT NULL,
  c int not null,
  CONSTRAINT mytest FOREIGN KEY (c) REFERENCES t1(id),
  CONSTRAINT test FOREIGN KEY (b) REFERENCES t2 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
--------------

ERROR 1005 (HY000): Can't create table 'test.t2' (errno: 121)
--------------
show warnings
--------------

+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                                                                                                                                                     |
+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  121 | Create or Alter table `test`.`t2` with foreign key constraint failed. Foreign key constraint `test/test` already exists on data dictionary. Foreign key constraint names need to be unique in database. Error in foreign key definition: CONSTRAINT `test` FOREIGN KEY (`b`) REFERENCES `test`.`t2` (`id`). |
| Error   | 1005 | Can't create table 'test.t2' (errno: 121)                                                                                                                                                                                                                                                                   |
+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
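For completeness, the obvious way out (my addition, not shown in the original post) is to give the second constraint a name that is unique within the database:

CREATE TABLE t2 (
  id int(11) NOT NULL PRIMARY KEY,
  a int(11) NOT NULL,
  b int(11) NOT NULL,
  c int not null,
  CONSTRAINT mytest FOREIGN KEY (c) REFERENCES t1(id),
  CONSTRAINT test2 FOREIGN KEY (b) REFERENCES t2 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;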

No index

The referenced table should have an index in which the referenced columns appear as the first columns.

create table t1(a int, b int, key(b)) engine=innodb
--------------
Query OK, 0 rows affected (0.46 sec)

--------------
create table t2(a int, b int, constraint b foreign key (b) references t1(b), constraint a foreign key a (a) references t1(a)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                         |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 150  | Create table 'test/t2' with foreign key constraint failed. There is no index in the referenced table where the referenced columns appear as the first columns. |
| Error   | 1005 | Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed")                                                                     |
| Warning | 1215 | Cannot add foreign key constraint                                                                                                                               |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

Fine, but again we have no idea which foreign key was the problem. As before, there is a better message in the SHOW ENGINE INNODB STATUS output:

LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 13:44:31 7f30e1520700 Error in foreign key constraint of table test/t2:
 foreign key a (a) references t1(a)) engine=innodb:
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See http://dev.mysql.com/doc/refman/5.6/en/innodb-foreign-key-constraints.html
for correct foreign key definition.

In MariaDB 5.5.45 and 10.0.21, the message is clearly improved:

create table t1(a int, b int, key(b)) engine=innodb
--------------

Query OK, 0 rows affected (0.16 sec)

--------------
create table t2(a int, b int, constraint b foreign key (b) references t1(b), constraint a foreign key a (a) references t1(a)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table 'test.t2' (errno: 150)
--------------
show warnings
--------------

+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                                                                                |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Create  table '`test`.`t2`' with foreign key constraint failed. There is no index in the referenced table where the referenced columns appear as the first columns. Error close to  foreign key a (a) references t1(a)) engine=innodb. |
| Error   | 1005 | Can't create table 'test.t2' (errno: 150)                                                                                                                                                                                              |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
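For completeness, a sketch of the fix (my addition, not shown in the original post): add the missing index on the referenced column before creating the constraint:

alter table t1 add key(a);
create table t2(a int, b int, constraint b foreign key (b) references t1(b), constraint a foreign key a (a) references t1(a)) engine=innodb;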

Referenced table not found

A table that is referenced in a foreign key constraint should exist in the InnoDB data dictionary. If it does not:

create table t1 (f1 integer primary key) engine=innodb
--------------

Query OK, 0 rows affected (0.47 sec)

--------------
alter table t1 add constraint c1 foreign key (f1) references t11(f1)
--------------

ERROR 1005 (HY000): Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                             |
+---------+------+-----------------------------------------------------------------------------------------------------+
| Error   | 1005 | Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed") |
| Warning | 1215 | Cannot add foreign key constraint                                                                   |
+---------+------+-----------------------------------------------------------------------------------------------------+
show engine innodb status
--------------
LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 13:44:34 7f30e1520700 Error in foreign key constraint of table test/#sql-2612_2:
 foreign key (f1) references t11(f1):
Cannot resolve table name close to:
(f1)

Both messages refer to an internal table name, and the foreign key error message refers to an incorrect name. In MariaDB 5.5.45 and 10.0.21, the message is clearly improved:

create table t1 (f1 integer primary key) engine=innodb
--------------

Query OK, 0 rows affected (0.11 sec)

--------------
alter table t1 add constraint c1 foreign key (f1) references t11(f1)
--------------

ERROR 1005 (HY000): Can't create table 'test.#sql-2b40_2' (errno: 150)
--------------
show warnings
--------------

+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                    |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Alter  table `test`.`t1` with foreign key constraint failed. Referenced table `test`.`t11` not found in the data dictionary close to  foreign key (f1) references t11(f1). |
| Error   | 1005 | Can't create table 'test.#sql-2b40_2' (errno: 150)                                                                                                                         |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

--------------
show engine innodb status
--------------
150730 13:50:36 Error in foreign key constraint of table `test`.`t1`:
Alter  table `test`.`t1` with foreign key constraint failed. Referenced table `test`.`t11` not found in the data dictionary close to  foreign key (f1) references t11(f1).

Temporary tables

Temporal tables can’t have foreign key constraints because temporal tables are not stored to the InnoDB data dictionary.

create temporary table t2(a int, foreign key(a) references t1(a)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+--------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                    |
+---------+------+--------------------------------------------------------------------------------------------+
| Error   | 1005 | Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed") |
| Warning | 1215 | Cannot add foreign key constraint                                                          |
+---------+------+--------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

--------------
show engine innodb status
--------------
LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 13:44:35 7f30e1520700 Error in foreign key constraint of table tmp/#sql2612_2_1:
foreign key(a) references t1(a)) engine=innodb:
Cannot resolve table name close to:
(a)) engine=innodb

--------------
alter table t1 add foreign key(b) references t1(a)
--------------

ERROR 1005 (HY000): Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                             |
+---------+------+-----------------------------------------------------------------------------------------------------+
| Error   | 1005 | Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed") |
| Warning | 1215 | Cannot add foreign key constraint                                                                   |
+---------+------+-----------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

These error messages do not really help the user, because the actual reason for the error is not printed and the foreign key error references an internal table name. In MariaDB 5.5.45 and 10.0.21 this is clearly improved:

create temporary table t1(a int not null primary key, b int, key(b)) engine=innodb
--------------

Query OK, 0 rows affected (0.04 sec)

--------------
create temporary table t2(a int, foreign key(a) references t1(a)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table 'test.t2' (errno: 150)
--------------
show warnings
--------------

+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                             |
+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Create  table `tmp`.`t2` with foreign key constraint failed. Referenced table `tmp`.`t1` not found in the data dictionary close to foreign key(a) references t1(a)) engine=innodb.  |
| Error   | 1005 | Can't create table 'test.t2' (errno: 150)                                                                                                                                           |
+---------+------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

--------------
alter table t1 add foreign key(b) references t1(a)
--------------

ERROR 1005 (HY000): Can't create table 'test.#sql-2b40_2' (errno: 150)
--------------
show warnings
--------------

+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                             |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Alter  table `tmp`.`t1` with foreign key constraint failed. Referenced table `tmp`.`t1` not found in the data dictionary close to foreign key(b) references t1(a).  |
| Error   | 1005 | Can't create table 'test.#sql-2b40_2' (errno: 150)                                                                                                                  |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

Column count does not match

There should be exactly the same number of columns in both the foreign key column list and the referenced column list. However, a mismatch currently raises the following error:

create table t1(a int not null primary key, b int, key(b)) engine=innodb
--------------

Query OK, 0 rows affected (0.17 sec)

--------------
alter table t1 add foreign key(a,b) references t1(a)
--------------

ERROR 1005 (HY000): Can't create table 'test.#sql-4856_1' (errno: 150)
--------------
show warnings
--------------

+-------+------+----------------------------------------------------+
| Level | Code | Message                                            |
+-------+------+----------------------------------------------------+
| Error | 1005 | Can't create table 'test.#sql-4856_1' (errno: 150) |
+-------+------+----------------------------------------------------+
1 row in set (0.00 sec)

--------------
show engine innodb status
--------------
LATEST FOREIGN KEY ERROR
------------------------
150730 15:15:57 Error in foreign key constraint of table test/#sql-4856_1:
foreign key(a,b) references t1(a):
Syntax error close to: 2015-07-30 13:44:35 7f30e1520700 Error in foreign key constraint of table tmp/#sql2612_2_2: foreign key(b) references t1(a): Cannot resolve table name close to: (a)

The error message is not clear and the foreign key error refers to an internal table name. In MariaDB 5.5.45 and 10.0.21 there is additional information:

create table t1(a int not null primary key, b int, key(b)) engine=innodb
--------------

Query OK, 0 rows affected (0.14 sec)

--------------
alter table t1 add foreign key(a,b) references t1(a)
--------------

ERROR 1005 (HY000): Can't create table 'test.#sql-2b40_2' (errno: 150)
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                                                         |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 150  | Alter table `test`.`t1` with foreign key constraint failed. Foreign key constraint parse error in foreign key(a,b) references t1(a) close to ). Too few referenced columns, you have 1 when you should have 2. |
| Error   | 1005 | Can't create table 'test.#sql-2b40_2' (errno: 150)                                                                                                                                                             |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
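A sketch of a corrected definition (my addition, not from the original post): list as many referenced columns as there are foreign key columns, and make sure the referenced table has an index starting with them:

alter table t1 add key(a,b);
alter table t1 add foreign key(a,b) references t1(a,b);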

Incorrect cascading

A user may define a foreign key constraint with ON UPDATE SET NULL or ON DELETE SET NULL. However, this requires that the foreign key columns are not defined as NOT NULL. Currently, the error message in this situation is:

create table t1 (f1 integer not null primary key) engine=innodb
--------------

Query OK, 0 rows affected (0.40 sec)

--------------
alter table t1 add constraint c1 foreign key (f1) references t1(f1) on update set null
--------------

ERROR 1005 (HY000): Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                             |
+---------+------+-----------------------------------------------------------------------------------------------------+
| Error   | 1005 | Can't create table `test`.`#sql-2612_2` (errno: 150 "Foreign key constraint is incorrectly formed") |
| Warning | 1215 | Cannot add foreign key constraint                                                                   |
+---------+------+-----------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

--------------
show engine innodb status
--------------
LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 13:44:37 7f30e1520700 Error in foreign key constraint of table test/#sql-2612_2:
 foreign key (f1) references t1(f1) on update set null:
You have defined a SET NULL condition though some of the
columns are defined as NOT NULL.

Neither error message is very useful: the first does not really tell how the foreign key constraint is incorrectly formed, and the latter does not say which column has the problem. This is improved in MariaDB 5.5.45 and 10.0.21:

create table t1 (f1 integer not null primary key) engine=innodb
--------------

Query OK, 0 rows affected (0.10 sec)

--------------
alter table t1 add constraint c1 foreign key (f1) references t1(f1) on update set null
--------------

ERROR 1005 (HY000): Can't create table 'test.#sql-2b40_2' (errno: 150)
--------------
show warnings
--------------

+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                                                                         |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Alter  table `test`.`t1` with foreign key constraint failed. You have defined a SET NULL condition but column f1 is defined as NOT NULL in  foreign key (f1) references t1(f1) on update set null close to  on update set null. |
| Error   | 1005 | Can't create table 'test.#sql-2b40_2' (errno: 150)                                                                                                                                                                              |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
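A sketch of a valid alternative (my addition): since SET NULL requires a nullable foreign key column, reference the primary key from a separate nullable column instead:

create table t1 (f1 integer not null primary key, f2 integer, key(f2)) engine=innodb;
alter table t1 add constraint c1 foreign key (f2) references t1(f1) on update set null;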

Incorrect types

Column types for foreign key columns and referenced columns should match and use the same character set. If they do not, you currently get:

create table t1 (id int not null primary key, f1 int, f2 int, key(f1)) engine=innodb
--------------

Query OK, 0 rows affected (0.47 sec)

--------------
create table t2(a char(20), key(a), foreign key(a) references t1(f1)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed")
--------------
show warnings
--------------

+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                         |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 150  | Create table 'test/t2' with foreign key constraint failed. There is no index in the referenced table where the referenced columns appear as the first columns. |
| Error   | 1005 | Can't create table `test`.`t2` (errno: 150 "Foreign key constraint is incorrectly formed")                                                                     |
| Warning | 1215 | Cannot add foreign key constraint                                                                                                                               |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

--------------
show engine innodb status
--------------
LATEST FOREIGN KEY ERROR
------------------------
2015-07-30 13:44:39 7f30e1520700 Error in foreign key constraint of table test/t2:
foreign key(a) references t1(f1)) engine=innodb:
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See http://dev.mysql.com/doc/refman/5.6/en/innodb-foreign-key-constraints.html
for correct foreign key definition.

But we do have an index on the referenced column f1 in table t1, so the old message is misleading. And if there were multiple columns in both the foreign key column list and the referenced column list, where would we look for the error? In MariaDB 5.5.45 and 10.0.21 this is improved:

create table t1 (id int not null primary key, f1 int, f2 int, key(f1)) engine=innodb
--------------

Query OK, 0 rows affected (0.15 sec)

--------------
create table t2(a char(20), key(a), foreign key(a) references t1(f1)) engine=innodb
--------------

ERROR 1005 (HY000): Can't create table 'test.t2' (errno: 150)
--------------
show warnings
--------------

+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                                                                            |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Warning |  150 | Create  table `test`.`t2` with foreign key constraint failed. Field type or character set for column a does not mach referenced column f1 close to foreign key(a) references t1(f1)) engine=innodb |
| Error   | 1005 | Can't create table 'test.t2' (errno: 150)                                                                                                                                                          |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
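For completeness (my addition): the fix is to declare the foreign key column with the same type as the referenced column:

create table t2(a int, key(a), foreign key(a) references t1(f1)) engine=innodb;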

Conclusions

There are several different ways to incorrectly define a foreign key constraint. In many cases when using earlier versions of MariaDB (and MySQL), the error messages produced by these cases were not very clear or helpful. In MariaDB 5.5.45 and 10.0.21 there are clearly improved error messages to help out the user. Naturally, there is always room for further improvements, so feedback is more than welcome!




MySQL 5.5.45 Overview and Highlights


MySQL 5.5.45 was recently released (it is the latest MySQL 5.5 and is GA), and is available for download here:

http://dev.mysql.com/downloads/mysql/5.5.html

This release, similar to the last 5.5 release, is mostly uneventful.

There were 0 “Functionality Added or Changed” items this time, 1 “Security Fix”, and just 9 bug fixes overall.

Out of the 9 bugs, there were 3 InnoDB bugs, 1 security-related bug, and 1 potential crashing bug. Here are the ones worth noting:

  • InnoDB: An index record was not found on rollback due to inconsistencies in the purge_node_t structure.
  • InnoDB: An assertion was raised when InnoDB attempted to dereference a NULL foreign key object.
  • InnoDB: On Unix-like platforms, os_file_create_simple_no_error_handling_func and os_file_create_func opened files in different modes when innodb_flush_method was set to O_DIRECT. (Bug #76627)
  • Security-related: Due to the LogJam issue (https://weakdh.org/), OpenSSL has changed the Diffie-Hellman key length parameters for openssl-1.0.1n and up. OpenSSL has provided a detailed explanation at http://openssl.org/news/secadv_20150611.txt. To adopt this change in MySQL, the key length used in vio/viosslfactories.c for creating Diffie-Hellman keys has been increased from 512 to 2,048 bits. (Bug #77275)
  • Crashing Bug: GROUP BY or ORDER BY on a CHAR(0) NOT NULL column could lead to a server exit.

I don’t think I’d call any of these urgent for everyone (unless you run the latest RHEL/CentOS with SSL connections + a DHE SSL cipher specified with --ssl-cipher=DHE-RSA-…), but if you are running 5.5, especially if not a very recent 5.5, you should consider upgrading.

For reference, the full 5.5.45 changelog can be viewed here:

http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-45.html

Hope this helps.



Find Queries By Error Code With VividCortex


VividCortex now lets you search for queries that cause a specific error in your application. The error code itself will be database-specific, but for example error 1062 in MySQL is a duplicate key error, and in PostgreSQL error 23503 is a foreign key violation.
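For instance, assuming a table t with a primary key column id, MySQL's error 1062 surfaces like this:

mysql> INSERT INTO t (id) VALUES (1);
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO t (id) VALUES (1);
ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'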

One of our customers requested that we add this feature so they could search for queries that cause UTF8 issues, which is a great example of when this can be useful.

To use this feature, click on the Queries navigation link, which brings up a catalog of every query we have seen execute in your systems. At the top, select the drop-down menu and filter by errors, then type in the error code you’re looking for and click Apply:

[Screenshot: Filter By Errors]

The result will be a listing of all queries that cause the server to return that error (even if it’s only an occasional error).

You might be surprised at how many queries cause once-in-a-million errors! They’re really hard to find in production systems if you don’t have the deep visibility we provide.

You can click on any of the errors and inspect it, sample by sample, to see exactly which instances of it cause errors. Sampling is biased towards capturing errors and warnings, so you can find and resolve them easily. Samples are color-coded red when they have errors.

[Screenshot: Samples]

Let us know if you, too, have great ideas for features we can implement to make your life easier!



MySQL 5.6.26 Overview and Highlights


MySQL 5.6.26 was recently released (it is the latest MySQL 5.6 and is GA), and is available for download here.

For this release, there are 3 “Functionality Added or Changed” items, 1 “Security Fix”, and 36 other bug fixes.

Out of those other 36 bugs, 13 are InnoDB, 1 Partitioning, 3 Replication, and 19 misc. (including 3 potentially crashing bug fixes and 1 performance-related fix). Here are the ones of note:

  • Functionality Added/Changed: Replication: When using a multi-threaded slave, each worker thread has its own queue of transactions to process. In previous MySQL versions, STOP SLAVE waited for all workers to process their entire queue. This logic has been changed so that STOP SLAVE first finds the newest transaction that was committed by any worker thread. Then, it waits for all workers to complete transactions older than that. Newer transactions are not processed. The new logic allows STOP SLAVE to complete faster in case some worker queues contain multiple transactions. (Bug #75525)
  • Functionality Added/Changed: Previously, the max_digest_length system variable controlled the maximum digest length for all server functions that computed statement digests. However, whereas the Performance Schema may need to maintain many digest values, other server functions such as MySQL Enterprise Firewall need only one digest per session. Increasing the max_digest_length value has little impact on total memory requirements for those functions, but can increase Performance Schema memory requirements significantly. To enable configuring digest length separately for the Performance Schema, its digest length is now controlled by the new performance_schema_max_digest_length system variable.
  • Functionality Added/Changed: Previously, changes to the validate_password plugin dictionary file (named by the validate_password_dictionary_file system variable) while the server was running required a restart for the server to recognize the changes. Now validate_password_dictionary_file can be set at runtime and assigning a value causes the named file to be read without a restart. In addition, two new status variables are available. validate_password_dictionary_file_last_parsed indicates when the dictionary file was last read, and validate_password_dictionary_file_words_count indicates how many words it contains. (Bug #66697)
  • Security-related: Due to the LogJam issue (https://weakdh.org/), OpenSSL has changed the Diffie-Hellman key length parameters for openssl-1.0.1n and up. OpenSSL has provided a detailed explanation at http://openssl.org/news/secadv_20150611.txt. To adopt this change in MySQL, the key length used in vio/viosslfactories.c for creating Diffie-Hellman keys has been increased from 512 to 2,048 bits. (Bug #77275)
  • InnoDB: Importing a tablespace with a full-text index resulted in an assertion when attempting to rebuild the index.
  • InnoDB: Opening a foreign key-referenced table with foreign_key_checks enabled resulted in an error when the table or database name contained special characters.
  • InnoDB: The page_zip_verify_checksum function returned false for a valid compressed page.
  • InnoDB: A failure to load a change buffer bitmap page during a concurrent delete tablespace operation caused a server exit.
  • InnoDB: After dropping a full-text search index, the hidden FTS_DOC_ID and FTS_DOC_ID_INDEX columns prevented online DDL operations. (Bug #76012)
  • InnoDB: An index record was not found on rollback due to inconsistencies in the purge_node_t structure. (Bug #70214)
  • Partitioning: In certain cases, ALTER TABLE … REBUILD PARTITION was not handled correctly when executed on a locked table.
  • Replication: If flushing the cache to the binary log failed, for example due to a disk problem, the error was not detected by the binary log group commit logic. This could cause inconsistencies between the master and the slave. The fix uses the binlog_error_action variable to decide how to handle this situation. If binlog_error_action=ABORT_SERVER, then the server aborts after informing the client with an ER_BINLOGGING_IMPOSSIBLE error. If binlog_error_action=IGNORE_ERROR, then the error is ignored and binary logging is disabled until the server is restarted again. The same is mentioned in the error log file, and the transaction is committed inside the storage engine without being added to the binary log. (Bug #76795)
  • Replication: When using GTIDs, a multi-threaded slave which had relay_log_recovery=1 and that stopped unexpectedly could encounter a relay-log-recovery cannot be executed when the slave was stopped with an error or killed in MTS mode error upon restart. The fix ensures that the relay log recovery process checks if GTIDs are in use or not. If GTIDs are in use, the multi-threaded slave recovery process uses the GTID protocol to fill any unprocessed transactions. (Bug #73397)
  • Replication: When two slaves with the same server_uuid were configured to replicate from a single master, the I/O thread of the slaves kept reconnecting and generating new relay log files without new content. In such a situation, the master now generates an error which is sent to the slave. By receiving this error from the master, the slave I/O thread does not try to reconnect, avoiding this problem. (Bug #72581)
  • Crashing Bug: Incorrect cost calculation for the semi-join Duplicate Weedout strategy could result in a server exit.
  • Crashing Bug: For large values of max_digest_length, the Performance Schema could encounter an overflow error when computing memory requirements, resulting in a server exit.
  • Crashing Bug: GROUP BY or ORDER BY on a CHAR(0) NOT NULL column could lead to a server exit.
  • Performance-related: When choosing join order, the optimizer could incorrectly calculate the cost of a table scan and choose a table scan over a more efficient eq_ref join. (Bug #71584)

Conclusions:

So while there were no major changes, and not too many overall bug fixes, the security fix could be an issue if you run the latest RHEL/CentOS with SSL connections + a DHE SSL cipher specified with --ssl-cipher=DHE-RSA-… Also, some of those InnoDB bugs are nasty, especially the full-text bugs; thus, if you use InnoDB's full-text indexes, I'd recommend planning for an upgrade.

The full 5.6.26 changelogs can be viewed here (which has more details about all of the bugs listed above):

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-26.html

Hope this helps. :)



MariaDB 10.1.6 Overview and Highlights


MariaDB 10.1.6 was recently released, and is available for download here:

https://downloads.mariadb.org/mariadb/10.1.6/

This is the 4th beta, and 7th overall, release of MariaDB 10.1. There were not many major changes in this release, but there are a few notable items, as well as many bug fixes overall (I counted 156, down ~50% from 10.1.5).

Since it’s beta, I’ll only cover the major changes and additions, and omit covering general bug fixes (feel free to browse them all here).

To me, these are the highlights:

  • RESET MASTER is extended with a TO # clause, which allows one to specify the number of the first binary log (MDEV-8469); see the example after this list.
  • Added support for binlog_row_image=minimal for compatibility with MySQL.
  • New system variables: log_bin_basename, log_bin_index and relay_log_basename.
  • New system variable: innodb_buf_dump_status_frequency, for determining how often the buffer pool dump status should be printed in the logs.
  • New status variables: Com_create_temporary_table and Com_drop_temporary_table for tracking the number of CREATE/DROP TEMPORARY TABLE statements.
  • Connect updated to 1.04.0001.
  • Mroonga updated to 5.04 (earlier versions of Mroonga did not work in 10.1).
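For example, per the documented syntax, the new clause lets you restart binary log numbering at a chosen value (100 here is arbitrary):

RESET MASTER TO 100;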

Of course it goes without saying that you should not use this for production systems, since it is still only beta. However, I definitely recommend installing it on a test server and testing it out. And if you happen to be running a previous version of 10.1, then you should definitely upgrade to this latest release.

You can read more about the 10.1.6 release here:

https://mariadb.com/kb/en/mariadb-1016-release-notes/

And if interested, you can review the full list of changes in 10.1.6 (changelogs) here:

https://mariadb.com/kb/en/mariadb-1016-changelog/

Hope this helps.



HeidiSQL 9.3 released

This is a maintenance release which contains mainly bugfixes.

Get it from the download page.



Changelog:
* Bugfix: Crash in foreign key dropdown editor
* Bugfix: Crash when killing processes on very long running servers
* Bugfix: SQL error when accessing UUID and JSON columns in PostgreSQL via SUBSTR
* Bugfix: MSSQL: Prefer "schema.table" quoting over "schema"."table" when renaming a table
* Bugfix: Fix column type converted to locale string format by String.ToUpper in TDBConnection.GetCreateCode - prefer String.ToUpperInvariant instead, to avoid funny characters in data types
* Bugfix: MSSQL: Do not pass "Database=xyz" to connection string if database(s) setting contains more than one database
* Bugfix: MSSQL: Try to use some universal date/time format, by injecting a "T" between the date and the time portion
* Bugfix: Fix wrong detection of BIT default values
* Bugfix: Use "SET search_path TO db" instead of "SET SCHEMA db" for changing a database in PostgreSQL, for downward compatibility reasons
* Bugfix: Prepend 'E' to escaped PostgreSQL strings
* Bugfix: Use updated URL for MariaDB Explain analyzer, and encode semicolon in URL parameter
* Bugfix: User manager: Select "authentication_string" instead of "password" column on MySQL 5.7.6+
* Bugfix: Fix various selection bugs in column selection panel
* Bugfix: Fix SQL error in "Copy table" dialog, in PostgreSQL mode. Use lowercase table and column names in IS.TABLES, so PG can find them
* Bugfix: CSV import: Disable features supported in MySQL only, if active connection is not MySQL
* Bugfix: PostgreSQL: Always keep public schema in search path, so one can use procedures from it without prefixing
* Bugfix: Text import: Use very last value from last row, even if it's not followed by a field or line terminator
* Bugfix: PostgreSQL: Fix wrong ALTER TABLE query for modifying table comment
* Bugfix: Update VirtualTree component code to v6.1.0, to fix graphical issues in Windows 8 + 10

* Enhancement: Show error when SSH port is already in use
* Enhancement: Add support for PostgreSQL's data types uuid, cidr, inet and macaddr
* Enhancement: Strip folder path from various file settings, including plink.exe location, if it's the application directory
* Enhancement: Try higher ports, up to the 20 next ones, as SSH local port, when the configured one is in use
* Enhancement: Display session name in caption of all message dialogs
* Enhancement: Add a custom icon for confirmation dialogs, with a question mark on it, so we don't have to use the "i" icon.
* Enhancement: Use server time for data grid > "Insert value" menu items
* Enhancement: Show line breaks other than Windows style as normal line breaks in text editor

* New feature: Introduce option for setting the line break style in text cells without breaks
* New feature: Session manager: Add support for SSL cipher, and add various texthints


Why you should be careful when Loading data in MySQL with Galera.


An old story that is not yet solved.

 

Why this article.

Some time ago I opened a bug report with Codership through Seppo.

The report was about the delay in executing data loads with foreign keys (FK) (https://bugs.launchpad.net/codership-mysql/+bug/1323765).

The delays I was reporting at the time were big enough to scare me, but talking with Alex and Seppo I knew they were aware of the need to optimize the approach, and that some work was ongoing.

After some time I ran the tests again with newer versions of PXC and the Galera library.

This article describes what I found, in the hope that sharing information is still worth something; nothing less, nothing more.

The tests

Tests were run on a VM with 8 cores, 16GB RAM and RAID10 storage (6 spindles, 10K RPM).

I have run 4 types of tests:

  • Load from file using SOURCE and extended inserts
  • Load from SQL dump and extended inserts
  • Run multiple threads operating against employees tables with and without FK
  • Run single thread operating against employees tables with and without FK

For the tests running against the employees db and simulating external client access, I used my own stresstool.

The tests were done over a long period of time, given that I was testing different versions and had no time to stop and consolidate the article. Also, I was never fully convinced, so I ran the tests over and over to validate the results.

I reviewed versions from:

Server version:                        5.6.21-70.1-25.8-log Percona XtraDB Cluster binary (GPL) 5.6.21-25.8, Revision 938, wsrep_25.8.r4150

To

Server version:                        5.6.24-72.2-25.11-log Percona XtraDB Cluster binary (GPL) 5.6.24-25.11, Revision, wsrep_25.11

With consistent behavior.

 

What happened

The first test was as simple as the one I did for the initial report; I was simply loading the employees db into MySQL.

time mysql -ustress -ptool -h 127.0.0.1 -P3306 < employees.sql

Surprise, surprise… I literally jumped out of my chair: the load took 37m57.792s.

Yes, you are reading that right: it took almost 38 minutes to execute.

I was so surprised that I did not trust the test, so I did it again, and again, and again.

Changing versions, changing machines, and so on.

No way… the time remained surprisingly high.

Running the same test but excluding the FKs (still using Galera) completed in 90 seconds, while with FKs but without loading the Galera library it took 77 seconds.

OK, something was not right. Right?

I decided to dig a bit, starting by analyzing the time taken for each test.

See image below:

[Image: load]

 

 

Of all the tests, the only one out of line was the data loading with FK + Galera.

I also decided to see what the behavior was in the case of multiple threads and contention.

As such, I prepared a test using my StressTool and ran two classes of tests: one with 8 threads pushing data, the other single-threaded.

As usual, I also ran the tests with FK+Galera, NOFK+Galera, and FK+No Galera.

The results were what I was expecting this time, and the FK impact was minimal, if any; see below:

[Image: threads]

 

 

The differences between executions were minimal and in line with expectations.

It was also consistent between versions, so no surprises there; I relaxed and could focus on something else.

On what?

Well, on why, in the case of the load from file, the impact was so significant.

The first thing I did was start digging into the calls, and into what each action was really doing inside MySQL.

To do so I installed some tools like perf and OProfile, and started to dig in.
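
As a sketch of that step (the exact invocation is not given in the original report, so the commands below are an assumption), a system-wide profile during the load can be captured with:

perf record -a -g -- sleep 30     # profile the whole system while the load runs
perf report --sort comm,dso       # summarize time per process and shared object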

The first test, with FK+Galera taking 38 minutes, consistently reported a different sequence of calls/costs from all the other tests.

57.25%  [kernel]                      [k] hypercall_page

35.71%  libgcc_s-4.4.7-20120601.so.1  [.] 0x0000000000010c61

2.73%  libc-2.12.so                  [.] __strlen_sse42

0.16%  mysqld                        [.] MYSQLparse(THD*)

0.14%  libgcc_s-4.4.7-20120601.so.1  [.] strlen@plt

0.12%  libgalera_smm.so              [.] galera::KeySetOut::KeyPart::KeyPart(galera::KeySetOut::KeyParts&, galera::KeySetOut&, galera::K

0.12%  mysqld                        [.] btr_search_guess_on_hash(dict_index_t*, btr_search_t*, dtuple_t const*, unsigned long, unsigned

0.09%  libc-2.12.so                  [.] memcpy

0.09%  libc-2.12.so                  [.] _int_malloc

0.09%  mysqld                        [.] rec_get_offsets_func(unsigned char const*, dict_index_t const*, unsigned long*, unsigned long,

0.08%  mysql                         [.] read_and_execute(bool)

0.08%  mysqld                        [.] ha_innobase::wsrep_append_keys(THD*, bool, unsigned char const*, unsigned char const*)

0.07%  libc-2.12.so                  [.] _int_free

0.07%  libgalera_smm.so              [.] galera::KeySetOut::append(galera::KeyData const&)

0.06%  libc-2.12.so                  [.] malloc

0.06%  mysqld                        [.] lex_one_token(YYSTYPE*, THD*)

 

Comparing this with the output of the same action without FKs but still with Galera:

75.53%  [kernel]                      [k] hypercall_page

1.31%  mysqld                        [.] MYSQLparse(THD*)

0.81%  mysql                         [.] read_and_execute(bool)

0.78%  mysqld                        [.] ha_innobase::wsrep_append_keys(THD*, bool, unsigned char const*, unsigned char const*)

0.66%  mysqld                        [.] _Z27wsrep_store_key_val_for_rowP3THDP5TABLEjPcjPKhPm.clone.9

0.55%  mysqld                        [.] fill_record(THD*, Field**, List<Item>&, bool, st_bitmap*)

0.53%  libc-2.12.so                  [.] _int_malloc

0.50%  libc-2.12.so                  [.] memcpy

0.48%  mysqld                        [.] lex_one_token(YYSTYPE*, THD*)

0.45%  libgalera_smm.so              [.] galera::KeySetOut::KeyPart::KeyPart(galera::KeySetOut::KeyParts&, galera::KeySetOut&, galera::K

0.43%  mysqld                        [.] rec_get_offsets_func(unsigned char const*, dict_index_t const*, unsigned long*, unsigned long,

0.43%  mysqld                        [.] btr_search_guess_on_hash(dict_index_t*, btr_search_t*, dtuple_t const*, unsigned long, unsigned

0.39%  mysqld                        [.] trx_undo_report_row_operation(unsigned long, unsigned long, que_thr_t*, dict_index_t*, dtuple_t

0.38%  libgalera_smm.so              [.] galera::KeySetOut::append(galera::KeyData const&)

0.37%  libc-2.12.so                  [.] _int_free

0.37%  mysqld                        [.] str_to_datetime

0.36%  libc-2.12.so                  [.] malloc

0.34%  mysqld                        [.] mtr_add_dirtied_pages_to_flush_list(mtr_t*)

 

What stands out is the significant difference in the FK handling.

The Galera function

 

KeySetOut::KeyPart::KeyPart (KeyParts&  added, 
                             KeySetOut&     store,
                             const KeyPart* parent,
                             const KeyData& kd,
                             int const      part_num) 

 

 

is the top consumer, before we move out to shared libraries.

After it, the server constantly calls the strlen function, as if it were evaluating each entry in the insert multiple times.

This unfortunate behavior happens ONLY when the FK exists and requires validation, and ONLY if the Galera library is loaded.

The logical conclusion is that the library is adding the overhead, probably in some iteration, and that this is probably a bug.

 

When running the application tests, using multiple clients and threads, this delay does not happen, at least not at this order of magnitude.

During the application tests I was using batched inserts, with up to 50 inserts per SQL command; as such, I may NOT have triggered the limit that is causing the issue in Galera.

As such, I am still not convinced that we are “safe” there, and I have it on my to-do list to add this test soon. In case of significant results I will append the information, but I felt the need to share in the meantime.

 

The other question was: WHY was the data load from the SQL dump NOT taking so long?

That part is easy: comparing the load files, we can see that in the SQL dump the FK and UK checks are disabled while loading, so the server skips the FK evaluation entirely.

That’s it, adding:

 

SET FOREIGN_KEY_CHECKS=0, UNIQUE_CHECKS=0;

 

to the import, and setting them back afterwards, removes the delay, and the function calls become “standard” again.
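
Putting it together, a minimal sketch of the full workaround (employees.sql stands in for whatever dump you are loading):

SET FOREIGN_KEY_CHECKS=0, UNIQUE_CHECKS=0;
SOURCE employees.sql;
SET FOREIGN_KEY_CHECKS=1, UNIQUE_CHECKS=1;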

 

 

Conclusions

This short article has the purpose of:

  • Alert all of you to this issue in Galera, and let you know it has been going on for some time and has not been fixed yet.
  • Provide you with a workaround. Use SET FOREIGN_KEY_CHECKS=0, UNIQUE_CHECKS=0; when performing data loads, and remember to put them back (SET FOREIGN_KEY_CHECKS=1, UNIQUE_CHECKS=1;).
    Unfortunately, as we all know, we cannot always disable them, right? This brings us to the last point.
  • I think that Codership, and eventually Percona, should dedicate some attention to this issue, because it COULD be limited to data loading, but it may not be.

 

 

 

 

I have more info and OProfile output that I am going to add to the bug report, in the hope it will be processed.

 

Great MySQL to everyone …



An Outline for a Book on InnoDB


Years ago I pursued my interest in InnoDB’s architecture and design, and became impressed with its sophistication. Another way to say it is that InnoDB is complicated, as are all MVCC databases. However, InnoDB manages to hide the bulk of its complexity entirely from most users.

[Image: iceberg]

I decided to at least outline a book on InnoDB. After researching it for a while, it became clear that it would need to be a series of books in multiple volumes, with somewhere between 1000 and 2000 pages total.

At one time I actually understood a lot of this material, but I have forgotten most of it now.

I did not begin writing. Although it is incomplete, outdated, and in some cases wrong, I share the outline here in case anyone is interested. It might be of particular interest to someone who thinks it’s an easy task to write a new database.

High-Level Outline:

  • Introduction
  • Intro to Major Features
  • The InnoDB Architecture
  • Indexes
  • Transactions in InnoDB
  • Locking in InnoDB
  • Deadlocks
  • Multi-Version Concurrency Control (MVCC)
  • Old Row Versions and the Undo Space
  • Data Storage and Layout
  • Data Types
  • Large Value Storage and Compression
  • The Transaction Logs
  • Ensuring Data Integrity
  • The Insert Buffer (Change Buffer)
  • The Adaptive Hash Index
  • Buffer Pool Management
  • Memory Management
  • Checkpoints and Flushing
  • Startup, Crash Recovery, and Shutdown
  • InnoDB’s I/O Behavior and File Management
  • Data Manipulation (DML) Operations
  • The System Tables
  • Data Definition (DDL) Operations
  • Foreign Keys
  • InnoDB’s Interface to MySQL
  • Index Implementation
  • Data Distribution Statistics
  • How MySQL executes queries with InnoDB
  • Internal Maintenance Tasks
  • Tuning InnoDB
  • Mutexes and Latches
  • InnoDB Threads
  • Internal Structures
  • XtraBackup
  • InnoDB Recovery Tools
  • Inspecting Status

Section-By-Section Detailed Outline:

  • Introduction

    • History of InnoDB, what its roots are, context of the integration into MySQL and the Oracle purchase, etc
    • Based on Gray & Reuter’s book
    • high-level organization: USER visible things first, after enough high-level overview to understand the big features and moving parts; then INTERNALS afterwards.
  • Intro to Major Features

    • Transactions
    • ACID
    • MVCC
    • multi-version read consistency
    • row-level locking
    • standard isolation levels
    • automatic deadlock detection
    • Foreign Keys
    • Clustered Indexes
    • Page Compression
    • Crash Recovery
    • Exercises
  • The InnoDB Architecture

    • This will be a high-level introduction, just enough to understand the following chapters
    • Storage on disk
    • Pages, and page sizes
    • Extents
    • Segments
    • Tablespaces
      • The main tablespace and its major components
      • Data pages
      • data dictionary
      • insert buffer
      • undo log area
      • doublewrite buffer
      • reserved spaces for hardcoded offsets
      • see Vadim’s diagram on HPM blog
      • individual tablespaces for file-per-table
    • redo log files
    • Storage in memory
    • the buffer pool and its major components
      • Data pages
    • other memory usage
      • adaptive hash index
    • Major data structures
    • LRU list
    • flush list
    • free list
    • Exercises
  • Indexes

    • clustered
    • logically nearby pages can be physically distant, so sequential scan isn’t guaranteed to be sequential
    • no need to update when rows are moved to different pages
    • changing PK value is expensive
    • insert order
    • random inserts are expensive: splits, fragmentation, bad space utilization/fill factor
    • sequential inserts build least fragmented table
    • consider auto_increment
    • optimize table rebuilds, will compact PK but not secondary
      • they are built by insertion, not by sort, except for plugin fast creation
      • secondaries can be dropped and recreated in fast mode in the plugin to defrag
      • but this won’t shrink the tablespace
    • primary vs secondary
    • secondary causes 2 lookups, also might cause lookup to PK to check row visibility because secondary has version per-page, not per-row
    • PK ranges are very efficient
    • PK updates are in-place, secondary are delete+insert
    • automatic promotion of unique secondary
    • auto-creation of a primary key
    • primary columns are stored in secondary indexes
    • long primary keys expensive
    • uniqueness; is checking deferred, or immediate?
    • there is no prefix compression as there is in myisam, so indexes can be larger
    • rows contain trxn info, but secondary indexes do also at the page level (enables index-covered queries; more on this later)
    • Exercises
  • Transactions in InnoDB

    • A transaction is a sequence of actions that starts in one legal state, and ends in another legal state (need a good definition)
    • from Heikki’s slides: atomic (all committed or rolled back at once), consistent (operate on a consistent view of the data, and leave the data in a consistent state at the end); isolated (don’t see effects of other txns on the system until after commit); durable (all changes persist, even after a failure)
      • consistency means that any data I see is consistent with all other data I see at a single point in time (there are exceptions to this)
    • How are they started, and in what conditions?
    • what is the transaction ID and its relationship to the LSN?
    • what is the system version number (LSN)?
    • what is a minitransaction/mtr?
    • when are they committed?
    • how are savepoints implemented?
    • what happens on commit?
    • XA and interaction with the binary logs
    • fsync()s
    • group commit
    • what happens on rollback?
    • they are meant to be short-lived and commit; what if they stay open a long time or roll back?
  • Locking in InnoDB

    • locking is needed to ensure consistency, and so that transactions can operate in isolation from each other
    • row-level
    • non-locking consistent reads by default
    • pessimistic locking
    • locks are stored in a per-page bitmap
    • compact: 3-8 bits per lock
    • sparse pages have more memory and CPU overhead per locked row
    • no lock escalation to page-level or table-level
    • row locking
    • S locks
    • X locks
    • there is row-level locking on indexes, (but it may require access to PK to see if there is a current version of the row to lock, right?)
    • Heikki says a delete-marked index record can carry a lock. What are the cases where this would happen? Why does it matter? I imagine that a DELETE statement locks the row, deletes it, and leaves it there locked.
    • supremum row can carry a lock, but infimum cannot (maybe we should discuss later in another chapter)
    • table locking - IS and IX locks
    • InnoDB uses multiple-granularity locking, see http://en.wikipedia.org/wiki/Multiple_granularity_locking
    • also called two-phase locking, perhaps? Baron’s a bit confused
    • before setting a S row lock, it sets an intention lock IS on the table
    • before setting a X row lock, it sets an intention lock IX on the table
    • auto-increment locks
    • needed for stmt replication
    • before 5.0: table-level for the whole statement, even if the insert provided the value
    • released at statement end, not txn end
    • this is a serious bottleneck
    • 5.1 and later: two more types of behavior (Look up _____’s issues, I think they had some problem with it; also Mark Callaghan talked about a catch-22 with it)
    • row replication lets lock be released faster, or avoid it completely
      • complex behavior: interleaved numbers, gaps in sequences
    • locks set by types of statements
    • select doesn’t set locks unless it’s serializable
    • select lock in share mode
      • sets shared next-key locks on all index records it sees
    • select for update
      • sets exclusive next-key locks on all index records it sees, as well as locking the PK
    • insert into tbl1 select from tbl2
      • sets share locks on all rows it scans in tbl2 to prevent phantoms
      • can be reduced in 5.1 with read-committed; does a consistent non-locking read,
      • but requires rbr or it will be unsafe for replication
    • update and delete set exclusive next-key locks on all records they find (in pk/2nd?)
    • insert sets an x-lock on the inserted record (not next-key, so it doesn’t prevent others)
      • if there is a duplicate key error, sets s-lock on the index record (why?)
    • replace is like an insert
      • if there is a key collision, places x-next-key-lock on the updated row
    • insert…select…where sets x-record-lock on each destination row, and S-next-key-locks on source rows, unless innodb_locks_unsafe_for_binlog is set
    • create…select is an insert..select in disguise
    • if there is a FK, anything that checks the constraint sets S-record-locks on the records it looks at. Also sets locks if the constraint fails (why?)
    • lock types and compatibility matrix
    • how is lock wait timeout implemented, and what happens to trx that times out?
    • infimum/supremum “records” can be locked but don’t correspond to real rows (should we defer this till later?)
    • gap locking, etc are discussed later
  • Deadlocks

    • how are deadlocks detected?
    • cycle in the waits-for graph
    • or, too many recursions (200)
    • or, too many locks checked (1 million, I think?)
    • performance impact, disabling
    • do txns check when they set a lock (I think so) or is there a deadlock thread as Peter’s slides say?
    • deadlock behavior
    • how is a victim chosen? which one is rolled back?
      • txn that modified the fewest rows
    • how about deadlocks with > 2 transactions involved and SHOW INNODB STATUS
    • causes of deadlocks
    • foreign keys
    • gap locking
    • transactions accessing table through different indexes
    • how to reduce deadlocks
    • make txns use same index in same order
    • short txns, fewer modifications, commit often
    • use rbr and read_committed in 5.1, reduces gap locking
    • in 5.0, use innodb_locks_unsafe_for_binlog to remove gap locking
  • Multi-Version Concurrency Control (MVCC)

    • ensures isolation with minimal locking
    • isolation means that while transactions are changing data, other transactions see only a legal state of the database – either as of the start of the txn that is changing stuff, or at the end after it commits, but not midway
    • readers can read without being blocked by writers, and vice versa
    • writers block each other
    • TODO: some of the details here need to be moved to the next chapter, or they need to be merged
    • old row versions are kept until no longer visible, then deleted in background (purge)
    • Read views
    • how LSN is used for mvcc, “txn sees between X and Y”: DB_TRX_ID (6 bytes?) column
    • updates write a new version, move the old one to the undo space, even before commit
      • short rows are faster to update
      • whole rows are versioned, except for BLOBs
    • deletes update the txn ids, leave it in place – special-case of update
    • inserts have a txn id in the future – but they still write to undo space and it is discarded after commit
    • mvcc causes index bloat for deletions, but not for updates; updates cause undo space bloat
    • what the oldest view is used for, in srv0srv.c
    • Transaction isolation levels
    • SERIALIZABLE
      • Locking reads as if LOCK IN SHARE MODE.
      • Bypass multi versioning
      • No consistent reads; everyone sees latest state of DB.
    • REPEATABLE-READ (default)
      • Read committed data as it was at the start of the transaction
      • the snapshot begins with the first consistent read in the txn
      • begin transaction with consistent snapshot starts it “now”
      • update/delete use next-key locking
      • Is only for reads; multiversioning is for reads, not for writes. For example, you can delete a row and commit from one session. Another session can still see the row, but if it tries to update it, will affect 0 rows.
      • FOR UPDATE and LOCK IN SHARE MODE and INSERT..SELECT will drop out of this isolation level, because you can only lock or intend to update the most recent version of the row, not an old version. http://bugs.mysql.com/bug.php?id=17228 “transaction isolation level question from a student in class” thread Mar 2011 “Re: REPEATABLE READ doesn’t work correctly in InnoDB tables?” ditto
    • READ-COMMITTED
      • Read committed data as it was at the start of the statement – “up to date”
      • each select uses its own snapshot; internally consistent, but not over whole txn
      • in 5.1, most gap-locking removed; requires row-based logging.
      • unique key checks in 2nd indexes, and some FK checks, still need to set gap locks
      • prevents inserting child row after parent is deleted
      • which FK checks?
    • READ-UNCOMMITTED
      • Read uncommitted data as it is changing live
      • No consistency, even within a single statement
    • which is best to use? is read-committed really better, as a lot of people believe?
    • http://www.mysqlperformanceblog.com/2011/01/12/innodb-undo-segment-siz-and-transaction-isolation/
    • phantom rows: you don’t see it in the first query, you see it when you query again; stmt replication can’t tolerate them (not a problem in row-based); avoided by gap locking
    • Lock types for MVCC:
    • Next-key locks
    • gap locks
      • gap locks are simply a prohibition from inserting into the gap
      • they don’t give permission to do anything, they just block others
      • example: if I hold an exclusive next-key lock on a record, I can’t always insert into the gap before it; someone might have a gap lock or a waiting next-key lock request.
      • different modes of locks are allowed to coexist, because of gap merging via purge
      • holding a gap lock prevents others from inserting, but doesn’t permit me to insert; I must wait for conflicting locks to be released (many txns can hold the same gap lock)
      • supremum can be gap-locked, infimum can’t – why not?
      • which isolation levels use them?
      • types: next-key (locks key and gap before it), gap lock (just the gap before the key), record-only (just the key), insert-intention gap lock (held while waiting to insert into a gap).
      • searches for a unique key use record-only locks to minimize gap locking (pk updates, for example)
    • insert-intention locking
    • index locking (http://dom.as/2011/07/03/innodb-index-lock/)
    • when MVCC is bypassed:
    • updates: can only update a row that exists currently (what about delete?)
    • select lock in share mode
      • sets shared next-key locks on all index records it sees
    • select for update
      • sets exclusive next-key locks on all index records it sees, as well as locking the PK
    • insert into tbl1 select from tbl2
      • sets share locks on all rows it scans in tbl2 to prevent phantoms
      • can be reduced in 5.1 with read-committed; does a consistent non-locking read,
      • but requires rbr or it will be unsafe for replication
    • locking reads are slower, because they have to set locks (check for deadlocks)
    • How locks on a secondary index must lock the primary key, the impact of this on the txn isolation level
    • indexes contain pointers to all versions
    • Index key 5 will point to all rows which were 5 in the past
    • TODO: clarify this with Peter
    • Exercises
    • is there really such a thing as a unique index in InnoDB?
  • Old Row Versions and the Undo Space

    • old row versions are used for MVCC and rollback of uncommitted txns that fail
    • they provide Consistency: each txn sees data at a consistent point in time so old row versions are needed
      • cannot be updated: history cannot change; thus old row versions aren’t locked
    • each row in index contains DB_ROLL_PTR column, 7 bytes, points to older version
    • They are stored in a linked list, so a txn that reads old rows is slow, and rows that are updated many times are very slow, and long running txns that update a lot of rows can impact other txns
    • “rows read” is logical at the mysql level, but at the innodb level, many rows could be read
    • there is no limit on number of old versions to keep
    • history list
    • rollback segment / rseg
    • what is the difference between this and an undo segment / undo tablespace?
    • purge of old row versions
    • how it is done in the main loop
    • how it is done in a separate thread
    • interaction with long-running transactions, when a row version can be purged, txn isolation level of long-running transactions
    • it leaves a hole / fragmentation (see __________?id=17673)
    • purging can change gaps, which are gap-locked; what happens to them?
    • a deleted row is removed from an index; two gaps merge. The new bigger gap inherits both the locks from the gaps, and the lock that was on the deleted row
    • innodb_max_purge_lag slows down updates when purge falls behind
  • Data Storage and Layout

    • Tablespaces
    • what is in the global tablespace; why it grows; why it can’t be shrunk
    • main tablespace can be multiple files concatenated
    • legacy: raw device
    • tablespace header => id, size
    • segments, how they are used
    • leaf and non-leaf node segments for each index (makes scans more sequential IO) thus each index has two segments for these types of pages
    • rollback segment
    • insert buffer segment
    • segment allocation: small vs large, page-at-time vs extent-at-a-time
    • how free pages are recycled within the same segment
    • when a segment can be reused: all pages in an extent must be free before it can be used in a different segment of the same tablespace
    • file-per-table
    • advantages: reclaim space, store data on different drives (symlinking and its pitfalls), backup/restore single tables, supports compression
    • disadvantages: filesystem per-inode mutexes, longer recovery, uses more space
    • free space within a segment can be used by same table only
    • how to import and export tables with xtradb, how this is different from import and export in standard innodb
    • file growth/extension, and how it is done
    • ibdata1: innodb_autoextend_increment
    • individual table .ibd files don’t respect that setting
    • http://bugs.mysql.com/56433 bug about mutex lock during extension, Yasufumi patched
    • file formats (redundant, compact, barracuda, etc)
    • Page format
    • types of pages
    • Row format
    • never fragmented, except blobs are stored in multiple pieces
    • there can be a lot of empty space between rows on the page
    • infimum/supremum records
    • how SHOW TABLE STATUS works: see issue 17673
  • Data Types

    • Data types supported, and their storage format
    • Nulls
    • are nulls equal with respect to foreign keys? what about unique indexes?
    • Exercises
  • Large Value Storage and Compression

    • Page compression
    • new in plugin
    • requires file-per-table and Barracuda
    • Pages are kept uncompressed in memory
      • TODO: Peter says both compressed and uncompressed can be kept in memory
    • compression is mostly per-page
    • uses zlib, zlib library version must match exactly on recovery, because inflate/deflate sizes must match exactly, so can’t do recovery on different mysql version than the crash was on, ditto for xtrabackup backup/prepare; if libz is not linked statically, this can cause problems (use ldd to see); recovery might be immature for compressed table spaces. http://bugs.mysql.com/bug.php?id=62011
    • TODO: peter says Uses fancy tricks: Per page update log to avoid re-compression
    • not really configurable
    • syntax: ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4; Estimate how well the data will compress
    • problems:
      • fs-level might be more efficient b/c page size is too small for good compression ratio
      • we have to guess/hint how much it can be compressed
      • setting is per-table, not per-index, but indexes vary in suitability
    • TODO: Peter says KEY_BLOCK_SIZE=16; - Only compress externally stored BLOBs - Can reduce size without overhead
    • Blob/large value storage (large varchar/text has same behavior)
    • small blobs are stored in the page whole, if they fit (max row len: ~8000 bytes, ~1/2 page) http://www.facebook.com/notes/mysql-at-facebook/how-many-pages-does-innodb-for-tables-with-large-columns/10150481019790933
    • large blobs are stored out-of-page on a dedicated extent, first 768 bytes in-page
      • same allocation rules as for any extent: page by page, then extent at a time
      • this can waste a lot of space; it makes sense to combine blobs if possible
    • Barracuda format lets us store the whole thing out-of-page, without the 768-byte prefix
    • no need to move blobs to their own table – innodb won’t read them unless needed
    • but the 768-byte prefix can make rows larger anyway
    • blob I/O is always “pessimistic”
    • how are BLOBs handled with MVCC and old row versions?
    • externally stored blobs are not updated in-place, a new version is created
    • does a row that contains a blob, which gets updated without touching the blob, create a new row that refers to the same copy of the blob as the old row does?
    • how is undo space handed? TODO: in-page/in-extent with the row, or in the undo log area?
  • The Transaction Logs

    • circular
    • File format: not page formatted, record formatted
    • 512 byte units (prevents o_direct, causes read-around writes)
    • if logs fit in os buffer, may improve performance; otherwise puts pressure on memory, written circularly and never read except for read-around writes, so OS caching is useless
    • tunable in XtraDB
    • records are physiological: page # and operation to perform
    • records are idempotent
    • only redo, not undo
    • what LSN is (bytes written to tx log, tx ID, system version number)
    • where it is used (each page is versioned, each row has 2 lsns in the pk)
    • given a LSN, where does it point to in the physical files? (it’s modulo, I think)
    • Changing the log file size
    • Headers and magic offsets, e.g. where the last checkpoint is written
    • Never implemented:
    • multiple log groups
    • log archiving (maybe implemented once, then removed?)
  • Ensuring Data Integrity

    • Page checksums (old, new, and faster implementations)
    • checked when page is read in
    • updated when it is flushed
    • how much overhead this causes
    • disable-able, not recommended
    • the doublewrite buffer
    • it isn’t really a “buffer” like other buffers
    • avoiding torn pages / partial page writes
    • it is a short-term page-level log; pages contain tablespaceid+pageid
    • process: write to buffer; sync; write original location; sync
    • after crash recovery, we check the buffer and the original location, update original if needed
    • unlike postgres, we don’t write a full page to log after checkpoint (logs aren’t page-oriented)
    • how big it is; configuring it to a different file in xtradb
    • uses sequential IO, so overhead is not 2x
    • higher overhead on SSD, plus more wear
    • safe to disable on ZFS
    • Exercises
  • The Insert Buffer (Change Buffer)

    • changes to non-unique secondary index leaf pages that aren’t in the buffer pool are saved for later
    • it is transactional, not a volatile cache
    • how much performance improvement it gives: said to be 15x reduction in random IO
    • how it is purged/merged
    • in the background, when there is time. This is why STOP SLAVE can trigger a huge flood of IO.
      • the rate is controlled by innodb_io_capacity, innodb_ibuf_accel_rate
      • done by main thread (?)
      • might not be fast enough – dedicated thread in xtradb
    • also (transparently) in the foreground, when the page that has un-applied changes is read from disk for some other reason.
    • if a lot of changes need to be merged, it can slow down page reads.
    • it is changed to “change buffer” in recent plugin
    • tunability
    • it can take up to 1/2 of the buffer pool, which isn’t tunable in standard innodb
    • xtradb lets you disable it, and set the max size
    • newer innodb plugin changed insert buffer to change buffering, and lets you disable them
    • disabling can be good for SSDs (why?)
    • inspecting status; status info after restart only shows since restart, not over the lifetime of the database
    • it is stored in ibdata1 file. Pages are treated same as normal buffer pool pages, subject to LRU etc.
    • what happens on shutdown: you can set fast_shutdown off, so a full merge happens (slow)
    • after a restart, it can slow down, because pages aren’t in the buffer pool, so random IO is needed to find them and merge changes into them
    • Things that were designed but never implemented
    • multiple insert buffers
  • The Adaptive Hash Index

    • think of it as “recently accessed index cache”
    • it’s a kind of partial index: build for values that are accessed often
    • fast lookups for records recently accessed, which are in the buffer pool
    • is a btree that works for both pk and secondary indexes
    • can be built for full index entries, and for prefixes of them, depending on how they are looked up
    • not configurable, except you can disable it
    • how much does it help performance?
    • there is only one, it has a single mutex, can slow things a lot
    • xtradb lets you partition it
  • Buffer Pool Management

  • Memory Management

    • system versus own malloc
    • the additional memory pool: stores dictionary
    • the adaptive hash index in memory
    • latch/lock storage (is that stored in the buffer pool?)
  • Checkpoints and Flushing

    • fuzzy vs sharp checkpoints
    • what happens when innodb has not enough free pages, or no space in the log files?
    • checkpoint spikes/stalls/furious flushing
    • smoothing out checkpoint writes
    • flush algorithms
    • standard in old innodb
    • adaptive checkpointing (xtradb)
    • adaptive flushing (innodb-plugin)
    • neighbor page flushing: next/prev pages (hurts SSD; tunable in xtradb)
    • flushing and page replacement
    • why page replacement? must clean a page before we can replace it with another from disk.
    • server tries to keep some pages clean: 10% in older versions, 25% in newer (innodb_max_dirty_pages_pct)
    • LRU algorithms: old, new
    • two-part lru list to guard against wiping out on scans (midpoint insertion)
    • the lru page replacement algorithm is explained by Inaam: http://www.mysqlperformanceblog.com/2011/01/13/different-flavors-of-innodb-flushing/ 1) if a block is available in the free list grab it. 2) else scan around 10% or the LRU list to find a clean block 3) if a clean block is found grab it 4) else trigger LRU flush and increment Innodb_buffer_pool_wait_free 5) after the LRU flush is finished try again 6) if able to find a block grab it otherwise repeat the process scanning deeper into the LRU list There are some other areas to take care of like having an additional LRU for compressed pages with uncompressed frames etc. And Innodb_buffer_pool_wait_free is not indicative of total number of LRU flushes. It tracks flushes that are triggered above. There are other places in the code which will trigger an LRU flush as well.
    • flush list
    • contains a list of pages that are dirty, in LSN order
    • the main thread schedules some flushes to keep clean pages available
    • this is a checkpoint, as well, because it flushes from the end of the flush list
    • innodb_io_capacity is used by innodb here, but not by xtradb
      • assumed to be the disk’s writes-per-second capacity
      • Peter writes: Affects number of background flushes and insert buffer merges (5% for each). What does 5% mean?
    • when the server is idle, it’ll do more flushing
    • flushing to replace is done in the user thread
    • What happens on shutdown
  • Startup, Crash Recovery, and Shutdown

    • What is done to boot the system up at start?
    • Fast vs slow shutdown
    • implications for the insert buffer, http://dev.mysql.com/doc/innodb/1.1/en/innodb-downgrading-issues-ibuf.html
    • What is done to prepare for shutdown?
    • setting innodb_max_dirty_pages_pct to prepare
    • you can’t kill the server and it is blocking, so shutdown can take a while otherwise
    • What structures in the server have to be warmed up or cooled off? e.g. LRU list, dirty pages…
    • stages of recovery: doublewrite restore, redo, undo
    • redo is synchronous: scan logs, read pages, compare LSNs
      • it happens in batches
    • undo is in the background since 5.0
      • faster with big logs (why?)
      • alter table commits every 10k rows to avoid long undos
      • very large dml is a problem, causes long undo after crash; don’t kill long txns lightly
    • how long will recovery take? 5.0 and 5.1 had slow algorithm; fixed in newer releases
    • larger logs = longer recovery, but it also depends on row sizes, database size, workload
    • are there cases when recovery is impossible? during DDL, .FRM file is not atomic
    • how innodb checks and uses the binary log during recovery
    • the recovery threads – transactions are replayed w/o mysql threads, so they look different
  • InnoDB’s I/O Behavior and File Management

    • How files are created, deleted, shrunk, expanded
    • How InnoDB opens data files: o_direct, etc
    • buffered vs direct IO
    • buffered:
      • advantage: faster warmup, faster flushes, reduce inode locking on ext3
      • bad: swap pressure, double buffering, loss of effective memory
    • direct:
    • Optimistic vs pessimistic IO http://dom.as/2011/07/03/innodb-index-lock/
    • How InnoDB opens log files
    • always buffered, except in xtradb
    • How InnoDB writes and flushes data files and log files
    • the log buffer
    • flushing logs to disk; innodb_flush_log_at_trx_commit; what is safe in what conditions
    • I/O threads
    • the dedicated IO threads
    • the main thread does IO in its main loop
    • dedicated threads for purge, insert buffer merge etc
    • read-ahead/prefetches for random and sequential IO; how an extent is determined to need prefetching
    • don’t count on it much
    • random read-ahead removed in version X, added back; impact of it
    • merging operations together, reordering, neighbor page operations http://dom.as/2011/07/03/innodb-index-lock/
    • async io
    • simulated: arrays, slots
    • native on Windows and in Linux in version 1.1
    • which I/O operations can be foregrounded and backgrounded
    • most writes are in the background
    • flushes can be sync if there are no free pages
    • log writes can be sync or async, configurable
    • thresholds: 75% and 85% by default (confirm)
    • what operations block in innodb? background threads sometimes block foreground threads; MarkC has written about; http://bugs.mysql.com/bug.php?id=55004
    • I/O operations for things like insert buffer merge (causes reads) and old row version purge
    • the purpose of files like “/tmp/ibPR9NL1 (deleted)”
  • Data Manipulation (DML) Operations

    • select
    • insert
    • update
    • delete
    • does it compact? ___________?id=14473
  • The System Tables

    • sys_tables
    • sys_indexes
    • sys_foreign
    • sys_stats
    • sys_fields
  • Data Definition (DDL) Operations

    • How CREATE TABLE works
    • How ALTER TABLE works
    • Doesn’t it internally commit every 10k rows?
    • create index
    • fast index creation; sort buffers; sort buffer size
    • Creates multiple transactions: see email Re: ALTER TABLE showing up more than once in ‘SHOW ENGINE INNODB STATUS’ Transaction list?
    • optimize table
    • analyze table
    • How DROP TABLE works
    • with file-per-table, it is DROP TABLESPACE, which blocks the server; see https://bugs.launchpad.net/percona-server/+bug/712591
    • InnoDB’s internal stored procedure language
  • Foreign Keys

    • implications for locking: causes additional locking, opportunities for deadlocks
    • cascades
    • nulls
    • rows that point to themselves, or rows that have cycles; can they be deleted?
    • is checking immediate, or deferred? it is immediate, not done at commit.
    • names are case sensitive
    • indexes required; change in behavior in 4.1
    • data types must match exactly
    • how they interact with indexes
    • appearance in SHOW INNODB STATUS
    • they use the internal stored procedures
    • InnoDB has to parse the SQL of the CREATE statement
  • InnoDB’s Interface to MySQL

    • The Handler interface
    • Relationship with .frm file
    • Built-In InnoDB
    • The InnoDB Plugin
    • Converting rows to MySQL’s row format
    • what columns are in every table (and can’t be used in a real table)
    • Communicating ha::rows_in_range and ha::info and other statistics
    • Communicating index capability bits (enables covering index queries)
    • Interaction with the query cache
    • MySQL thread statuses
    • they appear in INNODB STATUS
    • what statuses can be present while query is inside innodb: “statistics” for example
    • Implementation in ha_innodb.cc
    • Hacks: magic CREATE TABLE statements like innodb_table_monitor, parsing the SQL for FK definitions
    • how table and row locks are communicated between engine and server
    • innodb_table_locks=1 means that innodb knows about server table locks; what does it do with them?
    • the server knows about row locks – and it can tell innodb to release non-matched rows?
    • Exercises
  • Index Implementation

    • built on b-trees
    • leaf vs non-leaf nodes, the row format on them
    • secondary indexes
    • data pages store the rows in a heap within the page
    • page fill factor
    • page merges and splits
    • is something special done during deletes?
  • Data Distribution Statistics

    • how they are gathered
    • inaccuracy
    • configurability of whether to gather them or not
    • stability/randomness (older InnoDB isn’t properly random and is non-uniform)
    • how many samples? config options that affect that
    • ability to stop resampling
    • ability to store persistently with innodb_use_sys_stats_table
  • How MySQL executes queries with InnoDB

    • high-level overview of optimization, statistics
    • table locking and lock releasing
    • releasing rows locks for rows eliminated by WHERE clause in 5.1; isolation
    • index-only (covering index) queries
    • how it works
    • when an index query must look up the pk (when a page has a newer lsn than the query’s lsn; rows in secondary indexes don’t have LSNs, only the page does)
    • what extra columns are included in the secondary indexes
  • Internal Maintenance Tasks

    • Old Row Purge
    • Insert Buffer Merge
    • The statistics collector
    • rules for when stats are recomputed
      • by mysql: at first open, when SHOW TABLE STATUS / INDEX commands are used (configured with innodb_stats_on_metadata) or when ANALYZE TABLE is used)
      • by innodb: after size changes 1/16th or after 2B row insertions (disable with innodb_stats_auto_update=false)
    • stats are computed when table is first opened, too
    • bug: stats not valid for an index after fast-create (http://bugs.mysql.com/bug.php?id=62516)
    • Jervin’s blog: http://www.mysqlperformanceblog.com/?p=7516&preview=true
  • Tuning InnoDB

    • buffer pool size
    • using multiple buffer pools
    • log file size (1h worth of log writes)
    • log buffer size (10 sec worth of log writes; bigger for blobs; error if too small)
    • checkpoint behavior
    • flush_logs_at_trx_commit
    • dirty page pct in buffer pool
    • setting it lower doesn’t smooth IO by causing constant writing – it causes much more IO and doesn’t give a buffer to absorb spikes.
    • o_direct
    • all configuration variables
  • Mutexes and Latches

  • InnoDB Threads

    • thread per connection
    • thread statuses (innodb internal ones, not mysql ones)
    • user threads
    • recovery threads
    • IO threads
    • main thread
    • schedules other things
    • flush
    • purge
    • checkpoint
    • insert buffer merge
    • deadlock detector
    • monitoring thread
    • error monitor thread; see sync/sync0arr.c
    • statistics thread (??)
    • log thread
    • purge thread
    • innodb_thread_concurrency and the queue, concurrency tickets
    • limit includes threads doing disk io or storing data in tmp table
  • Internal Structures

    • Data dictionary
    • auto-increment values; populated at startup
    • statistics
    • system info
    • the size overhead per table can be 4-10k; this is version dependent
    • xtradb lets you limit this
    • Arrays
    • os_aio_read_array, os_aio_write_array, os_aio_ibuf_array
    • mutex/latch/semaphore/whatever arrays
    • data structures and what they’re used for
    • heaps (rows in the page, for example)
    • b-trees
    • linked lists (?)
    • arrays
  • XtraBackup

  • InnoDB Recovery Tools

  • Inspecting Status

    • SHOW STATUS counters
    • Innodb_buffer_pool_wait_free, per Inaam, is not indicative of total number of LRU flushes. It tracks flushes that are triggered from an LRU flush; there are more places in the code which will trigger an LRU flush as well.
    • show innodb status
    • innodb status monitor
    • lock monitor
    • tablespace monitor
    • writing to a file
    • truncation and the in-memory copy and its filehandle
    • information_schema tables in the plugin (esp. locks and trx)
    • how it attempts to provide a consistent view of the tables; consult https://bugs.launchpad.net/bugs/677407
    • show mutex status

Further reading:

Photo credits: iceberg



Become a MySQL DBA blog series - Configuration Tuning for Performance


A database server needs CPU, memory, disk and network in order to function. Understanding these resources is important for a DBA, as any resource that is weak or overloaded can become a limiting factor and cause the database server to perform poorly. A main task of the DBA is to tune operating system and database configurations and avoid overutilization or underutilization of the available resources.

In this blog post, we’ll discuss some of the settings that are most often tweaked and which can bring you significant improvements in performance. We will also cover some of the variables which are frequently modified even though they should not be. Performance tuning is not easy, but you can go a surprisingly long way with a few basic guidelines.
 
This is the eighth installment in the ‘Become a MySQL DBA’ blog series. Our previous posts in the DBA series include Live Migration using MySQL Replication, Database Upgrades, Replication Topology Changes, Schema Changes, High Availability, Backup & Restore, Monitoring & Trending.

Performance tuning - a continuous process

Installing MySQL is usually the first step in the process of tuning both OS and database configurations. This is a never-ending story as a database is a dynamic system. Your MySQL database can be CPU-bound at first, as you have plenty of memory and little data. With time, though, it may change and disk access may become more frequent. As you can imagine, the configuration of a server where I/O is the main concern will look different to that of a server where all data fits in memory. Additionally, your query mix may also change in time and as such, access patterns or utilization of the features available in MySQL (like adaptive hash index), can change with it.

What’s also important to keep in mind is that, most of the time, tweaks in the MySQL configuration will not give you a significant difference in performance. There are a couple of exceptions, but you should not expect anything like a 10x improvement. Adding a correct index may help you much more than tweaking your my.cnf.

The tuning process

Let’s start with a description of the tuning process.

To begin, you need a deterministic environment in which to test your changes and observe results. The environment should be as close to production as possible - by that we mean both data and traffic. For safety reasons you should not implement and test changes directly on production systems. It’s also much easier to make changes in the testing environment - some of the tweaks require a MySQL restart, which is not something you can do lightly on production.

Another thing to keep in mind - when you make changes, it is very easy to lose track of which change affected your workload in a particular way. People tend to take shortcuts and make multiple tweaks at the same time - it’s not the best way. After implementing multiple changes at the same time, you do not really know what impact each of them had. The result of all the changes is known but it’s not unlikely that you’d be better off implementing only one of the five changes you made.

After each config change, you also want to ensure your system is in the same state - restore the data set to a known (and always the same) position - e.g., you can restore your data from a given backup. Then you need to run exactly the same query mix to reproduce the production workload - this is the only way to ensure that your results are meaningful and that you can reproduce them. What’s also important is to isolate the testing environment from the rest of your infrastructure so your results won’t be affected by external factors. This means that you do not want to use VMs located on a shared host as other VMs may impact your tests. The same is true for storage - shared SAN may cause some unexpected results.

OS tuning

You’ll want to check operating system settings related to the way the memory and filesystem cache are handled. In general, we want to keep both vm.dirty_ratio and vm.dirty_background_ratio low.

vm.dirty_background_ratio is the percentage of system memory that can be used to cache modified (“dirty”) pages before the background flush process kicks in. More dirty pages means more work needs to be done to clean the cache.

vm.dirty_ratio, on the other hand, is a hard limit on the memory that can be used to cache dirty pages. It can be reached if, due to high write activity, the background process cannot flush data fast enough to keep up with new modifications. Once vm.dirty_ratio is reached, all I/O activity is blocked until the dirty pages have been written to disk. The default setting here is usually 40% (it may be different in your distribution), which is pretty high for any host with a large amount of memory. For a 128GB instance, it amounts to ~51GB, which may lock your I/O for a significant amount of time, even if you are using fast SSDs.

In general, we want to see both of those variables set to low numbers, 5 - 10%, as we want background flushing to kick in early on and to keep any stalls as short as possible.

Another important system variable to tune is vm.swappiness. When using MySQL we do not want to use swap unless in dire need - swapping the InnoDB buffer pool out to disk defeats the point of having an in-memory buffer pool. On the other hand, if the alternative is an OOM kill of MySQL, we’d prefer to avoid that too. Historically, such behavior could be achieved by setting vm.swappiness to 0. Since kernel 3.5-rc1 (and this change has been backported to older kernels in some distros - CentOS, for example), that behavior has changed, and setting it to 0 prevents swapping entirely. It is therefore recommended to set vm.swappiness to 1, to allow some swapping to happen should it be the only way to keep MySQL up. Sure, it will slow down the system, but an OOM kill of MySQL is very harsh. It may result in data loss (if you do not run with full durability settings) or, in the best case scenario, trigger InnoDB recovery, a process which may take some time to complete.
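
A minimal sketch of the corresponding settings in /etc/sysctl.conf (the exact values are examples in line with the guidance above; apply them with sysctl -p):

# /etc/sysctl.conf - example values, tune for your workload
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
vm.swappiness = 1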

Another memory-related setting - ensure that you have NUMA interleave set to all. You can do it by modifying the startup script to start MySQL via:

numactl --interleave=all $command

This setting balances memory allocation between NUMA nodes and minimizes the chance that one of the nodes runs out of memory.

Memory allocators can also have a significant impact on MySQL performance. This is a larger topic and we’ll only scratch the surface here. You can choose different memory allocators to use with MySQL. Their performance differs between versions and between workloads, so the exact choice should be made only after you have performed detailed tests to confirm which one works best in your environment. The most common choices you’ll be looking into are the default glibc malloc, tcmalloc and jemalloc. You can add new allocators by installing a new package (for jemalloc and tcmalloc) and then use either LD_PRELOAD (i.e. export LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so.4.1.2") or the malloc-lib variable in the [mysqld_safe] section of my.cnf.
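
For example, using the malloc-lib approach in my.cnf (the library path follows the LD_PRELOAD example above and is distribution-dependent):

[mysqld_safe]
malloc-lib = /usr/lib/libtcmalloc_minimal.so.4.1.2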

Next, you’ll want to take a look at the disk schedulers. CFQ, which is usually the default, is tuned for a desktop workload. It doesn’t work well for a database workload. Most of the time you’ll see better results if you change it to noop or deadline. There’s little difference between those two schedulers; we found that noop is slightly better for SAN-based storage (a SAN is usually better at handling the workload, as it knows more about the underlying hardware and what’s actually stored in its cache than the operating system does). The differences are minimal, though, and most of the time you won’t go wrong with either of those options. Again, testing may help you squeeze a bit more from your system.
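
You can check and switch the scheduler at runtime through sysfs (sda is an example device; the change does not persist across reboots unless made permanent via kernel boot parameters or udev rules):

cat /sys/block/sda/queue/scheduler              # active scheduler is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler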

If we are talking about disks, the best choice for a filesystem will most often be either EXT4 or XFS - this has changed a couple of times in the past, and if you’d like to get the most out of your I/O subsystem, you’d probably have to do some testing on your own setup. No matter which filesystem you use, though, you should mount the MySQL volume with the noatime and nodiratime options - the fewer writes to the metadata, the lower the overall overhead.
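
A hedged /etc/fstab example (device and mount point are placeholders for your MySQL volume):

/dev/sdb1  /var/lib/mysql  ext4  defaults,noatime,nodiratime  0  0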

MySQL configuration tuning

MySQL configuration tuning is a topic for a whole book, it’s not possible to cover it in a single blog post. We’ll try to mention some of the more important variables here.

InnoDB Buffer Pool

Let’s start with something rather obvious - the InnoDB buffer pool. We still see, from time to time (although it becomes less and less frequent, which is really nice), that it’s not set up correctly. The defaults are way too conservative. What is the buffer pool and why is it so important? The buffer pool is memory used by InnoDB to cache data. It is used for caching both reads and writes - every page that is modified has to be loaded into the buffer pool first. It then becomes a dirty page - a page that has been modified and is not yet flushed to the tablespace. As you can imagine, such a buffer is really important for a database to perform correctly. The worse the “memory/disk” ratio is, the more I/O bound your workload will be. I/O bound workloads tend to be slow.

You may have heard the rule of thumb to set the InnoDB buffer pool to 80% of the total memory in the system. It worked in times when 8GB was a huge amount of memory, but that is not true nowadays. When calculating the InnoDB buffer pool size, you need to take into consideration the memory requirements of the rest of MySQL (assuming that MySQL is the only application running on the server). We are talking here, for example, about all those per-connection or even per-query buffers like the join buffer, or the in-memory temporary table max size. You also need to take into consideration the maximum allowed connections - more connections means more memory usage.

For a MySQL database server with 24 to 32 cores and 128GB of memory, handling up to 20-30 simultaneously running connections and up to a few hundred simultaneously connected clients, we’d say that reserving 10-15GB of memory for everything outside the buffer pool should be enough; if you want to stay on the safe side, 20GB should be plenty. In general, unless you know the behaviour of your database, sizing the buffer pool is somewhat a process of trial and error. At the time of writing, the InnoDB buffer pool is not a dynamic variable, so changes require a restart; it is therefore safer to err on the side of “too small”. This will change with MySQL 5.7, as Oracle introduced a dynamically resizable buffer pool, which will make tuning much easier.
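To make this concrete, a sketch for the hypothetical 128GB machine above (the exact figure is an assumption - adjust to what you actually measure):

[mysqld]
# 128GB total, minus ~20GB reserved for session buffers, connections and the OS
innodb_buffer_pool_size = 100G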

MySQL uses many buffers other than the InnoDB buffer pool; they are controlled by the variables join_buffer_size, sort_buffer_size, read_buffer_size and read_rnd_buffer_size. These buffers are allocated per session (with the exception of the join buffer, which is allocated per JOIN). We’ve seen MySQL setups with those buffers set to hundreds of megabytes - it feels natural that by increasing join_buffer_size you’d expect your JOINs to perform faster.

By default those variables have rather small values, and that actually makes sense - we’ve seen that low settings, up to 256KB, can be significantly faster than larger values like 4MB. It is hard to tell the exact reason for this behavior; most likely there are many. One, definitely, is the fact that Linux changes the way memory is allocated: for chunks up to 256KB it uses malloc(), for larger ones - mmap(). What’s important to remember is that any change to those variables has to be backed by benchmarks confirming that the new setting is indeed the correct one. Otherwise you may be reducing your performance instead of increasing it.
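If benchmarks do show that one particular query benefits from a bigger buffer, a common pattern is to raise it only for that session instead of globally - a sketch (the query itself is a placeholder):

-- keep the small global default; bump the buffer just for this one session
SET SESSION join_buffer_size = 4 * 1024 * 1024;
SELECT /* the benchmarked, join-heavy query goes here */ 1;
SET SESSION join_buffer_size = DEFAULT;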

InnoDB Durability

Another variable that has a significant impact on MySQL performance is innodb_flush_log_at_trx_commit. It governs to what extent InnoDB is durable. The default (1) ensures your data is safe even if the database server gets killed - under any circumstances there’ll be no data loss. The other settings say that you may lose up to 1s of transactions if the whole database server crashes (2), or up to 1s of transactions if just the mysqld process gets killed (0).

Full durability is obviously a great thing to have, but it comes at a significant price - the I/O load is much higher because a flush operation has to happen after each commit. Therefore, in some setups, it’s very popular to reduce durability and accept the risk of data loss under certain conditions. That’s true for master - multiple slaves setups where, usually, it’s perfectly fine to have one slave rebuilding after a crash because the rest of them can easily handle the workload. The same is true for Galera clusters - the whole cluster works as a single instance, so even if one node crashes and somehow loses its data, it can resync from another node in the cluster. It’s not worth paying the high price for full durability (especially as writes in Galera are already more expensive than in regular MySQL) when you can easily recover from such situations.
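If, after weighing those trade-offs, you decide to relax durability, the variable is dynamic - for example:

-- accept up to ~1s of lost transactions if the whole server crashes
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

Remember to persist the same value in my.cnf so it survives a restart.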

I/O-related settings

Other variables which may have a significant impact on some workloads are innodb_io_capacity, innodb_io_capacity_max and innodb_lru_scan_depth. Those variables define the number of disk operations that can be done by InnoDB’s background threads to, e.g., flush dirty pages from the InnoDB buffer pool. The default settings are conservative, which is fine most of the time. If your workload is very write-intensive, you may want to tune those settings and check that you are not preventing InnoDB from using your I/O subsystem fully. This is especially true if you have fast storage: an SSD or a PCIe flash card.

When it comes to disks, innodb_flush_method is another setting that you may want to look at. We’ve seen visible performance gains by switching this setting from the default fdatasync to O_DIRECT. Such a gain is clearly visible on setups with a hardware RAID controller backed by a BBU. On the other hand, when it comes to EBS volumes, we’ve seen better results using O_DSYNC. Benchmarking here is very important to understand which setting is better in your particular case.
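By way of illustration - the numbers below are assumptions for fast local SSD storage, not recommendations, so benchmark before adopting anything like them:

[mysqld]
# let background flushing use more of the storage's IOPS headroom
innodb_io_capacity     = 2000
innodb_io_capacity_max = 4000
innodb_lru_scan_depth  = 2000
# bypass the OS cache; often a win with a BBU-backed RAID controller or local SSD
innodb_flush_method    = O_DIRECT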

InnoDB Redo Logs

The size of InnoDB’s redo logs is also something you may want to take a look at. It is governed by innodb_log_file_size and innodb_log_files_in_group. By default we have two logs in a group, each ~50MB in size. Those logs are used to store write transactions and they are written sequentially. The main problem here is that MySQL must not run out of space in these logs; if they are almost full, it has to stall all write activity and focus on flushing the data to the tablespaces. Of course, this is very bad for the application, as no writes can happen during this time. This is one of the reasons why the InnoDB I/O settings we discussed above are so important. We can also help by increasing the redo log size through innodb_log_file_size. The rule of thumb is to set the logs large enough to cover at least 1h of writes. We discussed InnoDB I/O settings in more detail in an earlier post, where we also covered a method for calculating InnoDB redo log size.
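One common way to do that calculation is to sample InnoDB’s log sequence number twice, a minute apart, and extrapolate - a rough sketch:

mysql> pager grep 'Log sequence number'
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
mysql> nopager

The difference between the two numbers, multiplied by 60, approximates the redo bytes written per hour; make innodb_log_file_size * innodb_log_files_in_group at least that large.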

Query Cache

The MySQL query cache is also often “tuned” - this cache stores SELECT statements, keyed by a hash, together with their result sets. There are two problems with it. First, the cache may be frequently invalidated: whenever any DML is executed against a given table, all results involving that table are removed from the query cache, which seriously limits its usefulness. Second, the query cache is protected by a mutex and access to it is serialized - a significant drawback and limitation for any workload with higher concurrency. Therefore the strongly recommended way to “tune” the MySQL query cache is to disable it altogether, by setting query_cache_type to OFF. It’s true that in some cases it can be of some use, but most of the time it’s not. Instead of relying on the MySQL query cache, you can leverage an external system like Memcached or Redis to cache data.
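A sketch of switching it off completely in my.cnf (query_cache_type accepts 0 and OFF interchangeably; zeroing the size also releases the allocated buffer):

[mysqld]
query_cache_type = 0
query_cache_size = 0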

Internal contention handling

Another set of settings you may want to look at are the variables that control how many instances/partitions of a given structure MySQL should create. We are talking here about the variables innodb_buffer_pool_instances, table_open_cache_instances, metadata_locks_hash_instances and innodb_adaptive_hash_index_partitions. Those options were introduced when it became clear that, for example, a single buffer pool or a single adaptive hash index can become a point of contention for workloads with high concurrency. Once you find out that one of those structures has become a pain point (we discussed how you can catch these situations in an earlier blog post), you’ll want to adjust the variables. Unfortunately, there are no rules of thumb here. It’s suggested that a single buffer pool instance should be at least 2GB in size, so for smaller buffer pools you may want to stick to this limit. For the other variables, if we are talking about contention issues, you will probably increase the number of instances/partitions of those data structures, but there are no rules on how to do that - you need to observe your workload and decide at which point contention is no longer an issue.
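Purely as an illustration - the values below are assumptions, and the right ones can only come from observing your own contention:

[mysqld]
# e.g. a 32GB buffer pool split into 4GB instances
innodb_buffer_pool_instances  = 8
table_open_cache_instances    = 16
metadata_locks_hash_instances = 8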

Other settings

There are a few other settings you may want to look at; some are most effectively applied at setup time, while others can be changed dynamically. These settings won’t have a large impact on performance (sometimes the impact may even be negative), but it is still important to keep them in mind.

max_connections - on the one hand you want to keep it high enough to handle any incoming connections. On the other hand, you don’t want it too high, as most servers cannot handle hundreds or more simultaneously running connections. One way around this problem is to implement connection pooling on the application side, or to use a load balancer like HAProxy to throttle the load.

log_bin - if you are using MySQL replication, you need binary logs enabled. Even if you do not use replication, it’s very handy to keep them enabled, as they can be used for point-in-time recovery.

skip_name_resolve - this variable decides whether a DNS lookup is performed on the host a connection comes from. If DNS lookups are enabled, FQDNs can be used as hosts in MySQL grants; if not, only users defined with IP addresses as hosts will work. The problem with having DNS lookups enabled is that they introduce extra latency. DNS servers can also stop responding (because of a crash or network issues), in which case MySQL won’t be able to accept any new connections.

innodb_file_per_table - this variable decides whether InnoDB tables are created in separate tablespaces (when set to 1) or in the shared tablespace (when set to 0). It’s much easier to manage MySQL when each InnoDB table has its own tablespace. For example, with separate tablespaces you can easily reclaim disk space by dropping a table or partition. With the shared tablespace this doesn’t work - the only way of reclaiming the disk space is to dump the data, clean the MySQL data directory and then reload the data. Obviously, this is not convenient.
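Pulling those four together, a sketch of a my.cnf fragment (values are illustrative assumptions):

[mysqld]
max_connections       = 512
log_bin               = mysql-bin
skip_name_resolve
innodb_file_per_table = 1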

That is it for now. As we mentioned at the beginning, tweaking these settings might not make your MySQL database blazing fast - you are more likely to speed it up by tuning your queries. But they should still have a visible impact on overall performance. Good luck with the tuning work!


MySQL::Sandbox 3.0.66 - improved usability and support for newest releases


The latest MySQL Sandbox, version 3.0.66, is out. It has a few new features (as always, when I find myself doing the same thing many times, I script it) and improved support for the latest releases of MySQL. You can now install, among other versions, MySQL 5.7.8 and MariaDB 10.1.x.


Some notable additions in this release are in the scripts that are created and customized for each sandbox. There are many of them and when one more arrives, it's easy to overlook it. So, here are the new arrivals.


./show_binlog


When I am troubleshooting replication behavior, I often need to inspect the latest binary log. The sandbox has a shortcut that gives me the right version of mysqlbinlog for the deployment:


./my sqlbinlog data/mysql-bin.000002 |less

(Notice the blank between “my” and “sqlbinlog”.)

However, this shortcut is still long to type, and requires that I find the latest binlog. So, now, there is a show_binlog script that does exactly that. It gets the latest binary log and shows it using mysqlbinlog --verbose.


By default, the script gets the files from ./data/mysql-bin.[0-9]*, but I can indicate something different.


I can invoke the script with a number, and get the corresponding binlog


./show_binlog 000002

(Will show ./data/mysql-bin.000002)


And I can then pipe it through a pager


./show_binlog | less  
./show_binlog | vim -

Or pipe it to something else:


./show_binlog | grep -i 'create table'

./show_relaylog


This script is similar to show_binlog, but it shows, as you can guess, a relay log instead.

It accepts two optional arguments. The first is the base name of the relay log (by default "mysql-relay") and the second a number to identify the log, same as we have seen for show_binlog.

This is particularly useful when we are dealing with multi-source replication, where the fan-in slave has several relay-groups.


./show_relaylog mysql-relay-node2 | grep -i create

./add_option


The sandbox already offers the ability to restart the server with new parameters.


./restart --master-info-repository=table \
--relay-log-info-repository=table \
--gtid_mode=ON \
--enforce-gtid-consistency

This is convenient and easy, but the option is used only once. When you restart the sandbox, it’s lost. The script add_option solves this problem.


./add_option master-info-repository=table \  
relay-log-info-repository=table \
gtid_mode=ON \
enforce-gtid-consistency
# option 'master-info-repository=table' added to configuration file
# option 'relay-log-info-repository=table' added to configuration file
# option 'gtid_mode=ON' added to configuration file
# option 'enforce-gtid-consistency' added to configuration file
. sandbox server started

./json_in_db


This script has been around for some time. It’s a script that loads the contents of the sandbox JSON description into a table.

The new thing about this script is that, if you use MySQL 5.7.8 or later, the table will have a JSON column instead of a plain TEXT.


Let’s have a look.


$ ./json_in_db  
connection.json saved to test.connection_json

This does not look like much. But inside the table we will find something interesting.


$ ./use test  
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.7.8-rc MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost] {msandbox} (test) > desc connection_json;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| t     | json | YES  |     | NULL    |       |
+-------+------+------+-----+---------+-------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > select json_extract(t, '$.users.admin') from connection_json\G
*************************** 1. row ***************************
json_extract(t, '$.users.admin'): {"password": "msandbox", "username": "root@localhost", "privileges": "all, with grant option"}
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > select json_extract(t, '$.users.read_write') from connection_json\G
*************************** 1. row ***************************
json_extract(t, '$.users.read_write'): {"password": "msandbox", "username": "msandbox_rw@127.%", "privileges": "SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,INDEX,ALTER,SHOW DATABASES,CREATE TEMPORARY TABLES,LOCK TABLES, EXECUTE"}
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > select json_extract(t, '$.origin') from connection_json\G
*************************** 1. row ***************************
json_extract(t, '$.origin'): {"binaries": "/Users/gmax/opt/mysql/5.7.8", "mysql_version": "5.7", "mysql_sandbox_version": "3.0.62"}
1 row in set (0.00 sec)

As you can see, you can use JSON extraction syntax to manipulate the data inside.


./mycli and $MYSQL_EDITOR

mycli is a convenient command line client for MySQL that has many useful features. It would be nice to use it with MySQL Sandbox. You can, of course, say something like
mycli --user=msandbox --pass=msandbox --port=5708 --socket=/tmp/mysql_sandbox_5708.sock

But to do this you need to remember the port number and there is way too much to write. Starting with version 3.0.66, MySQL Sandbox includes in each sandbox a ./mycli script, which you can invoke instead of ./use.

Just type ./mycli and, provided that mycli is installed, it will be invoked with the appropriate options.

And there is another addition, since we are dealing with MySQL clients. The ./use script now recognizes a MYSQL_EDITOR variable. If you set this variable to point to your favorite client, it will be used instead of the "mysql" client from the sandbox version. Whatever you put in that variable, though, should recognize the option --defaults-file, because this is how the client is invoked.

Thanks to Morgan Tocker for suggesting these two features.

Update: the author of mycli responded to a feature request, and made mycli able to use a mysql configuration file (with --defaults-file). Thus, now you can use 'mycli' by setting the MYSQL_EDITOR variable.

$ export MYSQL_EDITOR=mycli
$ ./use
Version: 1.2.0
Chat: https://gitter.im/dbcli/mycli
Mail: https://groups.google.com/forum/#!forum/mycli-users
Home: http://mycli.net
Thanks to the contributor - Nathan Taggart
mysql msandbox@localhost:(none)>

However, be aware that mycli is an interactive client. It does not support "-e" or batch mode. If you need those, it's better to stick to the two scripts: ./use to invoke the default MySQL client for batch mode and "-e", and ./mycli for interactive usage.



LinuxCon North America in Seattle


I’m excited to be at LinuxCon North America in Seattle next week (August 17-19 2015). I’ve spoken at many LinuxCon events, and this one won’t be any different. Part of the appeal of the conference is being able to visit a new place every year.

MariaDB Corporation will have a booth, so you’ll always be able to see friendly Rod Allen camped there. In between talks and meetings, there will also be Max Mether and quite possibly all the other folk that live in Seattle (Kolbe Kegel, Patrick Crews, Gerry Narvaja).

For those in the database space, don’t forget to come attend some of our talks (represented by MariaDB Corporation and Oracle Corporation):

  1. MariaDB: The New MySQL is Five Years Old & Everywhere by Colin Charles
  2. MySQL High Availability in 2015 by Colin Charles
  3. Handling large MySQL and MariaDB farms with MaxScale by Max Mether
  4. The Proper Care and Feeding of a MySQL Database for a Linux Administrator by Dave Stokes
  5. MySQL Security in a Cloudy World by Dave Stokes

See you in Seattle soon!



Election Money and Data


So far, the early drama of the 2016 presidential race has been more silly than substantial. Donald Trump has been a (successful?!) one-man show, Hillary has been playing coy, some candidates have been getting trolled (much to Reddit’s delight), and, this week, Fox News has announced its first Republican debate line-up, which somehow seems more akin to the guest list for a popular 7th grader’s slumber party than presidential grooming. But there’s been one topic of conversation that’s been deadly serious and that is sure to stay on people’s minds: campaign finance.


Incredibly vast sums of money are now flowing in anticipation of next November, and it’s the entire country’s concern where that cash comes from, where it’s going, how it’s being spent, and how it’s being tracked. Data fans, now’s your time to heed Jack’s advice: help your fellow citizens stay afloat in this ocean of information. At VividCortex, we understand how suffocating a vast data field can seem if an analyst doesn’t have the proper tools for navigation. And this, a presidential election, is a time when it’s absolutely vital to the public’s interest that data be both tracked and clearly understood.

In 2010, the Supreme Court’s decision in Citizens United v. FEC codified new rules for how donors can contribute to campaigns: basically, Super PACs are free to grow, without limit. Now, Americans’ eyes are on the funds. And that means watching data and databases. The New York Times has been publishing extensive reports and timelines that make some of that information visual and legible for the average reader.


But that’s hardly cutting edge. For instance, this 2009 post from R-bloggers used MySQL and R to track and visualize the progression of contributions to Obama’s first campaign.

For more current, granular reports, any user can go directly to the source: the Federal Election Commission’s website, where all campaign finance info is made public. Below are snippets of Clinton and Bush’s individual campaign finance report cards. (Or you can try a search yourself.)

Clinton’s Finance Report Card Summary


Bush’s Finance Report Card Summary


The info on the Super PACs themselves is more complicated. Here’s the list of “Independent Expenditure-Only Committees.” They’re technically independent of any individual candidate, but the nuances of that technicality are another controversial topic. Regardless, they hold most of the money. Want to see for yourself? Take a look at the mid-year report for Right to Rise, the PAC that supports Bush, the candidate currently supported by the most funds. The report is 1,656 pages long. Everything really is bigger in Texas - even the data.

This is why we consider it crucial that data not just be collected, but also managed, streamlined, and presented in a way that it can be understood. We Americans might consider it self-evident that all people are created equal, but here at VividCortex it’s also obvious that not all data are created that way: some pieces of information are more valuable than others. Sometimes, within 1,656 pages, all you’re really looking for is a paragraph, a sentence, a word, or a single number next to a dollar sign. And at times like this, DBAs aren’t the only people who need to understand what big data means.

Be on the lookout for more VividCortex posts on Databases and Democracy and candidate finances in the future – the campaign season is just getting started.



Server Monitoring with Munin and Monit on Debian 8 (Jessie)

In this article, I will describe how you can monitor your Debian 8 server with Munin and Monit. Munin produces nifty little graphics about nearly every aspect of your server without much configuration, whereas Monit checks the availability of services like Apache, MySQL and Postfix and takes the appropriate action, such as a restart, if it finds a service is not behaving as expected. The combination of the two gives you full monitoring: graphics that let you recognize current or upcoming problems, and a watchdog that ensures the availability of the monitored services.
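As a taste of the watchdog half, a minimal Monit check for MySQL might look like this sketch (paths assume a stock Debian install):

# /etc/monit/conf.d/mysql - restart mysqld if its socket stops responding
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
  start program = "/usr/sbin/service mysql start"
  stop program  = "/usr/sbin/service mysql stop"
  if failed unixsocket /var/run/mysqld/mysqld.sock then restart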

Use Docker To Explore MySQL 5.7.8-rc


Recently I have been using Ansible and Vagrant to test the MySQL 5.7 release candidates, but several of you asked about using Docker. The hardest part of this process will be installing Docker on your operating system of choice, and even that is fairly easy. I am using Ubuntu 14.04 LTS, where the installation was a single wget command.

Next comes the magic. Docker will download the MySQL 5.7.8-rc image if it is not already loaded locally and then start it.
docker run -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql:5.7.8-rc
The quick translation of the above is that we are telling Docker to set up a container named mysql on port 3306 with a root password of secret, and to run it all as a daemon using MySQL version 5.7.8-rc.

And MySQL 5.7.8-rc is running. But to find it you will have to ask Docker where the server is running.

docker inspect mysql | grep -i ipad
        "IPAddress": "172.17.0.12",
        "SecondaryIPAddresses": null,

Using the local MySQL client, it is easy to connect to the 5.7.8-rc server.

mysql -h 172.17.0.12 -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.8-rc MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.6.26, for Linux (x86_64) using  EditLine wrapper

Connection id:          2
Current database:       
Current user:           root@172.17.42.1
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ; 
Server version:         5.7.8-rc MySQL Community Server (GPL)
Protocol version:       10
Connection:             172.17.0.12 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 12 sec

Threads: 1  Questions: 5  Slow queries: 0  Opens: 105  Flush tables: 1  Open tables: 98  Queries per second avg: 0.416
--------------

And now there is an instance of 5.7.8-rc to use. Just add in your schemas!
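Since the run command published the container's port to the host with -p 3306:3306, you can also skip the docker inspect step and connect through the host itself (assuming nothing else is listening on local port 3306):

mysql -h 127.0.0.1 -P 3306 -u root -p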

Note that by default 5.7.8 would rather assign a random password; forcing a password as above produces an insecure install (--initialize-insecure).




MySQL QA Episode 12: My server is crashing… Now what? For customers or users experiencing a crash


My server is crashing… Now what?

This special episode in the MySQL QA Series is for customers or users experiencing a crash.

  1. A crash?
    1. Cheat sheet: https://goo.gl/rrmB9i
    2. Server install & crash. Note this is a demonstration: do not action this on a production server!
      sudo yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
      sudo yum install -y Percona-Server-client-56 Percona-Server-server-56
      sudo service mysql start
  2. Gimme Stacks!
    1. Debug info packages (can be installed on a production system, but do match your 5.5, 5.6 or 5.7 version correctly; see the gdb sketch after this list)
      sudo yum install -y Percona-Server-56-debuginfo
  3. Testcase?
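Once the matching debuginfo package is installed, a minimal sketch of extracting a backtrace from a core dump with gdb (binary and core file paths are assumptions - adjust them for your system):

gdb --batch -ex 'bt full' /usr/sbin/mysqld /var/lib/mysql/core.12345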

Full-screen viewing @ 720p resolution recommended.

The post MySQL QA Episode 12: My server is crashing… Now what? For customers or users experiencing a crash appeared first on MySQL Performance Blog.



Facebook's Charity Majors Says VividCortex Is Impressive


A few months ago, we featured Charity Majors, the production engineering manager for Parse at Facebook, on Brainiac Corner. We are featuring Charity and her expertise once again. This time, though, she is reviewing VividCortex: from installation to problem solving to a feature wishlist.

One of our favorite takeaways: “And VividCortex is a DB monitoring system built by database experts. They know what information you are going to need to diagnose problems, whether you know it or not. It’s like having a half a DBA on your team.” And without further ado…

Parse review of VividCortex

Many years ago, when I was but a wee lass trying to upgrade mysql and having a terrible time with performance regressions, Baron and the newly-formed Percona team helped me figure my shit out. The Percona toolset (formerly known as Maatkit) changed my life. It helped me understand what was going on under the hood in my database for the very first time, and basically I’ve been playing with data ever since. (Thanks, I think?)

I’ve been out of the mysql world for a while now, mostly doing Mongo, Redis, and Cassandra these days. So when I heard that Baron’s latest startup VividCortex was entering the NoSQL monitoring space, I was intrigued.

To be perfectly clear, I don’t need VividCortex at the moment, and do not use it for my day-to-day work. Parse was acquired by Facebook two years ago, and the first thing we did was pipeline all of our metrics into the sophisticated Facebook monitoring systems. Facebook’s powerful tools work insanely well for what we need to do. That said, I was eager to take VividCortex for a spin.

Parse workload

First, a little bit of background on Parse. We are a complete framework for building mobile apps. You can use our APIs and SDKs to build beautiful, fully featured apps with core storage, analytics, push notifications, cloud code, etc., without needing to build your own backend. We currently host over half a million apps, and all mobile application data is stored in MongoDB using the RocksDB storage engine.

We face some particular challenges with our MongoDB storage layer. We have millions of collections and tens of millions of indexes, which is not your traditional Mongo use case. Indexes are intelligently auto-generated for apps based on real query patterns and the cardinality of their data. Parse is a platform, which means we have very little control over the types of queries that enter our systems. We often have to do things like transparently scale or optimize apps that have just been featured on the front page of the iTunes store, or handle spiky events, or figure out complex query planner caching issues.

Basically, Parse is a DBA’s worst nightmare or most delicious fantasy, depending on how you feel about tracking down crazy problems and figuring out how to solve them naively for the entire world.

SO.

VividCortex. I was really curious to see if it could tell me anything new about our systems, given that we have already extensively instrumented them using the sophisticated monitoring platforms at Facebook.

Setup

The setup flow for VividCortex is a delight. It took less than two minutes from generating an account to capturing all metrics for a few machines (the trial period lets you monitor 5 nodes for 14 days). Signup is fun, too: you get a cute little message from the VividCortex team, a tutorial video, and a nudge for how to get live chat support.

I chose to install the agent on each node. You have the option of installing locally or remotely, but you have to install one agent process per monitored node. I sorta wish I could install just one agent, or one per replica set with autodetection for all nodes in the replica set, but as a storage-agnostic monitoring layer this decision makes sense. If I was running this in production, I would probably consider making this part of the chef bootstrap process. It has a supervisor process that restarts the agent if it dies, and the agent polls the VividCortex API to detect any server-side instructions or configuration changes.

I had to input the DB auth credentials, but it automatically detected what type of DB I was running and enabled all the right monitoring plugins — nice touch.

Agent

The agent works by capturing pcaps off the network, reconstructing queries or transactions, and also frequently running “SHOW INNODB STATUS” or “db.serverStatus()” or whatever the equivalent is for that db.

The awesome thing about monitoring over the network is that this gives VividCortex second-level granularity for metrics gathering, and it has less potential impact on your production systems. At Parse we do all our monitoring by tailing logs, reprocessing the logs into a structured format, and aggregating the metrics after that (whether via ganglia or FB systems). This means we have minute-level granularity and often a delay of a couple of minutes before logs are fully processed and stored. On the one hand this means we can use the same unified storage systems for all of our structured logs and metrics, but on the other hand it takes a lot more work upfront to parse the logs, structure the data, and ship it off for storage.

Second-level granularity isn’t a thing that I’ve often longed to have, but it could be that this is just because I’ve never had it before. Also: log files can lie to you. There’s a long-standing bug in MongoDB where the query time logged to disk doesn’t include the time spent waiting to acquire the lock. If you were timing this yourself over the wire, you wouldn’t have this problem. Log files also incur a performance penalty that can be substantial.

Query families

The most impressive feature of VividCortex is really the query family normalization and “top queries” dashboard. As a scrappy startup with limited engineering cycles, this is the most important thing for you to pay attention to. It’s not particularly easy to implement, and every company past a certain scale ends up reinventing the same wheel again. We built something very similar to this at Parse a while back. Before we had it we spent a lot of time tailing and sorting logs, looking for slow queries, running mongotop, sorting by scanned documents and read/write lock time held, and other annoying firefighting techniques.

With the top queries dashboard, you can just watch the list or generate a daily report. Or better yet, train your developers to check it themselves after they ship a change. :)

VividCortex also has a really neat “compare queries” feature, which lets you compare the same query over two different time ranges. This is definitely something we don’t have now, although we can kinda fake it. The “adaptive fault detection” also looks basically like magic, although not yet implemented for MongoDB (it’s a patent-pending method that VividCortex has developed for detecting database stalls).

Live support

Ok, I don’t usually use this kind of thing, but I actually love VividCortex’s built-in live support chat. The techs were incredibly friendly and responsive. We ran into some strange edge cases due to the weirdness of our traffic characteristics, which caused some hiccups getting started. The people manning the chat systems were clearly technical contributors with deep knowledge of the systems, and they were very straightforward about what was happening on the backend, what they had to do to fix it, and when we could expect to get back up and running. Love it.

Things I wish it had

  • I wish it was easier to group query families, queue lengths and load by replica set, not just by single nodes. If you’re sending a lot of queries to secondaries, you need those aggregate counts. You can get around this by creating an “environment” by hand for each replica set (thanks @dbsmasher!), but that’s gonna get painful if you have more than a few replica sets, and it won’t dynamically adjust the host list for a RS when it changes.
  • Comments attached to query families. It’s really nice to be able to attach a comment to the query in the application layer, for example with the line number of the code that’s issuing the query.
  • Some sort of integration with in-house monitoring systems. Like, maybe a REST API that a Nagios check could query for alerting on critical thresholds. This is obviously a pretty complicated request, but my heart longs for it.

Summary

This might be a good time to mention that I’ve always been fairly prejudiced against outsourcing my monitoring and metrics. I hate being paged by multiple systems or having to correlate connected incidents across disparate sources of truth. I still think monitoring sprawl and source-of-truth proliferation is a serious issue for anyone who decides to outsource any or all of their monitoring infrastructure.

But you know what? I’m getting really tired of building monitoring systems over and over again. If I never have to build out another ganglia or graphite system I will be pretty damn happy. Especially since the acquisition, I’ve come to see how wonderful it is when you can let experts do their thing so you don’t have to. And VividCortex is a DB monitoring system built by database experts. They know what information you are going to need to diagnose problems, whether you know it or not. It’s like having a half a DBA on your team.

Monitoring, for me, is starting to cross over that line between “key competency that you should always own in-house” to “commodity service that you should outsource to other companies that are better at it so you can concentrate on your own core product.” In a couple of years, I think we’re all going to look at building our own monitoring pipelines the same way we now look at running our own mail systems and spam filters: mildly insane.

I do still think there are a lot of efficiencies to be had in aggregating all metrics in the same space. For that reason, I would love to see more crossover and interoperability between deep specialists like VividCortex and more generalized offerings like Interana and Datadog, or even on-prem solutions like syncing VividCortex data back to crappy local ganglia instances.

But if I were to go off and do a new startup today? VividCortex would be a really useful tool to have, no question.

Thanks, Charity, for the thoughtful, flattering, and constructive review! See for yourself how VividCortex can revolutionize your monitoring with a free trial.



Installing Lighttpd with PHP5 (PHP-FPM) and MySQL on Debian 8 (Jessie)

Lighttpd is a secure, fast, standards-compliant web server designed for speed-critical environments. This tutorial shows how you can install Lighttpd on a Debian 8 (Jessie) server with PHP5 support (through PHP-FPM) and MySQL as a database server. PHP-FPM (FastCGI Process Manager) is a new PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites. I use PHP-FPM in this tutorial instead of Lighttpd's spawn-fcgi.
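As a preview of the wiring the tutorial builds up to, handing .php requests to PHP-FPM in lighttpd looks roughly like this sketch (the socket path assumes Debian's default php5-fpm configuration):

# enable mod_fastcgi and pass .php requests to the PHP-FPM unix socket
server.modules += ( "mod_fastcgi" )
fastcgi.server += ( ".php" =>
    (( "socket" => "/var/run/php5-fpm.sock",
       "broken-scriptfilename" => "enable"
    ))
)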