
MySQL 5.7, utf8mb4 and the load data infile

In this post, I’ll discuss how MySQL 5.7 handles UTF8MB4 and the LOAD DATA INFILE statement.

Many of my clients have told me that they do not like using the LOAD DATA INFILE statement and prefer to manually parse and load the data. The main reason they do it is issues with the character sets, specifically UTF8MB4 and the load data infile. This was surprising to me, as nowadays everyone uses UTF8. MySQL 5.7 (as well as 5.6) has full support for UTF8MB4, which should fix any remaining issues (i.e., you can now load 4-byte characters such as emoji).

Last week I was investigating an interesting case where we were loading data and got the following error:

$ mysql -e 'select version()'
+-----------+
| version() |
+-----------+
| 5.7.12    |
+-----------+
$ mysql -vvv testdb < load_data.sql
ERROR 1300 (HY000) at line 1: Invalid utf8mb4 character string: 'Casa N'

The load data statement:

LOAD DATA LOCAL INFILE
                           'input.psv'
                        REPLACE INTO TABLE
                            input
                        CHARACTER SET
                            utf8mb4
                        FIELDS
                            TERMINATED BY '|'
                        LINES
                            TERMINATED BY '\r\n'
                        IGNORE
                            1 LINES

The table uses the correct character set (global character set applied to all varchar fields):

CREATE TABLE `input` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  ...
  `address` varchar(255) DEFAULT NULL,
  ...
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The string looked like “Casa Nº 24”. So this should be N + U+00BA (MASCULINE ORDINAL INDICATOR, UTF8 hex code: c2ba). When I do “less input.psv”, it shows N<BA> 24. So why can’t MySQL load it?

After further investigation, we discovered that the original encoding is not UTF8. We found out by running:

$ file -i input.psv
input.psv: text/plain; charset=iso-8859-1

So the code <BA> was misleading. Also, when I got the actual character from the file, it was just one byte (UTF8 for this character should be two bytes). When MySQL parsed the UTF8 input file, it found only the first part of the multibyte UTF8 code and stopped with an error.

The original character in hex is “ba”:

$ xxd -p char_ascii
ba0a

(0a is a newline, and “ba” is the “masculine ordinal indicator”)

The UTF8 equivalent:

$ xxd -p char_utf8
c2ba0a

This is now two bytes (plus the newline): c2ba

To solve the problem, we can simply change CHARACTER SET utf8mb4 to CHARACTER SET latin1 in the LOAD DATA INFILE statement. This fixed the issue:
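
For reference, the corrected statement differs from the original only in the character set clause:

LOAD DATA LOCAL INFILE
                           'input.psv'
                        REPLACE INTO TABLE
                            input
                        CHARACTER SET
                            latin1
                        FIELDS
                            TERMINATED BY '|'
                        LINES
                            TERMINATED BY '\r\n'
                        IGNORE
                            1 LINES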

Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0
mysql> set names utf8mb4;
Query OK, 0 rows affected (0.00 sec)
mysql> select address from input;
+--------------------------------+
| address                        |
+--------------------------------+
| Casa Nº 24 ................... |
...
+--------------------------------+
2 rows in set (0.00 sec)

Another option would be to detect the character set encoding (the file utility can do it, as shown above) and convert the file to UTF8 with iconv.
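
For example, a minimal conversion step before loading (assuming the file really is ISO-8859-1, as the file utility reported above) could be:

$ iconv -f iso-8859-1 -t utf-8 input.psv > input_utf8.psv

The converted file can then be loaded with CHARACTER SET utf8mb4 as originally intended.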

But it worked before…?

It worked a bit differently in MySQL 5.6:

$ mysql -e 'select version()'
+-------------+
| version()   |
+-------------+
| 5.6.25-73.0 |
+-------------+
$ mysql -vvv testdb < load_data.sql
...
Query OK, 2 rows affected, 2 warnings (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 2
--------------
show warnings
--------------
+---------+------+---------------------------------------------------------------------+
| Level   | Code | Message                                                             |
+---------+------+---------------------------------------------------------------------+
| Warning | 1366 | Incorrect string value: '\xBA 24 ...' for column 'address' at row 1 |
| Warning | 1366 | Incorrect string value: '\xBA 24 ...' for column 'address' at row 2 |
+---------+------+---------------------------------------------------------------------+
2 rows in set (0.00 sec)

MySQL 5.7 is more strict and doesn’t allow you to insert data in the wrong format. However, it is not 100% consistent: for some characters, MySQL 5.7 will only throw a warning when strict SQL mode is disabled.

Another character that caused the same issue was \xC9. When loading it into MySQL 5.7 with the default sql_mode (ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION), it throws an error:

ERROR 1366 (HY000) at line 1: Incorrect string value: '\xC9' for column 'address' at row 1

When strict mode is disabled, it defaults to a warning instead:

mysql> set global sql_mode = '';
Query OK, 0 rows affected (0.00 sec)
Query OK, 2 rows affected, 1 warning (0.00 sec)
Records: 1  Deleted: 1  Skipped: 0  Warnings: 1
--------------
show warnings
--------------
+---------+------+--------------------------------------------------------------+
| Level   | Code | Message                                                      |
+---------+------+--------------------------------------------------------------+
| Warning | 1366 | Incorrect string value: '\xC9' for column 'address' at row 1 |
+---------+------+--------------------------------------------------------------+
1 row in set (0.00 sec)

Emoji in MySQL

With UTF8MB4 support (in MySQL 5.6 and 5.7), you can also insert a little dolphin into a MySQL table:

CREATE TABLE `test_utf8mb4` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `v` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
mysql> set names utf8mb4;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into test_utf8mb4 (v) values ('Dolphin:🐬');
Query OK, 1 row affected (0.00 sec)
mysql> select * from test_utf8mb4;
+----+--------------+
| id | v            |
+----+--------------+
|  1 | Dolphin:🐬   |
+----+--------------+
1 row in set (0.00 sec)

This should help you clear up issues with UTF8MB4 and the load data infile. Have fun!



MySQL 8.0


If you haven’t heard the news yet, MySQL 8.0 is apparently the next release of the world-famous database server.

Obviously abandoning plans to name the next release 5.8, Percona Server’s upstream provider relabelled all 5.8-related bugs to 8.0 as follows:

Reported version value updated to reflect release name change from 5.8 to 8.0

What will MySQL 8.0 bring to the world?

While lossless RBR has been suggested by Simon Mudd (for example), the actual feature list (except a Boost 1.60.0 upgrade!) remains a secret.

As far as bug and feature requests go, a smart Google query revealed which bugs are likely to be fixed in (or are feature requests for) MySQL 8.0.

Here is the full list:

  • MySQL Bug #79380: Upgrade to Boost 1.60.0
  • MySQL Bug #79037: get rid of dynamic_array in st_mysql_options
  • MySQL Bug #80793: EXTEND EXPLAIN to cover ALTER TABLE
  • MySQL Bug #79812: JSON_ARRAY and JSON_OBJECT return …
  • MySQL Bug #79666: fix errors reported by ubsan
  • MySQL Bug #79463: Improve P_S configuration behaviour
  • MySQL Bug #79939: default_password_lifetime > 0 should print …
  • MySQL Bug #79330: DROP TABLESPACE fails for missing general …
  • MySQL Bug #80772: Excessive memory used in memory/innodb …
  • MySQL Bug #80481: Accesses to new data-dictionary add confusing …
  • MySQL Bug #77712: mysql_real_query does not report an error for …
  • MySQL Bug #79813: Boolean values are returned inconsistently with …
  • MySQL Bug #79073: Optimizer hint to disallow full scan
  • MySQL Bug #77732: REGRESSION: replication fails for insufficient …
  • MySQL Bug #79076: make hostname a dynamic variable
  • MySQL Bug #78978: Add microseconds support to UNIX_TIMESTAMP
  • MySQL Bug #77600: Bump major version of libmysqlclient in 8.0
  • MySQL Bug #79182: main.help_verbose failing on freebsd
  • MySQL Bug #80627: incorrect function referenced in spatial error …
  • MySQL Bug #80372: Built-in mysql functions are case sensitive …
  • MySQL Bug #79150: InnoDB: Remove runtime checks for 32-bit file …
  • MySQL Bug #76918: Unhelpful error for mysql_ssl_rsa_setup when …
  • MySQL Bug #80523: current_memory in sys.session can go negative!
  • MySQL Bug #78210: SHUTDOWN command should have an option …
  • MySQL Bug #80823: sys should have a mdl session oriented view
  • MySQL Bug #78374: “CREATE USER IF NOT EXISTS” reports an error
  • MySQL Bug #79522: can mysqldump print the fully qualified table …
  • MySQL Bug #78457: Use gettext and .po(t) files for translations
  • MySQL Bug #78593: mysqlpump creates incorrect ALTER TABLE …
  • MySQL Bug #78041: GROUP_CONCAT() truncation should be an …
  • MySQL Bug #76927: Duplicate UK values in READ-COMMITTED …
  • MySQL Bug #77997: Automatic mysql_upgrade
  • MySQL Bug #78495: Table mysql.gtid_executed cannot be opened.
  • MySQL Bug #78698: Simple delete query causes InnoDB: Failing …
  • MySQL Bug #76392: Assume that index_id is unique within a …
  • MySQL Bug #76671: InnoDB: Assertion failure in thread 19 in file …
  • MySQL Bug #76803: InnoDB: Unlock row could not find a 2 mode …
  • MySQL Bug #78527: incomplete support and/or documentation of …
  • MySQL Bug #78732: InnoDB: Failing assertion: *mbmaxlen < 5 in file …
  • MySQL Bug #76356: Reduce header file dependencies for …
  • MySQL Bug #77056: There is no clear error message if …
  • MySQL Bug #76329: COLLATE option not accepted in generated …
  • MySQL Bug #79500: InnoDB: Assertion failure in thread …
  • MySQL Bug #72284: please use better options to …
  • MySQL Bug #78397: Subquery Materialization on DELETE WHERE …
  • MySQL Bug #76552: Cannot shutdown MySQL using JDBC driver
  • MySQL Bug #76532: MySQL calls exit(MYSQLD_ABORT_EXIT …
  • MySQL Bug #76432: handle_fatal_signal (sig=11) in …
  • MySQL Bug #41925: Warning 1366 Incorrect string value: … for …
  • MySQL Bug #78452: Alter table add virtual index hits assert in …
  • MySQL Bug #77097: InnoDB Online DDL should support change …
  • MySQL Bug #77149: sys should possibly offer user threads …


Percona Server 5.7.13-6 is now available

Percona announces the GA release of Percona Server 5.7.13-6 on July 6, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.7.13, including all the bug fixes in it, Percona Server 5.7.13-6 is the current GA release in the Percona Server 5.7 series. Percona Server is completely open-source and free software. All the details of the release can be found in the 5.7.13-6 milestone at Launchpad.

New Features:
  • TokuDB MTR suite is now part of the default MTR suite in Percona Server 5.7.
Bugs Fixed:
  • Querying the GLOBAL_TEMPORARY_TABLES table could cause a server crash if threads owning temporary tables executed new queries. Bug fixed #1581949.
  • IMPORT TABLESPACE and undo tablespace truncate could get stuck indefinitely with a writing workload in parallel. Bug fixed #1585095.
  • Requesting to flush the whole of the buffer pool with doublewrite parallel buffer wasn’t working correctly. Bug fixed #1586265.
  • Audit Log Plugin would hang when trying to write log record of audit_log_buffer_size length. Bug fixed #1588439.
  • Audit log in ASYNC mode could skip log records which don’t fit into log buffer. Bug fixed #1588447.
  • In order to support innodb_flush_method being set to ALL_O_DIRECT, the log I/O buffers were aligned to innodb_log_write_ahead_size. That implementation missed the case that the variable is dynamic, and could still lead to a server crash. Bug fixed #1597143.
  • InnoDB tablespace import would fail when trying to import a table with different data directory. Bug fixed #1548597 (upstream #76142).
  • Audit Log Plugin was truncating SQL queries to 512 bytes. Bug fixed #1557293.
  • mysqlbinlog did not free the existing connection before opening a new remote one. Bug fixed #1587840 (upstream #81675).
  • Fixed a memory leak in mysqldump. Bug fixed #1588845 (upstream #81714).
  • Transparent Huge Pages check will now only happen if tokudb_check_jemalloc option is set. Bugs fixed #939 and #713.
  • Logging in ydb environment validation functions now prints more useful context. Bug fixed #722.

Other bugs fixed: #1541698 (upstream #80261), #1587426 (upstream, #81657), #1589431, #956, and #964.

The release notes for Percona Server 5.7.13-6 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.



Pipelining versus Parallel Query Execution with MySQL 5.7 X Plugin

In this blog post, we’ll look at pipelining versus parallel query execution when using X Plugin for MySQL 5.7.

In my previous blog post, I showed how to use X Plugin for MySQL 5.7 for parallel query execution. The tricks I used to make it work:

  • Partitioning by hash
  • Open N connections to MySQL, where N = number of CPU cores

I had to do it manually (as well as sort the result at the end), as X Plugin only supports “pipelining” (which only saves the round trip time) and does not “multiplex” connections to MySQL (MySQL does not use multiple CPU cores for a single query).

TL;DR version

In this (long) post I’m playing with MySQL 5.7 X Plugin / X Protocol and document store. Here is the summary:

  1. X Plugin does not “multiplex” connections/sessions to MySQL. Similar to the original protocol, one connection to X Plugin will result in one session open to MySQL
  2. An X Plugin query (if the library supports it) returns immediately and does not wait until the query is finished (async call). MySQL works like a queue.
  3. X Plugin does not have any additional server-level durability settings. Unless you check or wait for the acknowledgement (which is asynchronous) from the server, the data might or might not be written into MySQL (“fire and forget”).

At the same time, X Protocol can be helpful if:

  • We want to implement an asynchronous client (i.e., we do not want to block the network communication such as downloading or API calls) when the MySQL table is locked.
  • We want to use MySQL as a queue and save the round-trip time.
Benchmark results: “pipelining” versus “parallelizing” versus a single query

I’ve done a couple of tests comparing the results between “pipelining” versus “parallelizing” versus a single query. Here are the results:

      1. Parallel queries with NodeJS:
        $ time node async_wikistats.js
        ...
        All done! Total: 17753
        ...
        real    0m30.668s
        user    0m0.256s
        sys     0m0.028s
      2. Pipeline with NojeJS:
        $ time node async_wikistats_pipeline.js
        ...
        All done! Total: 17753
        ...
        real 5m39.666s
        user 0m0.212s
        sys 0m0.024s

        In the pipeline with NodeJS, I’m reusing the same connection (and do not open a new one for each thread).
      3. Direct query – partitioned table:
        mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark_part where url like '%postgresql%';
        +-----------------+
        | sum(tot_visits) |
        +-----------------+
        | 17753           |
        +-----------------+
        1 row in set (5 min 31.44 sec)
      4. Direct query – non-partitioned table.
        mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark where url like '%postgresql%';
        +-----------------+
        | sum(tot_visits) |
        +-----------------+
        | 17753           |
        +-----------------+
        1 row in set (4 min 38.16 sec)
Advantages of pipelines with X Plugin 

Although pipelining with X Plugin does not significantly improve query response time (it only reduces total latency by saving round trips), it might be helpful in some cases. For example, let’s say we are downloading something from the Internet and need to save the progress of the download as well as the metadata for the document. In this example, I use youtube-dl to search and download the metadata about YouTube videos, then save the metadata JSON into MySQL 5.7 Document Store. Here is the code:

var mysqlx = require('mysqlx');
// This is the same as running: $ youtube-dl -j -i ytsearch100:"mysql 5.7"
const spawn = require('child_process').spawn;
const yt = spawn('youtube-dl', ['-j', '-i', 'ytsearch100:"mysql 5.7"'], {maxBuffer: 1024 * 1024 * 128});
var mySession =
mysqlx.getSession({
    host: 'localhost',
    port: 33060,
    dbUser: 'root',
    dbPassword: '<your password>'
});
yt.stdout.on('data', (data) => {
        try {
                dataObj = JSON.parse(data);
                console.log(dataObj.fulltitle);
                mySession.then(session => {
                                                session.getSchema("yt").getCollection("youtube").add(  dataObj  )
                                                .execute(function (row) {
                                                }).catch(err => {
                                                        console.log(err);
                                                })
                                                .then( function (notices) { console.log("Wrote to MySQL: " + JSON.stringify(notices))  });
                                }).catch(function (err) {
                                              console.log(err);
                                              process.exit();
                                });
        } catch (e) {
                console.log(" --- Can't parse json" + e );
        }
});
yt.stderr.on('data', (data) => {
  console.log("Error receiving data");
});
yt.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
  mySession.then(session => {session.close() } );
});

In the above example, I execute the youtube-dl binary (you need to have it installed first) to search for “MySQL 5.7” videos. Instead of downloading the videos, I only grab the video’s metadata in JSON format (the “-j” flag). Because it is JSON, I can save it into the MySQL document store. The table has the following structure:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  UNIQUE KEY `_id` (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

Here is the execution example:

$ node yt.js
What's New in MySQL 5.7
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["3f312c3b-b2f3-55e8-0ee9-b706eddf"]}}
MySQL 5.7: MySQL JSON data type example
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["88223742-9875-59f1-f535-f1cfb936"]}}
MySQL Performance Tuning: Part 1. Configuration (Covers MySQL 5.7)
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["c377e051-37e6-8a63-bec7-1b81c6d6"]}}
Dave Stokes — MySQL 5.7 - New Features and Things That Will Break — php[world] 2014
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["96ae0dd8-9f7d-c08a-bbef-1a256b11"]}}
MySQL 5.7 & JSON: New Opportunities for Developers - Thomas Ulin - Forum PHP 2015
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["ccb5c53e-561c-2ed5-6deb-1b325739"]}}
Cara Instal MySQL 5.7.10 NoInstaller pada Windows Manual Part3
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["95efbd79-8d79-e7b6-a535-271640c8"]}}
MySQL 5.7 Install and Configuration on Ubuntu 14.04
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["b8cfe132-aca4-1eba-c2ae-69e48db8"]}}

Now, here is what makes this example interesting: since NodeJS + X Plugin = Asynchronous + Pipelining, the program execution will not stop if the table is locked. I’ve opened two sessions:

  • session 1: $ node yt.js > test_lock_table.log
  • session 2:
    mysql> lock table youtube read; select sleep(10); unlock tables;
    Query OK, 0 rows affected (0.00 sec)
    +-----------+
    | sleep(10) |
    +-----------+
    |         0 |
    +-----------+
    1 row in set (10.01 sec)
    Query OK, 0 rows affected (0.00 sec)

Results:

...
Upgrade MySQL Server from 5.5 to 5.7
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["d4d62a8a-fbfa-05ab-2110-2fd5cf6d"]}}
OSC15 - Georgi Kodinov - Secure Deployment Changes Coming in MySQL 5.7
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["8ac1cdb9-1499-544c-da2a-5db1ccf5"]}}
MySQL 5.7: Create JSON string using mysql
FreeBSD 10.3 - Instalación de MySQL 5.7 desde Código Fuente - Source Code
Webinar replay: How To Upgrade to MySQL 5.7 - The Best Practices - part 1
How to install MySQL Server on Mac OS X Yosemite - ltamTube
Webinar replay: How To Upgrade to MySQL 5.7 - The Best Practices - part 4
COMO INSTALAR MYSQL VERSION 5.7.13
MySQL and JSON
MySQL 5.7: Merge JSON data using MySQL
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["a11ff369-6f23-11e9-187b-e3713e6e"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["06143a61-4add-79da-0e1d-c2b52cf6"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["1eb94ef4-db63-cb75-767e-e1555549"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e25f15b5-8c19-9531-ed69-7b46807a"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["02b5a4c9-6a21-f263-90d5-cd761906"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e0bef958-10af-b181-81cd-5debaaa0"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["f48fa635-fa63-7481-0668-addabbac"]}}
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["557fa5c5-3c8a-fe01-c17c-549c557e"]}}
MySQL 5.7 Install and Configuration on Ubuntu 14.04
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["456b11d8-ba03-0aec-8e06-9517c6e1"]}}
MySQL WorkBench 6.3 installation on Ubuntu 14.04
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["0b651987-9b23-b5e0-f8f7-49b8ba5c"]}}
Going through era of IoT with MySQL 5.7 - FOSSASIA 2016
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["e133746c-836c-a7e0-3893-292a7429"]}}
MySQL 5.7: MySQL JSON operator example
... => wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["4d13830d-7b30-5b31-d068-c7305e0a"]}}

As we can see, the first two writes were immediate. Then I locked the table, and no MySQL queries went through. At the same time, the download process (which is the slowest part here) proceeded and was not blocked (we can see the titles above that are not followed by “… => wrote to MySQL:” lines). When the table was unlocked, the pile of waiting queries succeeded.

This can be very helpful when running a “download” process, and the network is a bottleneck. In a traditional synchronous query execution, when we lock a table the application gets blocked (including the network communication). With NodeJS and X Plugin, the download part will proceed with MySQL acting as a queue.

Pipeline Durability

How “durable” is this pipeline, you might ask. In other words, what will happen if I kill the connection? To test it out, I (once again) locked the table (this time before starting the Node.js script), killed the connection, and finally unlocked the table. Here are the results:

Session 1:
----------
mysql> truncate table youtube_new;
Query OK, 0 rows affected (0.25 sec)
mysql> lock table youtube_new read;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
Session 2:
----------
(when table is locked)
$ node yt1.js
11 03  MyISAM
Switching to InnoDB from MyISAM
tablas InnoDB a MyISAM
MongoDB vs MyISAM (MariaDB/MySQL)
MySQL Tutorial 35 - Foreign Key Constraints for the InnoDB Storage Engine
phpmyadmin foreign keys myisam innodb
Convert or change database manual from Myisam to Innodb
... >100 other results omitted ...
^C
Session 1:
----------
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
     Id: 4916
   User: root
   Host: localhost:33221
     db: NULL
Command: Query
   Time: 28
  State: Waiting for table metadata lock
   Info: PLUGIN: INSERT INTO `iot`.`youtube_new` (doc) VALUES ('{"upload_date":"20140319","protocol":"
mysql> unlock table;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from youtube_new;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)
mysql>  select json_unquote(doc->'$.title') from youtube_new;
+---------------------------------+
| json_unquote(doc->'$.title')    |
+---------------------------------+
| 11 03  MyISAM                   |
| Switching to InnoDB from MyISAM |
+---------------------------------+
2 rows in set (0.00 sec)

Please note: in the above output, there isn’t a single acknowledgement from the MySQL server. When the code receives a response from MySQL, it prints “Wrote to MySQL: {“_state”:{“rows_affected”:1,”doc_ids”:[“…”]}}“. Also, note that although the connection was killed, the MySQL query is still there, waiting on the table lock.

What is interesting here is that only two rows have been inserted into the document store. Is there a “history length” here or some other buffer that we can increase? I’ve asked Jan Kneschke, one of the authors of the X Protocol, and the answers were:

  • Q: Is there any history length or any buffer and can we tune it?
    • A: There is no “history” or “buffer” at all, it is all at the connector level.
  • Q: Then why were two rows finally inserted?
    • A: To answer this question, I collected a tcpdump of port 33060 (X Protocol); see below

This is very important information! Keep in mind that the asynchronous pipeline has no durability settings: if the application fails and there are some pending writes, those writes may or may not have been written to MySQL.
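
If the application needs a durability guarantee, it has to wait for the server’s acknowledgement before treating a write as done. Here is a minimal sketch, reusing the add()/execute() calls from the youtube-dl example above (error handling trimmed):

// Wait for the server's acknowledgement before counting the write as done
session.getSchema("yt").getCollection("youtube").add(dataObj)
        .execute(function (row) { })
        .then(function (result) {
                // the server has acknowledged the write at this point
                console.log("Acknowledged: " + JSON.stringify(result));
        })
        .catch(function (err) {
                // the write failed or the connection died: log it and retry
                console.log(err);
        });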

To fully understand how the protocol works, I’ve captured tcpdump (Jan Kneschke helped me to analyze it):

tcpdump -i lo -s0 -w tests/node-js-pipelining.pcap "tcp port 33060"

This is what is happening:

  • When I hit CTRL+C, nodejs closes the connection. As the table is still locked, MySQL can’t write to it and will not send the result of the insert back.
  • When the table is unlocked, MySQL starts executing the first statement despite the fact that the connection has been closed. It then acknowledges the first insert and starts the second one.
  • However, at this point the script (client) has already closed the connection and the final packet (write done, here is the id) gets denied. The X Plugin then finds out that the client closed the connection and stops executing the pipeline.

Actually, this is very similar to how the original MySQL protocol works. If we kill the script/application, it doesn’t automatically kill the MySQL connection (unless you hit CTRL+C in the MySQL client, which sends the KILL signal), and the connection waits for the table to get unlocked. When the table is unlocked, MySQL inserts the first statement from the file:

Session 1
---------
mysql> select * from t_sql;
Empty set (0.00 sec)
mysql> lock table t_sql read;
Query OK, 0 rows affected (0.00 sec)
Session 2:
----------
$ mysql iot < t.sql
$ kill -9 ...
[3]   Killed                  mysql iot < t.sql
Session 1:
----------
mysql> show processlist;
+------+------+-----------------+------+---------+---------+---------------------------------+-----------------------------------------------+
| Id   | User | Host            | db   | Command | Time    | State                           | Info                                          |
+------+------+-----------------+------+---------+---------+---------------------------------+-----------------------------------------------+
| 4913 | root | localhost       | iot  | Query   |      41 | Waiting for table metadata lock | insert into t_sql  values('{"test_field":0}') |
+------+------+-----------------+------+---------+---------+---------------------------------+-----------------------------------------------+
4 rows in set (0.00 sec)
mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t_sql;
+-------------------+
| doc               |
+-------------------+
| {"test_field": 0} |
+-------------------+
1 row in set (0.00 sec)

Enforcing unique checks

If I restart my script, it finds the same videos again. We will probably need to enforce the consistency of our data. By default, the plugin generates a unique key (_id) for each document, which prevents inserting duplicates.

Another way to enforce the unique checks is to create a unique key for youtube id. Here is the updated table structure:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `youtube_id` varchar(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.id'))) STORED NOT NULL,
  UNIQUE KEY `youtube_id` (`youtube_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

I’ve changed the default “_id” column to YouTube’s unique video ID. Now when I restart the script, it shows:

MySQL 5.7: Merge JSON data using MySQL
{ [Error: Document contains a field value that is not unique but required to be]
  info:
   { severity: 0,
     code: 5116,
     msg: 'Document contains a field value that is not unique but required to be',
     sql_state: 'HY000' } }
... => wrote to MySQL: undefined

…as this document has already been loaded.

Conclusion

Although X Plugin pipelining does not necessarily improve query response time significantly (it mainly saves the round-trip time), it can be helpful for some applications. We might not want to block the network communication (i.e., downloading or API calls) when the MySQL table is locked, for example. At the same time, unless you check or wait for the acknowledgement from the server, the data might or might not be written into MySQL.

Bonus: data analysis

Now we can see what we have downloaded. There are a number of interesting fields in the result:

"is_live": null,
	"license": "Standard YouTube License",
	"duration": 2965,
	"end_time": null,
	"playlist": "\"mysql 5.7\"",
	"protocol": "https",
	"uploader": "YUI Library",
	"_filename": "Douglas Crockford - The JSON Saga--C-JoyNuQJs.mp4",
	"age_limit": 0,
	"alt_title": null,
	"extractor": "youtube",
	"format_id": "18",
	"fulltitle": "Douglas Crockford: The JSON Saga",
	"n_entries": 571,
	"subtitles": {},
	"thumbnail": "https://i.ytimg.com/vi/-C-JoyNuQJs/hqdefault.jpg",
	"categories": ["Science & Technology"],
	"display_id": "-C-JoyNuQJs",
	"like_count": 251,
	"player_url": null,
	"resolution": "640x360",
	"start_time": null,
	"thumbnails": [{
		"id": "0",
		"url": "https://i.ytimg.com/vi/-C-JoyNuQJs/hqdefault.jpg"
	}],
	"view_count": 36538,
	"annotations": null,
	"description": "Yahoo! JavaScript architect Douglas Crockford tells the story of how JSON was discovered and how it became a major standard for describing data.",
	"format_note": "medium",
	"playlist_id": "\"mysql 5.7\"",
	"upload_date": "20110828",
	"uploader_id": "yuilibrary",
	"webpage_url": "https://www.youtube.com/watch?v=-C-JoyNuQJs",
	"uploader_url": "http://www.youtube.com/user/yuilibrary",
	"dislike_count": 5,
	"extractor_key": "Youtube",
	"average_rating": 4.921875,
	"playlist_index": 223,
	"playlist_title": null,
	"automatic_captions": {},
	"requested_subtitles": null,
	"webpage_url_basename": "-C-JoyNuQJs"

We can see the most popular videos. To do that I’ve added one more virtual field on view_count, and created an index on it:

CREATE TABLE `youtube` (
  `doc` json DEFAULT NULL,
  `youtube_id` varchar(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.id'))) STORED NOT NULL,
  `view_count` int(11) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.view_count'))) VIRTUAL,
  UNIQUE KEY `youtube_id` (`youtube_id`),
  KEY `view_count` (`view_count`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

We can run the queries like:

mysql> select json_unquote(doc->'$.title'),
    -> view_count,
    -> json_unquote(doc->'$.dislike_count') as dislikes
    -> from youtube
    -> order by view_count desc
    -> limit 10;
+----------------------------------------------------------------------------------------------------+------------+----------+
| json_unquote(doc->'$.title')                                                                       | view_count | dislikes |
+----------------------------------------------------------------------------------------------------+------------+----------+
| Beginners MYSQL Database Tutorial 1 # Download , Install MYSQL and first SQL query                 |     664153 | 106      |
| MySQL Tutorial                                                                                     |     533983 | 108      |
| PHP and MYSQL - Connecting to a Database and Adding Data                                           |     377006 | 50       |
| PHP MySQL Tutorial                                                                                 |     197984 | 41       |
| Installing MySQL (Windows 7)                                                                       |     196712 | 28       |
| Understanding PHP, MySQL, HTML and CSS and their Roles in Web Development - CodersCult Webinar 001 |     195464 | 24       |
| jQuery Ajax Tutorial #1 - Using AJAX & API's (jQuery Tutorial #7)                                  |     179198 | 25       |
| How To Root Lenovo A6000                                                                           |     165221 | 40       |
| MySQL Tutorial 1 - What is MySQL                                                                   |     165042 | 45       |
| How to Send Email in Blackboard Learn                                                              |     144948 | 28       |
+----------------------------------------------------------------------------------------------------+------------+----------+
10 rows in set (0.00 sec)

Or if we want to find out the most popular resolutions:

mysql> select count(*) as cnt,
    -> sum(view_count) as sum_views,
    -> json_unquote(doc->'$.resolution') as resolution
    -> from youtube
    -> group by resolution
    -> order by cnt desc, sum_views desc
    -> limit 10;
+-----+-----------+------------+
| cnt | sum_views | resolution |
+-----+-----------+------------+
| 273 |   3121447 | 1280x720   |
|  80 |   1195865 | 640x360    |
|  18 |     33958 | 1278x720   |
|  15 |     18560 | 1152x720   |
|  11 |     14800 | 960x720    |
|   5 |      6725 | 1276x720   |
|   4 |     18562 | 1280x682   |
|   4 |      1581 | 1280x616   |
|   4 |       348 | 1280x612   |
|   3 |      2024 | 1200x720   |
+-----+-----------+------------+
10 rows in set (0.02 sec)

Special thanks to Jan Kneschke and Morgan Tocker from Oracle for helping with the X Protocol internals.



How to make sure that 'password' is not a valid MySQL password

Password management is an issue for many. It is not uncommon for an organization to require you to change your password on a regular basis, and many have rules on the length and content of passwords. The length and complexity rules (which require certain numbers of upper and lower case characters, special characters, and numerals) try to prevent users from using '12345', 'qwerty', or even the word 'password' itself as passwords. With MySQL 5.7, you can filter out those bad passwords, obscenities, slurs, or other words you do not want used as, or as part of, a password.

If the STRONG password policy is set in the my.cnf file, a validate_password_dictionary_file can be specified, and you can use your favorite text editor to add the words or phrases you wish to disallow to that file. Note that words shorter than four characters are ignored. So create a file with a text editor (mine was placed in /var/lib/mysql/dit) with the banned words, edit the my.cnf file to set validate_password_policy to STRONG, and add the path to your dictionary file to the validate_password_dictionary_file line of the same file. Restart your server and test.
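
The relevant my.cnf settings would look something like this (assuming the validate_password plugin is already installed; the dictionary path is the one used above):

[mysqld]
validate_password_policy=STRONG
validate_password_dictionary_file=/var/lib/mysql/dit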

My test dictionary had words like 'foobar', 'snafu', and 'password', and trying to use a password containing one of the words in the dictionary file would generate ERROR 1819 (HY000): Your password does not satisfy the current policy requirements, even if I mixed the case of the various letters in the restricted words.

MySQL 5.7 also added the ability to set password lifetimes, the ability to lock accounts, and stopped adding anonymous accounts (no user name and no password) during installation.

By the way, I will be speaking in Detroit at the Converge Conference on MySQL 5.7 security, if you would like to know more about this and other MySQL 5.7 related information.

libssl.so.6: cannot open shared object file with MariaDB Galera


After getting some errors:

160707 13:23:05 [Note] /home/sh/mariadb-galera-10.0.26-linux-x86_64/bin/mysqld (mysqld 10.0.26-MariaDB-wsrep) starting as process 3132 ...
160707 13:23:05 [Note] WSREP: Read nil XID from storage engines, skipping position init
160707 13:23:05 [Note] WSREP: wsrep_load(): loading provider library '/home/sh/mariadb-galera-10.0.26-linux-x86_64/lib/libgalera_smm.so'
160707 13:23:05 [ERROR] WSREP: wsrep_load(): dlopen(): libssl.so.6: cannot open shared object file: No such file or directory
160707 13:23:05 [ERROR] WSREP: wsrep_load(/home/sh/mariadb-galera-10.0.26-linux-x86_64/lib/libgalera_smm.so) failed: Invalid argument (22). Reverting to no provider.
160707 13:23:05 [Note] WSREP: Read nil XID from storage engines, skipping position init
160707 13:23:05 [Note] WSREP: wsrep_load(): loading provider library 'none'
160707 13:23:05 [ERROR] Aborting

To fix this issue, I first checked the dependencies of the Galera library. Inside /home/sh/mariadb-galera-10.0.26-linux-x86_64/lib/galera:

[sh@pxc_5_7 galera]$ ldd ./libgalera_smm.so
linux-vdso.so.1 => (0x00007ffdfa943000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f9fdb121000)
librt.so.1 => /lib64/librt.so.1 (0x00007f9fdaf19000)
libssl.so.6 => not found
libcrypto.so.6 => not found
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f9fdac10000)
libm.so.6 => /lib64/libm.so.6 (0x00007f9fda90d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f9fda6f7000)
libc.so.6 => /lib64/libc.so.6 (0x00007f9fda336000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9fdb899000)

Go to /usr/lib64:

[root@pxc_5_7 lib64]# ls -l | grep libcrypt
-rwxr-xr-x. 1 root root 40816 Feb 16 21:18 libcrypt-2.17.so
lrwxrwxrwx. 1 root root 19 Jul 7 13:26 libcrypto.so -> libcrypto.so.1.0.1e
lrwxrwxrwx. 1 root root 19 Jul 7 13:26 libcrypto.so.10 -> libcrypto.so.1.0.1e
-rwxr-xr-x. 1 root root 2017168 May 9 12:10 libcrypto.so.1.0.1e
lrwxrwxrwx. 1 root root 22 May 4 16:06 libcryptsetup.so.4 -> libcryptsetup.so.4.7.0
-rwxr-xr-x. 1 root root 166640 Nov 20 2015 libcryptsetup.so.4.7.0
lrwxrwxrwx. 1 root root 16 May 4 16:06 libcrypt.so.1 -> libcrypt-2.17.so
[root@pxc_5_7 lib64]# ls -l | grep libssl
-rwxr-xr-x. 1 root root 276688 Apr 25 18:34 libssl3.so
lrwxrwxrwx. 1 root root 16 Jul 7 13:26 libssl.so -> libssl.so.1.0.1e
lrwxrwxrwx. 1 root root 16 Jul 7 13:26 libssl.so.10 -> libssl.so.1.0.1e
-rwxr-xr-x. 1 root root 449880 May 9 12:10 libssl.so.1.0.1e

So it is sufficient to create symlinks (inside /usr/lib64):

ln -s libcrypto.so.1.0.1e libcrypto.so.6
ln -s libssl.so.1.0.1e libssl.so.6

The result:

[sh@pxc_5_7 galera]$ ldd ./libgalera_smm.so
linux-vdso.so.1 => (0x00007ffc94d50000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3e72f25000)
librt.so.1 => /lib64/librt.so.1 (0x00007f3e72d1d000)
libssl.so.6 => /lib64/libssl.so.6 (0x00007f3e72aaf000)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00007f3e726c7000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f3e723bf000)
libm.so.6 => /lib64/libm.so.6 (0x00007f3e720bc000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f3e71ea6000)
libc.so.6 => /lib64/libc.so.6 (0x00007f3e71ae5000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3e7369d000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f3e71898000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f3e715b3000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f3e713af000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f3e7117c000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f3e70f78000)
libz.so.1 => /lib64/libz.so.1 (0x00007f3e70d62000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f3e70b52000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f3e7094e000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f3e70734000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f3e7050e000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f3e702ad000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00007f3e70088000)



Planets9s - #MySQLHA CrowdChat Launch, HA for PostgreSQL and Live Upgrades to MySQL 5.7


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.

Join the Conversation: Severalnines Launches #MySQLHA CrowdChat

This new CrowdChat is brought to you by Severalnines and is hosted by a community of subject matter experts. CrowdChat is a community platform that works across Facebook, Twitter, and LinkedIn to allow users to discuss a topic using a specific #hashtag. This crowdchat focuses on the hashtag #MySQLHA. So if you’re a DBA, architect, CTO, or a database novice, sign up and become part of the conversation!

Sign up for #MySQLHA CrowdChat

Become a PostgreSQL DBA - How to Setup Streaming Replication for High Availability

Historically, PostgreSQL did not have built-in support for replication; it was provided using external tools like Pgpool and Slony. These solutions did not come out of the box, and most of them required a good bit of work to set up. This was a serious drawback, and it made people look into MySQL, where replication had been available for a long time. Thankfully, with PostgreSQL 9.0, replication was added natively to PostgreSQL, and this post shows you how to set up streaming replication.

Read the blog

Performing a Live Upgrade to MySQL 5.7

After studying the differences between MySQL 5.6 and 5.7, and going through a rigorous regression test process in our two previous posts on this topic, it’s now time to perform the actual upgrade itself. How do we best introduce 5.7 in our live environment? How can we minimize risks? What do we do if something goes wrong? And what tools are available out there to assist us? This latest post provides answers to these questions, as well as a link to the related whitepaper on how to upgrade to MySQL 5.7.

Read the blog

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB



Solving the non-atomic table swap, Take III: making it atomic


With the unintended impression of becoming live blogging, we now follow up on Solving the non-atomic table swap, Take II and Solving the Facebook-OSC non-atomic table swap problem with a safe, blocking, atomic solution

Why yet another iteration?

The solution presented in Solving the non-atomic table swap, Take II was good in that it was safe: no data corruption. It was also optimistic: if no connection was killed throughout the process, it was completely blocking.

Two outstanding issues remained:

  • If something did go wrong, the solution reverted to a table-outage
  • On replicas, the table swap was non-atomic and non-blocking. There's a table-outage scenario on replicas.

As it turns out, there's a simpler solution which overcomes both the above. As with math and physics, the simpler solution is often the preferred one. But it took those previous iterations to gather a few ideas together. So, anyway:

Safe, locking, atomic, asynchronous table swap

Do read the aforementioned previous posts; the quick-quick recap is: we want to be able to LOCK a table tbl, then do some stuff, then swap it out and put some ghost table in its place. MySQL does not allow us to rename tbl to tbl_old, ghost to tbl if we have locks on tbl in that session.

The solution we offer is now based on two connections only (as opposed to three, in the optimistic approach). "Our" connections will be C10, C20. The "normal" app connections are C1..C9, C11..C19, C21..C29.

  • Connections C1..C9 operate on tbl with normal DML: INSERT, UPDATE, DELETE
  • Connection C10: CREATE TABLE tbl_old (id int primary key) COMMENT='magic-be-here'
  • Connection C10: LOCK TABLES tbl WRITE, tbl_old WRITE
  • Connections C11..C19, newly incoming, issue queries on tbl but are blocked due to the LOCK
  • Connection C20: RENAME TABLE tbl TO tbl_old, ghost TO tbl
    This is blocked due to the LOCK, but gets prioritized ahead of connections C11..C19 and ahead of C1..C9 or any other connection that attempts DML on tbl
  • Connections C21..C29, newly incoming, issue queries on tbl but are blocked due to the LOCK and due to the RENAME, waiting in queue
  • Connection C10: checks that C20's RENAME is waiting (looks for the blocked RENAME in the processlist)
  • Connection C10: DROP TABLE tbl_old
    Nothing happens yet; tbl is still locked. All other connections still blocked.
  • Connection C10: UNLOCK TABLES
    BAM!
    The RENAME is first to execute, ghost table is swapped in place of tbl, then C1..C9, C11..C19, C21..C29 all get to operate on the new and shiny tbl
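
Expressed as plain SQL, the happy path of the sequence above looks roughly like this (the processlist check is done by the controlling script):

-- Connection C10:
CREATE TABLE tbl_old (id int primary key) COMMENT='magic-be-here';
LOCK TABLES tbl WRITE, tbl_old WRITE;

-- Connection C20 (blocks on the lock):
RENAME TABLE tbl TO tbl_old, ghost TO tbl;

-- Connection C10, once the blocked RENAME appears in the processlist:
DROP TABLE tbl_old;
UNLOCK TABLES;
-- the blocked RENAME executes first, swapping ghost in place of tbl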

Some notes

  • We create tbl_old as a blocker for a premature swap
  • It is allowed for a connection to DROP a table it has under a WRITE LOCK
  • A blocked RENAME is always prioritized over a blocked INSERT/UPDATE/DELETE, no matter who came first

What happens on failures?

Much fun. Just works; no rollback required.

  • If C10 errors on the CREATE we do not proceed.
  • If C10 errors on the LOCK statement, we do not proceed. The table is not locked. App continues to operate as normal.
  • If C10 dies just as C20 is about to issue the RENAME:
    • The lock is released, the queries C1..C9, C11..C19 immediately operate on tbl.
    • C20's RENAME immediately fails because tbl_old exists.
      The entire operation fails, but nothing terrible happens; some queries were blocked for some time, is all. We will need to retry everything
  • If C10 dies while C20 is blocked on RENAME: Mostly similar to the above. Lock released, then C20 fails the RENAME (because tbl_old exists), then all queries resume normal operation
  • If C20 dies before C10 drops the table, we catch the error and let C10 proceed as planned: DROP, UNLOCK. Nothing terrible happens, some queries were blocked for some time. We will need to retry
  • If C20 dies just after C10 DROPs the table but before the unlock, same as above.
  • If both C10 and C20 die, no problem: LOCK is cleared; RENAME lock is cleared. C1..C9, C11..C19, C21..C29 are free to operate on tbl.

No matter what happens, at the end of the operation we look for the ghost table. Is it still there? Then we know the operation failed, "atomically". Is it not there? Then it has been renamed to tbl, and the operation worked atomically.
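
That check is a one-liner; for example:

SHOW TABLES LIKE 'ghost';
-- a non-empty result means the RENAME never executed and the operation failed;
-- an empty result means ghost was renamed to tbl and the swap succeeded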

A side note on failure is the matter of cleaning up the magic tbl_old. Here this is a matter of taste. Maybe just let it live and avoid recreating it, or you can drop it if you like.

Impact on app

App connections are guaranteed to be blocked, either until ghost is swapped in, or until operation fails. In the former, they proceed to operate on the new table. In the latter, they proceed to operate on the original table.

Impact on replication

Replication only sees the RENAME. There is no LOCK in the binary logs. Thus, replication sees an atomic two-table swap. There is no table-outage.

Conclusion

This solution satisfies all we wanted to achieve. We're unlikely to give this another iteration. Well, if some yet-more-elegant solution comes along I'll be tempted, for the beauty of it, but the solution offered in this post is simple-enough, safe, atomic, replication friendly, and should make everyone happy.



Percona Server 5.6.31-77.0 is now available


Percona announces the release of Percona Server 5.6.31-77.0 on July 7th, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.6.31, including all the bug fixes in it, Percona Server 5.6.31-77.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.31-77.0 milestone on Launchpad.

New Features:
  • Percona Server implemented protocol support for TLS 1.1 and TLS 1.2. This implementation turns off TLS v1.0 support by default.
  • TokuDB MTR suite is now part of the default MTR suite in Percona Server 5.6.
Bugs Fixed:
  • Querying the GLOBAL_TEMPORARY_TABLES table caused a server crash if threads owning temporary tables executed new queries. Bug fixed #1581949.
  • Audit Log Plugin would hang when trying to write log record of audit_log_buffer_size length. Bug fixed #1588439.
  • Audit log in ASYNC mode could skip log records that don’t fit into log buffer. Bug fixed #1588447.
  • The innodb_log_block_size feature attempted to diagnose the situation where the logs have been created with a log block value that differs from the current innodb_log_block_size setting. But this diagnostic came too late, and a misleading No valid checkpoints found error was produced first, aborting the startup. Bug fixed #1155156.
  • Some transaction deadlocks did not increase the INFORMATION_SCHEMA.INNODB_METRICS lock_deadlocks counter. Bug fixed #1466414 (upstream #77399).
  • InnoDB tablespace import failed when trying to import a table with different data directory. Bug fixed #1548597 (upstream #76142).
  • Audit Log Plugin truncated SQL queries to 512 bytes. Bug fixed #1557293.
  • A regular user connection on the extra port failed if max_connections plus one SUPER user were already connected on the main port, even if connecting would not violate extra_max_connections. Bug fixed #1583147.
  • The error log warning Too many connections was only printed for connection attempts when max_connections plus one SUPER user had connected. If the extra SUPER user was not connected, the warning was not printed for a non-SUPER connection attempt. Bug fixed #1583553.
  • mysqlbinlog did not free the existing connection before opening a new remote one. Bug fixed #1587840 (upstream #81675).
  • Fixed memory leaks in mysqltest. Bugs fixed #1582718 and #1588318.
  • Fixed memory leaks in mysqlcheck. Bug fixed #1582741.
  • Fixed memory leak in mysqlbinlog. Bug fixed #1582761 (upstream #78223).
  • Fixed memory leaks in mysqldump. Bug fixed #1587873 and #1588845 (upstream #81714).
  • Fixed memory leak in non-existing defaults file handling. Bug fixed #1588344.
  • Fixed memory leak in mysqlslap. Bug fixed #1588361.
  • Transparent Huge Pages check will now only happen if tokudb_check_jemalloc option is set. Bugs fixed #939 and #713.
  • Logging in ydb environment validation functions now prints more useful context. Bug fixed #722.

Other bugs fixed: #1588386, #1529885, #1541698 (upstream #80261), #1582681, #1583589, #1587426 (upstream, #81657), #1589431, #956, and #964.

Release notes for Percona Server 5.6.31-77.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.



Develop By Example – Document Store Connections using Node.js


In this post we are going to explain how to connect a Node.js application to a MySQL server using the new MySQL Connector/Node.js; needless to say that we will be using the MySQL server as a document store.

There are two types of session that a connection can provide: XSession and NodeSession. An XSession encapsulates access to a single MySQL server running the X Plugin or multiple MySQL Cluster nodes, while a NodeSession serves as an abstraction for a physical connection to exactly one MySQL server running the X Plugin. To enable the X Plugin in the MySQL server using the MySQL command-line client, you need to use the root account or an account with the INSERT privilege on the mysql.plugin table:

  • Invoke the MySQL command-line client: mysql -u user -p
  • Run the following command: INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';

Click here for more information about how to set up MySQL as a document store.

Creating a connection to a MySQL server as a document store is quite similar to creating a connection to a traditional MySQL server; we require the following connection parameters: host, database user, user password, and port.

The following example demonstrates how to connect to a single MySQL Server using XSession:

var mysqlx = require('mysqlx');
mysqlx.getSession({
        host: 'localhost',
        port: '33060',
        dbUser: 'testUser',
        dbPassword: 'myPass'
}).then(function (session) {
        console.log('we are connected!');
        session.close();
}).catch(function (err) {
        console.log(err.message);
        console.log(err.stack);
});

In the previous code example, we created and closed a connection to a server using an XSession; as you can see the code is very simple and easy to read.

The first line of code loads the Connector/Node.js client module, mysqlx. We then call its getSession method. This method returns a promise. If the connection to the MySQL server is successful, the promise is fulfilled by returning an XSession (session) object. We then call the session object’s close method to close the connection.

In the previous code there are two important things to note. The first is that we do not specify a schema: at the time you connect, your working schema might not exist yet. The second is the port. By default, the X DevAPI uses port 33060; we are assuming that the running server is using the default port for TCP/IP connections. The port can be configured when the server starts and is stored in a server variable.
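
If you are unsure which port the plugin is listening on, you can check it from the server (in MySQL 5.7 the variable is named mysqlx_port):

mysql> SHOW VARIABLES LIKE 'mysqlx_port';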

The following example demonstrates how to connect to a single MySQL Server using NodeSession:

var mysqlx = require('mysqlx');
mysqlx.getNodeSession({
        host: 'localhost',
        port: '33060',
        dbUser: 'testUser',
        dbPassword: 'myPass'
}).then(function (nodeSession) {
        console.log('we are connected!');
        nodeSession.close();
}).catch(function (err) {
        console.log(err.message);
        console.log(err.stack);
});

The NodeSession example is almost the same code used to get an XSession object; the difference is the method called to get the session object. The code does exactly the same thing.

You might need to use a NodeSession in certain scenarios where you require access to SQL features that are not supported by an XSession. In a subsequent post we are going to cover some examples about how to use the NodeSession.

To work with schemas and collections we need to add some extra lines of code. The following code demonstrates how to do it.

var mysqlx = require('mysqlx');
mysqlx.getSession({
        host: 'localhost',
        port: '33060',
        dbUser: 'testUser',
        dbPassword: 'myPass'
}).then(function (session) {
     var schema = session.getSchema('test');
     var coll = schema.getCollection('myColl');

     coll.find("$._id == '1'").execute(function (myDoc) {
           return myDoc;
     }).catch(function (err) {
           console.log(err.message);
           console.log(err.stack);
     });
     session.close();
}).catch(function (err) {
        console.log(err.message);
        console.log(err.stack);
});

In the last code example, from the session object we call the getSchema method to get an object (schema) that represents the schema we want to work in. Once we have the schema object, we execute the getCollection method to get an object (coll) that represents the collection we want to work with. In this example, we want to retrieve the document with an _id value of '1' from the collection. First we call the find method, passing the JSON path and the value we are searching for. Then we call the execute method to perform the query. The execute method returns a promise that supplies the requested document when the query completes.
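
As a quick preview, adding a document to the collection follows the same pattern as find. A minimal sketch (the document content here is made up for illustration):

coll.add({ _id: '2', name: 'my second document' }).execute()
     .then(function (result) {
           console.log('document added');
     }).catch(function (err) {
           console.log(err.message);
     });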

See you in the next blog post where we are going to explain more about the operations that can be performed using collections.



Why Adaptive Fault Detection is Powerful and Unique


Adaptive Fault Detection is a patented, algorithm-based technology and one of the important central components of the VividCortex app. Unlike other monitoring methodologies, such as anomaly detection or threshold alerting, adaptive fault detection is designed to detect events that are, by definition, detrimental to a system. It looks for issues that actually prevent work from completing, not just anomalies or outliers. With this quick blog post, we want to help readers understand the definition and value of fault detection. To do so, it helps to delve into several key concepts:

  • Why is it important to identify faults?
  • How does VividCortex detect faults?
  • How does our app help users address faults when they appear?

Why is it important to identify faults?

A fault is most easily defined as a certain kind of momentary stall. Specifically, it's when a system fails to service requests for work (i.e., queries, I/O operations) even though those requests continue to arrive. In other words, the work continues to line up, but it can't complete. A fault is marked by a bottleneck, even an extremely brief one.

Faults are typically caused by system overload or poor performance, when something demands more than it should from the system, or when the system is simply underperforming, resulting in a back-up. This can occur for a variety of reasons, including resource overload/saturation, internal scalability problems, intensive periodic tasks, or a number of other things. In any case, the occurrence of a fault can be understood to represent a moment when a system fails to perform work effectively.

There are many instances when a fault will initially appear only momentarily and then resolve itself, as the system catches back up; the only symptom of such an issue might be a mere one-second stall. This often makes faults transient and extremely difficult to detect without specifically designed detection. Likewise, the symptoms and causes of faults tend to be complex, because systems that stall have often misbehaved in a variety of ways, meaning there’s often no single cause-effect relationship to track down.

But why is it important for users to take note of seemingly small problems? Well, system performance problems almost always start small and, over time, snowball into much more serious issues. Catching them early is the best way to prevent major performance problems and outages. Those seemingly benign, virtually invisible hiccups can compound into something severe if given time. That's why faults are best dealt with while they're still small (lasting only a second or two). Short-duration faults are much easier to diagnose and fix. When they're bigger, there's more to untangle.

How does VividCortex detect faults?

As VividCortex's founder and CEO Baron Schwartz wrote in a previous blog post, faults are decidedly different from anomalies and other notable events, which means the method for detecting them must be more precise than simply pinpointing outliers. Instead, our fault detection algorithm is based on queueing theory, a very potent concept that Baron has written about in detail. There are so many factors that can cause a fault that we’ve determined the most effective way to find them is by defining their most significant upshot: work isn’t getting done.

This rationale guides our algorithm and lets us see faults based on the effects they produce in a system, rather than their sources, which can be manifold and very hard to predict. This is what we mean when we say that VividCortex has a “work-centric worldview.”

Using advanced statistics and machine learning, VividCortex’s Fault Detection is completely adaptive and self-tuning – it doesn’t require any configuration. The program can detect faults as short as one second in duration. Even the most attentive user would likely fail to notice system stalls so small, but with our adaptive fault detection, they’re easily diagnosed and solved. On top of that, the algorithm is incredibly efficient, practically free for a system’s CPU and memory.  

How does the app help users address faults when they appear?

When a fault occurs, our agents react immediately by gathering additional data at high frequency for a few moments. Faults then appear as events in the Faults Dashboard, easily accessible from the app’s navigation pane. They’re displayed in a timeline, from left to right, accompanied by widgets that show what was happening in the server at the specific moment of the fault. You can click on any fault to examine it, and a two-column display will appear below. The left-hand pane displays summary information about activity and status in the faulty system during the affected time period, with vertical red lines indicating the moment of the fault. Here’s an example:

[Image: Fault_Detection.png]

From there, diagnosis requires application-specific knowledge. In summary, though: this application ran a background task that executed an expensive DELETE statement against the database, which in turn issued a large set of I/O requests to the disk.

To see another example, this video showcases how fault detection helped identify an abusive MySQL query by looking at high disk throughput, CPU activity, and MySQL concurrency.

 

The Value of Fault Detection

Ultimately, Adaptive Fault Detection shows you inherently valuable information (the ability of work to complete) and guides you to the clues you need to proactively fix or prevent an issue. Fault detection isn't the same as other monitoring approaches, and it has the potential to reveal parts of your system that nothing else can. While it's not an instant antidote for all monitoring woes (there's no such thing), it's an important type of visibility to have at your disposal, and it will reveal much about your system, especially when used in conjunction with other methods.

Want to see for yourself? Try giving it a spin on your own systems.



MySQL sys version 1.5.1 released


MySQL sys version 1.5.1 has just been released.

This is a purely bug-fix release, and it has been merged into the upcoming MySQL 5.7.14 release.

Here’s a full summary of the changes:

Improvements

  • A quote_identifier function was added, which can be used to properly backtick identifier names (see the example after this list)
  • The `Tls_version` column from the `mysql.slave_master_info` table was added to the output of the `diagnostics` procedure (backported from the 5.7 upstream change)
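
For instance, with this sys version installed, an embedded backtick is escaped by doubling it (output illustrative):

mysql> SELECT sys.quote_identifier('my`table') AS quoted;
+-------------+
| quoted      |
+-------------+
| `my``table` |
+-------------+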

Bug Fixes

  • MySQL Bug #77853 / Oracle Bug #21512106 – The `format_path` function did not consider directory boundaries when comparing variables to paths – it now does. It was also fixed to no longer translate backslashes within Windows paths to forward slashes
  • Oracle Bug #21663578 – Fixed an instability within the `sysschema.v_schema_tables_with_full_table_scans` test
  • Oracle Bug #21970078 – The `host_summary` view could fail with a division by zero error
  • MySQL Bug #78874 / Oracle Bug #22066096 – The `ps_setup_show_enabled` procedure showed all rows for the `performance_schema.setup_objects` table, rather than only those that are enabled
  • MySQL Bug #80569 / Oracle Bug #22848110 – The `max_latency` column for the `host_summary_by_statement_latency` view incorrectly showed the SUM of latency
  • MySQL Bug #80833 / Oracle Bug #22988461 – The `pages_hashed` and `pages_old` columns within the `innodb_buffer_stats_by_schema` and `innodb_buffer_stats_by_table` views were calculated incorrectly (Contributed by Tsubasa Tanaka)
  • MySQL Bug #78823 / Oracle Bug #22011361 – The `create_synonym_db` procedure failed when using reserved words as the synonym name (this change also introduced the quote_identifier function mentioned above; contributed by Paul Dubois)
  • MySQL Bug #81564 / Oracle Bug #23335880 – The `ps_setup_show_enabled` and `ps_setup_show_disabled` procedures were fixed to:
    • Show `user@host` instead of `host@user` for accounts
    • Use the correct column header for `disabled_users` within `ps_setup_show_disabled`
    • Explicitly order all output for test stability
    • Show disabled users on 5.7.6+
  • Oracle Bug #21970806 – The `sysschema.fn_ps_thread_trx_info` test was unstable
  • Oracle Bug #23621189 – The `ps_trace_statement_digest` procedure ran EXPLAIN incorrectly in certain cases (such as on a SHOW statement, when no query was specified, or when the table was not fully qualified); the procedure now catches these issues and ignores them


Percona XtraBackup 2.3.5 is now available


Percona announces the release of Percona XtraBackup 2.3.5 on July 8, 2016. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

Bugs fixed:
  • The backup process would fail if the --throttle option was used. Bug fixed #1554235.
  • .ibd files for remote tablespaces were not copied back to the original location pointed to by the .isl files. Bug fixed #1555423.
  • When called with insufficient parameters, such as an empty --defaults-file option, Percona XtraBackup could crash. Bug fixed #1566228.
  • The documentation stated that the default value for --ftwrl-wait-query-type is all, while the actual default was update. The default value has been changed to match the documentation. Bug fixed #1566315.
  • The Free Software Foundation address in copyright notices was outdated. Bug fixed #1222777.
  • The backup process would fail if the datadir specified on the command line was not the same as the one reported by the server. Percona XtraBackup now lets the datadir from my.cnf override the one from SHOW VARIABLES; xtrabackup prints a warning that they don't match, but continues (see the quick check below). Bug fixed #1526467.
  • The backup process would fail on MariaDB if binary logs were in a non-standard directory. Bug fixed #1517629.
  • The output of the --slave-info option was missing an apostrophe. Bug fixed #1573371.
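
As a quick check for the datadir mismatch mentioned above, you can compare the value in your my.cnf against what the server actually reports (the value shown is illustrative):

mysql> SHOW VARIABLES LIKE 'datadir';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| datadir       | /var/lib/mysql/ |
+---------------+-----------------+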

Other bugs fixed: #1599397.

Release notes with all the bugfixes for Percona XtraBackup 2.3.5 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.



Percona Server 5.5.50-38.0 is now available



Percona announces the release of Percona Server 5.5.50-38.0 on July 8, 2016. Based on MySQL 5.5.50, including all the bug fixes in it, Percona Server 5.5.50-38.0 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.50-38.0 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

New Features:
Bugs Fixed:
  • Querying the GLOBAL_TEMPORARY_TABLES table would cause a server crash if the threads owning the temporary tables were executing new queries. Bug fixed #1581949.
  • The innodb_log_block_size feature attempted to diagnose the situation where the logs had been created with a log block size that differs from the current innodb_log_block_size setting. But this diagnostic came too late, and a misleading No valid checkpoints found error was produced first, aborting the startup. Bug fixed #1155156.
  • AddressSanitizer build with LeakSanitizer enabled was failing at gen_lex_hash invocation. Bug fixed #1580993 (upstream #80014).
  • ssl.cmake file was broken when custom OpenSSL build was used. Bug fixed #1582639 (upstream #61619).
  • mysqlbinlog did not free the existing connection before opening a new remote one. Bug fixed #1587840 (upstream #81675).
  • Fixed memory leaks in mysqltest. Bugs fixed #1582718 and #1588318.
  • Fixed memory leaks in mysqlcheck. Bug fixed #1582741.
  • Fixed memory leak in mysqlbinlog. Bug fixed #1582761 (upstream #78223).
  • Fixed memory leaks in mysqldump. Bugs fixed #1587873 and #1588845 (upstream #81714).
  • Fixed memory leak in innochecksum. Bug fixed #1588331.
  • Fixed memory leak in non-existing defaults file handling. Bug fixed #1588344.
  • Fixed memory leak in mysqlslap. Bug fixed #1588361.

Other bugs fixed: #1588169, #1588386, #1529885, #1587757, #1587426 (upstream #81657), #1587527, #1588650, and #1589819.

The release notes for Percona Server 5.5.50-38.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.



Shinguz: Temporary tables and MySQL STATUS information


When analysing MySQL configuration and status information at customer sites, it is always interesting to see how the applications behave. This can partially be seen in the output of the SHOW GLOBAL STATUS command. See also Reading MySQL fingerprints.

Today we wanted to know where the high Com_create_table and the twice-as-high Com_drop_table counts were coming from. One suspect was TEMPORARY TABLES. But are real temporary tables counted in Com_create_table and Com_drop_table at all? This is what we want to find out today. The tested MySQL version is 5.7.11.

Caution: Different MySQL or MariaDB versions might behave differently!
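
The deltas in the table below can be reproduced by snapshotting the relevant counters before and after each statement and subtracting; a minimal sketch of the method:

-- In the session under test (session scope):
SHOW SESSION STATUS LIKE 'Com_create_table';
CREATE TEMPORARY TABLE ttemp (id INT);
SHOW SESSION STATUS LIKE 'Com_create_table';

-- From a second connection (global scope):
SHOW GLOBAL STATUS WHERE Variable_name IN
('Com_create_table', 'Opened_tables', 'Opened_table_definitions');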

In steps 1–8 only Session 1 executes statements, while Session 2 is idle; in steps 9–10 both sessions execute. For each step, the status deltas observed in Session 1's session counters, the global counters, and Session 2's session counters are listed.

1) Session 1: CREATE TABLE t1 (id INT);
   Query OK, 0 rows affected

   Session 1: Com_create_table +1, Opened_table_definitions +1
   Global:    Com_create_table +1, Opened_table_definitions +1
   Session 2: no change

2) Session 1: CREATE TABLE t1 (id INT);
   ERROR 1050 (42S01): Table 't1' already exists

   Session 1: Com_create_table +1, Open_table_definitions +1, Open_tables +1,
              Opened_table_definitions +1, Opened_tables +1
   Global:    Com_create_table +1, Open_table_definitions +1, Open_tables +1,
              Opened_table_definitions +1, Opened_tables +1
   Session 2: no change

3) Session 1: CREATE TABLE t1 (id INT);
   ERROR 1050 (42S01): Table 't1' already exists

   Session 1: Com_create_table +1
   Global:    Com_create_table +1
   Session 2: no change

4) Session 1: DROP TABLE t1;
   Query OK, 0 rows affected

   Session 1: Com_drop_table +1, Open_table_definitions -1, Open_tables -1
   Global:    Com_drop_table +1, Open_table_definitions -1, Open_tables -1
   Session 2: no change

5) Session 1: DROP TABLE t1;
   ERROR 1051 (42S02): Unknown table 'test.t1'

   Session 1: Com_drop_table +1
   Global:    Com_drop_table +1
   Session 2: no change

6) Session 1: CREATE TEMPORARY TABLE ttemp (id INT);
   Query OK, 0 rows affected

   Session 1: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1
   Global:    Com_create_table +1, Opened_table_definitions +2, Opened_tables +1
   Session 2: no change

7) Session 1: CREATE TEMPORARY TABLE ttemp (id INT);
   ERROR 1050 (42S01): Table 'ttemp' already exists

   Session 1: Com_create_table +1
   Global:    Com_create_table +1
   Session 2: no change

8) Session 1: DROP TABLE ttemp;
   Query OK, 0 rows affected

   Session 1: Com_drop_table +1
   Global:    Com_drop_table +1
   Session 2: no change

9) Session 1: CREATE TEMPORARY TABLE ttemp (id INT);   Session 2: CREATE TEMPORARY TABLE ttemp (id INT);
   Query OK in both sessions

   Session 1: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1
   Global:    Com_create_table +2, Opened_table_definitions +4, Opened_tables +2
   Session 2: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1

10) Session 1: DROP TABLE ttemp;   Session 2: DROP TABLE ttemp;
    Query OK in both sessions

    Session 1: Com_drop_table +1
    Global:    Com_drop_table +2
    Session 2: Com_drop_table +1

Conclusion

  • A successful CREATE TABLE command opens and closes a table definition.
  • An unsuccessful CREATE TABLE command opens the table definition and the file handle of the already existing table. So a faulty application can be quite expensive.
  • Any further unsuccessful CREATE TABLE command has no additional impact.
  • A DROP TABLE command closes a table definition and the file handle.
  • A CREATE TEMPORARY TABLE opens 2 table definitions and the file handle, and thus behaves differently from CREATE TABLE.
  • But a faulty CREATE TEMPORARY TABLE seems to be much less intrusive.
  • Open_table_definitions and Open_tables are always global, even in session context.


Store a tag-cloud in MySQL


There was a time when tag-clouds were the thing for website owners to fancy up their sites. These clouds are mostly gone, but from the perspective of how to implement such a thing, one can learn quite a lot, especially with large amounts of links. Anyway, imagine you publish some articles on your website, which are stored in a table "post", and you want to add tags to every post in order to print a tag-cloud.
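
The article is excerpted here, so its actual implementation may differ; still, a minimal sketch of the many-to-many layout such a feature implies could look like this (all names except the `post` table are illustrative):

CREATE TABLE tag (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(64) NOT NULL,
  UNIQUE KEY uk_name (name)
) ENGINE=InnoDB;

CREATE TABLE post_tag (
  post_id INT UNSIGNED NOT NULL,  -- references post.id
  tag_id  INT UNSIGNED NOT NULL,  -- references tag.id
  PRIMARY KEY (post_id, tag_id)
) ENGINE=InnoDB;

-- Tag weights for rendering the cloud:
SELECT t.name, COUNT(*) AS weight
FROM post_tag pt
JOIN tag t ON t.id = pt.tag_id
GROUP BY t.name
ORDER BY weight DESC;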

Read the rest »



The fastest MySQL Sandbox setup ever!


MySQL-Sandbox 3.1.11 introduces a new utility, different from anything I have put before in the MySQL Sandbox toolkit.

make_sandbox_from_url downloads a tiny MySQL tarball from a repository and installs it straight away.

As of today, the following packages are available:

Major release   Version   Package size          Expanded size     Original size
                          (what you download)   (storage used)    (not included)
5.0             5.0.96    20M                   44M               371M
5.1             5.1.72    23M                   59M               485M
5.5             5.5.50    15M                   49M               690M
5.6             5.6.31    18M                   61M               1.1G
5.7             5.7.13    33M                   108M              2.5G

The sizes of the tarballs mentioned in the table above are much smaller than the original packages. The binaries have been stripped of debug info, compressed whenever possible, and purged of all binaries that are not needed for sandbox operations. This means that:

  • You can download the needed tarball very fast;
  • The storage needed for the binaries is reduced immensely.


Here is an example of the script in action. We download and install MySQL 5.0.96 in one go:

$ make_sandbox_from_url 5.0 -- --no_show
wget -O 5.0.96.tar.gz
'http://github.com/datacharmer/mysql-docker-minimal/blob/master/dbdata/5.0.96.tar.gz?raw=true'
URL transformed to HTTPS due to an HSTS policy
--2016-07-10 17:59:33--
https://github.com/datacharmer/mysql-docker-minimal/blob/master/dbdata/5.0.96.tar.gz?raw=true
Resolving github.com (github.com)... 192.30.253.112
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location:
https://github.com/datacharmer/mysql-docker-minimal/raw/master/dbdata/5.0.96.tar.gz
[following]
--2016-07-10 17:59:33--
https://github.com/datacharmer/mysql-docker-minimal/raw/master/dbdata/5.0.96.tar.gz
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location:
https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/5.0.96.tar.gz
[following]
--2016-07-10 17:59:34--
https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/5.0.96.tar.gz
Resolving raw.githubusercontent.com (raw.githubusercontent.com)...
151.101.12.133
Connecting to raw.githubusercontent.com
(raw.githubusercontent.com)|151.101.12.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20052235 (19M) [application/octet-stream]
Saving to: ‘5.0.96.tar.gz’

5.0.96.tar.gz
100%[=================================================================================>]
19.12M 15.2MB/s in 1.3s

2016-07-10 17:59:37 (15.2 MB/s) - ‘5.0.96.tar.gz’ saved [20052235/20052235]

The MySQL Sandbox, version 3.1.11
(C) 2006-2016 Giuseppe Maxia
# Starting server
. sandbox server started
# Loading grants
Your sandbox server was installed in $HOME/sandboxes/msb_5_0_96

If you call the same command twice, you will get a message saying that you can now use make_sandbox x.x.xx to install your sandbox.

The script does what I should probably have made the default behaviour from the beginning: it expands the tarball in $SANDBOX_BINARY (by default $HOME/opt/mysql), from where it is easy to reuse with minimum typing.

As of today, the binaries are Linux ONLY. I made this repository to use with Docker (I will write about it soon), and that means using Linux. This is still part of an experiment that so far is working well. The project can either evolve in smarter directions or merge with clever containers. It's too early to say. For now, enjoy the fastest set-up that MySQL Sandbox can offer!



What's new in ClusterControl Documentation


If you haven’t upgraded to ClusterControl 1.3.1, you should! It’s full of great new features and enhancements. We have lots of documentation to help you get started. Documentation on older versions is also available in our Github repository.

Wizard - Create Replication Setups for Oracle MySQL, MariaDB and Percona Server

It is now possible to create entire master-slave setups in one go via the deployment wizard. In previous versions, one had to first create a master, and afterwards, add slaves to it. Among other improvements, it is possible to encrypt client/server connections and let ClusterControl automatically set all slaves to read-only (auto_manage_readonly) to avoid accidental writes.

Wizard - Add Existing MySQL Cluster (NDB)

We recently added support for deployment of MySQL Cluster (NDB), and it is now also possible to import existing NDB Cluster deployments (2 MGMT nodes, x SQL nodes and y Data nodes).

Official Changelog

We now have two Changelog pages, one in our support forum (this is mostly for our development reference) and a new official one in the documentation. You can now easily browse all the changes between each release, including release features, type of release and package build numbers.

Check out the new Changelog page.

ClusterControl troubleshooting with debug package

ClusterControl Controller (cmon) now comes with a debuginfo package to help trace any crashes. It produces a core dump of the working memory of the server at the time the program crashed or terminated abnormally.

The ClusterControl Controller (CMON) package comes with a cron file installed under /etc/cron.d/ which auto-restarts cmon if the process terminates abnormally. Typically, you may notice that the cmon process has crashed by looking at the "dmesg" output.

Check out the new debugging steps here.

Standby ClusterControl

It is possible to have several ClusterControl servers to monitor a single cluster. This is useful if you have a multi-datacenter cluster and need to have ClusterControl on the remote site to monitor and manage local nodes if the network connection between them goes down. However, the ClusterControl servers must be configured to be working in active/passive mode to avoid race conditions when digesting queries and recovering a failed node or cluster.

Check out the updated instructions to install the ClusterControl Standby server.

ClusterControl RPC key

ClusterControl v1.3.1 introduces and enforces an RPC key for any communication request to the RPC interface on port 9500. This authentication string is critical and must be included in any interaction between the CMON controller and the client to obtain a correct response. The RPC key is distinct per cluster and stored inside the CMON configuration file of the respective cluster.

ClusterControl Domain Specific Language (CCDSL)

The DSL syntax is similar to JavaScript, with extensions to provide access to ClusterControl’s internal data structures and functions. The CCDSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions.

Our JavaScript-like language to manage your database infrastructure has now been updated with several new features, for example:

  • Types:
    • CmonMongoHost
    • CmonMaxscaleHost
    • CmonJob
  • Functions:
    • JSON
    • Regular Expression
    • CmonJob
    • Cluster Configuration Job
  • Examples:
    • Interact with MongoDB

Check out the ClusterControl DSL page here.

We welcome any feedback, suggestions and comments regarding our documentation page, to make sure it serves its purpose well. Happy clustering!



Webinar July 14, 10 am PDT: Introduction into storage engine troubleshooting


Please join Sveta Smirnova for a webinar on Thursday, July 14 at 10 am PDT (UTC-7): an Introduction Into Storage Engine Troubleshooting.

The variety of MySQL storage engines provides great flexibility for database users, administrators and developers. At the same time, engines add an extra level of complexity when it comes to troubleshooting issues. Before choosing the right troubleshooting tool, you need to answer the following questions (and often others):

  • What part of the server is responsible for my issue?
  • Was a lock set at the server or engine level?
  • Is a standard or engine-specific tool better?
  • Where are the engine-specific options?
  • How do you know whether an engine-specific command exists?

This webinar will discuss these questions and how to find the right answers across all storage engines in a general sense.
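
As a small taste of the topic: the server itself can list the available engines, and some engines expose their own diagnostics, for example:

mysql> SHOW ENGINES;
mysql> SHOW ENGINE INNODB STATUS\G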

You will also learn:

  • How to troubleshoot issues caused by simple storage engines such as MyISAM or Memory
  • Why Federated is deprecated, and what issues affected that engine
  • How Blackhole can affect replication

. . . and more.

Register for the webinar here.

Note: We will hold a separate webinar specifically for InnoDB.

Sveta Smirnova, Principal Technical Services Engineer
Sveta joined Percona in 2015. Her main professional interests are problem solving, working with tricky issues and bugs, finding patterns that can solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun and Oracle. She is the author of the book "MySQL Troubleshooting" and of the JSON UDF functions for MySQL.


Call for Percona Live Europe MongoDB Speakers


Want to become one of the Percona Live Europe MongoDB speakers? Read this blog for details.

The Percona Live Europe, Amsterdam call for papers is ending soon, and we are looking for MongoDB speakers! This is a great way to build your personal and company brands. If you haven't submitted a paper yet, below is a list of ideas we would love to see covered at this conference.

If you find any of these ideas interesting, simply let @Percona know and we can help get you listed as the speaker. If nothing on this list strikes your fancy or piques your interest, please submit a similar talk of your own – we'd love to find out what you have to say!


Here are some ideas that might get your thoughts bubbling:

  • Secret use of “hidden” and tagged ReplicaSets
  • To use a hashed shard key or not?
  • Understanding how a shard key is used in MongoDB
  • Using scatter-gathers to your benefit
  • WriteConcern and its use cases
  • How to quickly build a sharded environment for MongoDB in Docker
  • How to monitor and scale MongoDB in the cloud
  • MongoDB Virtualization: the good, the bad, and the ugly
  • MongoDB and VMware: a cautionary tale
  • Streaming MySQL bin logs to MongoDB and back again
  • How to ensure that other technologies can safely use the oplog for pipelining

The Percona team and conference committee would love to see what other ideas the community has that we haven't covered. Anything helps: mentioning @Percona with topics you would like to see, sharing topics you like on Twitter, or even just sharing the link to the call for papers.

The call for papers closes next Monday (7/18), so let's get some great submissions in this week and build a truly dynamic conference!

