Channel: Planet MySQL

MySQL Community Contributor Award Program 2019


We would like to announce the MySQL Community Team's 2019 MySQL Community Contributor Award Program winners. The award was started in 2015 to identify and reward the most active community members from the MySQL world. The MySQL Community Team had a limited edition of “MySQL Community Contributor” t-shirts printed, which have already been distributed to almost all of the winners.

Same as in 2015, specific criteria for the nominations were set as follows:

  1. The most active code contributor
  2. The most active bug reporter
  3. The most active MySQL blogger
  4. People who play a very active role in the translation/documentation of MySQL articles
  5. People who provide feedback on DMR releases, Labs releases, or change proposals
  6. Anyone in the community who worked their butt off or did really useful work that you feel should be thanked publicly

We are happy to announce that the people below successfully met one or more of the criteria above, were nominated by the MySQL group, and were approved by our committee. Please join us in congratulating them!

  • Laurynas Biveinis for a lot of code contributions & bug submissions
  • René Cannaò for ProxySQL & for his continuous support of it
  • Vin Chen @Tencent Game DBA Team for the Instant Add Column feature, a main contribution to MySQL 8.0
  • Daniel Van Eeden, the MySQL ACE for a lot of code contributions in a lot of areas (connectors, the server, etc.), and the detailed feedback for the bugs or issues he found
  • Gabriela D'Ávila Ferrara for interesting blogs about MySQL
  • Bill Karwin for his active role on many MySQL forums (MySQL, Stack Overflow, Quora, &c.), as well as for multiple speeches focusing on all things SQL
  • Alexey Kopytov for the new Sysbench code (the #1 tool for MySQL Performance evaluation)
  • Felix Liang @Tencent Game DBA Team for his active role in ACMUG (Chinese MySQL User Group) and contributions to the Instant Add Column in MySQL 8.0 
  • Tomita Masahiro for supporting MySQL community and for the MySQL Parameters tool
  • Giuseppe Maxia for the amazing feedback provided to us, for his blog posts & for dbdeployer
  • Simon Mudd for his active role in the Madrid MySQL User Group, for a number of submitted bugs, for contributing the original logging to syslog, and for other tools such as ps-top
  • Shlomi Noach for delivering open source solutions to cases around MySQL, for multiple MySQL talks, and for tooling within the MySQL space such as orchestrator and gh-ost
  • Zsolt Parragi for a lot of code contributions & bug submissions
  • Alex Rubin for extensive testing of our new DD code and for bug submissions
  • Sveta Smirnova for popularizing MySQL Performance Schema, for general MySQL troubleshooting, and for promoting new MySQL features
  • Yura Sorokin for a lot of code contributions & bug submissions
  • Sugu Sougoumarane for running Vitess.io and helping users in multiple places
  • Peter Zaitsev for his active role in MySQL world as well as for his contributions to MySQL Community

Please join us in congratulating all of the above!


Conferences with MySQL this week!


As a reminder of the shows where you can find MySQL this week, please see below:

  1. ​​GIDS - Great Indian Developer Summit, Bangalore, India, Apr 22-26, 2019 - with MySQL at Oracle's booth & with a MySQL talk given by Sanjay Manwani on "MySQLaaS & Rapid" in the Data track. 
  2. LinuxFest Northwest, Bellingham, US, April 26-28, 2019 - with MySQL booth in the expo area as well as talk given by David Stokes on "7 Database Mistakes You Are Making and MySQL 8 Features for Developers".
  3. LOADays, Antwerp, Belgium, Apr 27-28, 2019 - with Fred Descamps, the MySQL Community Manager, talking on "How to write efficient and correct MySQL recipes for your configuration management system in 2019".

We are looking forward to seeing and talking to you at the above shows this week!



How to Perform a Failback Operation for MySQL Replication Setup


MySQL master-slave replication is pretty easy and straightforward to set up. This is the main reason why people choose this technology as the first step to achieve better database availability. However, it comes at the price of complexity in management and maintenance; it is up to the admin to maintain the data integrity, especially during failover, failback, maintenance, upgrade and so on.

There are many articles out there describing how to perform a failover operation for a replication setup. We have also covered this topic in the blog post Introduction to Failover for MySQL Replication - the 101 Blog. In this blog post, we are going to cover the post-disaster task of restoring the original topology - performing a failback operation.

Why Do We Need Failback?

The replication leader (master) is the most critical node in a replication setup. It requires good hardware specs to ensure it can process writes, generate replication events, process critical reads and so on in a stable way. When failover is required during disaster recovery or maintenance, it is not uncommon to end up promoting a new leader running on inferior hardware. This situation might be acceptable temporarily, but in the long run, the designated master must be brought back to lead the replication once it is deemed healthy.

Unlike failover, a failback operation usually happens in a controlled environment through switchover; it rarely happens in panic mode. This gives the operations team some time to plan carefully and rehearse the exercise for a smooth transition. The main objective is simply to bring the good old master back to the latest state and restore the replication setup to its original topology. However, there are cases where failback is critical, for example when the newly promoted master does not work as expected and affects the overall database service.

How to Perform Failback Safely?

After a failover happens, the old master is out of the replication chain for maintenance or recovery. To perform the switchover, one must do the following:

  1. Provision the old master to the correct state, by making it the most up-to-date slave.
  2. Stop the application.
  3. Verify all slaves are caught up.
  4. Promote the old master as the new leader.
  5. Repoint all slaves to the new master.
  6. Start up the application by writing to the new master.
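The switchover sequence above can be sketched in code. The following is a minimal, illustrative Python sketch, not a real tool: the execute(host, statement) callable is a hypothetical helper supplied by the caller, which also keeps the orchestration easy to dry-run without a live cluster.

```python
# Hypothetical sketch of the switchover sequence. The injected
# execute(host, statement) callable runs SQL on the given node and
# returns the SHOW SLAVE STATUS row as a dict.
def failback(execute, candidate, slaves, rpl_user, rpl_password):
    # Steps 1-2 (provisioning, stopping the application) happen outside SQL.
    # Step 3: verify the candidate and all slaves are caught up.
    for host in [candidate] + slaves:
        status = execute(host, "SHOW SLAVE STATUS")
        if status["Seconds_Behind_Master"] != 0:
            raise RuntimeError("%s is still lagging behind" % host)
    # Step 4: promote the candidate (the old master) as the new leader.
    execute(candidate, "STOP SLAVE")
    execute(candidate, "RESET SLAVE ALL")
    execute(candidate, "SET GLOBAL read_only = 'OFF'")
    # Step 5: repoint all slaves to the new master.
    repoint = ("CHANGE MASTER TO MASTER_HOST = '%s', MASTER_USER = '%s', "
               "MASTER_PASSWORD = '%s', MASTER_AUTO_POSITION = 1"
               % (candidate, rpl_user, rpl_password))
    for host in slaves:
        execute(host, "STOP SLAVE")
        execute(host, repoint)
        execute(host, "START SLAVE")
    # Step 6: the application can now start writing to the new master.
```

Injecting the executor is a deliberate design choice here: the same sequence can be rehearsed against a recording fake before it is ever pointed at production.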

Consider the following replication setup:

"A" was a master until a disk-full event causing havoc to the replication chain. After a failover event, our replication topology was lead by B and replicates onto C till E. The failback exercise will bring back A as the leader and restore the original topology before the disaster. Take note that all nodes are running on MySQL 8.0.15 with GTID enabled. Different major version might use different commands and steps.

This is what our architecture looks like now after failover (taken from ClusterControl's Topology view):

Node Provisioning

Before A can be a master, it must be brought up to date with the current database state. The best way to do this is to turn A into a slave of the active master, B. Since all nodes are configured with log_slave_updates=ON (meaning a slave also produces binary logs), we could actually pick another slave, like C or D, as the source of truth for the initial syncing. However, the closer to the active master, the better. Keep in mind the additional load that taking the backup might cause. This part takes up most of the failback time; depending on the node state and dataset size, syncing up the old master could take a while (hours, or even days).

Once the problem on A is resolved and the node is ready to rejoin the replication chain, the best first step is to attempt replicating from B (192.168.0.42) with a CHANGE MASTER statement:

mysql> SET GLOBAL read_only = 1; /* enable read-only */
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.42', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1; /* master information to connect */
mysql> START SLAVE; /* start replication */
mysql> SHOW SLAVE STATUS\G /* check replication status */

If replication works, you should see the following in the replication status:

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

If the replication fails, look at the Last_IO_Error or Last_SQL_Error from slave status output. For example, if you see the following error:

Last_IO_Error: error connecting to master 'rpl_user@192.168.0.42:3306' - retry-time: 60  retries: 2

Then, we have to create the replication user on the current active master, B:

mysql> CREATE USER 'rpl_user'@'192.168.0.41' IDENTIFIED BY 'p4ss';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.0.41';

Then, restart the slave on A to start replicating again:

mysql> STOP SLAVE;
mysql> START SLAVE;

Another common error you might see is this line:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: ...

That probably means the slave is having problems reading the binary log file from the current master. On some occasions, the slave might be so far behind that the binary log events required to start replication are missing from the current master, or the binary logs on the master were purged during the failover, and so on. In this case, the best way is to perform a full sync: take a full backup on B and restore it on A. On B, you can use either mysqldump or Percona XtraBackup to take the backup:

$ mysqldump -uroot -p --all-databases --single-transaction --triggers --routines > dump.sql # for mysqldump
$ xtrabackup --defaults-file=/etc/my.cnf --backup --parallel 1 --stream=xbstream --no-timestamp | gzip -6 - > backup-full-2019-04-16_071649.xbstream.gz # for xtrabackup

Transfer the backup file to A, reinitialize the existing MySQL installation for a proper cleanup and perform database restoration:

$ systemctl stop mysqld # if mysql is still running
$ rm -Rf /var/lib/mysql # wipe out old data
$ mysqld --initialize --user=mysql # initialize database
$ systemctl start mysqld # start mysql
$ grep -i 'temporary password' /var/log/mysql/mysqld.log # retrieve the temporary root password
$ mysql -uroot -p -e 'ALTER USER root@localhost IDENTIFIED BY "p455word"' # mandatory root password update
$ mysql -uroot -p < dump.sql # restore the backup using the new root password

Once restored, setup the replication link to the active master B (192.168.0.42) and enable read-only. On A, run the following statements:

mysql> SET GLOBAL read_only = 1; /* enable read-only */
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.42', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1; /* master information to connect */
mysql> START SLAVE; /* start replication */
mysql> SHOW SLAVE STATUS\G /* check replication status */

For Percona XtraBackup, please refer to the documentation page on how to restore to A. It involves a prerequisite step of preparing the backup before replacing the MySQL data directory.

Once A has started replicating correctly, monitor Seconds_Behind_Master in the slave status. This will give you an idea of how far behind the slave is and how long you need to wait for it to catch up. At this point, our architecture looks like this:

Once Seconds_Behind_Master falls back to 0, that's the moment when A has caught up as an up-to-date slave.

If you are using ClusterControl, you have the option to resync the node by restoring from an existing backup, or to create and stream a backup directly from the active master node:

Staging the slave from an existing backup is the recommended way to rebuild it, since this approach doesn't impact the active master server while preparing the node.

Promote the Old Master

Before promoting A as the new master, the safest way is to stop all write operations on B. If this is not possible, simply force B to operate in read-only mode:

mysql> SET GLOBAL read_only = 'ON';
mysql> SET GLOBAL super_read_only = 'ON';

Then, on A, run SHOW SLAVE STATUS and check the following replication status:

Read_Master_Log_Pos: 45889974
Exec_Master_Log_Pos: 45889974
Seconds_Behind_Master: 0
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates

The values of Read_Master_Log_Pos and Exec_Master_Log_Pos must be identical, Seconds_Behind_Master must be 0, and the state must be 'Slave has read all relay log'. Make sure that all slaves have processed any statements in their relay log; otherwise you risk that new queries will affect transactions from the relay log, triggering all sorts of problems (for example, an application may remove some rows which are accessed by transactions from the relay log).
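The promotion safety check described above can be expressed as a small helper. This is an illustrative Python sketch, not a MySQL API; it assumes the SHOW SLAVE STATUS row has already been fetched into a dict:

```python
# Given a SHOW SLAVE STATUS row as a dict, decide whether the slave has
# fully drained its relay log and is safe to promote: read and executed
# positions match, lag is zero, and the SQL thread is idle waiting.
def safe_to_promote(status):
    return (status["Read_Master_Log_Pos"] == status["Exec_Master_Log_Pos"]
            and status["Seconds_Behind_Master"] == 0
            and status["Slave_SQL_Running_State"].startswith(
                "Slave has read all relay log"))
```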

On A, stop the replication and use the RESET SLAVE ALL statement to remove all replication-related configuration, then disable read-only mode:

mysql> STOP SLAVE;
mysql> RESET SLAVE ALL;
mysql> SET GLOBAL read_only = 'OFF';
mysql> SET GLOBAL super_read_only = 'OFF';

At this point, A is ready to accept writes (read_only=OFF); however, no slaves are connected to it yet, as illustrated below:

For ClusterControl users, promoting A can be done with the "Promote Slave" feature under Node Actions. ClusterControl will automatically demote the active master B, promote slave A as master, and repoint C and D to replicate from A. B will be put aside, and the user has to explicitly choose "Change Replication Master" to rejoin B as a slave of A at a later stage.


Slave Repointing

It's now safe to change the master on related slaves to replicate from A (192.168.0.41). On all slaves except E, configure the following:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.41', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1;
mysql> START SLAVE;

If you are a ClusterControl user, you may skip this step, as repointing is performed automatically when you promote A as described previously.

We can then start our application writing to A. At this point, our architecture looks something like this:

From ClusterControl topology view, we have restored our replication cluster to its original architecture which looks like this:

Take note that a failback exercise is much less risky compared to failover. Still, it's important to schedule this exercise during off-peak hours to minimize the impact on your business.

Final Thoughts

Failover and failback operations must be performed carefully. The operations are fairly simple if you have a small number of nodes, but for many nodes with a complex replication chain, they can be a risky and error-prone exercise. We also showed how ClusterControl can be used to simplify complex operations by performing them through the UI, and the topology view is visualized in real time so that you have a clear understanding of the replication topology you want to build.

Creating Custom Sysbench Scripts

sysbench-lua for benchmark tooling

Sysbench has long been established as the de facto standard when it comes to benchmarking MySQL performance. Percona relies on it daily, and even Oracle uses it when blogging about new features in MySQL 8. Sysbench comes with several pre-defined benchmarking tests. These tests are written in an easy-to-understand scripting language called Lua. Some of these tests are called: oltp_read_write, oltp_point_select, tpcc, oltp_insert. There are over ten such scripts to emulate various behaviors found in standard OLTP applications.

But what if your application does not fit the pattern of traditional OLTP? How can you continue to utilize the power of load-testing, benchmarking, and results analysis with sysbench? Just write your own Lua script!

For those that want to jump ahead and see the full source, here you go.

Sysbench API

To start off, each Lua script you create must implement three core sysbench-Lua API functions. These are thread_init, thread_done, and event. You can read the comments in the code below for the meaning of each function and what is happening inside.

-- Called by sysbench one time to initialize this script
function thread_init()
  -- Create globals to be used elsewhere in the script
  -- drv - initialize the sysbench mysql driver
  drv = sysbench.sql.driver()
  -- con - represents the connection to MySQL
  con = drv:connect()
end
-- Called by sysbench when script is done executing
function thread_done()
  -- Disconnect/close connection to MySQL
  con:disconnect()
end
-- Called by sysbench for each execution
function event()
  -- If user requested to disable transactions,
  -- do not execute BEGIN statement
  if not sysbench.opt.skip_trx then
    con:query("BEGIN")
  end
  -- Run our custom statements
  execute_selects()
  execute_inserts()
  -- Like above, if transactions are disabled,
  -- do not execute COMMIT
  if not sysbench.opt.skip_trx then
    con:query("COMMIT")
  end
end

That’s all pretty simple and should function as a good template in your scripts. Now let’s take a look at the rest of the script.

Sanity checks and options

Now let’s get into the core code. At the top you’ll find the following sections:

if sysbench.cmdline.command == nil then
   error("Command is required. Supported commands: run")
end
sysbench.cmdline.options = {
  point_selects = {"Number of point SELECT queries to run", 5},
  skip_trx = {"Do not use BEGIN/COMMIT; Use global auto_commit value", false}
}

The first section is a sanity check to make sure the user actually wants to run this test. Other test scripts, mentioned above, support commands like prepare, run, and cleanup. Our script only supports run as the data we are using is pre-populated by our core application.

The second section allows us, the script writer, to let the user pass some options specific to our test script. In the code above, we can see an option for the number of point SELECT statements that will be run on each thread/iteration (the default is 5), and another option that allows the user to disable BEGIN/COMMIT if they so desire (the default is false). If you want more customization in your script, simply add more options. You’ll see how to reference these parameters later on.

The queries

Now it is time to define the custom queries we want to execute in our script.

-- Array of categories to be use in the INSERTs
local page_types = { "actor", "character", "movie" }
-- Array of COUNT(*) queries
local select_counts = {
  "SELECT COUNT(*) FROM imdb.title"
}
-- Array of SELECT statements that have 1 integer parameter
local select_points = {
  "SELECT * FROM imdb.title WHERE id = %d",
  "SELECT * FROM imdb.comments ORDER BY id DESC limit 10",
  "SELECT AVG(rating) avg FROM imdb.movie_ratings WHERE movie_id = %d",
  "SELECT * FROM imdb.users ORDER BY RAND() LIMIT 1"
}
-- Array of SELECT statements that have 1 string parameter
local select_string = {
  "SELECT * FROM imdb.title WHERE title LIKE '%s%%'"
}
-- INSERT statements
local inserts = {
  "INSERT INTO imdb.users (email_address, first_name, last_name) VALUES ('%s', '%s', '%s')",
  "INSERT INTO imdb.page_views (type, viewed_id, user_id) VALUES ('%s', %d, %d)"
}

The above code defines several arrays/lists of different queries. Why is this necessary? Later on in the code, we will have to parse each SQL statement and populate/replace the various parameters with randomly generated values. It would not do us any good to repeat the same SELECT * FROM fooTable WHERE id = 44 every time, now would it? Certainly not. We want to generate random numbers and have our queries select from the entire dataset.

Some queries have no parameters, some have integer-based, and some string-based. We will handle these differently below, which is why they are in different arrays above. This method also allows for future expansion. When you want to run additional queries within the script, just add another line to each array; no need to change any other code.
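To illustrate the idea (in Python rather than Lua, and purely as a sketch of the technique, using two of the templates from the arrays above), picking a random template and filling its placeholder with a random value looks like this:

```python
import random

# Pick a random query template and substitute its %d placeholder with a
# random id, so each execution hits a different part of the dataset.
select_points = [
    "SELECT * FROM imdb.title WHERE id = %d",
    "SELECT AVG(rating) avg FROM imdb.movie_ratings WHERE movie_id = %d",
]

def random_point_select(max_id=3000000):
    template = random.choice(select_points)
    return template % random.randint(1, max_id)
```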

Parse and execute

The function below, execute_selects, will be called from the parent function, event, which we discussed earlier in the post. You can see for-loops for each of the three SELECT categories we created above. The inline comments should help explain what is happening. Note the use of the user-provided option --point-selects in the second loop below, which we created previously in the 'Sanity checks and options' section.

function execute_selects()
  -- Execute each simple, no parameters, SELECT
  for i, o in ipairs(select_counts) do
    con:query(o)
  end
  -- Loop for however many queries the
  -- user wants to execute in this category
  for i = 1, sysbench.opt.point_selects do
    -- select random query from list
    local randQuery = select_points[math.random(#select_points)]
    -- generate random id and execute
    local id = sysbench.rand.pareto(1, 3000000)
    con:query(string.format(randQuery, id))
  end
  -- generate random string and execute
  for i, o in ipairs(select_string) do
    local str = sysbench.rand.string(string.rep("@", sysbench.rand.special(2, 15)))
    con:query(string.format(o, str))
  end
end

Two more things to mention about this code. First, you will notice the use of sysbench.rand.pareto to generate a random number between 1 and 3,000,000. For our dataset, we know that each table referenced in the queries relating to WHERE id = ? has at least that many rows. This is specific to our data; your values will certainly be different. Second, notice the use of sysbench.rand.string and string.rep. The string.rep segment will generate a string comprised of ‘@’ symbols, between 2 and 15 characters long. That string of ‘@’ symbols will then be passed to sysbench.rand.string, which will swap out each ‘@’ for a random alphanumeric value. For example, ‘@@@@@@’ could be changed to ‘Hk9EdC’, which will then replace the ‘%s’ inside the query string (string.format) and be executed.
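A pure-Python emulation of that '@'-template idiom, purely for illustration (sysbench itself does this in Lua via its built-ins), might look like:

```python
import random
import string

# Emulate sysbench.rand.string: swap every '@' in the template for a
# random alphanumeric character, leaving other characters untouched.
def rand_string(template):
    alnum = string.ascii_letters + string.digits
    return "".join(random.choice(alnum) if ch == "@" else ch
                   for ch in template)

# Emulate string.rep("@", n) with a random length between 2 and 15.
def random_like_fragment():
    template = "@" * random.randint(2, 15)
    return rand_string(template)
```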

Handle inserts

Our INSERT statements require values. Again, sysbench calls the function execute_inserts from event on each iteration. Inside execute_inserts, we generate some fake string data using built-in functions described above.

Those strings are then formatted into the SQL and executed.

function create_random_email()
  local username = sysbench.rand.string(string.rep("@",sysbench.rand.uniform(5,10)))
  local domain = sysbench.rand.string(string.rep("@",sysbench.rand.uniform(5,10)))
  return username .. "@" .. domain .. ".com"
end
function execute_inserts()
  -- generate fake email/info
  local email = create_random_email()
  local firstname = sysbench.rand.string("first-" .. string.rep("@", sysbench.rand.special(2, 15)))
  local lastname = sysbench.rand.string("last-" .. string.rep("@", sysbench.rand.special(2, 15)))
  -- INSERT for new imdb.user
  con:query(string.format(inserts[1], email, firstname, lastname))
  -- INSERT for imdb.page_view
  local page = page_types[math.random(#page_types)]
  con:query(string.format(inserts[2], page, sysbench.rand.special(2, 500000), sysbench.rand.special(2, 500000)))
end

Example run

$ sysbench imdb_workload.lua \
    --mysql-user=imdb --mysql-password=imdb \
    --mysql-db=imdb --report-interval=1 \
    --events=0 --time=0 run
WARNING: Both event and time limits are disabled, running an endless test
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 1
Report intermediate results every 1 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 1s ] thds: 1 tps: 15.96 qps: 177.54 (r/w/o: 112.71/31.92/32.91) lat (ms,95%): 158.63 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 1 tps: 15.01 qps: 169.09 (r/w/o: 109.06/30.02/30.02) lat (ms,95%): 137.35 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 1 tps: 26.00 qps: 285.00 (r/w/o: 181.00/52.00/52.00) lat (ms,95%): 108.68 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 1 tps: 15.00 qps: 170.00 (r/w/o: 108.00/32.00/30.00) lat (ms,95%): 164.45 err/s: 0.00 reconn/s: 0.00

And there we have it! Custom queries specific to our application and dataset. Most of the sysbench parameters are self-explanatory, but let me talk about --report-interval=1, which shows statistics every 1 second. Normally sysbench does not output stats until the end of the run; however, the example execution will run forever (--events=0 --time=0), so we need stats to show all the time. You can adjust the parameters to your liking. For instance, if you only want to run a test for 5 minutes, set --events=0 --time=300.

Conclusion

Sysbench is a very well designed application that allows you to load-test your MySQL instances using pre-defined and custom queries. Using the Lua scripting language, you can create just about any scenario to fit your needs. The above is just one example that we use within Percona’s Training and Education department. It is by no means an exhaustive example of all of the capabilities of sysbench-Lua.


Photo by Lachlan Donald on Unsplash

MySQL Connector/Python 8.0.16 has been released


Dear MySQL users,

MySQL Connector/Python 8.0.16 is the latest GA release version of the
MySQL Connector Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

Please note that the X DevAPI requires at least MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/Python 8.0.16, see the “General Available
(GA) releases” tab at http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.16 (2019-04-25, General
Availability)


     * Functionality Added or Changed

     * Bugs Fixed

Functionality Added or Changed


     * Two informative text files were added: INFO_BIN contains
       information about the build environment used to produce
       the distribution, and INFO_SRC provides information about
       the product version and the source repository from which
       the distribution was produced. (Bug #29454706)

     * Django 1.11 is now the minimum supported Django version.

     * For X DevAPI applications, Connector/Python now supports
       the ability to send connection attributes (key-value
       pairs that application programs can pass to the server at
       connect time). Connector/Python defines a default set of
       attributes, which can be disabled or enabled. In
       addition, applications can specify attributes to be
       passed in addition to the default attributes. The default
       behavior is to send the default attribute set.
       For X DevAPI applications, specify connection attributes
       as a connection-attributes parameter in a connection
       string, or by setting connection-attributes as a dictionary
       inside the connection settings parameter under the
       connection-attributes key. Both the mysqlx.get_session()
       and mysqlx.get_client() methods can receive this
       information.
       The connection-attributes parameter value must be empty
       (the same as specifying true), a Boolean value (true or
       false to enable or disable the default attribute set), or
       a list of zero or more key=value specifiers separated by
       commas (to be sent in addition to the default attribute
       set). Within a list, a missing key value evaluates as an
       empty string. An example connection string:
mysqlx://user:password@host:33060/schema?connection-attributes=[foo=bar,baz=qux,quux]

       Application-defined attribute names cannot begin with _
       because such names are reserved for internal attributes.
       If connection attributes are not specified in a valid
       way, an error occurs and the connection attempt fails.
       For general information about connection attributes, see
       Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

     * Connector/Python now has improved support for resetting
       sessions in connection pools. Returning a session to the
       pool drops session-related objects such as temporary
       tables, session variables, and transactions, but the
       connection remains open and authenticated so that
       reauthentication is not required when the session is
       reused.

     * Protobuf was updated to Protobuf 3.6.1.

     * For X DevAPI, performance for statements that are
       executed repeatedly (two or more times) is improved by
       using server-side prepared statements for the second and
       subsequent executions. This happens internally;
       applications need take no action and API behavior should
       be the same as previously. For statements that change,
       repreparation occurs as needed. Providing different data
       values or different offset() or limit() values does not
       count as a change. Instead, the new values are passed to
       a new invocation of the previously prepared statement.
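The connection-attributes grammar described above (an empty or boolean value toggles the default set; a bracketed list adds key=value pairs, with a missing value read as an empty string and names starting with _ rejected) can be sketched as a toy parser. This is illustrative Python only, not Connector/Python's actual implementation:

```python
# Toy parser for connection-attributes values, following the rules in
# the release notes above. Returns whether the default attribute set is
# enabled plus any extra application-defined attributes.
def parse_connection_attributes(value):
    if value in ("", "true"):
        return {"defaults": True, "extra": {}}
    if value == "false":
        return {"defaults": False, "extra": {}}
    if value.startswith("[") and value.endswith("]"):
        extra = {}
        for item in filter(None, value[1:-1].split(",")):
            key, _, val = item.partition("=")
            if key.startswith("_"):
                raise ValueError("names starting with _ are reserved")
            extra[key] = val  # missing value evaluates as ""
        return {"defaults": True, "extra": extra}
    raise ValueError("invalid connection-attributes value: %r" % value)
```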

Bugs Fixed


     * Added a "username" alias for the "user" connection
       argument. Thanks to Matthew Woods for the patch. (Bug
       #29324966, Bug #94248)

     * Solaris 11 package files had the expected owner/group set
       as pb2user/common instead of root/bin. (Bug #29278489)

     * CRUD operations would not allow referencing a renamed
       column (AS SomeLabel) from the fetched result. (Bug
       #29001628)

     * Fixed a memory corruption issue that caused an unexpected
       halt when fetching fields. (Bug #28479054)

     * Querying an empty LONG BLOB raised an IndexError
       exception. (Bug #27897881, Bug #90519)

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Balasubramanian Kandasamy

MySQL Workbench 8.0.16 has been released

The MySQL 8.0.16 Maintenance Release is Generally Available


The MySQL Development team is very happy to announce that MySQL 8.0.16 is now available for download at dev.mysql.com. In addition to bug fixes, there are a few new features added in this release. Please download 8.0.16 from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories.…

Announcing MySQL Server 8.0.16, 5.7.26 and 5.6.44

MySQL Server 8.0.16, 5.7.26 and 5.6.44, new versions of the popular Open Source Database Management System, have been released. These releases are recommended for use on production systems. For an overview of what’s new, please see http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html For information on installing the release on new servers, please see the MySQL installation documentation at […]

How to Install Phorum with Nginx on Ubuntu 18.04 LTS

Phorum is a PHP and MySQL based open source forum software. In this guide, we will walk you step-by-step through the Phorum installation process on the Ubuntu 18.04 LTS operating system, using Nginx as the web server, MySQL as the database, and acme.sh with Let's Encrypt for HTTPS.

Use Artificial Intelligence to Suggest 1-5 Star Ratings


A handful of products rely on reviews and star ratings for product feedback. When customers are impressed or dissatisfied with our product, they come back to where it was purchased looking for a way to leave feedback. Apps and e-commerce products are examples of scenarios where we would love to hear what the customer thinks.

Here's what we'll build. Notice how the stars are generated based on our text!

TLDR: How does it work!?

We'll create a serverless function that accepts text. We'll analyze the text using Cognitive Services by Azure. We'll display the stars using React.

Review Flow (1-5 Stars)

The common flow for requesting feedback on our products might start with asking for a rating ("1-5 stars", for example), followed by a request for more personal observations that give context to the rating. This additional context gives future customers more information about whether they might also like the product being reviewed.

The other strategy is to present the customer with both the form to write their review and the star rating control at the same time.

Amazon.com

What if we take the user experience up one level and suggest stars for them based on the sentiment in their review text?

This would look like the customer typing while we automatically rate the product in real time.

  • For each keystroke or set of keystrokes, we will analyze what the customer is typing, generate a score based on the sentiment of the customer’s input, then star the product.
  • The customer can also click on the stars to rate directly before writing the feedback, or if they don't agree with the suggestion.

In this article, I will walk you through all the steps to make this possible. You can use it as an idea for a related project or read to get more excited about Artificial Intelligence.

The project is divided into 2 sections:

  1. A serverless function that handles the requests and analysis
  2. A React app for the UI

Serverless function for our server

One of the reasons serverless functions are shining these days is that it takes only five minutes to set up a scalable backend for your products at any complexity level. I use Azure Functions a lot because it integrates nicely with VS Code, so I end up not having to go to a dashboard in the browser or even the terminal to test and deploy servers.

If you have not set up a function before or need a refresher, take a look at my detailed yet quick 5 minute post on how to set up a serverless function using VS Code.

Create a new function called autostar in your function project/folder using HTTPTrigger. You should get the default function’s template in autostar/index.js.

Getting started with Cognitive Services (CogS)

Cognitive Services is an Artificial Intelligence offering that gives you pre-trained models. These models help you with things like text analysis, sentiment analysis, facial recognition, speech analysis, and a lot more.

What this means is that you will NOT need to manually use data to train a system before it can be smart enough to answer some questions like:

  1. Which celebrity is in this picture?
  2. How sad is the text I sent?
  3. What is the license number of the cars that visited my hotel today?
  4. How happy is my grandmother this morning?
  5. Is this customer old enough to order a beer?
  6. Who said that?

With CogS, you just send an audio, video, image, or text file to an endpoint, then get back some analysis and answers: enough to solve the questions we listed above.

Get a CogS endpoint and key

You need an API endpoint to send your requests to and a key to authenticate that it’s really you.

  1. Go to Try Cognitive Service.
  2. Choose the Language APIs tab
  3. Click Get API Key.
  4. Pick one of the signup options that works for you. You can pick something very temporary just for learning and fun.
  5. Copy your Endpoint and Keys. You only need one of the keys for any given request.

Requesting text (sentiment) analysis

Sentiment analysis is basically analyzing what someone said and weighing how positive or negative the content is. Is it sad? Is it happy? Is it hateful? This is what we need to determine how many stars a piece of feedback is worth.

This is what the lifecycle of our App will look like when trying to analyze feedback:

The React app will send the feedback content to the serverless endpoint. The serverless endpoint validates the request and sends a payload to the CogS endpoint for analysis. After analyzing, the CogS endpoint will send the sentiment payload back to the serverless function and the serverless function forwards this response to the React app for the stars.

Delete everything in autostar/index.js and add the following:

const fetch = require('node-fetch');

module.exports = async function(context, req) {
  const accessKey = '<KEY>';
  const baseUri = 'https://<center>.api.cognitive.microsoft.com';
  const sentimentPath = '/text/analytics/v2.0/sentiment';

  const feedback = req.query.feedback || (req.body && req.body.feedback);

  if (feedback) {
    const documents = {
      documents: [{ id: 1, language: 'en', text: feedback }]
    };
    const params = {
      method: 'post',
      body: JSON.stringify(documents),
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': accessKey
      }
    };
    const res = await fetch(baseUri + sentimentPath, params);
    const json = await res.json();
    context.res = { body: json };
  } else {
    context.res = {
      status: 400,
      body: {
        error:
          'Please pass a feedback text on the query string or in the request body'
      }
    };
  }
};

When a request hits this function, it checks whether feedback can be found in the query string or body. It then assembles the feedback into a documents array and sends a request to CogS. Notice how we pass the key using the Ocp-Apim-Subscription-Key header.
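The JSON that CogS sends back (and that the function forwards to the React app) contains a documents array with a sentiment score per analyzed item. As a standalone sketch, extracting that score defensively might look like this (the helper name and the sample values are mine, limited to the fields this tutorial relies on):

```javascript
// Minimal sketch: pull the sentiment score (0-1) out of a CogS
// sentiment response. Only the fields used in this tutorial are shown.
const extractScore = json =>
  json && Array.isArray(json.documents) && json.documents[0]
    ? json.documents[0].score
    : 0;

// Illustrative response fragment with made-up values:
const sample = { documents: [{ id: '1', score: 0.87 }], errors: [] };
console.log(extractScore(sample)); // 0.87
```

Falling back to 0 keeps the UI from crashing when CogS reports an error instead of a score.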

Rating and submitting feedback with React

We now have the endpoint for our UI to push feedback to, so let's start sending those requests. Create a new React app in CodeSandbox or with the CLI tool:

npx create-react-app auto-star-ui

We need an external library for rating and Lodash's throttle. The throttle function will help us avoid spamming our server with too many requests during the customer's keystrokes:

npm install --save react-rater lodash.throttle

Create a FeedbackForm.js file in the src folder. This will be the component that holds and handles the feedback form. Import the following:

import React from 'react';
import Rater from 'react-rater';
import throttle from 'lodash.throttle'; 
import 'react-rater/lib/react-rater.css';
import './App.css';

I am also importing the App.css file because we will update it with styles that are related to this component.

Create the actual functional component skeleton:

export const FeedbackForm = ({
  handleFormSubmit,
  handleFeedbackChange,
  setRating,
  curRating = 0
}) => {
  // ...
}

The component will receive

  1. handleFormSubmit to be called when the form is submitted.
  2. handleFeedbackChange is where the real work happens. It's called on keystrokes (as the customer types) and sends the request to our serverless function.
  3. setRating is used to update the stars.
  4. curRating is the current rating at any given time.

We need an internal state to hold the feedback text that the user is typing:

export const FeedbackForm = ({
 //...
}) => {
 const [feedback, setFeedback] = React.useState('');
}

If we send a request at every keystroke, we would flood our endpoint with meaningless or wasteful content. A quick strategy to fix this is to throttle the requests. What this means is that we only send a request every x period of time; in our case, every 1.5 seconds. Lodash’s throttle will help us with this.

const [feedback, setFeedback] = React.useState('');

const throttledChangeHandler = React.useRef(
  throttle(feedback => handleFeedbackChange(feedback), 1500)
);

React.useEffect(() => {
  if (!!feedback) {
    return throttledChangeHandler.current(feedback);
  }
}, [feedback]);

We are wrapping the throttle in a useRef because on every render we would otherwise get a new throttle function, which resets everything, so the function would not remember the user's last inputs. I explained this in more detail in an article I wrote previously.

Now we can render the elements, pass in some states and props to them, and handle events:

export const FeedbackForm = ({
 // ...
}) => {
  //...
  return (
    <form>
      <div className="label-stars">
        <label htmlFor="feedback">What do you think?</label>
        <Rater
          total={5}
          rating={curRating}
          onRate={({ rating }) => {
            setRating(rating);
          }}
        />
      </div>
      <textarea
        value={feedback}
        onKeyPress={e => {
          if (e.which === 13 && !e.shiftKey) {
            e.preventDefault();
            handleFormSubmit(feedback, curRating);
            setFeedback('');
          }
        }}
        onChange={e => {
          setFeedback(e.target.value);
        }}
        placeholder="Love or hate? No hard feelings!"
      />
    </form>
  );
};

When we type in the textarea, both onChange() and onKeyPress() are called:

  1. onChange calls setFeedback which updates the state. When the state is updated, the component re-renders and calls useEffect which we saw previously. When useEffect is called, it runs throttledChangeHandler and then handleFeedbackChange is called.
  2. onKeyPress just checks if we hit Enter without holding Shift. If we held Shift, it would move to a new line. If we didn't, it would submit the feedback.

I am also updating the rater through the onRate property and setting the current rating with the rating property.

Sending requests from React

We are just passing down handleFeedbackChange, handleFormSubmit, and setRating but we don’t know what they look like. We are now going to render FeedbackForm in App and see how those props get passed down and what those props look like.

Delete the content of ./src/App.js and update with the following:

import React, { Component } from 'react';
import { FeedbackForm } from './FeedbackForm';
import './App.css';

class App extends Component {
  state = {
    curRating: 0,
  };

  // ...
  render() {
    return (
      <div className="App">
        <FeedbackForm
          curRating={this.state.curRating}
          handleFeedbackChange={this.handleFeedbackChange}
          handleFormSubmit={this.handleFormSubmit}
          setRating={this.setRating}
        />
      </div>
    );
  }
}
export default App;

Next, add the setRating and handleFeedbackChange instance methods to the class:

setRating = (rating) => {
  this.setState({ curRating: rating });
}
handleFeedbackChange = feedback => {
  fetch('http://localhost:7071/api/autoStar', {
    method: 'post',
    body: JSON.stringify({ feedback }), 
    headers: {
      'Content-Type': 'application/json'
    }
  })
    .then(res => res.json())
    .then(json => {
      const sentimentScore = json.documents && json.documents[0] ? json.documents[0].score : 0;
      this.setState({
        curRating: Math.round((sentimentScore * 10) / 2)
      });
    });
};

setRating simply updates the curRating state item which is also passed as a prop to FeedbackForm.

handleFeedbackChange sends a request to our serverless function with the user's feedback attached to the body as the payload. When a response is returned, we expect to have a sentiment score if no error occurred. Lastly, in the response callback, we update the curRating state with a star value. We multiply by 10 and divide by 2 because CogS returns its score as a value between 0 and 1.
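The score-to-stars conversion can be sketched as a standalone helper (the helper name is mine, not part of the project; the conversion itself mirrors the expression in handleFeedbackChange):

```javascript
// Convert a CogS sentiment score (0-1) into a star value by
// scaling it to 0-10 and halving, then rounding to the nearest star.
const scoreToStars = score => Math.round((score * 10) / 2);

console.log(scoreToStars(0.92)); // 5
console.log(scoreToStars(0.5));  // 3
console.log(scoreToStars(0.1));  // 1
```

Note that a very negative score (below 0.1) rounds down to 0 stars, so you may want to clamp the result to a minimum of 1 if your UI always shows at least one star.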

What’s Next?

You might have noticed that I did not implement what happens when you submit. I left this one up to you. It's less difficult than what we just learned. You need to set up a database of your choice (maybe Cosmos DB, since it integrates well with Functions) and create one more function to receive the create request.
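If you want a starting point, a minimal sketch of the submit side might look like this (the saveReview endpoint name and the payload shape are my assumptions, not part of the finished project):

```javascript
// Hypothetical: build the review payload and post it to a second
// serverless function that would persist it to a database.
const buildReviewPayload = (feedback, rating) =>
  JSON.stringify({ feedback, rating, submittedAt: new Date().toISOString() });

const handleFormSubmit = (feedback, rating) =>
  fetch('http://localhost:7071/api/saveReview', {
    method: 'post',
    body: buildReviewPayload(feedback, rating),
    headers: { 'Content-Type': 'application/json' }
  });
```

The receiving function would then insert the parsed payload into your database of choice.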

If you get stuck, you can take a look at the GitHub repo, which has the completed project. You can also have a look at the final demo to play with and see how things are expected to work.

I strongly suggest you take a look at the Mojifier project. It's a project that uses CogS to suggest emojis based on facial expressions: similar to what you learned here, but with images and faces.

Watch Webinar: Billion Goods in Few Categories – How Histograms Save a Life?

Webinar: Billion Goods in Few Categories: How Histograms Save a Life?


Please join Percona’s Principal Support Engineer Sveta Smirnova as she presents Billion Goods in Few Categories: How Histograms Save a Life?

Watch the Recorded Webinar

We store data with the intention to use it: search, retrieve, group, sort, etc. To perform these actions effectively, MySQL storage engines index data and communicate statistics with the Optimizer when it compiles a query execution plan. This approach works perfectly well unless your data distribution is uneven.

Last year I worked on several tickets where data followed the same pattern: millions of popular products fit into a couple of categories and the rest used all the others. We had a hard time finding a solution for retrieving goods fast. Workarounds for version 5.7 were offered. However, we learned a new MySQL 8.0 feature – histograms – would work better, cleaner, and faster. Thus, the idea of our talk was born.

In this webinar, we will discuss:
– How index statistics are physically stored
– Which data is exchanged with the Optimizer
– Why it is not enough to make a correct index choice

In the end, I will explain which issues are resolved by histograms, and we will discuss why using index statistics is insufficient for the fast retrieval of unevenly distributed data.

MySQL Connector/C++ 8.0.16 has been released


Dear MySQL users,

MySQL Connector/C++ 8.0.16 is a new release version of the MySQL Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document Store or in a traditional way, using SQL queries. It allows writing both C++ and plain C applications using X DevAPI and X DevAPI for C. It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see “X DevAPI User Guide” at

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.16, see the “Generally Available (GA) Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/

Changes in MySQL Connector/C++ 8.0.16 (2019-04-25, General Availability)

Character Set Support

     * Connector/C++ supports all Unicode character sets for
       connections to servers for MySQL 8.0.14 and higher, but
       previously had Unicode support limited to the utf8
       character set for servers older than MySQL 8.0.14.
       Connector/C++ now supports all Unicode character sets for
       older servers, including utf8mb4, utf16, utf16le, utf32,
       and ucs2. (Bug #28966038)

Compilation Notes

     * Thanks to Daniël van Eeden, who contributed a code change
       to use the stdbool.h header file rather than a bool
       typedef. (Bug #29167120, Bug #93803)

     * Thanks to Daniël van Eeden, who contributed a code change
       to use lib instead of lib64 on 64-bit FreeBSD.
       (Bug #29167098, Bug #93801)

     * Previously, for Connector/C++ applications that used the
       legacy JDBC API, source files had to use this set of
       #include directives:

         #include <jdbc/mysql_driver.h>
         #include <jdbc/mysql_connection.h>
         #include <jdbc/cppconn/*.h>

       Now a single #include directive suffices:

         #include <mysql/jdbc.h>

Configuration Notes

     * Thanks to Daniël van Eeden, who contributed a code change
       to build the documentation as part of the all target if
       Connector/C++ is configured with -DWITH_DOC=ON.
       (Bug #29167107, Bug #93802)

     * Previously, for Connector/C++ 8.0 applications that use
       the legacy JDBC connector, only static linking to the
       MySQL client library was supported. The
       MYSQLCLIENT_STATIC_LINKING and MYSQLCLIENT_STATIC_BINDING
       CMake options are now available to permit dynamic
       linking. By default, MYSQLCLIENT_STATIC_LINKING is
       enabled, to use static linking to the client library.
       Disable this option to use dynamic linking. If
       MYSQLCLIENT_STATIC_LINKING is enabled,
       MYSQLCLIENT_STATIC_BINDING may also be used. If
       MYSQLCLIENT_STATIC_BINDING is enabled (the default),
       Connector/C++ is linked to the shared MySQL client
       library. Otherwise, the shared MySQL client library is
       loaded and mapped at runtime.

     * Connector/C++ 8.0 configuration now requires a minimum
       CMake version of 3.0.

Packaging Notes

     * Connector/C++ debug packages are now available for Linux
       and Windows. The packages enable symbolic debugging using
       tools such as gdb on Linux and windbg on Windows, as well
       as obtaining symbolic information about connector code
       locations from application crash dumps. Use of the debug
       packages requires that you have installed and configured
       the Connector/C++ sources.
       (Bug #29117059, Bug #93645, Bug #26128420, Bug #86415)

     * For improved GitHub friendliness, Community Connector/C++
       source distributions now include a CONTRIBUTING.md
       markdown file. CONTRIBUTING.md contains guidelines
       intended to be helpful to contributors.

     * The Protobuf sources bundled in the Connector/C++ source
       tree were updated to Protobuf 3.6.1. (Only the parts
       needed for Connector/C++ are included, to reduce
       compilation time.)

Prepared Statement Notes

     * For X DevAPI and X DevAPI for C, performance for
       statements that are executed repeatedly (two or more
       times) is improved by using server-side prepared
       statements for the second and subsequent executions. This
       happens internally; applications need take no action and
       API behavior should be the same as previously. For
       statements that change, repreparation occurs as needed.
       Providing different data values or different OFFSET or
       LIMIT clause values does not count as a change. Instead,
       the new values are passed to a new invocation of the
       previously prepared statement.

X DevAPI Notes

     * For X DevAPI and X DevAPI for C applications,
       Connector/C++ now supports the ability to send connection
       attributes (key-value pairs that application programs can
       pass to the server at connect time). Connector/C++
       defines a default set of attributes, which can be
       disabled or enabled. In addition, applications can
       specify attributes to be passed in addition to the
       default attributes. The default behavior is to send the
       default attribute set.

          + For X DevAPI applications, specify connection
            attributes as a connection-attributes parameter in a
            connection string, or by using a
            SessionOption::CONNECTION_ATTRIBUTES option for the
            SessionSettings constructor.
            The connection-attributes parameter value must be
            empty (the same as specifying true), a Boolean value
            (true or false to enable or disable the default
             attribute set), or a list of zero or more key=value
            specifiers separated by commas (to be sent in
            addition to the default attribute set). Within a
            list, a missing key value evaluates as an empty
            string. Examples:

              "mysqlx://user@host?connection-attributes"
              "mysqlx://user@host?connection-attributes=true"
              "mysqlx://user@host?connection-attributes=false"
              "mysqlx://user@host?connection-attributes=[attr1=val1,attr2,attr3=]"
              "mysqlx://user@host?connection-attributes=[]"

            The SessionOption::CONNECTION_ATTRIBUTES option
            value must be a Boolean value (true or false to
            enable or disable the default attribute set), or a
            DbDoc or JSON string (to be sent in addition to the
            default attribute set). Examples:

                Session sess(..., SessionOption::CONNECTION_ATTRIBUTES, false);
                Session sess(..., SessionOption::CONNECTION_ATTRIBUTES, attr_doc );
                Session sess(..., SessionOption::CONNECTION_ATTRIBUTES,
                  R"({ "attr1": "val1", "attr2" : "val2" })"
                );

          + For X DevAPI for C applications, specify connection
            attributes using the OPT_CONNECTION_ATTRIBUTES()
            macro for the mysqlx_session_option_set() function.
            The option value must be null (to disable the
            default attribute set) or a JSON string (to be sent
            in addition to the default attribute set). Examples:

              mysqlx_session_option_set(opts, OPT_CONNECTION_ATTRIBUTES(nullptr));
              mysqlx_session_option_set(opts,
                OPT_CONNECTION_ATTRIBUTES("{ \"attr1\": \"val1\", \"attr2\" : \"val2\" }")
              );

       Application-defined attribute names cannot begin with _
       because such names are reserved for internal attributes.
       If connection attributes are not specified in a valid
       way, an error occurs and the connection attempt fails.
       For general information about connection attributes, see
       Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

X DevAPI for C Notes

     * The signatures for several X DevAPI for C functions have
       been changed to enable better error information to be
       returned to applications by means of a mysqlx_error_t
       handle. These functions are affected:

         mysqlx_client_t*
         mysqlx_get_client_from_url(
           const char *conn_string,
           const char *client_opts,
           mysqlx_error_t **error
         )

         mysqlx_client_t*
         mysqlx_get_client_from_options(
           mysqlx_session_options_t *opt,
           mysqlx_error_t **error
         )

         mysqlx_session_t*
         mysqlx_get_session(
           const char *host, int port,
           const char *user, const char *password,
           const char *database,
           mysqlx_error_t **error
         )

         mysqlx_session_t*
         mysqlx_get_session_from_url(
           const char *conn_string,
           mysqlx_error_t **error
         )

         mysqlx_session_t*
         mysqlx_get_session_from_options(
           mysqlx_session_options_t *opt,
           mysqlx_error_t **error
         )

         mysqlx_session_t *
         mysqlx_get_session_from_client(
           mysqlx_client_t *cli,
           mysqlx_error_t **error
         )

       The final argument in each case is a mysqlx_error_t
       handle into which Connector/C++ stores error information.
       If the argument is a null pointer, Connector/C++ ignores
        it. The application is responsible for freeing non-null
       handles by passing them to mysqlx_free().
       The signature for mysqlx_free() has also been changed to
       accept a void * argument so that it can accept a handle
       of any type. Consequently, other type-specific free
       functions, such as mysqlx_free_options(), are no longer
       needed and are deprecated.
       The preceding modifications change the Connector/C++ API,
       which has these implications:

          + The modifications change the ABI, so the ABI version
            is changed from 1 to 2. This changes the connector
            library names.

          + X DevAPI for C applications to be compiled against
            the new API must be modified to use the new function
            signatures. (X DevAPI applications should build
            without change.)

          + Applications built against the old ABI will not run
            with the new connector library.

          + The API change and ABI version change do not affect
            the legacy JDBC interface, so library names for the
            legacy JDBC connector library do not change and
            legacy applications need not be changed.

          + It is possible to install both the old and new
            libraries. However, installers may remove the old
            libraries, so they may need to be re-added manually
            after installing the new libraries.

Functionality Added or Changed

     * Thanks to Daniël van Eeden, who contributed documentation
       for the mysqlx_column_get_collation() function and
       various corrections in the developer documentation.
       (Bug #29123114, Bug #93665, Bug #29115285, Bug #93640,
        Bug #29122490, Bug #93663)

     * Connector/C++ now has improved support for resetting
       sessions in connection pools. Returning a session to the
       pool drops session-related objects such as temporary
       tables, session variables, and transactions, but the
       connection remains open and authenticated so that
       reauthentication is not required when the session is
       reused.

Bugs Fixed

     * Previously, for the SSL_MODE_VERIFY_IDENTITY connection
       option, Connector/C++ checked whether the host name that
       it used for connecting matched the Common Name value in
       the certificate but not the Subject Alternative Name
       value. Now, if used with OpenSSL 1.0.2 or higher,
       Connector/C++ checks whether the host name matches either
       the Subject Alternative Name value or the Common Name
       value in the server certificate.
       (Bug #28964313, Bug #93301)

     * After repeated calls, mysqlx_get_session_from_client()
       could hang. (Bug #28587287)

     * The SessionSettings/ClientSettings iterator
       implementation was incomplete. (Bug #28502574)

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Kent Boortz

MySQL Connector/Node.js 8.0.16 has been released


Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.16, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires at least MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.16 (2019-04-25, General
Availability)

X DevAPI Notes

* Connector/Node.js now supports connection attributes as
key-value pairs that application programs can pass to the
server. Connector/Node.js defines a default set of
attributes, which can be disabled or enabled. In addition
to these default attributes, applications can also
provide their own set of custom attributes.

+ Specify connection attributes as a
connection-attributes parameter in a connection
string, or by using the connectionAttributes
property using either a plain JavaScript object or
JSON notation to specify the connection
configuration options.
The connection-attributes parameter value must be
either empty (the same as specifying true), a
Boolean value (true or false to enable or disable
the default attribute set), or a list of zero or
more key=value pair specifiers separated by commas
(to be sent in addition to the default attribute
set). Within a list, a missing key value evaluates
as NULL.
The connectionAttributes property allows passing
user-defined attributes to the application using
either a plain JavaScript object or JSON notation to
specify the connection configuration options. Define
each attribute in a nested object under
connectionAttributes, where the property names
match the attribute names and the property values
match the attribute values. Unlike
connection-attributes, and while using plain
JavaScript objects or JSON notation, if the
connectionAttributes object contains duplicate keys
then no error is thrown and the last value specified
for a duplicate object key is chosen as the
effective attribute value.
Examples:
Not sending the default client-defined attributes:
mysqlx.getSession('{ "user": "root", "connectionAttributes": false }')

mysqlx.getSession('mysqlx://root@localhost?connection-attributes=false')

Sending user-defined attributes in addition to the default set:

mysqlx.getSession({ user: 'root', connectionAttributes: { foo: 'bar', baz: 'qux', quux: '' } })

mysqlx.getSession('mysqlx://root@localhost?connection-attributes=[foo=bar,baz=qux,quux]')

Application-defined attribute names cannot begin with _
because such names are reserved for internal attributes.
If connection attributes are not specified in a valid
way, an error occurs and the connection attempt fails.
For general information about connection attributes, see
Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

Functionality Added or Changed

* Optimized the reuse of existing connections through
client.getSession() by only re-authenticating if
required.

* For X DevAPI, performance for statements that are
executed repeatedly (two or more times) is improved by
using server-side prepared statements for the second and
subsequent executions. This happens internally;
applications need take no action and API behavior should
be the same as previously. For statements that change,
repreparation occurs as needed. Providing different data
values or different offset() or limit() values does not
count as a change. Instead, the new values are passed to
a new invocation of the previously prepared statement.

Bugs Fixed

* Idle pooled connections to MySQL Server were not reused,
and instead new connections had to be recreated. (Bug
#29436892)

* Executing client.close() would not close all associated
connections in the connection pool. (Bug #29428477)

* connectTimeout instead of maxIdleTime determined whether
idle connections in the connection pool were reused
rather than creating new connections. (Bug #29427271)

* Released connections from the connection pool were not
being reset and reused; instead new connections were
being made. (Bug #29392088)

* Date values in documents were converted to empty objects
when inserted into a collection. (Bug #29179767, Bug
#93839)

* A queueTimeout value other than 0 (infinite) prevented
the acquisition of old released connections from the
connection pool. (Bug #29179372, Bug #93841)

On Behalf of MySQL/ORACLE RE Team
Gipson Pulla

MySQL Connector/NET 8.0.16 has been released


Dear MySQL users,

MySQL Connector/NET 8.0.16 is the fourth version to support
Entity Framework Core 2.1 and the sixth general availability release
of MySQL Connector/NET to add support for the new X DevAPI, which
enables application developers to write code that combines the
strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET, such as availability
in the GAC, integration with Visual Studio’s Entity Framework Designer,
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.16, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.16 ( 2019-04-25, General Availability )

* Functionality Added or Changed

* Bugs Fixed

Functionality Added or Changed

* Document Store: Support was added for the -> operator to
be used with JSON document paths in relational
statements. For example:
table.Select().Where("additionalinfo->$.hobbies = 'Reading'");

(Bug #29347028)

* Document Store: The performance for statements that are
executed repeatedly (two or more times) is improved by
using server-side prepared statements for the second and
subsequent executions. This happens internally;
applications need take no action and API behavior should
be the same as previously. For statements that change,
repreparation occurs as needed. Providing different data
values or different OFFSET or LIMIT clause values does
not count as a change. Instead, the new values are passed
to a new invocation of the previously prepared statement.

* Document Store: Connector/NET now supports the ability to
send connection attributes (key-value pairs that
application programs can pass to the server at connect
time). Connector/NET defines a default set of attributes,
which can be disabled or enabled. In addition,
applications can specify attributes to be passed together
with the default attributes. The default behavior is to
send the default attribute set.
The aggregate size of connection attribute data sent by a
client is limited by the value of the
performance_schema_session_connect_attrs_size server
variable. The total size of the data package should be
less than the value of the server variable. For X DevAPI
applications, specify connection attributes as a
connection-attributes parameter in a connection string.
For usage information, see Options for X Protocol Only
(http://dev.mysql.com/doc/connector-net/en/connector-net-8-0-connection-options.html#connector-net-8-0-connection-options-xprotocol).
For general information about connection attributes, see
Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).
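As a rough model of that size limit, consider the Python sketch below (illustrative only; the wire encoding is simplified to a byte count and the 512-byte limit is just an example value, not the server variable's default):

```python
# Estimate the aggregate size of connection attributes and compare it to
# the server's performance_schema_session_connect_attrs_size limit.
def attrs_size(attrs):
    # Sum of UTF-8 key and value lengths; the real protocol adds small
    # per-entry length prefixes, so this is a lower bound.
    return sum(len(k.encode("utf-8")) + len(str(v).encode("utf-8"))
               for k, v in attrs.items())

def fits(attrs, session_connect_attrs_size):
    # Data larger than the server variable's value would be truncated.
    return attrs_size(attrs) < session_connect_attrs_size

custom = {"_client_role": "reporting", "region": "eu-west-1"}
ok = fits(custom, 512)  # 512 is an example limit, not the default
```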

* Document Store: Connector/NET now has improved support
for resetting sessions in connection pools. Returning a
session to the pool drops session-related objects such as
temporary tables, session variables, and transactions,
but the connection remains open and authenticated so that
reauthentication is not required when the session is
reused.
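The reset-versus-reconnect distinction can be modeled in a short Python sketch (illustrative only; the class names are invented and no real networking or authentication is performed):

```python
# Returning a session to the pool clears session-scoped state (temp
# tables, session variables, open transaction) but keeps the
# authenticated connection, so reuse needs no re-authentication.
class PooledSession:
    def __init__(self):
        self.authenticated = True       # set once at connect time
        self.temp_tables = []
        self.session_variables = {}
        self.in_transaction = False

    def reset(self):
        # Drop session-related objects; the connection stays open.
        self.temp_tables.clear()
        self.session_variables.clear()
        self.in_transaction = False

class Pool:
    def __init__(self):
        self._idle = []

    def release(self, session):
        session.reset()
        self._idle.append(session)

    def acquire(self):
        # Reuse an idle session if one exists; it is already authenticated.
        return self._idle.pop() if self._idle else PooledSession()

pool = Pool()
s = pool.acquire()
s.session_variables["sql_mode"] = "ANSI"
pool.release(s)
reused = pool.acquire()
```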

* Connector/NET applications now can use certificates in
PEM format to validate SSL connections in addition to the
native PFX format (see Tutorial: Using SSL with Connector/NET
(http://dev.mysql.com/doc/connector-net/en/connector-net-tutorials-ssl.html)).
PEM support applies to both classic MySQL protocol
and X Protocol connections.

Bugs Fixed

* Document Store: All methods able to execute a statement
were unable to execute the same statement a second time.
Now, the values and binding parameters remain available
after the method is executed and string parameters are no
longer converted to numbers. Both changes enable a
follow-on execution to reuse the previous parameters.
(Bug #29249857, Bug #29304767)

* An exception was generated when the MySqlDbType
enumeration was given an explicit value and then passed
as a parameter to the MySqlCommand.Prepare method. (Bug
#28834253, Bug #92912)

* Validation was added to ensure that when a column is of
type TIME and the value is 00:00:00, the value is stored
rather than NULL. (Bug #28383726, Bug #91752)

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

MySQL 8.0.16 Check Constraints

Before MySQL 8.0.16 you could put constraint checks into your Data Definition Language (DDL) when creating tables, but the server ignored them.  There was much gnashing of teeth as users of other databases taunted the MySQL Community with "It is not a real database."
 
But with 8.0.16 this has all changed. You can now have your data constraints checked by the server. Below is an example table with two constraints.

mysql> CREATE TABLE parts 
            (id int, cost decimal(5,2) not null check (cost > 0),
             price decimal(5,2) not null check (price > 1.0)
          );
Query OK, 0 rows affected (0.09 sec)

mysql> insert into parts (id,cost,price) values (1,1.10,2.25);
Query OK, 1 row affected (0.03 sec)

mysql> insert into parts (id,cost,price) values (1,1.10,0.25);
ERROR 3819 (HY000): Check constraint 'parts_chk_2' is violated.


mysql> insert into parts (id,cost,price) values (2,-1.1,4.25);
ERROR 3819 (HY000): Check constraint 'parts_chk_1' is violated.


The first insert above passes the constraints, since both the cost and price values are greater than their minimums. But not so with the next two inserts. So the constraints work!

Finding Out What The Error Message Means

Now, seeing Check constraint 'parts_chk_1' is violated. tells me the data is out of range. But how do I find out what parts_chk_1 is?

The first thing to check is the table itself.


mysql> show create table parts;
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                                                                                                                                                                                                |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| parts | CREATE TABLE `parts` (
  `id` int(11) DEFAULT NULL,
  `cost` decimal(5,2) NOT NULL,
  `price` decimal(5,2) NOT NULL,
  CONSTRAINT `parts_chk_1` CHECK ((`cost` > 0)),
  CONSTRAINT `parts_chk_2` CHECK ((`price` > 1.0))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)


And you can also see the information in the INFORMATION_SCHEMA.

mysql> select * from INFORMATION_SCHEMA.CHECK_CONSTRAINTS;
+--------------------+-------------------+-----------------+-----------------+
| CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | CHECK_CLAUSE    |
+--------------------+-------------------+-----------------+-----------------+
| def                | davetest          | xxx_chk_1       | (`age` > 18)    |
| def                | davetest          | parts_chk_1     | (`cost` > 0)    |
| def                | davetest          | parts_chk_2     | (`price` > 1.0) |
| def                | davetest          | client_chk_1    | (`age` > 17)    |
+--------------------+-------------------+-----------------+-----------------+
4 rows in set (0.00 sec)
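If decoding generated names like parts_chk_1 feels clumsy, you can give the constraints names of your own. A quick sketch using the ALTER TABLE ... DROP CHECK and ADD CONSTRAINT ... CHECK syntax from the MySQL 8.0 manual (the constraint name below is just an example):

```
mysql> ALTER TABLE parts DROP CHECK parts_chk_2;
mysql> ALTER TABLE parts ADD CONSTRAINT price_above_minimum
           CHECK (price > 1.0);
```

A violation then reports the self-documenting name price_above_minimum instead of parts_chk_2.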

MySQL Connector/ODBC 8.0.16 has been released


Dear MySQL users,

MySQL Connector/ODBC 8.0.16 is a new version in the MySQL Connector/ODBC 8.0 series,
the ODBC driver for the MySQL Server.

The available downloads include both a Unicode driver and an ANSI driver based on the
same modern codebase. Please select the driver type you need based on the type of your
application – Unicode or ANSI. Server-side prepared statements are enabled by default.
The driver is suitable for use with the latest MySQL Server version 8.0.

This release of the MySQL ODBC driver conforms to the ODBC 3.8 specification.
It contains implementations of key 3.8 features, including self-identification
as an ODBC 3.8 driver, streaming of output parameters (supported for binary types
only), and support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of platforms
from our download pages at

https://dev.mysql.com/downloads/connector/odbc/

For information on installing, please see the documentation at

https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Enjoy and thanks for the support!

==================================================

Changes in MySQL Connector/ODBC 8.0.16 (2019-04-25, General Availability)

Bugs Fixed

* Connector/ODBC 8.0 is now built with OpenSSL 1.0.2R.
Issues fixed in the new OpenSSL version are described at
http://www.openssl.org/news/vulnerabilities.html. (Bug
#29538143)

* An exception was emitted when fetching the contents of
BLOB/TEXT records after executing a statement as a
server-side prepared statement with a bound parameter.
The workaround is to avoid parameters or to specify
NO_SSPS=1 in the connection string; this allows the
driver to fetch the data. (Bug #29282638, Bug #29512548,
Bug #28790708, Bug #93895, Bug #94545, Bug #92078)

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL Connector/J 8.0.16 has been released


Dear MySQL users,

MySQL Connector/J Version 8.0.16 is the GA release of the 8.0
branch of MySQL Connector/J. It is suitable for use with MySQL Server
versions 8.0, 5.7 and 5.6. It supports the Java Database
Connectivity (JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-16.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.16 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

———————————————————————–

Changes in MySQL Connector/J 8.0.16 (2019-04-25, General Availability)

Functionality Added or Changed

* X DevAPI: Added BigInteger, BigDecimal, and Character as
supported classes whose instances can be passed to a
Table as Expression instances. Also made the error
message clearer when applications try to pass instances
of unsupported classes. (Bug #25650912)

* X DevAPI: Connector/J now supports the ability to send
connection attributes
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html)
(key-value pairs that application programs can pass to the
server at connect time) for X Protocol connections.
Connector/J defines a default set of attributes
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html#performance-schema-connection-attributes-cj),
which can be disabled or enabled. In addition,
applications can specify attributes to be
passed in addition to the default attributes. The default
behavior is to send the default attribute set. See the
description for the new configuration property
xdevapi.connect-attributes for details.
Note
The aggregate size of connection attribute data sent by a
client is limited by the value of the
performance_schema_session_connect_attrs_size server
variable. The total size of the data package should be
less than the value of the server variable, or the
attribute data will be truncated.

* X DevAPI: When using X DevAPI, performance for statements
that are executed repeatedly (two or more times) is
improved by using server-side prepared statements for the
second and subsequent executions. See Working with
Prepared Statements
(http://dev.mysql.com/doc/x-devapi-userguide/en/working-with-prepared-statements.html)
in the X DevAPI User Guide
(http://dev.mysql.com/doc/x-devapi-userguide/en/) for details.

* The version number has been removed from the name of the
Connector/J JAR archive within the RPM packages for
Connector/J. That makes upgrading Connector/J with RPM
packages easier. (Bug #29384853)

* The collation utf8mb4_zh_0900_as_cs has been added to the
CharsetMapping class. (Bug #29244101)

* The following third-party libraries have been removed
from the distribution bundles for Connector/J:

+ Google protobuf for Java (required for using X
DevAPI and for building Connector/J from source)

+ C3P0 (required for building Connector/J from source)

+ JBoss common JDBC wrapper (required for building
Connector/J from source)

+ Simple Logging Facade API (required for using the
logging capabilities provided by the default
implementation of org.slf4j.Logger.Slf4JLogger by
Connector/J, and for building Connector/J from
source)
Users who need those libraries have to obtain them on
their own. See Installing Connector/J from a Binary
Distribution
(http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-binary-installation.html)
and Installing from Source
(http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-installing-source.html)
for details.

Bugs Fixed

* X DevAPI: The method unquoteWorkaround() has been removed
from the ExprParser class, as the workaround is no longer
needed, and it actually produced wrong results in some
cases. (Bug #29257922)

* X DevAPI: Connector/J threw an error when a JSON document
contained only a field with an empty array as its value.
With this fix, Connector/J now takes that as a valid JSON
document. (Bug #28834959, Bug #92819)

* getByte() calls failed on table columns of the
BINARY data type. This was due to issues with string
conversion, which have been corrected with this fix. (Bug
#25650385)

* X DevAPI: Any statements sent after a failed procedure
call caused Connector/J to hang. This was because after
the failed call, Connector/J was not aware that the
result streamer had already been closed by the server.
With this fix, an error is thrown when the procedure call
fails, and the result streamer is nullified. (Bug
#22038729)

* X DevAPI: Unary negative and positive operators inside
expressions were parsed wrongly as binary minus and plus
operators. (Bug #21921956)

* Because the SHOW PROCESSLIST
(http://dev.mysql.com/doc/refman/8.0/en/show-processlist.html)
statement might sometimes cause the server to fail,
Connector/J now avoids using the statement and instead
queries the Performance Schema for the information it
needs. (Bug #29329326)

* Some unnecessary information has been removed from the
Connector/J log. (Bug #29318273)

* In the DatabaseMetaDataUsingInfoSchema interface, the
getProcedureColumns() and getFunctionColumns() methods
returned wrong results for the PRECISION column, and the
getColumns() and getVersionColumns() methods returned
wrong results for the COLUMN_SIZE column. The errors were
due to the wrong handling of the temporal type precision
by Connector/J, which has now been fixed. (Bug #29186870)

* For an SSL connection, after a client disconnected from a
server by calling Connection.close(), the TCP connection
remained in the TIME_WAIT
(http://dev.mysql.com/doc/refman/8.0/en/can-not-connect-to-server.html)
state on the server side. With this fix,
the connection remains in the TIME_WAIT
(http://dev.mysql.com/doc/refman/8.0/en/can-not-connect-to-server.html)
state on the client side instead, in most
cases. (Bug #29054329, Bug #93590)

* The function
LoadBalancedConnectionProxy.getGlobalBlacklist() always
returned an empty map, thus there was never a blacklist
for load-balanced connections. (Bug #28860051, Bug
#93007)

* The redundant file, changelog.gz, has been removed from
the Debian 9 package for Connector/J. The file repeated
the contents of the CHANGES.gz file. (Bug #27786499)

* Using getBytes() to retrieve TEXT data resulted in a
NumberFormatException. With this fix, the proper
exception (SQLDataException), is now thrown. (Bug
#27784363)

* A changeUser() call failed with a java.io.IOException
when the configuration property enablePacketDebug was set
to true for a connection. (Bug #25642021)

* bindings.getBoolean() always returned false. It was due
to a mishandling of data types, which has been corrected
with this fix. (Bug #22931700)

Enjoy and thanks for the support!

On Behalf of MySQL/ORACLE RE Team
Sreedhar S

MySQL Shell 8.0.16 for MySQL Server 8.0 and 5.7 has been released


Dear MySQL users,

MySQL Shell 8.0.16 is a maintenance release of MySQL Shell 8.0 Series (a
component of the MySQL Server). The MySQL Shell is provided under
Oracle’s dual-license.

MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and
5.7. Please upgrade to MySQL Shell 8.0.16.

MySQL Shell is an interactive JavaScript, Python and SQL console
interface, supporting development and administration for the MySQL
Server. It provides APIs implemented in JavaScript and Python that
enable you to work with MySQL InnoDB cluster and use MySQL as a document
store.

The AdminAPI enables you to work with MySQL InnoDB cluster, providing an
integrated solution for high availability and scalability using InnoDB
based MySQL databases, without requiring advanced MySQL expertise. For
more information about how to configure and work with MySQL InnoDB
cluster see

https://dev.mysql.com/doc/refman/en/mysql-innodb-cluster-userguide.html

The X DevAPI enables you to create “schema-less” JSON document
collections and perform Create, Update, Read, Delete (CRUD) operations
on those collections from your favorite scripting language. For more
information about how to use MySQL Shell and the MySQL Document Store
support see

https://dev.mysql.com/doc/refman/en/document-store.html

For more information about the X DevAPI see

https://dev.mysql.com/doc/x-devapi-userguide/en/

If you want to write applications that use the CRUD-based X DevAPI
you can also use the latest MySQL Connectors for your language of
choice. For more information about Connectors see

https://dev.mysql.com/doc/index-connectors.html

For more information on the APIs provided with MySQL Shell see

https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/

and

https://dev.mysql.com/doc/dev/mysqlsh-api-python/8.0/

Using MySQL Shell’s SQL mode you can communicate with servers using the
legacy MySQL protocol. Additionally, MySQL Shell provides partial
compatibility with the mysql client by supporting many of the same
command line options.

For full documentation on MySQL Server, MySQL Shell and related topics,
see

https://dev.mysql.com/doc/mysql-shell/8.0/en/

For more information about how to download MySQL Shell 8.0.16, see the
“Generally Available (GA) Releases” tab at

http://dev.mysql.com/downloads/shell/

We welcome and appreciate your feedback and bug reports, see

http://bugs.mysql.com/

Enjoy and thanks for the support!

Changes in MySQL Shell 8.0.16 (2019-04-25, General Availability)

* Functionality Added or Changed

* Bugs Fixed

Functionality Added or Changed

* Important Change: Attempting to connect to an X Protocol
port (33060 by default) using the classic MySQL protocol resulted
in the following error:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading
initial communication packet', system error: 0

This was because of differences in what X Protocol and classic
MySQL protocol clients expect when connections are initialized.
Now, in such a situation the generated error message is:

ERROR 2007 (HY000): Protocol mismatch; server version = 11,
client version = 10

If you encounter this error, you are probably trying to use the
wrong port for the protocol your client is using. As part of this
improvement the mysqlx_enable_hello_notice system variable has been
added, which controls messages sent to classic MySQL protocol
clients that try to connect over X Protocol. When enabled, clients
that do not support X Protocol and attempt to connect to the server's
X Protocol port receive an error explaining that they are using the
wrong protocol. Set mysqlx_enable_hello_notice to false to permit
clients that do not recognize the hello message to still connect.
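For example, to let clients that do not recognize the hello notice connect anyway, the variable can be turned off (a sketch based on the system variable named above):

```
mysql> SET GLOBAL mysqlx_enable_hello_notice = OFF;
```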

* MySQL Shell's upgrade checker utility can now check the
configuration file (my.cnf or my.ini) for the server instance.
The utility checks for any system variables that are defined in
the configuration file but have been removed in the target MySQL
Server release, and also for any system variables that are not
defined in the configuration file and will have a different
default value in the target MySQL Server release. For these
checks, when you invoke checkForServerUpgrade(), you must provide
the file path to the configuration file. If you omit the file
path and the upgrade checker utility needs to run a check that
requires the configuration file, that check fails with a message
informing you that you must specify the file path. (Bug
#27801824, Bug #29222179)

* MySQL InnoDB cluster automatically and transparently
manages the communication protocol versions of its members,
whenever the cluster topology is changed using AdminAPI
operations. An InnoDB cluster always uses the most recent
communication protocol version that is supported by all instances
that are part of the cluster or joining it.

+ When an instance is added to, removed from, or
rejoins the cluster, or a rescan or reboot operation
is carried out on the cluster, the communication
protocol version is automatically set to a version
supported by the instance that is now at the
earliest MySQL Server version.

+ When you carry out a rolling upgrade by removing
instances from the cluster, upgrading them, and
adding them back into the cluster, the communication
protocol version is automatically upgraded when the
last remaining instance at the old MySQL Server
version is removed from the cluster prior to its
upgrade.
To see the communication protocol version in use in an
InnoDB cluster, use the Cluster.status() function with
the 'extended' option enabled. The communication protocol
version is returned in the 'GRProtocolVersion' field,
provided that the cluster has quorum and no cluster
members are unreachable.

* MySQL Shell now has a framework and commands that you can
use to set up and run reports to display live information from a
MySQL server, such as status and performance information. Reports
can be run once using the MySQL Shell \show command, or run then
refreshed continuously in a MySQL Shell session using the \watch
command. They can also be accessed as API functions in the
shell.reports object. The reporting facility supports both
built-in reports and user-defined reports. User-defined reports
can be created in the supported scripting languages JavaScript
and Python, and can be run in any MySQL Shell mode (JavaScript,
Python, or SQL), regardless of the language that the report was
written in. Reports can be saved in a folder in the MySQL Shell
configuration path and automatically loaded at startup. You can
also create a report directly in the MySQL Shell prompt. You
register a report to MySQL Shell using the shell.registerReport
method to provide information about the report and the options
and arguments that it supports. For more information, see
Reporting with MySQL Shell
(http://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-reporting.html).

* When running MySQL Shell in interactive mode, you can now
execute an SQL statement without switching to SQL mode and back
again afterwards. This function enables you to conveniently issue
some SQL statements in the context of a longer AdminAPI workflow
in JavaScript or Python mode. Use the \sql command immediately
followed by the SQL statement, for example:
\sql select * from sakila.actor limit 3;
The SQL statement does not need any additional quoting, and the
statement delimiter is optional. With this format, MySQL Shell
does not switch mode as it would if you entered the \sql command.
After the SQL statement has been executed, MySQL Shell remains in
JavaScript or Python mode.
You cannot use multiple-line mode with the \sql command;
it accepts only a single SQL query on a single line, and is
intended for executing single SQL statements while another
language is active.

* MySQL Shell history is now split by the active language
under which each command was issued. This means that your history
now matches the active language; for example, when you are running
in JavaScript mode having issued \js, the history contains the
previous JavaScript statements you issued, and when you issue
\sql to change to SQL mode your history contains the previous SQL
statements you issued. Similarly, now any history related
commands such as \history clear or \history delete are performed
on the history of the current active language. When you install
this version, any existing MySQL Shell history files are
duplicated to ensure that existing history is not lost.
Subsequent operations are then added to the language specific
history file.

* The new autoRejoinTries option enables you to configure
how many times an instance tries to rejoin a group after being
expelled. In scenarios where network glitches happen but recover
quickly, setting this option prevents you from having to manually
add the expelled instance back to the group. The autoRejoinTries
option accepts positive integer values between 0 and 2016 and the
default value is 0, which means that instances do not try to
automatically rejoin. Set the value to a valid integer to
configure the number of attempts expelled instances should make
to rejoin the group. You can pass the autoRejoinTries option to
these AdminAPI operations:

+ dba.createCluster()

+ Cluster.addInstance()

+ Cluster.setOption()

+ Cluster.setInstanceOption()
When you configure the autoRejoinTries option, it sets
the group_replication_autorejoin_tries system variable.
Passing the option to dba.createCluster(),
Cluster.addInstance() or Cluster.setInstanceOption()
configures the automatic rejoin for specific cluster
instances. Passing the option to Cluster.setOption()
configures the automatic rejoin for all cluster
instances.
For more information, see Responses to Failure Detection
and Network Partitioning
(http://dev.mysql.com/doc/refman/8.0/en/group-replication-responses-failure.html).
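In a MySQL Shell JavaScript session, the option can be set cluster-wide or for a single instance; a sketch using the operations listed above (the instance address is an example):

```
mysql-js> var cluster = dba.getCluster();
mysql-js> cluster.setOption("autoRejoinTries", 3);
mysql-js> cluster.setInstanceOption("ic-2:3306", "autoRejoinTries", 5);
```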

* When resultFormat was set to json or json/raw, every
result was being returned as a JSON document. This behavior was
expected when JSON wrapping is off (in other words the --json
command option was not used when starting MySQL Shell). Now, for
consistency reasons when JSON wrapping is off and resultFormat is
set to json or json/raw, every record is printed in a separate
document and statistics and warnings are printed in plain text.
For example if MySQL Shell is started without --json and
resultFormat=json/raw:
mysqlsh-sql> SHOW DATABASES;
{"Database":"information_schema"}
{"Database":"mysql"}
{"Database":"performance_schema"}
{"Database":"sys"}
4 rows in set (0.0035 sec)

If MySQL Shell is started with --json and with
resultFormat=json/raw:
mysqlsh-sql> SHOW DATABASES;
{
"hasData": true,
"rows": [
{
"Database": "information_schema"
},
{
"Database": "mysql"
},
{
"Database": "performance_schema"
},
{
"Database": "sys"
}
],
"executionTime": "0.0018 sec",
"affectedRowCount": 0,
"affectedItemsCount": 0,
"warningCount": 0,
"warningsCount": 0,
"warnings": [],
"info": "",
"autoIncrementValue": 0
}

* AdminAPI now reports information about the version of
MySQL running on instances. This information is available from
the following operations:

+ Cluster.status()

+ Cluster.describe()

+ Cluster.rescan()

See Checking the MySQL Version on Instances
(http://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-working-with-cluster.html#checking-version-on-instances) for
more information.

Bugs Fixed

* Removing an instance from a cluster when the instance to
be removed had no user defined for the group_replication_recovery
channel resulted in dropping users on the remaining instances of
the cluster. (Bug #29617572)

* MySQL Shell could be installed in an environment where
Python was not present, but the application has a dependency on
many standard Python modules, resulting in error messages at
startup. The RPM and Debian packages for MySQL Shell now
explicitly specify the dependency on Python. (Bug #29469201)

* The MSI file that is used by Windows Installer to install
MySQL Shell now adds the path to the application binary (mysqlsh)
to the Windows PATH environment variable, so that the application
can be started from a command prompt. (Bug #29457639)

* In the instructions to build MySQL Shell from source (the
INSTALL document), the required version of the optional V8
dependency has been updated from 3.28.71.19 to 6.7.288.46. Thanks
to Evgeniy Patlan for spotting this. (Bug #29430049, Bug #94529)

* The failoverConsistency option has been deprecated and a
new option named consistency has been added, to make it more
consistent with the target Group Replication
group_replication_consistency system variable name. The MySQL
Shell online documentation now also correctly describes all of
the values you can assign to the consistency option. (Bug
#29356599)

* The dba.configureLocalInstance() operation would remove
any section that did not start with mysqld from the provided
option file. This could remove sections such as the client
section from the option file. (Bug #29349014)

* MySQL Shell's upgrade checker utility
checkForServerUpgrade() could incorrectly report a schema
inconsistency error for a table whose name included a special
character such as a hyphen. (Bug #29346836, Bug #94303)

* When an instance with X Plugin disabled was added to an
InnoDB cluster, if the instance was later removed from the
cluster using Cluster.removeInstance() the operation failed with
LogicError "get_string(7): field is NULL". This was a regression
introduced by the fix for Bug#27677227. (Bug #29304183)

* There was an inconsistency between the behavior of
dba.checkInstanceConfiguration() and the commands to add
instances to the cluster (dba.createCluster() and
Cluster.addInstance()) regarding the localhost and loopback
address validation. In particular, a simple error was printed by
dba.checkInstanceConfiguration() but the execution of the
operation continued showing that everything was correct at the
end of the operation, while an error was issued and the execution
stopped for dba.createCluster() and Cluster.addInstance(). As
part of fixing this issue, it was decided that the existing
localhost and loopback address validations are no longer needed
and should be removed. In particular, whatever address is
specified for report_host, even if it is localhost or the
loopback address (127.0.0.1), should be allowed, because it was
explicitly specified by the user to use it. (Bug #29279941)

* The dba.rebootClusterFromCompleteOutage() operation was
not preserving the existing Group Replication configurations
previously set for the instances. In particular, the Group
Replication local address and exit state action values were being
changed. Now all settings are read at the time of rebooting the
cluster. (Bug #29265869)

* On Windows, MySQL Shell's upgrade checker utility
checkForServerUpgrade() incorrectly reported a schema
inconsistency error for partitioned tables. (Bug #29256562)

* Using either Cluster.setOption() or
Cluster.setInstanceOption() to set an option which only exists in
MySQL 8.0 on an instance running MySQL 5.7 was not being caught
and reported correctly. (Bug #29246657)

* On Debian-based platforms (such as Ubuntu), if the
hostname resolved to 127.0.1.1 - which is the default on these
platforms - it was not possible to create a cluster using the
default settings. Now, in such situations a proper validation of
the instance is performed before creating a cluster and adding
instances to it. (Bug #29246110)

* MySQL Shell stopped unexpectedly if Python code was
running in interactive mode and threw exceptions from C++
libraries. These exceptions are now caught and translated to
Python's built-in RuntimeError exceptions. (Bug #29057116)

* The dba.checkInstanceConfiguration() operation did not
validate host restrictions for the account provided for cluster
administration, for example whether the account could actually
connect to all of the instances in the cluster. An error is now
issued if the provided user account is only able to connect
through localhost. (Bug #29018457)
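
As an illustration of the kind of account this check now requires
(account name, host, and password here are invented), a cluster
administration account must be reachable from the other members, not
only from localhost:

```sql
-- Rejected by the new validation: usable only for local connections.
CREATE USER 'cladmin'@'localhost' IDENTIFIED BY 'secret';

-- Accepted: reachable from the other cluster members.
CREATE USER 'cladmin'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON *.* TO 'cladmin'@'%' WITH GRANT OPTION;
```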

* When a connection is specified using key-value pairs in
MySQL Shell's shell.connect() method, the host name cannot be an
empty string. MySQL Shell now handles this situation consistently
and returns an error if the supplied host name is an empty
string. (Bug #28899522)

* InnoDB cluster configured auto_increment_increment and
auto_increment_offset on instances for clusters running in
multi-primary mode and consisting of up to 7 instances, based on
the logic described at InnoDB cluster and Auto-increment
(http://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-working-with-cluster.html#mysql-innodb-cluster-auto-increment).
But Group Replication permits groups of up to 9 members, and
Cluster.addInstance() and Cluster.removeInstance() were not
following the logic used by the other operations. Now, InnoDB
cluster applies the same auto-increment logic regardless of the
operation used and correctly handles multi-primary clusters with
more than 7 instances. (Bug #28812763)
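
As a sketch of what AdminAPI configures on each member of a
multi-primary cluster per the linked documentation (the exact offset
formula is simplified here and the values are illustrative, not taken
from the fix itself):

```sql
-- The increment is 7 for groups of up to 7 members; for larger
-- multi-primary clusters the member count is now used instead.
SET PERSIST auto_increment_increment = 7;
SET PERSIST auto_increment_offset = 1 + (@@server_id % 7);
```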

* MySQL Shell's JSON import utility can now accept input
from FIFO special files (named pipes) when you invoke the utility
using the util.importJSON function, so you can carry out large
imports by this method without needing to put the data into a
file. (Bug #28785527)

* When you use the MySQL Shell command \help (or \h, or \?)
with a search pattern to search for help on a specific subject,
multiple help topic titles can match the pattern and be returned
as a list, to be selected by entering the command again with an
extended search pattern. With this system, it was possible for
help topics with a single-word title to be inaccessible from such
a list because there was nothing further to add to the search
pattern. To avoid this situation, the handling of multiple
matches has now been improved. If a topic title is found that
matches the given search pattern exactly (case-sensitive in the
event of multiple topic matches, and case-insensitive in the
event of no case-sensitive matches), the topic is identified as
the exact match and its help data is printed. The rest of the
topics with pattern matches in their titles are listed in a "see
also" section and can be selected by further pattern matching.
(Bug #28393119)

* MySQL Shell used the host value of the provided
connection parameters as the target hostname for AdminAPI
operations, namely to register the instance in the metadata (for
the dba.createCluster() and Cluster.addInstance() operations).
However, the host used in the connection parameters might not
match the hostname used or reported by Group Replication, which
uses the value of the report_host system variable when it is
defined (that is, not NULL), and the value of hostname otherwise.
AdminAPI now follows the same logic to register the target
instance in the metadata and to derive the default value of the
group_replication_local_address variable on instances, instead of
using the host value from the instance connection parameters.
While fixing this issue it was found that when report_host was
set to an empty string, Group Replication used the empty value
for the host, but AdminAPI (for example in commands such as
dba.checkInstanceConfiguration(), dba.configureInstance(), and
dba.createCluster()) reported the hostname instead, which was
inconsistent with the value used by Group Replication. AdminAPI
now issues an error if the report_host system variable is set to
an empty value. (Bug #28285389)

* In the event that dba.createCluster() failed and a
rollback was performed to remove the created replication
(recovery) users, the accounts created for localhost and for any
of the ipWhitelist addresses were not removed. The fix ensures
that the replication accounts are removed whenever a rollback
related to dba.createCluster() is performed. This work was based
on a code contribution from Bin Hong. (Bug #94182, Bug #29308037)

 

On Behalf of Oracle/MySQL Release Engineering Team,
Nawaz Nazeer Ahamed

MySQL 8.0.16: how to validate JSON values in NoSQL with check constraint


As you may have noticed, MySQL 8.0.16 has been released today!

One of the major long-awaited features is support for CHECK constraints.

My colleague, Dave Stokes, already posted an article explaining how this works.

In this post, I wanted to show how we could take advantage of this new feature to validate JSON values.

Let’s take the following example:
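
(The original snippets did not survive the feed; what follows is a
hypothetical reconstruction. A Document Store collection is essentially a
table with a JSON doc column, so in SQL terms the example might look like
this, with invented table and attribute names.)

```sql
-- A collection-style table: documents live in the JSON doc column,
-- and _id is materialized as the primary key.
CREATE TABLE ratings (
  doc JSON,
  _id VARBINARY(32) GENERATED ALWAYS AS
      (JSON_UNQUOTE(JSON_EXTRACT(doc, '$._id'))) STORED PRIMARY KEY
);

INSERT INTO ratings (doc)
  VALUES ('{"_id": "1", "episode": "s01e01", "user": "fred", "rating": 15}');
```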

So we have a collection of documents representing ratings from a user on some episodes. Now, I expect the value of the rating to be between 0 and 20.

Currently I could enter whatever value, even characters…

To avoid characters, I can already create a virtual column as integer:
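
(Hypothetical reconstruction of the missing statement, assuming the
ratings table sketched above: a typed virtual column over the JSON
attribute means a non-integer rating is rejected at insert time.)

```sql
-- Expose the JSON attribute as an integer virtual column; strict SQL
-- mode then refuses documents whose rating is not an integer.
ALTER TABLE ratings
  ADD COLUMN rating INT UNSIGNED
  GENERATED ALWAYS AS (JSON_EXTRACT(doc, '$.rating')) VIRTUAL;
```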

So now, only integer values for rating are allowed:

Perfect, but can I enter any integer value?

In fact yes, of course! And that's where the new CHECK constraints enter into action!

We first need to modify any current document having a value for the rating attribute that won't be valid for the new constraint.
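
(Again a hypothetical reconstruction, assuming the ratings table and the
rating virtual column sketched earlier, with an invented constraint name:
cap the out-of-range document, then add the constraint.)

```sql
-- Bring existing data into range first; adding the constraint would
-- otherwise fail against rows that violate it.
UPDATE ratings
  SET doc = JSON_SET(doc, '$.rating', 20)
  WHERE rating > 20;

ALTER TABLE ratings
  ADD CONSTRAINT rating_range CHECK (rating >= 0 AND rating <= 20);
```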

And now we can test again:
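
(Assuming a CHECK constraint like the rating_range one sketched above, an
out-of-range document should now be rejected along these lines.)

```sql
INSERT INTO ratings (doc)
  VALUES ('{"_id": "2", "episode": "s01e02", "user": "fred", "rating": 25}');
-- Fails with: Check constraint 'rating_range' is violated.
```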

Woohooo! A nice feature that also benefits the MySQL Document Store!

For the curious who want to see what the table looks like in its SQL definition:
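
(Assuming the hypothetical ratings table from the sketches above; the
exact output depends on how the collection was created.)

```sql
SHOW CREATE TABLE ratings\G
-- The definition would show the JSON doc column, the generated
-- rating column, and the CHECK constraint side by side.
```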

Enjoy NoSQL with MySQL 8.0 Document Store #MySQL8isGreat.

MySQL 8.0.16 Introducing CHECK constraint


MySQL 8.0.16 introduces the SQL CHECK constraint feature. This is one of the most requested and long-awaited features for MySQL. This post describes the details of the feature. Let’s get started!

Introduction

The CHECK constraint is a type of integrity constraint in SQL.…
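
As a minimal illustration of the new syntax (table and constraint names
here are invented), checks can be declared per column or at table level:

```sql
CREATE TABLE t1 (
  c1 INT CHECK (c1 > 10),                -- column check, auto-named
  c2 INT,
  CONSTRAINT c2_positive CHECK (c2 > 0)  -- named table-level check
);
```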
