Channel: Planet MySQL

How free is your Open Source software…really?


With growing development and support costs as well as the need to maintain a sustainable business, it became apparent that maintaining the OSS release of Tungsten Replicator (TR) just wasn’t viable. Earlier this year we decided to close it down, instead focusing our efforts on improving and developing its commercial release whilst also growing our flagship commercial product, Tungsten Clustering.

For many businesses, Open Source Software (OSS) is still a very viable solution in many use cases, and I'm sure it will continue to be so for a long time, but I wanted to explore whether an OSS solution really is as free as the $0 price tag suggests. In this blog I explore just that.

The hidden costs of OSS

Over the last few weeks I stumbled across a number of blog posts discussing the "hidden" costs of OSS. All approached the topic slightly differently, but underlying all of them was the same message: how free is that free piece of software?

I thought I’d pull some figures from a few sources and try and actually put it into some kind of cost perspective, using what I know from my experience as a DBA over the past 25 years and my experiences over the last 4 years working at Continuent.

What are the costs to the OSS Developer?

In terms of costs to Continuent, we need to assess the following questions:

  • How much did the development and maintenance cost?
  • How much was spent on Marketing?
  • How much time was spent by support staff manning the community forums?
  • How much was spent on infrastructure such as maintaining online documentation and associated repositories?

According to glassdoor.com, average salaries in the US for a single Java Developer, a Technical Marketing Engineer, a QA Engineer and a Support Engineer would set you back around $320k/year in total. Sure, those roles aren't solely involved in the development and support of the OSS and there are many other responsibilities, but even assuming that only 1/3 of their time is spent on OSS, we're still in the region of $100-$120k/year – and that's before you take into account any of the material costs (equipment, hosting, etc.).

What are the costs to the End-user?

Let’s now dig into this a little more from the perspective of a user. You’re a hard working DBA, given a project to find a solution to replicate sales and order information from your production MySQL databases into your warehouse system, in real time.  Your only remit is “as cheap as possible”.

OSS suddenly becomes attractive – this “free” solution that can do just what you want and it won’t cost you anything…or will it?

Your only guide to installation and configuration is the online documentation and public forums.  You may or may not get the response you need straight away, but maybe in 2-3 days you might have some help from someone on the other side of the world.

Fast forward 6 weeks and you’ve just about managed to get the product configured and you’ve started to see data replicate, but in that time you’ve needed the help of your network engineers to open up firewalls.  You’ve pulled in System Admins to set up an environment for you to install on – maybe a week’s worth of waiting for hosts to be configured.

Let’s look at the average salaries of all the staff you’ve had to call upon for help:



Six weeks may have passed, but let's say for argument's sake that the actual time spent was around 4 weeks in total. With the combined average salaries of the three roles at $260k/year (roughly $5k/week), that puts us at about $20k so far.

Add on top of that the infrastructure costs (AWS Instances for example) and then the ongoing support of this.  What happens when you hit a bug? As much as any developer would love to ship 100% bug free products, it’s unlikely!

If we average all this out, we get setup costs that look somewhere in the region of…

  • $20-$25k of staff costs to set it up
  • $1200/year AWS costs (approx. based on a single c4.large instance)

What happens when things go wrong?

So what could an outage cost the business? Let's say your production MySQL database(s) take sales for an e-commerce website that, on average, takes around $4m worth of sales per day. During seasonal sales, that could easily be double.
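As a rough sanity check on the figures that follow: $4m/day is roughly $2,800 per minute, and doubling that for a seasonal peak gives roughly $5,500 per minute, which I round to the $5,000/minute figure used below.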

You're now using this "free" piece of software to replicate information about these sales to your warehouse system.  If the warehouse system doesn't receive the orders, you start losing money because you can't ship your products.  It's 8am on Black Friday, your support staff get a call: replication is broken.  They call you, the clock is now ticking, and your business is losing up to $5000 PER MINUTE.

You don't know if this is a bug with Tungsten Replicator or a bug with MySQL – luckily you have paid support with Percona, but they don't know anything about Tungsten Replicator and rule out a database bug.  You search the community forums and hit the jackpot; you find the solution, however, it's taken you 2 hours to get to it.  Your business has lost upwards of $600k of orders.

This single outage in the first year has just put a price tag of approx $625k on your “free” solution, so the question now is how free was that free piece of software?

But surely a commercial solution won’t be much different?

Well, of course, you still have staffing costs to consider and the costs of AWS; some of those are unavoidable, but they would be lower.  Time to go-live would be quicker, plus you get plenty of training up front so you are able to support the system.

If we take the support from Continuent as an example, full 24/7 support with no restrictions is included in your subscription.  On average, our first response time for urgent requests is ~5mins, average resolution in under 1 hour, although generally closer to 30 mins.

Based on the figures above, this puts your setup costs around the same, maybe a touch higher when factoring in the support costs:

  • $10-$15k of staff costs to set it up
  • $1200/year AWS costs (approx. based on a single c4.large instance)
  • $15k/year support contract*

*Actual cost of support would vary based on a number of factors such as size of deployment etc.

But that outage would still hit the business hard, right?

Yes, of course, any outage is going to hurt, however let’s take the Black Friday example and work it through again.

Your front line support team place a case with Continuent, and we’re on a call with you in less than 5 minutes.  Within 15 minutes we’ve identified the issue(s), after 30 minutes your system is operational.

Whilst a 30-minute outage is still not ideal in a business where the risk is $5,000/minute, the impact here of $120k vs $600k suddenly adds considerable value to the annual support cost.

In summary

So, when it comes to that little piece of software with a $0 price tag, just take a moment and ask yourself if it really is going to be free. Yes, there are many perfectly valid use cases for OSS, but ensure you fully understand the risks and potential business impact of an outage first.

I'm sure the figures presented here can be pulled apart and argued for or against; my point is illustrative, based on real experiences I have personally faced in my career.

*Avg salary figures sourced from glassdoor.com


MySQL Shell 8.0.18 for MySQL Server 8.0 and 5.7 has been released


Dear MySQL users,

MySQL Shell 8.0.18 is a maintenance release of MySQL Shell 8.0 Series (a
component of the MySQL Server). The MySQL Shell is provided under
Oracle’s dual-license.

MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and
5.7. Please upgrade to MySQL Shell 8.0.18.

MySQL Shell is an interactive JavaScript, Python and SQL console
interface, supporting development and administration for the MySQL
Server. It provides APIs implemented in JavaScript and Python that
enable you to work with MySQL InnoDB cluster and use MySQL as a document
store.

The AdminAPI enables you to work with MySQL InnoDB cluster, providing an
integrated solution for high availability and scalability using InnoDB
based MySQL databases, without requiring advanced MySQL expertise. For
more information about how to configure and work with MySQL InnoDB
cluster see

  https://dev.mysql.com/doc/refman/en/mysql-innodb-cluster-userguide.html

The X DevAPI enables you to create “schema-less” JSON document
collections and perform Create, Update, Read, Delete (CRUD) operations
on those collections from your favorite scripting language.  For more
information about how to use MySQL Shell and the MySQL Document Store
support see

  https://dev.mysql.com/doc/refman/en/document-store.html

For more information about the X DevAPI see

  https://dev.mysql.com/doc/x-devapi-userguide/en/

If you want to write applications that use the CRUD based X DevAPI
you can also use the latest MySQL Connectors for your language of
choice. For more information about Connectors see

  https://dev.mysql.com/doc/index-connectors.html

For more information on the APIs provided with MySQL Shell see

  https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/

and

  https://dev.mysql.com/doc/dev/mysqlsh-api-python/8.0/

Using MySQL Shell’s SQL mode you can communicate with servers using the
legacy MySQL protocol. Additionally, MySQL Shell provides partial
compatibility with the mysql client by supporting many of the same
command line options.
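
For example (a minimal illustration only; the host, credentials and query are placeholders), the Shell can be started directly in SQL mode much like the mysql client:

  $ mysqlsh --sql root@localhost:3306 -e "SELECT VERSION();"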

For full documentation on MySQL Server, MySQL Shell and related topics,
see

  https://dev.mysql.com/doc/mysql-shell/8.0/en/

For more information about how to download MySQL Shell 8.0.18, see the
“General Availability (GA) Releases” tab at

  http://dev.mysql.com/downloads/shell/

We welcome and appreciate your feedback and bug reports, see

  http://bugs.mysql.com/

Enjoy and thanks for the support!


Changes in MySQL Shell 8.0.18 (2019-10-14, General Availability)

     * InnoDB Cluster Added or Changed Functionality

     * InnoDB Cluster Bugs Fixed

     * Functionality Added or Changed

     * Bugs Fixed

InnoDB Cluster Added or Changed Functionality


     * MySQL Shell can now optionally log SQL statements that
       are executed by AdminAPI operations, and output them to
       the console if the --verbose option is set. The
       dba.logSql MySQL Shell configuration option or
       --dba-log-sql command line option activates logging for
       these statements. Statements executed by sandbox
       operations are excluded. Viewing the statements lets you
       observe the progress of the AdminAPI operations in terms
       of SQL execution, which can help with problem diagnosis
       for any errors (see the short session sketch after this
       list).

     * AdminAPI now supports IPv6 addresses if the target MySQL
       Server version is higher than 8.0.13. When using MySQL
       Shell 8.0.18 or higher, if all cluster instances are
       running 8.0.14 or higher then you can use an IPv6 or
       hostname that resolves to an IPv6 address for instance
       connection strings and with options such as localAddress,
       groupSeeds and ipWhitelist. For more information on using
       IPv6 see Support For IPv6 And For Mixed IPv6 And IPv4
       Groups
(https://dev.mysql.com/doc/refman/8.0/en/group-replication-ipv6.html).
       References: See also: Bug #29557250, Bug #30111022, Bug
       #28982989.

     * You can now reset the passwords for the internal recovery
       accounts created by InnoDB cluster, for example to follow
       a custom password lifetime policy. Use the
       Cluster.resetRecoveryAccountsPassword() operation to
       reset the passwords for all internal recovery accounts
       used by the cluster. The operation sets a new random
       password for the internal recovery account on each
       instance which is ONLINE. If an instance cannot be
       reached, the operation fails. You can use the force
       option to ignore such instances, but this is not
       recommended, and it is safer to bring the instance back
       online before using this operation. This operation only
       applies to the passwords created by InnoDB cluster and
       cannot be used to update manually created passwords.
       Note
       The user which executes this operation must have all the
       required clusterAdmin privileges, in particular CREATE
       USER, in order to ensure that the password of recovery
       accounts can be changed regardless of the password
       verification-required policy. In other words, independent
       of whether the password_require_current system variable
       is enabled or not.

     * MySQL Shell now supports specifying TLS version 1.3 and
       TLS cipher suites for classic MySQL protocol connections.
       You can use:

           + the --tls-version command option to specify TLS
            version 1.3.

           + the --tls-ciphersuites command option to specify
            cipher suites.

          + the tls-versions and tls-ciphersuites connection
            parameters as part of a URI-type connection string.
       Note
       tls-versions (plural) does not have a key-value
       equivalent, it is only supported in URI-type connection
       strings. Use tls-version to specify TLSv1.3 in a
       key-value connection string.
       To use TLS version 1.3, both MySQL Shell and MySQL server
       must have been compiled with OpenSSL 1.1.1 or higher. For
       more information see Using Encrypted Connections
(https://dev.mysql.com/doc/refman/8.0/en/encrypted-connections.html).
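
       As a brief, combined illustration of the SQL-logging,
       recovery-account password reset, and TLS items above (a sketch
       only: the account name, host, and cipher suite are placeholders,
       and the exact option values are described in the documentation
       linked above):

         $ mysqlsh --dba-log-sql=1 --verbose=1 \
             --tls-version=TLSv1.3 \
             --tls-ciphersuites=TLS_AES_256_GCM_SHA384 \
             icadmin@primary-host:3306
         mysql-js> var cluster = dba.getCluster()
         mysql-js> cluster.resetRecoveryAccountsPassword()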


InnoDB Cluster Bugs Fixed


     * The Cluster.rejoinInstance() operation was not setting
       the auto increment values defined for InnoDB cluster,
       leading to the use of the default Group Replication
       behavior if the instance configuration was not properly
       persisted, for example on 5.7 servers. The fix ensures
       that the Cluster.rejoinInstance() operation updates the
       auto increment settings of the target instance. (Bug
       #30174191)

     * The output of Cluster.status() now includes the
       replicationLag field. The value is displayed in HH:MM:SS
       format and shows the time difference between the last
       transaction commit timestamp and the last transaction
       applied timestamp. This enables you to monitor the amount
       of time between the most recent transaction being
       committed and being applied on an instance. (Bug
       #30003034)

     * Cluster.addInstance() did not ensure that the MySQL Clone
       plugin was installed or loaded on all cluster instances,
       when available and not disabled. This meant that whenever
       a cluster was created using an older MySQL Shell version,
       on a target MySQL instance supporting clone, the instance
       would not have the clone plugin installed. The result was
       that any Cluster.addInstance() call that used clone would
       fail. The same issue happened if an instance was added to
       a cluster consisting of one instance using the
       incremental recovery type and afterwards the seed
       instance was removed. This resulted in all cluster
       instances not having the clone plugin installed and
       therefore any instance added using the clone recovery
       method would fail. The fix ensures that the clone plugin
       is installed on all cluster members (if available and not
       disabled) at cluster creation time and also whenever an
       instance is added to a cluster. (Bug #29954085)

     * The Cluster.rejoinInstance() operation was not checking
       the GTID consistency of an instance being rejoined to a
       cluster, which could result in data diverging. Now, the
       GTID consistency checks conducted as part of the
       Cluster.rejoinInstance() operation have been improved to
       check for irrecoverable or diverged data-sets and also
       for empty GTID sets. If an instance is found to not be
       consistent with the cluster, it is not rejoined and the
       operation fails with a descriptive error. You are also
       shown the list of errant transactions, possible outcomes
       and solutions. (Bug #29953812)

     * Cluster.describe() was retrieving information about the
       cluster’s topology and the MySQL version installed on
       instances directly from the current session. Now, the
       information is retrieved from the Metadata schema, and
       the MySQL version is not included in the information
       output by Cluster.describe(). (Bug #29648806)

     * Using a password containing the ‘ character caused
       dba.deploySandbox() to fail. Now, all sensitive data is
       correctly wrapped to avoid such issues. (Bug #29637581)

     * The Cluster.addInstance() operation creates internal
       recovery users which are required by the Group
       Replication recovery process. If the
       Cluster.addInstance() operation failed, for example
       because Group Replication could not start, the created
       recovery users were not removed. Now, in the event of a
       failure any internal users are removed. (Bug #25503159)

     * When a cluster had lost quorum and the majority of the
       cluster instances were offline except the primary, after
       reestablishing quorum and adding a new instance to the
       cluster, it was not possible to remove and add the
       previous primary instance to the cluster. This was
       because the operation failed when trying to contact
       offline instances, which was because the feature to
       verify if a Group Replication protocol upgrade is
       required was not considering the possibility of some
       cluster instances being offline (not reachable). The fix
       improves the Group Replication protocol upgrade handling
       for the Cluster.removeInstance() operation, which now
       attempts to connect to other cluster instances and use
       the first reachable instance for this purpose. (Bug
       #25267603)

     * The dba.configureInstance() operation was not setting the
       binlog_checksum option with the required value (NONE) in
       the option file for instances that did not support SET
       PERSIST (for example instances running MySQL 5.7), when
       the option file path was not provided as an input
       parameter but instead specified though the operation
       wizard in interactive mode. (Bug #96489, Bug #30171090)

Functionality Added or Changed


     * MySQL Shell’s upgrade checker utility (the
       util.checkForServerUpgrade() operation) includes the
       following new and extended checks:

          + The utility now checks for tablespace names
            containing the string “FTS”, which can be
            incorrectly identified as tablespaces of full-text
            index tables, preventing upgrade. The issue has been
            fixed in MySQL 8.0.18, but affects upgrades to
            earlier MySQL 8.0 releases.

          + The check for database objects with names that
            conflict with reserved keywords now covers the
            additional keywords ARRAY, MEMBER, and LATERAL.

           + The checks for obsolete sql_mode flags now check
            the global sql_mode setting.
       Running the upgrade checker utility no longer alters the
       gtid_executed value, meaning that the utility can be used
       on Group Replication group members without affecting
       their synchronization with the group. The upgrade checker
       also now works correctly with the ANSI_QUOTES SQL mode.
       (Bug #30002732, Bug #30103683, Bug #96351)
       References: See also: Bug #29992589.

     * MySQL Shell has two new built-in reports, which provide
       information drawn from various sources including MySQL’s
       Performance Schema:

          + threads lists the current threads in the connected
            MySQL server which belong to the user account that
            is used to run the report. Using the report-specific
            options, you can choose to show foreground threads,
            background threads, or all threads. You can report a
            default set of information for each thread, or
            select specific information to include in the report
            from a larger number of available choices. You can
            filter, sort, and limit the output.

          + thread provides detailed information about a
            specific thread in the connected MySQL server. By
            default, the report shows information on the thread
            used by the current connection, or you can identify
            a thread by its ID or by the connection ID. You can
            select one or more categories of information, or
            view all of the available information about the
            thread.
       You can run the new reports using MySQL Shell’s \show and
       \watch commands. The reports work with servers running
       all supported MySQL 5.7 and MySQL 8.0 versions. If any
       item of information is not available in the MySQL Server
       version of the target server, the reports leave it out.

     * MySQL Shell has two new control commands:

          + The \edit (\e) command opens a command in the
            default system editor for editing. If you specify an
            argument to the command, this text is placed in the
            editor, and if you do not, the last command in the
            MySQL Shell history is placed in the editor. When
            you have finished editing, MySQL Shell presents your
            edited text ready for you to execute or cancel. The
            command can also be invoked using the short form \e
            or the key combination Ctrl-X Ctrl-E.

          + The \system (\!) command runs the operating system
            command that you specify as an argument to the
            command, then displays the output from the command
            in MySQL Shell. MySQL Shell returns an error if it
            was unable to execute the command.

     * MySQL Shell now uses Python 3. For platforms that include
       a system supported installation of Python 3, MySQL Shell
       uses the most recent version available, with a minimum
       supported version of Python 3.4.3. For platforms where
       Python 3 is not included, MySQL Shell bundles Python
       3.7.4. MySQL Shell maintains code compatibility with
       Python 2.6 and Python 2.7, so if you require one of these
       older versions, you can build MySQL Shell from source
       using the appropriate Python version.
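
       A few of the features above in action (illustrative only; the
       connection URI and the thread id are placeholders):

         mysql-js> util.checkForServerUpgrade('root@localhost:3306', {"targetVersion": "8.0.18"})
         mysql-js> \show threads --foreground
         mysql-js> \watch threads --interval=5
         mysql-js> \show thread --tid 42
         mysql-js> \system uname -a
         mysql-js> \edit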

Bugs Fixed


     * In debug mode, MySQL Shell raised an assertion when
       handling a character contained in SQL strings. (Bug
       #30286680)

     * If a Python lambda was added as a member of a MySQL Shell
       extension object, the Python object was not released
       correctly when MySQL Shell shut down, causing a
       segmentation fault. (Bug #30156304)

     * A memory leak could occur when Python code was executed
       in interactive mode. (Bug #30138755)

     * Help information for a MySQL Shell report could not be
       displayed unless there was an active session. MySQL Shell
       now checks for an open session only before actually
       running the report. (Bug #30083371)

     * If a default schema was set for the MySQL Shell
       connection, and a different default schema was set after
       the connection was made, MySQL Shell’s \reconnect command
       attempted to use the default schema from the original
       connection. The user’s current default schema is now used
       for the reconnection attempt. (Bug #30059354)

     * Due to a bug introduced by a change in MySQL Shell
       8.0.16, the MSI file that is used by Windows Installer to
       install MySQL Shell overwrote the Windows PATH
       environment variable with the path to the application
       binary (mysqlsh), removing any other paths present. The
       issue has now been fixed. (Bug #29972020, Bug #95432)

     * When the \reconnect command is used to attempt
       reconnection to a server, if the last active schema set
       by the user appears to be no longer available, MySQL
       Shell now attempts to connect with no schema set. (Bug
       #29954572)

     * In interactive mode, MySQL Shell now handles multiline
       comments beginning with a slash and asterisk (/*) and
       ending with an asterisk and slash (*/). (Bug #29938424)

     * The MySQL Shell \source command was not handled correctly
       when used in combination with SQL statements. (Bug
       #29889926)

     * With MySQL Shell in SQL mode, if multiple SQL statements
       including a USE statement were issued on a single line
       with delimiters, the USE statement was not handled
       correctly. (Bug #29881615)

     * If MySQL Shell’s JSON import utility was used to send a
       large number of JSON documents to a server with
       insufficient processing capacity, the utility could fill
       up the write queue with batches of prepared documents,
       causing the connection to time out and the import to
       fail. The utility now waits to read the response from the
       server before sending the next batch of prepared
       documents to the server. (Bug #29878964)

     * When MySQL Shell was built from source with a bundled
       OpenSSL package, the required linker flags were not set.
       The issue has now been fixed. (Bug #29862189)

     * If a new query was executed in MySQL Shell while a result
       was still active, resulting in rows being cached, not all
       rows were returned by the old query. (Bug #29818714)


On Behalf of Oracle/MySQL Release Engineering Team,
Balasubramanian Kandasamy

MySQL Connector/Python 8.0.18 has been released


Dear MySQL users,

MySQL Connector/Python 8.0.18 is the latest GA release version of the
MySQL Connector Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see

http://dev.mysql.com/doc/x-devapi-userguide/en/

For more information about how the X DevAPI is implemented in MySQL
Connector/Python, and its usage, see

http://dev.mysql.com/doc/dev/connector-python

Please note that the X DevAPI requires at least MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/Python 8.0.18, see the “General Availability
(GA) Releases” tab at

http://dev.mysql.com/downloads/connector/python/

Enjoy!


Changes in MySQL Connector/Python 8.0.18 (2019-10-14, General Availability)

Functionality Added or Changed


     * Added Python 3.8 support.

     * Connector/Python connections now set
       CAN_HANDLE_EXPIRED_PASSWORDS to indicate it can handle
       sandbox mode for expired passwords. This indicates that
       Connector/Python does not execute SET commands by a
       connection with an expired password, an operation that’s
       disallowed by MySQL Server 8.0.18 and higher.

     * On Windows, added platform dependent MSI installers that
       install and update Connector/Python for all supported
       Python versions on the system. Downloading and installing
       separate packages for each version is no longer required.

Bugs Fixed


     * The /usr/lib/mysqlx folder was not created after
       executing setup.py from commercial packages. (Bug
       #29959309)

     * A table scan for a float using the C Extension caused a
       memory leak. (Bug #29909157)

     * Added read_default_file as an alias for option_files to
       increase MySQLdb compatibility. (Bug #25349794, Bug
       #84389)

     * Connector/Python 8.0.17 does not properly negotiate the
       highest TLS protocol version supported by both the client
       and server. As such, because MySQL 5.6/5.7 platform
       packages (DEB and RPM) include YaSSL prior to
       5.6.45/5.7.27, and YaSSL only supports up to TLS 1.1,
       systems setting a minimum TLS protocol version above 1.1
       (such as Debian 10 that sets MinProtocol=TLSv1.2) do not
       function with Connector/Python 8.0.17.
       As a workaround, the wheel (pip) packages function
       properly as they are built using glibc and bundle OpenSSL
       instead of YaSSL.
       Connector/Python 8.0.18 adds a tls-versions option to
       define the TLS version to use.

Enjoy and thanks for the support!

Second MySQL Meetup in Frankfurt - Oct 22, 2019


We are happy to announce that for the second time a MySQL meetup is being held in Frankfurt, Germany. Please see the details below:

  • Date: October 22, 2019
  • Time: 18:30-21:00
  • Place: Oracle Deutschland B.V & Co. KG, Neue Mainzer Str. 46-50 - Frankfurt am Main
  • Agenda: we will talk about MySQL best practices, then relax in the evening with pizza & drinks!
  • Speakers: Henry Kröger & Carsten Thalheimer  

More information and registration here.

We are looking forward to meeting & talking to you on Oct 22!

The MySQL 8.0.18 Maintenance Release is Generally Available


The MySQL Development team is very happy to announce that MySQL 8.0.18 is now available for download at dev.mysql.com. In addition to bug fixes there are a few new features added in this release. Please download 8.0.18 from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories.…

MySQL Connector/ODBC 8.0.18 has been released


Dear MySQL users,

MySQL Connector/ODBC 8.0.18 is a new version in the MySQL Connector/ODBC 8.0 series, the ODBC driver for the MySQL Server.

The available downloads include both a Unicode driver and an ANSI driver based on the same modern codebase. Please select the driver type you need based on the type of your application – Unicode or ANSI. Server-side prepared statements are enabled by default. It is suitable for use with the latest MySQL server version 8.0.

This release of the MySQL ODBC driver conforms to the ODBC 3.8 specification.
It contains implementations of key 3.8 features, including self-identification as an ODBC 3.8 driver, streaming of output parameters (for binary types only), and support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the Unicode driver only).

The release is now available in source and binary form for a number of platforms from our download pages at

https://dev.mysql.com/downloads/connector/odbc/

For information on installing, please see the documentation at

https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Enjoy and thanks for the support!

==================================================

Changes in MySQL Connector/ODBC 8.0.18 (2019-10-14,
General Availability)

Bugs Fixed

     * On Linux, memory was leaked on each server connection
       attempt due to how mysql_server_end was implemented and
       executed. (Bug #26194929)

     * On Windows, fixed direct setlocale() usage for
       multi-threaded applications.
       The workaround was to add ;NO_LOCALE=1 to the connection
       string.
       Thanks to Jacques Germishuys for the patch.
      (Bug#24814467, Bug #83297)

On Behalf of Oracle/MySQL Release Engineering Team
Prashant Tekriwal

MySQL 8.0 Clone Plugin and its internal process.


MySQL 8 recently released the clone plugin, which makes a DBA's task of rebuilding DB servers much easier.

  • Cloning is the process of creating an exact copy of the original. In technical terms, cloning is akin to (Backup + Recovery); MySQL database cloning has traditionally required a sequence of actions to be performed manually or in a scripted fashion, with or without tools involved.
  • Cloning is the first step when you want to configure a replication slave or join a new server to an InnoDB cluster. There was no native support for auto provisioning earlier. Percona XtraDB Cluster (MySQL + Galera Cluster) does cloning by default using the xtrabackup tool when a new node joins the cluster.
  • Now MySQL has simplified this task. In this post, we will see how to clone a database using the clone plugin, and look at its internals.

Clone Plugin :

  • The clone plugin was bundled with MySQL 8.0.17, and it enables automatic node provisioning from an existing node (the donor).
  • The clone plugin permits cloning data locally or from a remote MySQL server instance. The cloned data is a physical snapshot of data stored in InnoDB.

Types of cloning :

  1. Remote cloning
  2. Local cloning

Remote Cloning :

  • A remote cloning operation is initiated on the local server (recipient); the cloned data is transferred over the network from the remote server (donor) to the recipient.
  • By default, a remote cloning operation removes the data in the recipient data directory and replaces it with the cloned data.
  • Optionally, you can clone data to a different directory on the recipient to avoid removing existing data.

Local Cloning :

  • The clone plugin permits cloning data locally. Cloned data is a physical snapshot of data stored in InnoDB that includes schemas, tables, tablespaces, and data dictionary.
  • The cloned data comprises a fully functional data directory, which permits using the clone plugin for MySQL server provisioning.

Plugin Installation :

  • To load the plugin at server startup, we need to add the following to the my.cnf file and restart the server for the new settings to take effect.
[mysqld]
plugin-load-add=mysql_clone.so
clone=FORCE_PLUS_PERMANENT

Runtime Plugin installation :

  • We can also load the plugin at runtime using the statement below:
mysql> install plugin clone soname 'mysql_clone.so';
Query OK, 0 rows affected (0.27 sec)
  • INSTALL PLUGIN also registers the plugin in the mysql.plugin system table so that it is loaded automatically on subsequent server restarts.
  • To check whether the plugin is loaded, we can query information_schema:
mysql> select plugin_name,plugin_status from information_schema.plugins where plugin_name='clone';
+-------------+---------------+
| plugin_name | plugin_status |
+-------------+---------------+
| clone       | ACTIVE        |
+-------------+---------------+
1 row in set (0.00 sec)

Cloning Remote Data :

Remote Cloning Prerequisites

1) To perform a cloning operation, the clone plugin must be active on both the donor and recipient MySQL servers.

2) A MySQL user on the donor and recipient is required for executing the cloning operation; it's called the "clone user".

3) The donor and recipient must run the same MySQL server version, 8.0.17 or higher.

4) The donor and recipient MySQL server instances must run on the same operating system and platform.


Required Privileges :

1) The donor node clone user requires the “BACKUP_ADMIN” privilege for accessing and transferring data from the donor, and for blocking DDL during the cloning operation.

2) On the recipient, the clone user requires the "CLONE_ADMIN" privilege for replacing recipient data, blocking DDL during the cloning operation, and automatically restarting the server.

Step 1 :

  • Log in to the donor node and create a new clone user with the required privilege.
mysql> create user 'mydbops_clone_user'@'%' identified by 'Mydbops@8017';
Query OK, 0 rows affected (0.04 sec)

mysql> grant backup_admin on *.* to 'mydbops_clone_user'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for 'mydbops_clone_user'@'%';
+-------------------------------------------------------+
| Grants for mydbops_clone_user@%                       |
+-------------------------------------------------------+
| GRANT USAGE ON *.* TO `mydbops_clone_user`@`%`        |
| GRANT BACKUP_ADMIN ON *.* TO `mydbops_clone_user`@`%` |
+-------------------------------------------------------+
2 rows in set (0.01 sec)

Step 2 :

  • Log in to the recipient node and create a new clone user with the required privilege.
mysql> create user 'mydbops_clone_user'@'%' identified by 'Mydbops@8017';
Query OK, 0 rows affected (0.04 sec)

mysql> grant clone_admin on *.* to 'mydbops_clone_user'@'%';
Query OK, 0 rows affected (0.01 sec)

mysql> show grants for 'mydbops_clone_user'@'%';
+------------------------------------------------------+
| Grants for mydbops_clone_user@%                      |
+------------------------------------------------------+
| GRANT USAGE ON *.* TO `mydbops_clone_user`@`%`       |
| GRANT CLONE_ADMIN ON *.* TO `mydbops_clone_user`@`%` |
+------------------------------------------------------+
2 rows in set (0.00 sec)

Step 3 :

  • By default, a remote cloning operation removes the data in the recipient data directory and replaces it with the cloned data. By cloning to a named directory, you can avoid removing existing data from the recipient data directory.
  • Here I cloned the remote server's data to a different location using the "DATA DIRECTORY" option.
mysql> clone instance from mydbops_clone_user@192.168.33.11:3306 identified by 'Mydbops@8017' data directory='/var/lib/mysql_backup/mysql';
Query OK, 0 rows affected (4.94 sec)
[root@mydbopslabs12 mysql]# pwd
/var/lib/mysql_backup/mysql
[root@mydbopslabs12 mysql]# ls -lrth
total 152M
drwxr-x---. 2 mysql mysql 6 Aug 25 09:12 mysql
drwxr-x---. 2 mysql mysql 28 Aug 25 09:12 sys
drwxr-x---. 2 mysql mysql 30 Aug 25 09:12 accounts
-rw-r-----. 1 mysql mysql 3.4K Aug 25 09:12 ib_buffer_pool
-rw-r-----. 1 mysql mysql 12M Aug 25 09:12 ibdata1
-rw-r-----. 1 mysql mysql 23M Aug 25 09:12 mysql.ibd
-rw-r-----. 1 mysql mysql 10M Aug 25 09:12 undo_002
-rw-r-----. 1 mysql mysql 10M Aug 25 09:12 undo_001
-rw-r-----. 1 mysql mysql 48M Aug 25 09:12 ib_logfile0
-rw-r-----. 1 mysql mysql 48M Aug 25 09:12 ib_logfile1
drwxr-x---. 2 mysql mysql 89 Aug 25 09:12 #clone

Local Cloning :

  • Cloning data from the local MySQL data directory to another directory on the same server where the MySQL server instance runs.

Step 1 :

mysql> grant BACKUP_ADMIN ON *.* TO 'mydbops_clone_user'@'%';
Query OK, 0 rows affected (0.10 sec)

Step 2 :

mysql> clone local data directory='/vagrant/clone_backup/mysql';
Query OK, 0 rows affected (3.94 sec)

Note :

  • The MySQL server must have the necessary write access to create the directory.
  • A local cloning operation does not support cloning of user-created tables or tablespaces that reside outside of the data directory.

How does the clone plugin work?

  • I have two standalone servers with the same configuration.
1) 192.168.33.25 -
   * 2 core
   * 4GB RAM
   * 50 GB SSD

2) 192.168.33.26 -
   * 2 core
   * 4GB RAM
   * 50 GB SSD
  • I installed MySQL 8.0.17 and enabled the clone plugin on both servers, and created the above-mentioned users on the donor & recipient nodes.
Donor  192.168.33.25
Recipient 192.168.33.26

Step 1 :

  • I created the mydbops database with 3 tables, then loaded 2M records into each table on the donor node.

Step 2 :

  • I created another database called sysbench and started loading data into it.

Example :

[root@mydbopslabs25 sysbench]# sysbench oltp_insert.lua --table-size=2000000 --num-threads=2 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --tables=10 --mysql-user=test --mysql-password=Secret!@817 prepare
WARNING: --num-threads is deprecated, use --threads instead
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Initializing worker threads...

Creating table 'sbtest1'...
Creating table 'sbtest2'...
Inserting 2000000 records into 'sbtest1'
Inserting 2000000 records into 'sbtest2'
Creating a secondary index on 'sbtest1'...
Creating a secondary index on 'sbtest2'...
.
.
.
Inserting 2000000 records into 'sbtest8'
.
.
Inserting 2000000 records into 'sbtest10'
Creating a secondary index on 'sbtest9'...
Creating a secondary index on 'sbtest10'...

Step 3 :

  • At the same time, I added the address of the donor MySQL server instance (with port) to the clone_valid_donor_list on the recipient node.

Example :

mysql > set global clone_valid_donor_list='192.168.33.25:3306';
Query OK, 0 rows affected (0.01 sec)

Step 4:

  • Initialized the cloning process on the recipient node.
mysql> clone instance from mydbops_clone_user@192.168.33.25:3306 identified by 'Mydbops@123';
Query OK, 0 rows affected (4 min 20.48 sec)
  • While the cloning process was running, I watched the MySQL data directory to see how the data is cloned and how the data directory files are replaced.
  • During this process it does not overwrite the existing undo & redo log files; it creates new files like these:
-rw-r-----. 1 mysql mysql 23M Aug 25 10:44 mysql.ibd.#clone
-rw-r-----. 1 mysql mysql 5.3K Aug 25 10:44 ib_buffer_pool.#clone
-rw-r-----. 1 mysql mysql 12M Aug 25 10:45 ibdata1.#clone
-rw-r-----. 1 mysql mysql 40M Aug 25 10:48 undo_002.#clone
-rw-r-----. 1 mysql mysql 40M Aug 25 10:48 undo_002.#clone
  • Inside the data directory it creates a #clone directory in which the following files are created.

1.#view_progress: persists performance_schema.clone_progress data

2.#view_status: Persists performance_schema.clone_status data

3.#status_in_progress: Temporary file that exists when clone in progress

4.#status_error: Temporary file to indicate incomplete clone.

5.#status_recovery: Temporary file to hold recovery status information

6.#new_files: List of all files created during clone

7.#replace_files: List of all files to be replaced during recovery

  • Once the cloning process is completed, it swaps the files and restarts the mysql service.
  • During the cloning we are still able to access the data inside MySQL on the recipient node. The connection is closed while the mysqld service restarts (swapping the files).

Example :

[root@mydbopslabs26 vagrant]# mysql -e "select count(*) from mydbops.t1;select sleep(30);select count(*) from mydbops.t1;"
+----------+
| count(*) |
+----------+
| 2000000  |
+----------+
+-----------+
| sleep(30) |
+-----------+
| 0         |
+-----------+
ERROR 2006 (HY000) at line 1: MySQL server has gone away
  • After the cloning process completes, a few stats are maintained in the #clone directory.
  • This directory is located inside the MySQL data directory.

1) #view_status

  • This file maintains the donor node's host and MySQL port details.

2) #view_progress

  • This file maintains the progress of the cloning operation.

Example :

2 1 1568463987624627 1568463988356572 0 0 0
2 2 1568463988357856 1568464039117879 1630655790 1630655790 1630750990
2 2 1568464039120173 1568464039648790 0 0 197
  • Here “1568464039120173” is an epoch timestamp.

3) #status_recovery

  • This file contains the binlog coordinates.

Example :

./binlog.000011
190479203

Note :

  • We can get these stats from the Performance Schema too.

Page Tracking :

How are the active changes to the DB tracked?

  • The pages modified during the cloning process are tracked either when the mini-transaction (mtr) adds them to the flush list or when they are flushed to disk by the I/O threads.

Consistency of this phase is defined as follows,

* At start, it guarantees to track all pages that are not yet flushed. All
flushed pages would be included in “FILE COPY”.

* At end, it ensures that the pages are tracked at least up to the checkpoint
LSN. All modifications after checkpoint would be included in “REDO COPY”.

Monitoring Cloning Operations:

Is it possible to monitor the cloning progress ?

  • Yes. A cloning operation may take a short or long time to complete, depending on the amount of data and other factors related to data transfer.
  • You can monitor the status and progress of a cloning operation using the Performance Schema.
  • MySQL 8.0.17 introduced new clone tables and clone instrumentation for this purpose.

Note :

  • The clone_status and clone_progress Performance Schema tables can be used to monitor a cloning operation on the recipient MySQL server instance only.
  • The clone_status table provides the state of the current or last executed cloning operation.
  • A clone operation has four possible states:
    • Not Started
    • In Progress
    • Completed
    • Failed

Example :

mysql> select stage,state,begin_time as start_time,end_time,data,network from performance_schema.clone_progress;
+-----------+-------------+----------------------------+----------------------------+----------+----------+
| stage     | state       | start_time                 | end_time                   | data     | network  |
+-----------+-------------+----------------------------+----------------------------+----------+----------+
| DROP DATA | Completed   | 2019-08-25 09:27:53.725694 | 2019-08-25 09:27:53.922072 | 0        |  0       |
| FILE COPY | Completed   | 2019-08-25 09:27:53.922424 | 2019-08-25 09:27:54.651132 | 57904527 | 57915509 |
| PAGE COPY | Completed   | 2019-08-25 09:27:54.651463 | 2019-08-25 09:27:54.756606 | 0        | 99       |
| REDO COPY | Completed   | 2019-08-25 09:27:54.756926 | 2019-08-25 09:27:54.858837 | 2560     | 3031     |
| FILE SYNC | Completed   | 2019-08-25 09:27:54.859098 | 2019-08-25 09:27:55.273789 | 0        | 0        |
| RESTART   | Not Started | NULL                       | NULL                       | 0        | 0        |
| RECOVERY  | Not Started | NULL                       | NULL                       | 0        | 0        |
+-----------+-------------+----------------------------+----------------------------+----------+----------+
7 rows in set (0.00 sec)
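  • The clone_status table can be queried in a similar way; a minimal example (only a few of its columns are selected here):
mysql> select state, begin_time, end_time, source, error_no, error_message from performance_schema.clone_status;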

Snapshot Status :

INIT :

  • The clone object is initialized and is identified by a donor.

FILE COPY :

  • The state changes from INIT to “FILE COPY” when snapshot_copy interface is called.
  • Before making the state change we start "Page Tracking" at LSN "CLONE START LSN".
  • In this state we copy all database files and send them to the recipient.

PAGE COPY :

  • The state changes from “FILE COPY” to “PAGE COPY” after all files are copied and sent.
  • Before making the state change we start "Redo Archiving" at LSN "CLONE FILE END LSN" and stop "Page Tracking".
  • In this state, all modified pages as identified by Page IDs between “CLONE START LSN” and “CLONE FILE END LSN” are read from “buffer pool” and sent.
  • We sort the pages by space ID and page ID to avoid random reads (donor) and random writes (recipient) as much as possible.

REDO COPY :

  • The state changes from “PAGE COPY” to “REDO COPY” after all modified pages are sent.
  • Before making the state change we stop "Redo Archiving" at LSN "CLONE LSN".
  • This is the LSN of the cloned database. We would also need to capture the replication coordinates at this point in future.
  • It should be the replication coordinate of the last committed transaction up to the “CLONE LSN”.
  • We send the redo logs from archived files in this state from “CLONE FILE END LSN” to “CLONE LSN” before moving to “Done” state.

Done :

  • The clone object is kept in this state until destroyed by the snapshot_end() call.

Performance Schema to Monitor Cloning:

  • There are three stages of events for monitoring progress of a cloning operation.
  • Each stage event reports WORK_COMPLETED and WORK_ESTIMATED values. Reported values are revised as the operation progresses.

1)stage/innodb/clone (file copy) :

    • Indicates progress of the file copy phase of the cloning operation.
    • The number of files to be transferred is known at the start of the file copy phase, and the number of chunks is estimated based on the number of files.

2) stage/innodb/clone (page copy) :

  • Indicates progress of the page copy phase of cloning operation.
  • Once the file copy phase is completed, the number of pages to be transferred is known, and WORK_ESTIMATED is set to this value.

3) stage/innodb/clone (redo copy) :

  • Indicates progress of the redo copy phase of cloning operation.
  • Once the page copy phase is completed, the number of redo chunks to be transferred is known, and WORK_ESTIMATED is set to this value.

Enabling Monitoring 

mysql> update performance_schema.setup_instruments set ENABLED='YES' where NAME LIKE 'stage/innodb/clone%';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 3 Changed: 3 Warnings: 0
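  • With the instruments enabled (and, if required, the events_stages_% consumers in performance_schema.setup_consumers), the progress of an ongoing clone can be followed with a query along these lines:
mysql> select event_name, work_completed, work_estimated from performance_schema.events_stages_current where event_name like 'stage/innodb/clone%';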

Replication Configuration :

  • The clone plugin supports replication. In addition to cloning data, a cloning operation extracts and transfers replication coordinates from the donor and applies them on the recipient.
  • The clone plugin for provisioning is considerably faster and more efficient than replicating a large number of transactions.
  • Both binary log position and GTID coordinates are extracted and transferred from the donor MySQL server instance.

Binlog and position :

  • The binlog file and position are stored in the clone_status table; these coordinates correspond to the donor node's binlog.
mysql> select binlog_file,binlog_position from performance_schema.clone_status;
+------------------+-----------------+
| binlog_file      | binlog_position |
+------------------+-----------------+
| mysql-bin.000479 | 483007997       |
+------------------+-----------------+
  • If you are using GTIDs, use the query below:
mysql> select @@global.gtid_executed;
  • Here I am using binlog coordinates for replication:
mysql> change master to master_host ='192.168.33.11', master_port =3306,master_log_file ='mysql-bin.000479',master_log_pos =483007997;

mysql> start slave user='repl' password='Repl@123';
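  • Alternatively, if GTIDs are enabled on both the donor and the recipient, a minimal sketch of the equivalent setup (assuming the same repl user exists on the donor) replaces the file and position coordinates with auto-positioning:
mysql> change master to master_host='192.168.33.11', master_port=3306, master_auto_position=1;

mysql> start slave user='repl' password='Repl@123';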

Limitations :

  • The clone plugin has some limitations:
  • DDL, including TRUNCATE TABLE, is not permitted during a cloning operation. Concurrent DML is permitted.
  • An instance cannot be cloned from a different MySQL server version. The donor and recipient must have the same MySQL server version.
  • The clone plugin does not support cloning of binary logs.
  • The clone plugin only clones data stored in InnoDB. Data in other storage engines, such as MyISAM and CSV tables, is not cloned.

Conclusion :

  • I believe creating replicas has become much easier with the help of the MySQL 8.0.17 clone plugin.
  • The clone plugin can be used not only to set up asynchronous replicas but also to provision Group Replication members.

MySQL Cloud Backup and Restore Scenarios Using Microsoft Azure


Backups are a very important part of your database operations, as your business must be secured when catastrophe strikes. When that time comes (and it will), your Recovery Point Objective (RPO) and Recovery Time Objective (RTO) should be predefined, as this is how fast you can recover from the incident which occurred. 

Most organizations vary their approach to backups, trying to have a combination of server image backups (snapshots), logical and physical backups. These backups are then stored in multiple locations, so as to avoid any local or regional disasters.  It also means that the data can be restored in the shortest amount of time, avoiding major downtime which can impact your company's business. 

Hosting your database with a cloud provider, such as Microsoft Azure (which we will discuss in this blog), is no exception; you still need to prepare and define your disaster recovery policy.

Like other public cloud offerings, Microsoft Azure (Azure) offers an approach for backups that is practical, cost-effective, and designed to provide you with recovery options. Microsoft Azure's backup solutions are easy to configure and operate, handled using Azure Backup or through the Recovery Services vault (if you are operating your database using virtual machines). 

If you want a managed database in the cloud, Azure offers Azure Database for MySQL. This should be used only if you do not want to operate and manage the MySQL database yourself. This service offers a rich solution for backup which allows you to create a backup of your database instance, either from a local region or through a geo-redundant location. This can be useful for data recovery. You may even be able to restore a node from a specific period of time, which is useful in achieving point-in-time recovery. This can be done with just one click.

In this blog, we will cover all of these backup and restore scenarios using a MySQL database on the Microsoft Azure cloud.

Performing Backups on a Virtual Machine on Azure

Unfortunately, Microsoft Azure does not offer a MySQL-specific backup type solution (e.g. MySQL Enterprise Backup, Percona XtraBackup, or MariaDB's Mariabackup). 

Upon creation of your Virtual Machine (using the portal), you can set up a process to back up your VM using the Recovery Services vault. This will guard you from any incident, disaster, or catastrophe, and the data stored is encrypted by default. Adding encryption is optional and, though recommended by Azure, it comes with a price. You can take a look at their Azure Backup Pricing page for more details.

To create and setup a backup, go to the left panel and click All Resources → Compute → Virtual Machine. Now set the parameters required in the text fields. Once you are on that page, go to the Management tab and scroll down below. You'll be able to see how you can setup or create the backup. See the screenshot below:

Create a Virtual Machine - Azure

Then setup your backup policy based on your backup requirements. Just hit the Create New link in the Backup policy text field to create a new policy. See below:

Define Backup Policy - Azure

You can configure your backup policy with weekly, monthly, and yearly retention. 

Once you have your backup configured, you can check that you have a backup enabled on that particular virtual machine you have just created. See the screenshot below:

Backup Settings - Azure

Restore and Recover Your Virtual Machine on Azure

Designing your recovery in Azure depends on what kind of policy and requirements your application requires. It also depends on whether RTO and RPO must be low or invisible to the user in case of an incident or during maintenance. You may set up your virtual machine with an availability set or in a different availability zone to achieve a higher recovery rate. 

You may also set up disaster recovery for your VM to replicate your virtual machines to another Azure region for business continuity and disaster recovery needs. However, this might not be a good idea for your organization as it comes with a high cost. If in place, Azure offers you an option to restore or create a virtual machine from the backup created. 

For example, during the creation of your virtual machine, you can go to the Disks tab, then go to Data Disks. You can create a new disk or attach an existing one where you can use the snapshot you have available. In the screenshot below you can see that you are able to choose from a snapshot or a storage blob:

Create a New Disk - Azure

You may also restore to a specific point in time, just like in the screenshot below:

Set Restore Point - Azure

Restoring in Azure can be done in different ways, but it uses the same resources you have already created.

For example, if you have created a snapshot or a disk image stored in the Azure Storage blob, then when you create a new VM you can use that resource as long as it's compatible and available. Additionally, you may even be able to do some file recovery, aside from restoring a VM, just like in the screenshot below:

File Recovery - Azure

During File Recovery, you may be able to choose from a specific recovery point, as well as download a script to browse and recover files. This is very helpful when you need only a specific file but not the whole system or disk volume.

Restoring from backup on an existing VM takes about three minutes. However, restoring from backup to spawn a new VM takes twelve minutes. This, however, could depend on the size of your VM and the network bandwidth available in Azure. The good thing is that, when restoring, it will provide you with details of what has been completed and how much time is remaining. For example, see the screenshot below:

Recovery Job Status - Azure

Backups for Azure Database For MySQL

Azure Database for MySQL is a fully-managed database service by Microsoft Azure. This service offers a very flexible and convenient way to set up your backup and restore capabilities.

Upon creation of your MySQL server instance, you can then set up backup retention and choose your backup redundancy options: either locally redundant (local region) or geo-redundant (in a different region). Azure will provide you the estimated cost you would be charged for a month. See a sample screenshot below:

Pricing Calculator - Azure

Keep in mind that geo-redundant backup options are only available on General Purpose and Memory Optimized types of compute nodes. It's not available on a Basic compute node, but you can have your redundancy in the local region (i.e. within the availability zones available).

Once you have a master set up, it's easy to create a replica by going to Azure Database for MySQL servers → Select your MySQL instance → Replication → and clicking Add Replica. Your replica can be used as the source or restore target when needed. 

Keep in mind that in Azure, when you stop the replication between the master and a replica, the change is permanent and irreversible, as it turns the replica into a standalone server. A replica created through Microsoft Azure is a managed instance, so you cannot stop and start the replication threads the way you would in a normal master-slave replication setup; a restart is all you can do. If you created the replica manually, by restoring from the master or from a backup (e.g. via a point-in-time recovery), then you will be able to stop/start the replication threads or set up a slave lag if needed.

Restoring Your Azure Database For MySQL From A Backup

Restoring is very quick and easy using the Azure portal. You can just hit the Restore button on your MySQL instance node and follow the UI, as shown in the screenshot below:

Restoring Your Azure Database For MySQL From A Backup

Then you can select a point in time and create/spawn a new instance based on the captured backup:

Restore - Azure Database For MySQL

Once the node is available, it will not yet be a replica of the master. You need to set this up manually using the stored procedures Azure provides:

CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', 3306, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');

where:

  • master_host: hostname of the master server
  • master_user: username for the master server
  • master_password: password for the master server
  • master_log_file: binary log file name from running SHOW MASTER STATUS
  • master_log_pos: binary log position from running SHOW MASTER STATUS
  • master_ssl_ca: CA certificate’s context. If not using SSL, pass in an empty string.
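For illustration only, here is what such a call could look like when run through the mysql client against the restored node, with hypothetical hostnames, credentials, and binary log coordinates (taken from SHOW MASTER STATUS on the master); the last argument is an empty string because SSL is not used in this sketch:

mysql -h mydemorestored.mysql.database.azure.com -u myadmin@mydemorestored -p -e "CALL mysql.az_replication_change_master('mydemomaster.mysql.database.azure.com', 'repluser@mydemomaster', 'ReplPassw0rd', 3306, 'mysql-bin.000002', 120, '');"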

Then start the replication threads as follows:

CALL mysql.az_replication_start;

or you can stop the replication threads as follows:

CALL mysql.az_replication_stop;

or you can remove the replication configuration for the master as follows:

CALL mysql.az_replication_remove_master;

or skip SQL thread errors as follows:

CALL mysql.az_replication_skip_counter;

As mentioned earlier, when a replica is created through the Add Replica feature of a MySQL instance in Microsoft Azure, these specific stored procedures aren't available. Only the mysql.az_replication_restart procedure is available, since you are not allowed to stop or start the replication threads of a replica managed by Azure. So the example above applies to a node restored from a master: it holds a full copy of the master but acts as a standalone node and needs a manual setup to become a replica of an existing master.

Additionally, a replica that you have set up manually will not show up under Azure Database for MySQL servers → your MySQL instance → Replication, since you created and configured the replication yourself.

Alternative Cloud and Restore Backup Solutions

There are certain scenarios where you want full access when taking a full backup of your MySQL database in the cloud. To do this, you can create your own script or use open-source technologies. With these, you control how the data in your MySQL database is backed up and precisely how it is stored.

You can also leverage the Azure Command Line Interface (CLI) to create your own automation. For example, you can create a snapshot of a VM's OS disk with the following Azure CLI command:

az snapshot create -g myResourceGroup --source "$osDiskId" --name osDisk-backup
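The $osDiskId variable above is assumed to hold the resource ID of the VM's managed OS disk. One way to fetch it, using hypothetical resource group and VM names, is:

osDiskId=$(az vm show -g myResourceGroup -n myVM --query "storageProfile.osDisk.managedDisk.id" -o tsv)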

or create a replica of your Azure Database for MySQL server with the following command:

az mysql server replica create --name mydemoreplicaserver --source-server mydemoserver --resource-group myresourcegroup
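If you need to roll a managed instance back to a point in time from the command line instead, the Azure CLI can also spawn a new server from the automated backups. Here is a minimal sketch, with hypothetical server names and timestamp:

az mysql server restore --resource-group myresourcegroup --name mydemoserver-restored --source-server mydemoserver --restore-point-in-time "2019-10-14T13:59:00Z"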

Alternatively, you can leverage an enterprise tool that offers both backup and restore options. Using open-source technologies or third-party tools requires the knowledge and skills to build your own implementation. Here is a list of tools you can use:

  • ClusterControl - While we may be a little biased, ClusterControl offers the ability to manage physical and logical backups of your MySQL database using battle-tested, open-source technologies (PXB, Mariabackup, and mydumper). It supports MySQL, Percona, MariaDB, and Galera databases. You can easily create your backup policy and store your database backups on any cloud (AWS, GCP, or Azure). Please note that the free version of ClusterControl does not include the backup features.
  • LVM Snapshots - You can use LVM to take a snapshot of your logical volume. This is only applicable to your VM since it requires access to block-level storage. Use this tool with caution, as it can leave your database node unresponsive while the backup is running.
  • Percona XtraBackup (PXB) - An open-source technology from Percona. With PXB, you can create a physical backup copy of your MySQL database. You can also take a hot backup with PXB for the InnoDB storage engine, but it's recommended to run it on a slave or an otherwise idle MySQL server. This is only applicable to your VM instance since it requires binary or file access to the database server itself.
  • Mariabackup - Like PXB, it's an open-source technology, forked from PXB but maintained by MariaDB. If your database is using MariaDB, you should use Mariabackup in order to avoid incompatibility issues with tablespaces.
  • mydumper/myloader - These backup tools create logical backup copies of your MySQL database. You can use them with your Azure Database for MySQL, though I haven't verified how well this works for the backup and restore procedure.
  • mysqldump - A logical backup tool that is very useful when you need to back up and dump (or restore) a specific table or database to another instance. It is commonly used by DBAs, but you need to keep an eye on your disk space, as logical backup copies are large compared to physical backups (see the sketch after this list).
  • MySQL Enterprise Backup - It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac, and Solaris. It's not a free backup tool but offers a lot of features.
  • rsync - A fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permits very flexible specification of the set of files to be copied. On most Linux systems, rsync is installed as part of the OS.
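To illustrate the do-it-yourself approach mentioned in the mysqldump entry above, here is a rough sketch that dumps a single database and pushes the compressed dump to an Azure Storage blob with the CLI. The server, database, storage account, and container names are all hypothetical, and authentication to the storage account is assumed to be configured already:

mysqldump --single-transaction --routines --triggers -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p salesdb | gzip > salesdb.sql.gz

az storage blob upload --account-name mybackupstorage --container-name mysql-backups --name salesdb-$(date +%F).sql.gz --file salesdb.sql.gz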

MySQL Connector/J 8.0.18 has been released


Dear MySQL users,

MySQL Connector/J 8.0.18 is the latest General Availability release of
the MySQL Connector/J 8.0 series.  It is suitable for use with MySQL
Server versions 8.0, 5.7, and 5.6.  It supports the Java Database
Connectivity (JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8.0.18.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.18 GA, see the “General Availability
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.18 (2019-10-14, General
Availability)

Bugs Fixed


     * A minor code improvement has been put into
       DatabaseMetaDataUsingInfoSchema.getColumns(). (Bug #29898567,
       Bug #95741)

     * For a replication setup, when the connection property
       loadBalanceAutoCommitStatementThreshold was set to any values
       other than 0, load-balancing server switching failed. It was
       because in this case, LoadBalancedAutoCommitInterceptor did not
       have the required access to the parent connection proxy that
       had established the connection, and this fix enables such access.
       (Bug #25223123, Bug #84098)

     * An attempt to retrieve multiple result sets returned by
       asynchronous executions of stored procedures resulted in an
       ExecutionException. With this fix, Connector/J now works properly
       when asynchronous executions return multiple result sets.
       (Bug #23721537)

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed

MySQL Connector/Node.js 8.0.18 has been released


Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.18, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/.
For more information about how the X DevAPI is implemented in MySQL
Connector/Node.js, and its usage, see
http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires at least MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.18 (2019-10-14, General
Availability)

Functionality Added or Changed

  • Implemented the X DevAPI cursor model, which includes
    adding methods such as fetchOne(), fetchAll(),
    getColumns(), hasData(), and nextResult(). For additional
    details, see the X DevAPI documentation about Working
    with Result Sets
    (https://dev.mysql.com/doc/x-devapi-userguide/en/working-with-result-sets.html).
    Previously, handling result set data or metadata required
    specific callback functions when calling execute(). With
    this new interface, the connector automatically switches
    to this new pull-based cursor model if these callback
    functions are not provided.

  • Improved Collection.getOne() performance by making the
    underlying lookup expression parse only once, and having
    subsequent Collection.getOne() calls utilize server-side
    prepared statements.
  • Added support to generate test coverage reports by
    running the likes of npm run coverage; see the bundled
    CONTRIBUTING.md for details and requirements. This was
    added to help users contribute patches.
  • Added linter check support to help enforce coding style
    and convention rules for new contributions by running the
    likes of npm run linter; see the bundled CONTRIBUTING.md
    for details.

Bugs Fixed

  • Added support for assigning Node.js Buffer values to
    expression or SQL query placeholders. (Bug #30163003, Bug #96480)
  • MySQL column binary values (such as BLOB, BINARY, and
    VARBINARY) can now convert to proper Node.js Buffer
    instances. (Bug #30162858, Bug #96478)
  • Inserting a raw Node.js Buffer value into a MySQL BLOB
    field resulted in an error as the content_type was
    improperly set; it’s now handled as a raw byte string by
    the X Plugin. (Bug #30158425)
  • The padding characters used for fixed-length columns
    now map to the collation code provided by the column
    metadata; previously it was based on the JavaScript
    native type of the values. (Bug #30030159)

On Behalf of MySQL/ORACLE RE Team
Gipson Pulla

MySQL Workbench 8.0.18 has been released

MySQL Connector/C++ 8.0.18 has been released


Dear MySQL users,

MySQL Connector/C++ 8.0.18 is a new release version of the MySQL
Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document
Store or in a traditional way, using SQL queries. It allows writing
both C++ and plain C applications using X DevAPI and X DevAPI for C.
It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see
“X DevAPI User Guide” at

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL
as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.18, see the “General Availability (GA)
Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/


Changes in MySQL Connector/C++ 8.0.18 (2019-10-14, General Availability)

Compilation Notes

     * It is now possible to compile Connector/C++ using OpenSSL
       1.1.

     * Connector/C++ no longer supports using wolfSSL as an
       alternative to OpenSSL. All Connector/C++ builds now use
       OpenSSL.

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

Announcing MySQL Server 8.0.18, 5.7.28 and 5.6.46

MySQL Server 8.0.18, 5.7.28 and 5.6.46, new versions of the popular Open Source Database Management System, have been released. These releases are recommended for use on production systems. For an overview of what’s new, please see:

http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

For information on installing the release on new servers, please see the MySQL installation documentation at […]

MySQL Connector/NET 8.0.18 has been released


Dear MySQL users,

MySQL Connector/NET 8.0.18 is the first version to support
.Net Core 3.0 and the sixth general availability release
of MySQL Connector/NET to add support for the X DevAPI, which
enables application developers to write code that combines the
strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.
NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET such as availability
in the GAC, integration with Visual Studio’s Entity Framework Designer
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires at least MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.18, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.18 (2019-10-14, General Availability)

Functionality Added or Changed

     * Connector/NET now supports IPV6 connections made using
       the classic MySQL protocol when the operating system on
       the server host also supports IPV6. (Bug #29682333)

     * Support for .NET Core 3.0 was added.

     * In tandem with Microsoft, Connector/NET ends support for
       .NET Core 1.0 and 1.1 (and also for Entity Framework Core
       1.1, which depends on .NET Core 1.1).

     * Previously, if the server restricted a classic
       Connector/NET session to sandbox mode and the password on
       the account expired, the session continued to permit the
       use of SET statements. Now, SET statements in sandbox
       mode with an expired password are prohibited and will
       return an error message if used. The one exception is SET
       PASSWORD, which is still permitted (see Server Handling
       of Expired Passwords
(https://dev.mysql.com/doc/refman/8.0/en/expired-password-handling.html)).

Bugs Fixed

     * The Renci.SshNet.dll deployment was problematic for
       Connector/NET 8.0.17 MSI installations. Some
       applications, such as Microsoft Excel, were unable to
       read MySQL data as a result. This fix removes unnecessary
       dependencies on the DLL and also ensures that the MSI
       installation deploys the correct Renci.SshNet.dll to the
       GAC. (Bug #30215984, Bug #96614)

     * Connector/NET returned an inaccurate value for the YEAR
       type when a prepared command was used. (Bug #28383721,
       Bug #91751)

     * Entity Framework Core: A syntax error was generated
       during an operation attempting to rename a table that was
       previously migrated from code. Now, the primary key
       constraint for an existing table can be dropped without
       errors when the follow-on migration operation is
       performed. (Bug #28107555, Bug #90958)

On Behalf of Oracle/MySQL Engineering Team
Sreedhar S

MySQL 8.0.18 Replication Enhancements


The latest MySQL 8.0 release is out. That is MySQL 8.0.18.  We have some new replication enhancements and some work done on replication internals in this release that we would like to highlight and celebrate with our users. Here is a quick summary.…


Replication with restricted privileges


Up until MySQL 8.0.18, the slave executes replicated transactions without checking privileges. It does so to be able to apply everything that its upstream server (the master) tells it to. In practice this means that the slave fully trusts its master.…

MySQL InnoDB Cluster – What’s new in Shell AdminAPI 8.0.18 release


The MySQL Development Team is very happy to announce a new 8.0 Maintenance Release of InnoDB Cluster – 8.0.18.

In addition to major quality improvements, 8.0.18 brings some very useful features!

This blog post will only cover InnoDB cluster’s frontend and control panel – MySQL Shell and its AdminAPI – Stay tuned for other blog posts covering MySQL Router and Group Replication!…

How to Start a 3-Node Percona XtraDB Cluster with the Binary Tarball Package

3-Node Percona XtraDB Cluster

This blog post will help you configure a 3-node Percona XtraDB Cluster using a binary tarball on your local machine. Configuration files are auto-generated with mostly default settings, except for port/IP address details. The tool is a handy script that creates configuration files and starts multiple Percona XtraDB Cluster nodes on the fly, helping you start PXC quickly without spending time on startup configuration and without using any virtual environments. The script is available in the percona-qa github project. Currently, it supports PXC binary tarball distributions only.

You can download the appropriate tarball package from the Percona-XtraDB-Cluster-8.0 downloads page. Once you have the packages available on your local machine, unpack the tarball package.

Note: This tool even works with PXC-5.7 packages.

Now we need to run the pxc-startup.sh script from the Percona XtraDB Cluster base directory. It will generate the PXC startup script called start_pxc.

The following steps will help you to start a 3-node PXC in CentOS 7.

1. Download the pxc-startup.sh script:

wget https://raw.githubusercontent.com/Percona-QA/percona-qa/master/pxc-tests/pxc-startup.sh

2. Download the PXC binary tarball package for CentOS 7 (in this blog we will be using the PXC-8.0 experimental package):

wget https://www.percona.com/redir/downloads/TESTING/Percona-XtraDB-Cluster-8.0/centos7/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102.tar.gz

3. Unpack the tarball package and run the pxc-startup.sh script from the Percona XtraDB Cluster base directory:

$ tar -xzf Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102.tar.gz

$ cd Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/

$ bash ../pxc-startup.sh
Added scripts: ./start_pxc
./start_pxc will create ./stop_pxc | ./*node_cli | ./wipe scripts
$

4. To start a 3-node cluster, pass 3 as the parameter to ./start_pxc:

$ ./start_pxc 3
Starting PXC nodes…
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1.cnf
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2.cnf
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3.cnf
$

The start_pxc script will also create stop, wipe, and per-node client (cli) helper scripts.

$ ls -1 *_node_cli wipe *_pxc
1_node_cli
2_node_cli
3_node_cli
start_pxc
stop_pxc
wipe
$

The ./stop_pxc script will stop all cluster nodes.

$ ./stop_pxc
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3 halted
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2 halted
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1 halted
$

The ./[1-3]_node_cli scripts will help you log in to the respective node using the MySQL client.

$ ./1_node_cli
[..]
node1:root@localhost> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

1 row in set (0.03 sec)
node1:root@localhost>

The ./wipe script will trigger the stop_pxc script and move the data directories to .PREV.

$ ls  -d1 *.PREV
node1.PREV
node2.PREV
node3.PREV
$

The configuration files will be created in the base directory. You can also add custom configurations in the start_pxc script; a small example follows the listing below.

$ ls -1 *.cnf
node1.cnf
node2.cnf
node3.cnf
$
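As mentioned above, the node configuration can be customized. The snippet below is a purely illustrative sketch that appends two standard MySQL/PXC settings to node1.cnf; pick values that suit your workload:

cat >> node1.cnf <<'EOF'

[mysqld]
max_connections=500
wsrep_slave_threads=4
EOF

Note that start_pxc may regenerate these files, so for persistent changes it is safer to add your settings in the start_pxc script itself, as suggested above. Restart the cluster with ./stop_pxc and ./start_pxc 3 for the changes to take effect.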

MySQL is OpenSSL-only now !


MySQL needs an SSL/TLS library. It uses it primarily to encrypt network connections, but also uses its various algorithms and random number generators.

OpenSSL is the gold standard when it comes to cross-platform, open-source SSL/TLS libraries that you can use from C/C++.…

MySQL Shell 8.0.18 – What’s New?


The MySQL Development team is proud to announce a new version of the MySQL Shell with the following major improvements:

  • Migration to Python 3
  • Built-in thread reports
  • Ability to use an external editor
  • Ability to execute system commands within the shell
  • Admin API Improvements:
    • New options to log all the SQL used on the different operations
    • Support for IPv6 in InnoDB Clusters
    • New function to reset the InnoDB Cluster recovery accounts
  • General maintenance and bug fixing

Python 3 Migration

Due to the upcoming end-of-life (EOL) of Python 2.7 at the end of this year, the Shell has been updated to use Python 3.