Channel: Planet MySQL

MySQL Deadlocks Are Our Friends


Why another article on this, Marco?

MySQL deadlocks are a topic covered many times, including here at Percona. I suggest you review the reference section at the end of this post for articles on how to identify deadlocks and where they come from.

So why another article?

The answer is that exchanges like the following are still very common:

User (John): “Marco, our MySQL is having problems”
Marco: “Ok John what problems? Can you be a bit more specific?”
John: “Our log scraper is collecting that MySQL has a lot of errors”
Marco: “Ok can you share the MySQL log so I can review it?”
John: “Errors are in the application log, will share one application log”

Marco reviews the log and in it he finds:

“ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction”

Marco’s reaction is: “Oh my …”

Why? Because a deadlock is not the error the message makes it out to be; what we really have is a problem of mindset and terminology. In this very short article, I will try to change your point of view on deadlocks.

What is a MySQL Deadlock?

A deadlock is a situation wherein two or more competing actions are waiting for the other to finish. As a consequence, neither ever does. In computer science, deadlock refers to a specific condition when two or more processes are each waiting for each other to release a resource.

In order for a deadlock to happen, four conditions (the Coffman conditions) must hold:

1. Mutual exclusion: At least one resource must be held in a non-shareable mode. Otherwise, the processes would not be prevented from using the resource when necessary. Only one process can use the resource at any given instant of time.

2. Hold and wait or resource holding: A process is currently holding at least one resource and requesting additional resources that are being held by other processes.

3. No preemption: A resource can be released only voluntarily by the process holding it.

4. Circular wait: Each process must be waiting for a resource that is being held by another process, which in turn is waiting for the first process to release the resource.

These conditions are not bound to RDBMSs only; they apply to any system dealing with transactional data processing. In practice, deadlocks cannot be avoided unless one of the above conditions is prevented from occurring, and preventing one without compromising execution integrity is rarely possible. Breaking or ignoring one of these rules, especially in an RDBMS, could affect data integrity, which goes against the very reason an RDBMS exists.

To put this in context, let us review a simple deadlock case.

Say I have MySQL with the world schema loaded and two transactions running, both touching the same two cities in Tuscany (Firenze and Prato) but in a different order.

mysql> select * from City where CountryCode = 'ITA' and District='Toscana';
+------+---------+-------------+----------+------------+
| ID   | Name    | CountryCode | District | Population |
+------+---------+-------------+----------+------------+
| 1471 | Firenze | ITA         | Toscana  | 376662     | <---
| 1483 | Prato   | ITA         | Toscana  | 172473     | <---
+------+---------+-------------+----------+------------+

And both transactions are updating the population: 

Connection 1 will have:
connection1 > start transaction;
Query OK, 0 rows affected (0.01 sec)

connection1 > select * from City where ID=1471;
+------+---------+-------------+----------+------------+
| ID   | Name    | CountryCode | District | Population |
+------+---------+-------------+----------+------------+
| 1471 | Firenze | ITA         | Toscana  | 376662     |
+------+---------+-------------+----------+------------+
1 row in set (0.00 sec)

connection1 > update City set Population=Population + 1 where ID = 1471;
Query OK, 1 row affected (0.05 sec)
Rows matched: 1 Changed: 1 Warnings: 0

connection1 > update City set Population=Population + 1 where ID = 1483;
Query OK, 1 row affected (2.09 sec)
Rows matched: 1 Changed: 1 Warnings: 0

Connection 2 will have:
connection 2 >start transaction;
Query OK, 0 rows affected (0.01 sec)

connection 2 >select * from City where ID=1483;
+------+-------+-------------+----------+------------+
| ID   | Name  | CountryCode | District | Population |
+------+-------+-------------+----------+------------+
| 1483 | Prato | ITA         | Toscana  | 172473     |
+------+-------+-------------+----------+------------+
1 row in set (0.01 sec)

connection 2 >update City set Population=Population + 1 where ID = 1483;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

connection 2 >update City set Population=Population + 1 where ID = 1471;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

This is a very simple example of deadlock detection.

If we stop for a second and ignore the word “ERROR” in the message, what is really happening is that MySQL is preventing us from modifying the data in the wrong way. If the locks were not in place, one of the two transactions would modify the population by incrementing a value that is no longer valid.

So, What’s the Right Thing to Do?

The right thing to do is to abort one of the two transactions and NOTIFY the application that, if it really needs to perform the action (in this case, increase the population), it should re-run the transaction and verify that the change is still required. Just think: the application might re-run transaction two and find there is no need to increase the value at all, because it is already what it is supposed to be.
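As a minimal sketch of what that retry looks like at the SQL level (hypothetical output, assuming connection 1 has since committed; InnoDB already rolled back the deadlocked transaction, so there is nothing to undo manually), connection 2 simply starts a new transaction and repeats its work:

connection 2 >start transaction;
Query OK, 0 rows affected (0.00 sec)

connection 2 >update City set Population=Population + 1 where ID = 1483;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

connection 2 >update City set Population=Population + 1 where ID = 1471;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

connection 2 >commit;
Query OK, 0 rows affected (0.01 sec)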

Think about this: you are calculating the financial situation of your company and you and your colleague are processing the same data but for different tasks. Without locks and deadlocks, you may end up corrupting each other’s interpretation of the data, and perform the wrong operations. As a result, you may end up paying the wrong salaries – or worse.

Given that, and more, deadlocks (and locks) need to be seen as friends that help us keep our data consistent. The problems arise when applications are poorly designed and developed and, unfortunately, when MySQL uses (in my opinion) the wrong terminology.

Deadlock Detection

Let us start with MySQL. “Deadlock detection” detects a condition that is intrinsic and inevitable in the RDBMS/ACID world. As such, labeling it an ERROR is totally misleading. A deadlock is a CONDITION, and its natural conclusion is the abortion of one of the transactions involved. The message should be a NOTIFICATION, not an ERROR.

The problem on the application side, instead, is that isolation and validation of the data are normally delegated to the RDBMS, which is fine. But only seldom do we see applications able to deal with messages such as lock wait timeouts or deadlocks. This is, of course, a huge pitfall because, while it is natural for the RDBMS to handle data consistency, it is not, and should not be, responsible for the retry, which belongs to the application logic.

Nowadays we have a lot of applications that require very fast execution, and locks and deadlocks are seen as enemies because they have a time cost.

But this is a mistake – a design mistake. If you value speed over data consistency, then you should not use an RDBMS, which must respect specific rules at any (time) cost. Other, eventually consistent, data stores will be more appropriate in your case.

If you care about your data, then you need to listen to your RDBMS and write the code in a way that you will get all the benefits out of it, including when it comes to deadlocks.

Conclusion

MySQL deadlocks (and locks) should be seen as friends. They are mechanisms that exist to keep our data consistent. We should not bypass them unless willing to compromise our data. As previously indicated, if you want to understand, in detail, how to diagnose a deadlock, please review the links in the references below.

References

Logging Deadlock Errors

How to Deal With MySQL Deadlocks

Minimize MySQL Deadlocks with 3 Steps


A Simple MySQL Plugin to Retrieve System Metrics


Ever wanted to extend MySQL and add some feature you think is missing?  With MySQL plugins, you can do exactly that.  One thing that has bothered me for several years is that you cannot easily retrieve system metrics from within MySQL.  Whether I am connecting via a remote connection or looking to add features to monitoring without the need for another interface with the server, I have wanted to retrieve system metrics without leaving the MySQL interface.

So, I started a Proof of Concept for this.  My goal was to get metrics such as RAM (total, used, free), system load, CPU utilization, disk utilization for the file system containing the datadir, and more.  My objective was to do this as efficiently within MySQL as possible.  For this, I chose to utilize standard C libraries in as few lines of code as possible without having to scrape system files or run commands to get the data.  The data is pulled on demand so as not to increase the load on the system.

The MySQL plugin architecture is one of the most underutilized features in MySQL in my opinion.  It provides so much power, and I feel the MySQL ecosystem would be so much more powerful if more people took advantage of it.  Below is an example of a basic plugin I created to pull some system metrics.

For this plugin, I chose to access the data via INFORMATION_SCHEMA.OS_METRICS.  This is defined in the plugin in the following:

static struct st_mysql_information_schema simple_table_info = { MYSQL_INFORMATION_SCHEMA_INTERFACE_VERSION };

static ST_FIELD_INFO simple_table_fields[]=
{
{"NAME", 15, MYSQL_TYPE_STRING, 0, 0, 0, 0},
{"VALUE", 6, MYSQL_TYPE_FLOAT, 0, MY_I_S_UNSIGNED, 0, 0},
{"COMMENT", 50, MYSQL_TYPE_STRING, 0, 0, 0, 0},
{0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
};

static int simple_fill_table(THD *thd, TABLE_LIST *tables, Item *cond)
{
struct sysinfo info;
TABLE *table= tables->table;

This defines the structure of the virtual table as having three columns: NAME, VALUE, and COMMENT.  NAME will be a string up to 15 characters long, followed by a float number for VALUE, and a text string for COMMENT up to 50 characters long.

By invoking the sysinfo() function in C, I am able to pull various metrics.  These metrics are returned in a structure.  These can then be passed into the OS_METRICS “table” with the following commands:

struct sysinfo info;
sysinfo(&info);

// Total usable main memory size
table->field[0]->store("TOTAL_RAM", 9, system_charset_info);
table->field[1]->store(info.totalram * info.mem_unit);
table->field[2]->store("Total usable main memory size", 29, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

In the above case, I reference the element “totalram” from the sysinfo structure and store it in the table.  You can see where there is a line for each column of the table and the values are stored one by one.

Here is the most basic form of a plugin that only pulls RAM information and makes it available within INFORMATION_SCHEMA.OS_METRICS:

#include <sql_class.h>
#include <table.h>
#include <stdlib.h>
#include <ctype.h>
#include <mysql_version.h>
#include <mysql/plugin.h>
#include <my_global.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/sysinfo.h>

static struct st_mysql_information_schema simple_table_info = { MYSQL_INFORMATION_SCHEMA_INTERFACE_VERSION };

static ST_FIELD_INFO simple_table_fields[]=
{
{"NAME", 15, MYSQL_TYPE_STRING, 0, 0, 0, 0},
{"VALUE", 6, MYSQL_TYPE_FLOAT, 0, MY_I_S_UNSIGNED, 0, 0},
{"COMMENT", 50, MYSQL_TYPE_STRING, 0, 0, 0, 0},
{0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
};

static int simple_fill_table(THD *thd, TABLE_LIST *tables, Item *cond)
{
struct sysinfo info;
TABLE *table= tables->table;

sysinfo(&info);

// Total usable main memory size
table->field[0]->store("TOTAL_RAM", 9, system_charset_info);
table->field[1]->store(info.totalram * info.mem_unit);
table->field[2]->store("Total usable main memory size", 29, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Available memory size
table->field[0]->store("FREE_RAM", 8, system_charset_info);
table->field[1]->store(info.freeram * info.mem_unit);
table->field[2]->store("Available memory size", 21, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Used memory size
table->field[0]->store("USED_RAM", 8, system_charset_info);
table->field[1]->store((info.totalram - info.freeram) * info.mem_unit);
table->field[2]->store("Used memory size", 16, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Available memory (percentage)
table->field[0]->store("FREE_RAM_PCT", 12, system_charset_info);
table->field[1]->store((float) info.freeram / info.totalram * 100); // the ratio is unitless, so no mem_unit factor is needed
table->field[2]->store("Available memory as a percentage", 32, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Used memory (percentage)
table->field[0]->store("USED_RAM_PCT", 12, system_charset_info);
table->field[1]->store((float) (info.totalram - info.freeram) / info.totalram * 100); // the ratio is unitless, so no mem_unit factor is needed
table->field[2]->store("Used memory as a percentage", 27, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Amount of shared memory
table->field[0]->store("SHARED_RAM", 10, system_charset_info);
table->field[1]->store(info.sharedram * info.mem_unit);
table->field[2]->store("Amount of shared memory", 23, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

// Memory used by buffers
table->field[0]->store("BUFFER_RAM", 10, system_charset_info);
table->field[1]->store(info.bufferram * info.mem_unit);
table->field[2]->store("Memory used by buffers", 22, system_charset_info);
if (schema_table_store_record(thd, table)) return 1;

return 0;
}

static int simple_table_init(void *ptr)
{
ST_SCHEMA_TABLE *schema_table= (ST_SCHEMA_TABLE*)ptr;
schema_table->fields_info= simple_table_fields;
schema_table->fill_table= simple_fill_table;
return 0;
}

mysql_declare_plugin(os_metrics)
{
MYSQL_INFORMATION_SCHEMA_PLUGIN,
&simple_table_info, /* type-specific descriptor */
"OS_METRICS", /* table name */
"Michael Patrick", /* author */
"OS Metrics INFORMATION_SCHEMA table", /* description */
PLUGIN_LICENSE_GPL, /* license type */
simple_table_init, /* init function */
NULL,
0x0100, /* version = 1.0 */
NULL, /* no status variables */
NULL, /* no system variables */
NULL, /* no reserved information */
0 /* no flags */
}
mysql_declare_plugin_end;

You will need the MySQL source code available on a server, along with the libraries needed to compile C code.  I went with the most basic approach of manually compiling the plugin, although I still need to switch it to cmake so it is easier to build.

I named my file osmetricsplugin.cc.  Of course, in the example below, you will need to substitute the path where your code lives for “{PATH_TO_YOUR_PLUGIN_CODE}”.

You can compile the plugin with a command such as the following:

SRCBASE="../percona-server-5.7.24-27"
g++ -DMYSQL_DYNAMIC_PLUGIN -Wall -fPIC -shared \
-I/usr/include/mysql -m64 \
-I${SRCBASE}/sql \
-I${SRCBASE}/include \
-I${SRCBASE}/libbinlogevents/export \
-I${SRCBASE}/libbinlogevents/include \
-I{PATH_TO_YOUR_PLUGIN_CODE} \
-o osmetricsplugin.so osmetricsplugin.cc

If you are interested in seeing more of what can be done with the above, check out the GitHub page for the plugin I wrote.

Once you compile it, you should get an osmetricsplugin.so file which can be copied to your MySQL plugin directory with a command such as:

cp osmetricsplugin.so /usr/lib64/mysql/plugin/

Once it is in place, you can tell MySQL to load the plugin with a command such as:

mysql> INSTALL PLUGIN OS_METRICS SONAME 'osmetricsplugin.so';

You can verify that the plugin is loaded correctly:

mysql> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE "%OS%";
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
| PLUGIN_NAME | PLUGIN_VERSION | PLUGIN_STATUS | PLUGIN_TYPE        | PLUGIN_TYPE_VERSION | PLUGIN_LIBRARY     | PLUGIN_LIBRARY_VERSION | PLUGIN_AUTHOR   | PLUGIN_DESCRIPTION                  | PLUGIN_LICENSE | LOAD_OPTION |
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
| OS_METRICS  | 1.0            | ACTIVE        | INFORMATION SCHEMA | 50724.0             | osmetricsplugin.so | 1.7                    | Michael Patrick | OS Metrics INFORMATION_SCHEMA table | GPL            | ON          |
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
1 row in set (0.00 sec)

To query the data, execute a SQL command such as:

mysql> SELECT * FROM information_schema.OS_METRICS;
+------------------------+-------------------+-------------------------------------------------+
| NAME                   | VALUE             | COMMENT                                         |
+------------------------+-------------------+-------------------------------------------------+
| TOTAL_RAM              |        1039118336 | Total usable main memory size                   |
| FREE_RAM               |         341049344 | Available memory size                           |
| USED_RAM               |         698068992 | Used memory size                                |
| FREE_RAM_PCT           | 32.82102966308594 | Available memory as a percentage                |
| USED_RAM_PCT           | 67.17897033691406 | Used memory as a percentage                     |
| SHARED_RAM             |                 0 | Amount of shared memory                         |
| BUFFER_RAM             |           2158592 | Memory used by buffers                          |
+------------------------+-------------------+-------------------------------------------------+
7 rows in set (0.00 sec)
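If you ever want to unload the plugin (for example, before copying in a newly compiled .so), it can be removed just as easily:

mysql> UNINSTALL PLUGIN OS_METRICS;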

There is much more work to be done on the plugin and more that can be done to improve it.  I believe being able to access system metrics from within MySQL is a very useful feature, but I am very interested to hear what others think.

If interested, please check out a more advanced version of the plugin, and here you can learn more about MySQL plugins.

How To Restore MySQL Users And Passwords During Migration


In any MySQL replication or migration, we generally skip the mysql database in the dump file, since it holds the current server’s settings, users, and so on, and we don’t want to make a mess on the target server with the source’s mysql database. One downside of this approach is that we won’t get the users on the target MySQL server. Maybe you have tried some of the approaches mentioned here. But whenever I don’t have the credentials list during a migration and don’t have time to explore or install tools like pt-show-grants, I use this trick.

DISCLAIMER: You should try this approach only when you don’t have the user credentials from the source DB. Take a backup of the mysql database on the target side first, and do this with your DBA’s help. It has been tested on MySQL 5.6 and 5.7.

Step #1: Get the grants:

-- Get grants
mysql -h SOURCE_DB_IP_ADDRESS -u root -p'PASSWORD' --skip-column-names -A -e "SELECT CONCAT('SHOW GRANTS FOR ''',user,'''@''',host,''';') FROM mysql.user WHERE user<>''" | mysql -h SOURCE_DB_IP_ADDRESS -u root -p'PASSWORD' --skip-column-names -A | sed 's/$/;/g' > user_grants.sql

-- clean up the password field
sed -i 's/IDENTIFIED BY PASSWORD <secret>//g' user_grants.sql

Step #2: Generate the CREATE USER statements:

-- MySQL 5.6
mysql -h source_DB -u root -p'password' --skip-column-names -A mysql -e "SELECT Concat('create user \'', user, '\'@\'', host, '\' IDENTIFIED WITH \'mysql_native_password\' AS \'', password,'\';') FROM   mysql.user where user not in ('mysql.session','mysql.sys','debian-sys-maint','root');" > create_user.sql

-- MySQL 5.7
mysql -h source_DB -u root -p'password' --skip-column-names -A mysql -e "SELECT Concat('create user \'', user, '\'@\'', host, '\' IDENTIFIED WITH \'mysql_native_password\' AS \'', authentication_string,'\';') FROM   mysql.user where user not in ('mysql.session','mysql.sys','debian-sys-maint','root');" > create_user.sql

Step #3: Restore the users:

mysql -h target_db_ip -u root -p mysql < create_user.sql
mysql -h target_db_ip -u root -p mysql < user_grants.sql
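As a quick sanity check (the host name is a placeholder, as above), confirm that the accounts actually landed on the target:

mysql -h target_db_ip -u root -p -e "SELECT user, host FROM mysql.user ORDER BY user;"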

Again, a friendly reminder: keep your DBA with you and do take a backup of the mysql database. If needed, you can still use pt-show-grants or mysqlpump instead.

MySQL Terminology Changes at Continuent: Primary and Replica, Source and Target

Protect your data using ProxySQL Firewall


ProxySQL Firewall Overview

ProxySQL’s flexible query rules engine has many uses, from read/write splitting and sharding to creating a firewall blacklist. This allows ProxySQL to be loved by both performance- and security-minded engineers.

Starting in ProxySQL 2.0.9, ProxySQL has another Security feature: the Firewall Whitelist.

Modeled on MySQL Enterprise Firewall, this allows a security-conscious administrator to tune access to only allow certain queries.

Imagine a situation where your webapp gets hacked, which exposes your user’s database credentials.

If your webapp connects directly to the database, the malicious user can do what they want to your data with the same permissions your webapp has.

So perhaps they can’t just DROP TABLE because you’ve smartly removed DDL permissions from your application user. But they can likely do DELETE FROM mytable after saving a copy to hold for ransom.

ProxySQL Firewall can prevent this situation.

Implementing ProxySQL Firewall

The documentation explains really well how to implement ProxySQL Firewall, but here are the high-level steps on how it works:

Prerequisite: Point your webapp at ProxySQL rather than MySQL, if you haven’t already, and configure ProxySQL with the users and servers needed to connect to the backend MySQL database.

Step 1. Allow ProxySQL to record the application’s traffic for a period of time to ensure all application queries are captured.
Protip: ProxySQL does this automatically through stats_mysql_digest and (if configured) stats_history.history_mysql_query_digest.
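To get a feel for what has been captured (a hedged sketch, assuming the application user is myapp, as introduced in Step 2 below), you can inspect the digest statistics from the Admin interface:

ProxySQL Admin> SELECT username, schemaname, digest, digest_text, count_star
FROM stats_mysql_digest
WHERE username='myapp'
ORDER BY count_star DESC
LIMIT 10;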

Step 2. Add your app user to mysql_firewall_whitelist_users table through the ProxySQL Admin interface. Let’s say our app user is myapp:

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_users
(active, username, client_address, mode) VALUES (1, 'myapp', '', 'PROTECTING');

In this example, PROTECTING means ProxySQL will stop all other traffic that is not defined in the mysql_firewall_whitelist_rules for this user in the following step.

It’s also important to note that any user not in this table will be blocked once the firewall whitelist is enabled below. You can get around this by adding an entry into mysql_firewall_whitelist_users for each user you want to let through unchecked, with mode='OFF', as in the example below.
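For example (hypothetical account name), a monitoring user could be exempted from whitelist checks like this:

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_users
(active, username, client_address, mode) VALUES (1, 'monitor', '', 'OFF');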

Step 3. Add all of the digests captured in Step 1 into the mysql_firewall_whitelist_rules table. This assumes you’ve configured stats_history.history_mysql_query_digest, as this will persist stats across ProxySQL restarts.

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_rules
(active, username, client_address, schemaname, flagIN, digest, comment)
SELECT DISTINCT 1, username, client_address, schemaname, 0, digest, ''
FROM stats_history.history_mysql_query_digest WHERE username='myapp';

Step 4. Load your configuration to runtime.

LOAD MYSQL FIREWALL TO RUNTIME;
SAVE MYSQL FIREWALL TO DISK;

Step 5. Enable the ProxySQL Firewall

SET mysql-firewall_whitelist_enabled = 1;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

Congratulations! Now the malicious user can only perform exact queries that your application can perform.

The previous DELETE FROM mytable statement would fail, because your application doesn’t allow doing DELETEs without WHERE clauses, right?

Keeping firewall rules up to date

If you are familiar with ProxySQL query rules, you’ll notice that the mysql_firewall_whitelist_rules table behaves a little bit differently.

You have to populate it with exact digests, rather than relying on regular expressions. If you have thousands of mysql_query_rules, you were likely doing this already though.

This means that you will need to update the firewall rules any time an application deployment introduces new queries or changes query digests.

Our recommendation is to implement a process that records application digest changes before they hit production, for example in a staging environment, and then to update the production firewall rules as part of the deployment process.
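A rough sketch of that idea, run against the ProxySQL Admin interface and reusing the myapp user from the steps above, is to whitelist only the digests that are not present yet and then load and persist the firewall configuration again:

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_rules
(active, username, client_address, schemaname, flagIN, digest, comment)
SELECT DISTINCT 1, username, client_address, schemaname, 0, digest, ''
FROM stats_history.history_mysql_query_digest
WHERE username='myapp'
AND digest NOT IN (SELECT digest FROM mysql_firewall_whitelist_rules WHERE username='myapp');

ProxySQL Admin> LOAD MYSQL FIREWALL TO RUNTIME;
ProxySQL Admin> SAVE MYSQL FIREWALL TO DISK;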

Conclusion

If you want to be absolutely certain that the queries hitting your database from your application are not malicious, ProxySQL Firewall can help with this.

There is a bit of friction in keeping this maintained, but if the security of your data is paramount, then the peace of mind ProxySQL Firewall provides is well worth it.

Authored by: Derek Downey

The post Protect your data using ProxySQL Firewall appeared first on ProxySQL.

Scaling the Percona Kubernetes Operator for Percona XtraDB Cluster


You got yourself a Kubernetes cluster and are now testing our Percona Kubernetes Operator for Percona XtraDB Cluster. Everything is working great, and you have decided to increase the number of Percona XtraDB Cluster (PXC) pods from the default 3 to, let’s say, 5.

It’s just a matter of running the following command:

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

Good, you ran the command without issues, so now you will have 5 PXC pods, right? Let’s check how the pods are coming up:

kubectl get pods | grep pxc
cluster1-pxc-0                                     1/1     Running   0          25m
cluster1-pxc-1                                     1/1     Running   0          23m
cluster1-pxc-2                                     1/1     Running   0          22m
cluster1-pxc-3                                     0/1     Pending   0          13m

Not only do you see 4 pods instead of 5, but the new pod is also stuck in the “Pending” state. Further info shows that the kube-scheduler wasn’t able to find a node on which to deploy the pod:

kubectl describe pod cluster1-pxc-3
Name:           cluster1-pxc-3
Namespace:      pxc
Priority:       0
…
…
…
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  69s (x5 over 5m28s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't satisfy existing pods anti-affinity rules.

From that output, we can see what the issue is: affinity. Or, more specifically, anti-affinity.

Affinity defines which pods are eligible to be scheduled (to run) on a node that already has pods with specific labels. Anti-affinity defines which pods are not eligible.

The operator provides an option called “antiAffinityTopologyKey” which can have several values:

  • kubernetes.io/hostname – Pods will avoid residing within the same host.
  • failure-domain.beta.kubernetes.io/zone – Pods will avoid residing within the same zone.
  • failure-domain.beta.kubernetes.io/region – Pods will avoid residing within the same region.
  • none – No constraints are applied. It means that all PXC pods can be scheduled on one Node and you can lose all your cluster because of one Node failure.

The default value is kubernetes.io/hostname which pretty much means: “only one Pod per node”

In this case, the Kubernetes cluster is running on top of 3 AWS instances, so when one tries to increase the number of pods beyond 3, the scheduler has trouble finding a node on which to put the new pod.

Alternatives?

There are several options. One plain and simple (and obvious) one can be to add new nodes to the k8s cluster.

Another option is to set the anti-affinity to “none”. Now, why would one want to remove the guarantee of having pods distributed among the available nodes? Well, think about lower environments like QA or staging, where the HA requirements are not strict and you just need to deploy the operator on a couple of nodes (control plane/worker).

Now, here’s how the affinity setting can be changed:

Edit the cluster configuration. My cluster is called “cluster1” so the command is:

kubectl edit pxc/cluster1

Find the line where “antiAffinityTopologyKey” is defined and change “kubernetes.io/hostname” to “none” and save the changes. This modification will be applied immediately.
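Alternatively, assuming the default custom resource layout where antiAffinityTopologyKey lives under spec.pxc.affinity, the same change can be applied with a one-line patch, in the same style as the resize command at the beginning of this post:

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/affinity/antiAffinityTopologyKey", "value": "none" }]'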

Delete the old pods ONE BY ONE. Kubernetes will spawn a new one, so don’t worry about it. For example, to delete the pod named “cluster1-pxc-0”, run:

kubectl delete pod cluster1-pxc-0

You will see the pods being recreated, and the one that was “Pending” moving on:

kubectl get pod
NAME                                               READY   STATUS              RESTARTS   AGE
cluster1-pxc-0                                     0/1     ContainerCreating   0          8s
cluster1-pxc-1                                     1/1     Running             0          4h55m
cluster1-pxc-2                                     1/1     Running             0          4h53m
cluster1-pxc-3                                     0/1     ContainerCreating   0          8m25s

Finally, the goal of having 5 pods is achieved:

kubectl get pod
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-pxc-0                                     1/1     Running   0          39m
cluster1-pxc-1                                     1/1     Running   0          36m
cluster1-pxc-2                                     1/1     Running   0          37m
cluster1-pxc-3                                     1/1     Running   0          47m
cluster1-pxc-4                                     1/1     Running   0          38m

But what if one needs a more sophisticated option, one with some degree of guarantee that HA will be met? For those cases, the operator affinity can take an advanced approach, using node affinity with “preferredDuringSchedulingIgnoredDuringExecution”.

The whole description and configuration are available in the Operator documentation: “Binding Percona XtraDB Cluster components to Specific Kubernetes/OpenShift Nodes”.

And also, in the future, the operator will make use of the “topologySpreadConstraints” spec to control the degree to which Pods may be unevenly distributed.

Thanks to Ivan Pylypenko and Mykola Marzhan from the Percona Engineering Team for the guidance.

Integrating MySQL tools with Systemd Service


In my day-to-day tasks as a DB engineer at Mydbops, we use multiple MySQL tools for multiple use cases to ensure optimal performance and availability for the servers managed by our remote DBA team.

A tool like pt-online-schema-change can be used for DDL changes (see our overview of DDL algorithms), and when a tool needs to run for a longer period we tend to use screen or cron.

Here are some of the problems we face when we daemonize a process or use screen to keep it running:

  • The daemon process gets killed when the server reboot happens.
  • The screen might accidentally terminate while closing it.
  • No flexibility to start or stop the process when required.

These common problems can be overcome by using a systemd service.

What is Systemd?

Systemd is a system and service manager for Linux operating systems. When running as the first process on boot (as PID 1), it acts as an init system that brings up and maintains userspace services.

Here is a list of a few use cases that can be made simple with a systemd service:

  • Integrating Pt-heartbeat with Systemd Service
  • Integrating Auto kill using pt-kill with Systemd Service.
  • Integrating multi query killer with Systemd service

Integrating Pt-heartbeat with Systemd Service.

We had a requirement to schedule pt-heartbeat to monitor replication lag for one of the clients under our managed database services. Here is the problem statement: pt-heartbeat was running as a daemon process, and whenever the system was rebooted for maintenance, the pt-heartbeat process was killed, we started receiving replication lag alerts, and it needed a manual fix.

Script for pt-heartbeat

/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update
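(For context, here is a hedged sketch of how the lag written by this process is typically read on a replica; the host and password are placeholders.)

/usr/bin/pt-heartbeat -D mydbops --table heartbeat --check --host=REPLICA_IP_ADDRESS --user=pt-hbt --password=xxxxx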

Now let us integrate it with systemd

(Two questions often come up at this point: can we schedule multiple pt-kill processes, and is it possible to schedule kill statements for different servers? Both are covered in the pt-kill sections below.)

$ cd /etc/systemd/system/

$ vi pt-heartbeat.service
##pt-heartbeat systemd service file
[Unit]
Description="pt-heartbeat"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking

ExecStart=/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-heartbeat
Restart=always

ExecStart = the command to be executed when the service starts.

Restart = always ensures the process is restarted automatically if it ever exits.
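Note that Restart=always alone does not start the service at boot; the [Install] section's WantedBy=multi-user.target takes effect only once the unit has been enabled:

$ systemctl enable pt-heartbeat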

Once the new systemd unit file is in place, reload the systemd daemon and start the service:

$ systemctl daemon-reload
$ systemctl start pt-heartbeat
$ systemctl status pt-heartbeat -l
● pt-heartbeat.service - "pt-heartbeat"
Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
Active: active (running) since Mon 2020-06-20 13:20:24 IST; 10 days ago
Main PID: 25840 (perl)
Tasks: 1
Memory: 19.8M
CPU: 1h 1min 47.610s
CGroup: /system.slice/pt-heartbeat.service
└─25840 perl /usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update

This service can be stopped just like any other systemd service:

$ systemctl stop pt-heartbeat
● pt-heartbeat.service - "pt-heartbeat"
Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopping "pt-heartbeat"…
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopped "pt-heartbeat".

Integrating Auto kill using pt-kill with Systemd Service

Usually, on production servers, long-running queries spike system resource usage and degrade MySQL performance drastically, or might even get your MySQL process killed by the OOM killer. To avoid these hiccups, we can schedule Percona's pt-kill based on the use case defined.

scheduling the pt-kill service

$ cd /etc/systemd/system/

$ vi pt-kill.service
#pt-kill systemd service file

[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking

ExecStart=/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always
$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
CGroup: name=dsystemd:/system/pt-kill.service
├─20974 perl /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize

Now we have configured a fail safe pt-kill process.

Integrating multi query killer with Systemd service

Question: Is it possible to integrate multiple kill statements for different hosts as a single process?

Answer: Yes! It is possible, and quite simple too.

Just add the needed commands to a shell script file and make it executable. In the example below, I have chosen three different servers: an RDS instance (more on AWS RDS and its myths here) and a couple of virtual machines.

$ vi pt_kill.sh

/usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(mkt_ro|dash)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave1.log

/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(user_qa|cron_ae)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave2.log

/usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user sbtest_ro --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave3.log

Scheduling pt-kill.service for multiple hosts.

#pt-kill systemd service file

[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking

ExecStart=/bin/bash /home/vagrant/pt_kill.sh

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always

Reload the daemon and start the service

$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
CGroup: name=dsystemd:/system/pt-kill.service
├─20974 Perl  /usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select... 
├─21082 perl  /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select...
├─21130 perl /usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select...

This makes systemd a useful and easy tool for scheduling MySQL tools in a database environment. There are many more features in systemd that can be used for scheduling scripts, bypassing the need for crontab.

Note: These are all sample scripts; make sure you test them well before using them in production.

ProxySQL Public Training, 21st-22nd July


We’re very excited to open up our ProxySQL Total Training online course to the public!

Get yourself, or your team, trained by our professional ProxySQL trainers, who built and actively maintain ProxySQL, in our upcoming Total Training course.

Designed for DBAs, application developers and IT professionals in the industry, the course will focus on real world implementation and hands-on labs that will ensure you acquire the needed skills to deploy efficient, scalable and highly available solutions with ProxySQL!

Immerse yourself in the world of ProxySQL with our two-day ProxySQL Total Training.

ProxySQL Total Training topics include:

  • ProxySQL use cases and real world examples
  • How to install, upgrade and configure ProxySQL with zero-downtime
  • Query routing, query caching and query fire-walling
  • Advanced performance monitoring and tuning
  • Deployment and scaling best practices
  • High Availability and advanced ProxySQL topology support
  • ProxySQL Cluster and configuration management
  • Failover in synchronous & asynchronous replication environments without losing transactions
  • Global Transaction ID (GTID) tracking with ProxySQL and the binary log (binlog) reader
  • Securing ProxySQL and implementing SSL
  • Application development considerations when using ProxySQL
  • Diagnosing issues and troubleshooting ProxySQL

Public 2x Day Training Details:

Dates: 21st-22nd July 2020
Time: 8am-4pm EST (UTC-5)
Where: Online

Class size is limited and filling up. If you are interested in Online Public Training, please contact us here to book your spot in the upcoming session scheduled for: 21st-22nd July 2020, 8am-4pm EST (UTC-5)

The post ProxySQL Public Training, 21st-22nd July appeared first on ProxySQL.


MySQL – How to add a foreign key


TL; DR

On a new TABLE:

CREATE TABLE child (ID INT PRIMARY KEY, parent_ID INT, FOREIGN KEY my_fk (parent_id) REFERENCES parent(ID));

On an existing TABLE:

ALTER TABLE child ADD FOREIGN KEY my_fk (parent_id) REFERENCES parent(ID);

MySQL has the ability to enforce that a record exists on the parent table when you are adding or modifying data on the child table, and to validate that no child record still references a parent record you are deleting, which would otherwise leave your database inconsistent. This is called a foreign key.

As seen above, you can either create your table with the FK from the beginning or alter your table to add a new constraint after creation time. Apart from the syntax used to refer to a field on the parent table, you can control what the behavior will be when you UPDATE or DELETE a record on the PARENT table that is referenced from the child table. This is controlled by the optional clauses ON UPDATE and ON DELETE; the possible actions are as follows (a complete syntax example follows the list below):

We will be using as an example the below table:

mysql> SELECT * FROM parent;
+----+
| ID |
+----+
|  1 |
+----+
1 row in set (0.00 sec)

mysql> SELECT * FROM child;
+----+-----------+
| ID | parent_ID |
+----+-----------+
|  1 |         1 |
+----+-----------+
1 row in set (0.00 sec)
  • RESTRICT or NO ACTION – The default behavior when you omit ON UPDATE or ON DELETE; if you try to update or delete the field on the parent table that is referenced by the child table, your update/delete will be blocked:
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test`.`child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_ID`) REFERENCES `parent` (`ID`))
    

    Although not recommended, in extreme cases you can force MySQL to skip the FK check and bypass the above error:

    SET foreign_key_checks=0;

    Keep in mind that this defeats the whole reason for having an FK in the first place!

  • SET DEFAULT – It’s recognized by the parser (it won’t give any error); however, it is interpreted as RESTRICT.
  • CASCADE – Whatever action you perform on the parent table will be replicated to the child table:
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    Query OK, 1 row affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    mysql> SELECT * FROM parent;
    +----+
    | ID |
    +----+
    |  2 |
    +----+
    1 row in set (0.00 sec)
    
    mysql> SELECT * FROM child;
    +----+-----------+
    | ID | parent_ID |
    +----+-----------+
    |  1 |         2 |
    +----+-----------+
    1 row in set (0.00 sec)
  • SET NULL – Whatever action you perform on the parent table, the child column will be reset to NULL (make sure the child field is not defined as NOT NULL):
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    Query OK, 1 row affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    mysql> SELECT * FROM parent;
    +----+
    | ID |
    +----+
    |  2 |
    +----+
    1 row in set (0.01 sec)
    
    mysql> SELECT * FROM child;
    +----+-----------+
    | ID | parent_ID |
    +----+-----------+
    |  1 |      NULL |
    +----+-----------+
    1 row in set (0.00 sec)
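For completeness, here is a sketch of how these actions are attached to the constraint itself, reusing the tables from the TL;DR above (drop the existing constraint first if you already created it):

ALTER TABLE child ADD FOREIGN KEY my_fk (parent_ID) REFERENCES parent(ID) ON UPDATE CASCADE ON DELETE SET NULL;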

Webinar July 30: Database Challenges – Open Source Vs. Open Core


Note: This webinar was originally scheduled for June 30, but has been moved to July 30.  If you click “Register Now” below, it will take you to the registration form for the new date.

Join Peter Zaitsev, CEO at Percona, as he discusses database challenges and the concepts of open source and open core. Over the years, open source companies have tried to bring products to market and maximize their revenue streams.  To date, the most popular open source business model remains “Open Core”.  But is open core software still open source?  Or is it a freemium model designed to separate you from your money?  Not all companies follow the same processes, ethics, and rules when building and launching open source products.  Let’s talk about how the open core bait and switch works for many companies in the open source space.

Join this webinar to learn more about:

– Open Source and Open core distinction
– Free enterprise-class community versions alternatives
– Use-cases that successfully moved away from vendor lock-in
Please join Peter Zaitsev on Thursday, July 30 at 12 pm EDT for his webinar “Database Challenges: Open Source Vs. Open Core“.

Register Now

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

MySQL Connector/Python 8.0.21 has been released


Dear MySQL users,

MySQL Connector/Python 8.0.21 is the latest GA release version of the
MySQL Connector Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see

http://dev.mysql.com/doc/x-devapi-userguide/en/

For more information about how the X DevAPI is implemented in MySQL
Connector/Python, and its usage, see

http://dev.mysql.com/doc/dev/connector-python

Please note that the X DevAPI requires at least MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/Python 8.0.21, see the “General Availability
(GA) Releases” tab at

http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.21 (2020-07-13, General Availability)

Functionality Added or Changed

* Implemented context managers to define the runtime
context to establish when executing a with statement. It
was added to the Connection and Cursor objects in the
classic protocol, and to Session in the X DevAPI. Thanks
to WEN-FENG SHIH for the patch. (Bug #19586444, Bug
#71663, Bug #28779784, Bug #89113)

* Added compression support to X Protocol connections.
Supported compression algorithms are zlib/deflate and
lz4. Specifically, the supported algorithms (in order of
preference) are lz4_message and deflate_stream. The
compression threshold is set at 1000 bytes.
A new compress X DevAPI connection option accepts either
required, disabled, or preferred (default).

+ preferred: If Connector/Python and the server cannot
reach consensus on algorithm or styles, then no
compression is used; and this is logged.

+ required: Like preferred, except the connection is
terminated with an error if the connector and server
are unable to reach agreement.

+ disabled: Compression is not used.

Bugs Fixed

* The connect_timeout option applied to all blocking socket
operations but now properly only applies to the timeout
when establishing the connection. (Bug #30996790)

* In X DevAPI implementation, an error was raised when
using the fractional part in DATETIME types. The error:
“ValueError: Datetime mapping scenario unhandled” (Bug
#30950184)

* Because MySQL stores TEXT types as BLOB and JSON as
LONGBLOB, the TEXT and JSON types are now converted to
str and the rest of the BLOB types as bytes. Previously,
as an example, a column of type TEXT that only contained
digits was read as type=integer by Connector/Python. (Bug
#29808262, Bug #95437)

* Connector/Python assumed that MySQL libraries were under
lib/ when compiling the C extension, but now uses the
mysql_config flags which adds the correct include and
libraries paths. Thanks to Daniël van Eeden for the
patch. (Bug #29181907, Bug #93846)

On Behalf of Oracle/MySQL Engineering Team,
Balasubramanian Kandasamy

How can I stop writing SQL Statements into the MySQL Audit Log at runtime without a MySQL Restart?


How can I Disable the MySQL Audit Log while the Server is running?


You can turn off the MySQL Enterprise Audit Log without restarting MySQL; sometimes we want to turn off auditing to test some functionality in the database.

If you want to review the 5 Ws (what, why, where, when, who) of MySQL Enterprise Audit, please have a look at my previous blog.

First, check whether the audit plugin is running:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'audit%';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| audit_log   | ACTIVE        |
+-------------+---------------+
1 row in set (0.00 sec)


mysql> show variables like '%audit%';
+--------------------------------------+--------------+
| Variable_name                        | Value        |
+--------------------------------------+--------------+
| audit_log_buffer_size                | 1048576      |
| audit_log_compression                | NONE         |
| audit_log_connection_policy          | ALL          |
| audit_log_current_session            | ON           |
| audit_log_encryption                 | NONE         |
| audit_log_exclude_accounts           |              |
| audit_log_file                       | audit.log    |
| audit_log_filter_id                  | 0            |
| audit_log_flush                      | OFF          |
| audit_log_format                     | NEW          |
| audit_log_include_accounts           |              |
| audit_log_password_history_keep_days | 0            |
| audit_log_policy                     | ALL          |
| audit_log_read_buffer_size           | 32768        |
| audit_log_rotate_on_size             | 0            |
| audit_log_statement_policy           | ALL          |
| audit_log_strategy                   | ASYNCHRONOUS |
+--------------------------------------+--------------+
17 rows in set (0.01 sec)


There are two ways to disable it:

1. Edit the my.cnf (my.ini) file, comment out these two lines, and restart the MySQL Server:

      #plugin-load=audit_log.dll     ## on Linux: audit_log.so

      #audit-log=FORCE_PLUS_PERMANENT

   Then restart the MySQL Server.

2. Without a server restart.

There are audit system variables (audit_log_statement_policy and audit_log_connection_policy) that control how the audit log plugin writes statement and connection events to its log file.

By default these values are ALL, which means that everything is logged.




mysql> set global audit_log_statement_policy=NONE;
Query OK, 0 rows affected (0.00 sec)

mysql> set global audit_log_connection_policy=NONE;
Query OK, 0 rows affected (0.00 sec)

Note: the plugin remains active, but the audit file will not receive any new entries thereafter.
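
To turn auditing back on later, set the same variables back to their default value of ALL and verify the result. This is a minimal sketch; the exact output will vary slightly depending on your MySQL version and configuration:

mysql> set global audit_log_statement_policy=ALL;
Query OK, 0 rows affected (0.00 sec)

mysql> set global audit_log_connection_policy=ALL;
Query OK, 0 rows affected (0.00 sec)

mysql> show global variables like 'audit_log%policy';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| audit_log_connection_policy | ALL   |
| audit_log_policy            | ALL   |
| audit_log_statement_policy  | ALL   |
+-----------------------------+-------+
3 rows in set (0.00 sec)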



Thank you for using MySQL!
Feel free to suggest, correct, and share your ideas/pains so that I can write more blogs on MySQL.




MySQL – How to add a foreign key


TL;DR

On a new TABLE:

CREATE TABLE child (ID INT PRIMARY KEY, parent_ID INT, FOREIGN KEY my_fk (parent_id) REFERENCES parent(ID));

On an existing TABLE:

ALTER TABLE child ADD FOREIGN KEY my_fk (parent_id) REFERENCES parent(ID);

MySQL has the ability to enforce that a referenced record exists on a parent table when you are adding or modifying data on a child table, and to block or propagate changes when you update or delete a parent record that is still referenced, so that your database is not left inconsistent. This is called a Foreign Key.

As seen above, you can either create your table with an FK from the beginning or alter your table to add the new constraint after table creation time. Apart from the syntax used to refer to a field on the parent table, you can control what the behavior will be when you UPDATE or DELETE a record on the PARENT table that is referenced on the child table. This is controlled by the optional ON UPDATE and ON DELETE clauses, illustrated in the sketch right below; the available behaviors are as follows:
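
Here is a minimal sketch of the TL;DR CREATE TABLE with both clauses spelled out explicitly (choosing SET NULL on delete and CASCADE on update purely as an example; any of the behaviors listed below can be used):

CREATE TABLE child (
  ID INT PRIMARY KEY,
  parent_ID INT,  -- must be nullable for ON DELETE SET NULL to work
  FOREIGN KEY my_fk (parent_ID) REFERENCES parent(ID)
    ON DELETE SET NULL
    ON UPDATE CASCADE
);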

We will be using as an example the below table:

mysql> SELECT * FROM parent;
+----+
| ID |
+----+
|  1 |
+----+
1 row in set (0.00 sec)

mysql> SELECT * FROM child;
+----+-----------+
| ID | parent_ID |
+----+-----------+
|  1 |         1 |
+----+-----------+
1 row in set (0.00 sec)
  • RESTRICT or NO ACTION – Default behavior when you omit ON UPDATE or ON DELETE; this means that if you try to update or delete the field on the parent table that is referred to by the child table, your update/delete will be blocked:
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test`.`child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_ID`) REFERENCES `parent` (`ID`))
    

    Although not recommended, in extreme cases you can force MySQL to disable the FK check to bypass the above error:

    SET foreign_key_checks=0;

    Keep in mind that this defeats the whole reason for having an FK in the first place! Also note that SET foreign_key_checks=0 on its own only affects the current session.

  • SET DEFAULT – It's recognized by the parser (it won't give a syntax error); however, InnoDB rejects table definitions that contain it, so it cannot actually be used.
  • CASCADE – Whatever action you do on the parent table will be replicated to the child table:
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    Query OK, 1 row affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    mysql> SELECT * FROM parent;
    +----+
    | ID |
    +----+
    |  2 |
    +----+
    1 row in set (0.00 sec)
    
    mysql> SELECT * FROM child;
    +----+-----------+
    | ID | parent_ID |
    +----+-----------+
    |  1 |         2 |
    +----+-----------+
    1 row in set (0.00 sec)
  • SET NULL – Whatever action you do on the parent table, the child column will be reset to NULL (make sure the child field is not defined as NOT NULL):
    mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
    Query OK, 1 row affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    mysql> SELECT * FROM parent;
    +----+
    | ID |
    +----+
    |  2 |
    +----+
    1 row in set (0.01 sec)
    
    mysql> SELECT * FROM child;
    +----+-----------+
    | ID | parent_ID |
    +----+-----------+
    |  1 |      NULL |
    +----+-----------+
    1 row in set (0.00 sec)

MySQL Connector/ODBC 8.0.21 has been released


Dear MySQL users,

MySQL Connector/ODBC 8.0.21 is a new version in the MySQL Connector/ODBC
8.0 series, the ODBC driver for the MySQL Server.

The available downloads include both a Unicode driver and an ANSI driver
based on the same modern codebase. Please select the driver type you
need based on the type of your application – Unicode or ANSI.
Server-side prepared statements are enabled by default.  It is suitable
for use with the latest MySQL server version 8.0.

This release of the MySQL ODBC driver is conforming to the ODBC 3.8
specification.  It contains implementations of key 3.8 features,
including self-identification as an ODBC 3.8 driver, streaming of output
parameters (supported for binary types only), and support of the
SQL_ATTR_RESET_CONNECTION connection attribute (for the Unicode driver only).

The release is now available in source and binary form for a number of
platforms from our download pages at

https://dev.mysql.com/downloads/connector/odbc/

For information on installing, please see the documentation at

https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Enjoy and thanks for the support!

==============================================================================

Changes in MySQL Connector/ODBC 8.0.21 (2020-07-13, General Availability)


     * Security Notes

     * Bugs Fixed

Security Notes


     * For platforms on which OpenSSL libraries are bundled, the
       linked OpenSSL library for Connector/ODBC has been
       updated to version 1.1.1g. Issues fixed in the new
       OpenSSL version are described at
https://www.openssl.org/news/cl111.txt and
https://www.openssl.org/news/vulnerabilities.html.
        (Bug#31296688)

Bugs Fixed


     * The MSI installer now checks for the Visual 2019 C++
       runtime, and aborts if this required runtime is not
       installed.
        This is also the first version that requires the Visual 2019
        C++ runtime, whereas previously the 2017 version was also
        supported. (Bug #31102234)

     * When using SQLParamOptions, only a single value was being
        inserted instead of the array. (Bug #30591722)

     * The SUM aggregate function did not function with ADO.
       (Bug #30277891, Bug #96642)

     * Added a workaround to account for a limitation in the
       iODBC SQLGetPrivateProfileString() implementation as
       previously DSN options could be lost. iODBC is most
       commonly used on macOS. (Bug #27851681)


On Behalf of Oracle/MySQL Engineering Team
Prashant Tekriwal

RED Method for MySQL Performance Analyses

RED MySQL Performance

The RED Method (Rate, Errors, Duration) is one of the more popular performance monitoring approaches.  It is often applied to Monitoring Microservices though there is nothing that prevents it from being applied to databases like MySQL.

In Percona Monitoring and Management (PMM) v2 we have all the required information stored in the ClickHouse database, and with the built-in ClickHouse datasource it is a matter of creating a dashboard to visualize the data.

While I was editing the dashboard, I added a few other panels, beyond what RED Method requires, in order to show some of the cool things you can do with Grafana + ClickHouse data source and information we store about MySQL query performance.

Let’s explore this dashboard in more detail.

Percona Monitoring and Management Dashboard

We can see the RED Method Classical panels showing Query Rate, Error Rate, as well as query latency (average and 99 percentile) for all the nodes in the system.  The panel below shows the breakdown for different nodes which is very helpful to compare performance between the nodes.  If one of the hosts starts to perform differently from the rest of similar hosts in the fleet, it will warrant investigation.

You do not have to look at the whole of your data: through the “Filters” functionality at the top of the dashboard, you can apply any filters you want. For example, I can choose to only look at the queries which hit the “sbtest” schema for hosts located in region “datacenter4”:

RED Method MySQL Queries

This Ad-Hoc Filtering is really powerful.  You can also match queries you want to look at by regular expression, look at particular QueryID, Queries from particular Client Hosts, etc.  You can check out all columns available in ClickHouse and their meaning in the blog post Advanced Query Analysis in Percona Monitoring and Management with Direct ClickHouse Access.

From most of the panels you can quickly jump to Query Analytics to see more details of query performance or if you notice one of the hosts having unusual performance you can use the “Data Links” functionality to see queries for this host only – click on the line on the graph and follow the highlighted link:

MySQL Query Rate Per Node

You can also see the same RED Metrics, shown above for the whole system, for each of the systems separately. I would keep those rows collapsed by default, especially if you have many hosts under monitoring.

Now we’re done with RED Method Panels, so let’s see what additional panels this dashboard provides and how to use them.

RED Method Panels

The Row Based Efficiency panel shows how many rows are traversed for every row which is sent or changed.  Values higher than 100 tend to mean you either have poor indexes or run some very complicated queries which crunch through a lot of data to send or modify only a few rows. Both such cases are prime examples of queries to review.

Time-Based Efficiency does the same math but looks at the query execution time rather than the number of scanned rows. This allows us to surface problems caused by slow disks or query contention. Generally, for a high-performance system, you should expect it to spend fractions of a millisecond to send a row to the client or modify it.  Queries that send a lot of rows or modify a lot of rows will have a lower value.
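
If you want the raw numbers behind these panels, a rough sketch of the same ratios can be computed directly against PMM's ClickHouse database. Note that the table and column names used here (pmm.metrics, m_rows_examined_sum, m_rows_sent_sum, m_rows_affected_sum, m_query_time_sum, period_start, service_name) are assumptions based on the Query Analytics schema described in the blog post linked above and may differ in your PMM version:

SELECT
    service_name,
    -- rows examined per row sent or changed (Row Based Efficiency)
    sum(m_rows_examined_sum)
        / greatest(sum(m_rows_sent_sum) + sum(m_rows_affected_sum), 1) AS rows_examined_per_row,
    -- query time spent per row sent or changed, in seconds (Time-Based Efficiency)
    sum(m_query_time_sum)
        / greatest(sum(m_rows_sent_sum) + sum(m_rows_affected_sum), 1) AS query_time_per_row
FROM pmm.metrics
WHERE period_start > now() - INTERVAL 1 HOUR
GROUP BY service_name
ORDER BY rows_examined_per_row DESC;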

Queries Per Host is pretty self-explanatory, and it is very helpful to see it alongside the Query Load Per Host panel, which shows which hosts have the highest number of queries active at the same time.  We can see here that while MySQL4 does not have the highest query rate in the mix, it has the highest load, i.e. the highest average number of active queries.

As I was thinking about what other metrics might be helpful, I also added these additional panels:

MySQL Metrics

These panels break down the Query Processing efficiency into READ queries (those which send rows) and WRITE queries (those which have rows_affected).

QueryTime Based Efficiency is the same as the panel described before, just with a focus on particular kinds of queries.

Data Crunching Efficiency is a slightly different spin on the same data; it looks at how many rows are examined by those queries vs. query execution time.  This, to an extent, shows system processing power; a system with many cores and all data in memory may be able to crunch through millions of rows per second and do a lot of work. But it does not mean all this work has value. In fact, systems that crunch a lot of data quickly are often doing many full table scans.

Finally, there are a few Query Lists on the same page.

MySQL Queries

These lists show the most frequent queries, the queries which are slowest on average, the queries which cause the most load, and the queries which terminated with an error or warning.  You can get these in Query Analytics as well, but I wanted to show them here as examples.

Interested?  You can install the Dashboard to your Percona Monitoring and Management (PMM) v2  from Grafana.com. Check it out, and let us know what you think!  


MySQL Connector/NET 8.0.21 has been released


Dear MySQL users,

MySQL Connector/NET 8.0.21 is the latest General Availability release of
the MySQL Connector/NET 8.0 series. This version supports .NET Core 3.1
and the X DevAPI, which enables application developers to write code
that combines the strengths of the relational and document models using
a modern, NoSQL-like syntax that does not assume previous experience
writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the full
set of features available in Connector/NET such as availability in the
GAC, integration with Visual Studio’s Entity Framework Designer and
integration with MySQL for Visual Studio, installation through the MySQL
Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires at least MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.21, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.21 (2020-07-13, General
Availability)

Functionality Added or Changed

     * The following ciphers and algorithms are deprecated for
       SSH connections made using Connector/NET:
       Encryptions

          + 3des-cbc
       Key Exchange Algorithms

          + diffie-hellman-group14-sha1

          + diffie-hellman-group-exchange-sha1
       Message Authentication Codes

          + hmac-ripemd160

          + hmac-sha1

          + hmac-sha1-96
       (Bug #31030347)

Bugs Fixed

     * Connector/NET returned an error when the name of a
       database or stored procedure contained one or more period
       characters. Now, names with this format can be used when the name
       is enclosed properly between grave accent (`) symbols; for
       example, `db_1.2.3.45678`. (Bug #31237338, Bug #99371)

     * An error was generated when the database name within a
       connection string that was passed to MySQL 5.6 or MySQL 5.7 did
       not match the casing used to search a related stored procedure.
       (Bug #31173265)

     * In Connector/NET 8.0.19, calling new
       MySqlConnection(null) returned NullReferenceException, rather
       than returning an object with a ConnectionString property equal
       to String.Empty as the previous versions of Connector/NET did.
       This fix restores the earlier behavior. (Bug #30791289,
       Bug #98322)

     * An expected empty result set generated by executing
       MySQLDataReader for a stored procedure instead returned a data
       table containing the @_cnet_param_value column. This fix
       eliminates an internal error that affected the result set and now
       GetSchemaTable() returns a null value as expected.
       (Bug #30444429, Bug #97300)

     * The BLOB type was inferred internally when a value or
       object of type MySqlGeometry was used in different situations,
       which caused the server to return either zero matching rows or an
       exception. (Bug #30169716, Bug #96499, Bug #30169715, Bug #96498)

     * Attempts to execute a function or stored procedure
       returned a null exception unexpectedly when the caller was not
       the creator of the routine. This fix introduces a mechanism to
       manage null values for these cases, permits the granting of
       privilege to SHOW_ROUTINE, and revises SqlNullValueException to
       identify when a user account lacks adequate permission to access
       a routine. (Bug #30029732, Bug #96143)

     * Columns of type BIGINT in a table that was loaded using
       MySqlDataReader did not include the UNSIGNED flag, even though
       UNSIGNED was specified in the CREATE TABLE statement. An
       exception was generated if the value of such a column exceeded
       2147483647. (Bug #29802379, Bug #95382)

     * The microseconds value in the return results was set to
       zero consistently when SqlCommand.Prepare() was called for a
       SELECT statement with a TIME(n) column. This fix revises the way
       the value is produced to ensure accurate results. (Bug #28393733,
       Bug #91770)

     * The isolation level set for a transaction did not revert
       to using the session value after the transaction finished.
       (Bug #26035791, Bug #86263)

     * A valid call made to the
       MySqlSimpleRoleProvider.AddUsersToRoles method failed to execute
       because it violated the foreign key constraint. This fix removes
       an error from the code that gets the role ID. Thanks to Stein
       Setvik for the patch. (Bug #25046352, Bug #83657)

     * The absence of a target schema in the generated WHERE
       statement produced during an Entity Framework migration caused an
       error when the identical table was present in multiple databases.
       This fix adds the table_schema column to the generated SQL query.
       (Bug #23291095, Bug #72424)

On Behalf of Oracle/MySQL Release Engineering Team,
Nawaz Nazeer Ahamed
