Planet MySQL

Lessons from database failures presentation


At our June meetup, our guest speaker, MariaDB Chief Evangelist Colin Charles, gave a presentation on “Lessons from database failures”. Some of the topics covered in his presentation included:

  • Notable failures causing companies to go out of business
  • Varying backup commands and options
  • Understanding a/semi/synchronous replication
  • Replication topology management tools
  • Proxy and sharding tools
  • Security, SQL injections and encryption

Thanks to our June sponsors: Grovo, Webair and MariaDB.



Q: Does MySQL support ACID? A: Yes


I was recently asked this question by an experienced academic at the NY Oracle Users Group event where I presented.

Does MySQL support ACID? (ACID is a set of properties essential for a relational database to perform transactions, i.e. a discrete unit of work.)

Yes, MySQL fully supports ACID, that is Atomicity, Consistency, Isolation and Durability. (*)

This is contrary to the first Google response found searching this question which for reference states “The standard table handler for MySQL is not ACID compliant because it doesn’t support consistency, isolation, or durability”.

The question is however not a simple Yes/No because it depends on timing within the MySQL product’s lifecycle and the version/configuration used in deployment. What is also *painfully* necessary is to understand why this question would even be asked of the most popular open source relational database.

MySQL has a unique characteristic of supporting multiple storage engines. These engines enable varying ways of storing and retrieving data via the SQL interface in MySQL, and have varying features for supporting transactions, locking, index strategies, compression etc. The problem is that the default storage engine from version 3.23 (1999) to 5.1 (2010) was MyISAM, a non-transactional engine, and hence the first point of confusion.

The InnoDB storage engine has been included and supported since MySQL 3.23. This is a transactional engine supporting ACID properties. However, not all of the default settings in the various MySQL versions have fully met all ACID needs, specifically the durability of data. This is the second point of confusion. Over time other transactional storage engines in MySQL have come and gone. InnoDB has been there since the start, so there is no excuse not to write applications to fully support transactions. The custodianship of Oracle Corporation starting in 2010 quickly corrected this *flaw* by making InnoDB the default storage engine in MySQL 5.5. But the damage to the ecosystem that uses MySQL, that is many thousands of open source projects and the resources that work with MySQL, has been done. Recently working on a MySQL 5.5 production system in 2016, I found the default engine explicitly set to MyISAM in the configuration, and some (but not all) tables defined using MyISAM. This is a further conversation as to why: is this an upgrade problem? Are there legacy dependencies with applications? Are the decision makers and developers simply not aware of the configuration? Or are developers simply not comfortable with transactions?
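
As a sanity check on a system like the one described above, the following sketch (the schema and table names in the ALTER are placeholders) confirms the default engine and lists any remaining MyISAM tables:

mysql> SELECT @@default_storage_engine;
mysql> SELECT table_schema, table_name, engine
    ->   FROM information_schema.tables
    ->  WHERE engine = 'MyISAM'
    ->    AND table_schema NOT IN ('mysql','information_schema','performance_schema');
mysql> -- hypothetical conversion of one table, only after testing the application against InnoDB
mysql> ALTER TABLE myschema.mytable ENGINE=InnoDB;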

Like other unreasonable MySQL defaults, an unaware administrator or developer might assume MySQL supports ACID properties, and only realize the impact of poor configuration settings after detailed testing with concurrency and error conditions.
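
To make that concrete, here is a quick sketch of the durability-related settings worth inspecting; fully durable behavior generally means innodb_flush_log_at_trx_commit = 1 and sync_binlog = 1, while anything else trades durability for speed:

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN
    -> ('innodb_flush_log_at_trx_commit', 'sync_binlog', 'innodb_doublewrite');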

The damage of having a non-transactional storage engine as the default for over a decade has created a generation of professionals and applications that abuse one of the primary uses of a relational database, that is a transaction, i.e. to produce a unit of work that is all or nothing. Popular open source projects such as WordPress, Drupal and hundreds more have for a long time not supported transactions or used InnoDB. MediaWiki was at least one popular open source project that was proactive towards InnoDB and transaction usage. The millions of plugins, products and startups that build on these technologies have the same flaws.

Further confusion arises when an application uses InnoDB tables but does not use transactions, or the application abuses transactions, for example 3 different transactions that should really be 1.
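
For contrast, a minimal sketch of a single unit of work (the table names are purely hypothetical): three related writes that either all commit or all roll back, rather than three separately auto-committed statements.

mysql> START TRANSACTION;
mysql> INSERT INTO orders (customer_id, total) VALUES (42, 99.95);
mysql> UPDATE inventory SET qty = qty - 1 WHERE product_id = 7;
mysql> INSERT INTO audit_log (action) VALUES ('order placed for customer 42');
mysql> COMMIT;  -- all-or-nothing: a failure before this point leaves no partial changes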

While newer versions, MySQL 5.6 and 5.7, improve the default configurations, until those versions are widely deployed, non-transactional use of a relational database will remain common. A recent Effective MySQL NYC Meetup survey showed that installations of version 5.0 still exist, and that few have a policy for a regular upgrade cadence.



2016 MySQL User Group Leaders Summit


In this post, I’ll share my experience attending the annual MySQL User Group Leaders Summit in Bucharest, Romania.

The MySQL User Group Leaders Summit gathers together as many of the global MySQL user group leaders as possible. At the summit, we discuss further actions on how we can better serve our local communities. This year, it focused primarily on cloud technologies.

As the Azerbaijan MySQL User Group leader, I felt a keen responsibility to go. I wanted to represent our group and learn as much as possible to take back with me. Mingling and having conversations with other group leaders helps give me more ideas about how to spread the MySQL word!

The Conference

I attended three MySQL presentations:

  • Guided tour on the MySQL source code. In this session, we reviewed the layout of the MySQL code base, roughly following the query execution path. We also covered how to extend MySQL with both built-in and pluggable add-ons.
  • How profiling SQL works in MySQL. This session gave an overview of the performance monitoring tools in MySQL: performance counters, performance schema and SYS schema. It also covered some of the details in analyzing MySQL performance with performance_schema.
  • What’s New in MySQL 5.7 Security. This session presented an overview of the new MySQL Server security-related features, as well as the MySQL 5.6 Enterprise edition tools. This session detailed the shifting big picture of secure deployments, along with all of the security-related MySQL changes.

I thought that the conference was very well organized, with uniformly great discussions. We also participated in some city activities and personal interactions. I even got to see Le Fred!

I learned a lot from the informative sessions I attended. The MySQL source code overview showed me the general layout of the MySQL source code, including the most important directories, functions and classes. The session about MySQL profiling instrumentation informed us of the great MySQL profiling improvements, and reviewed some useful tools and metrics that you can use to get info from the server. The last session about MySQL security covered improved defaults, tablespace encryption and authentication plugins.

In conclusion, my time was well spent. Meeting and communicating with other MySQL user group leaders gives me insight into the MySQL community. Consequently, I highly recommend that everyone gets involved in their local user groups and attends get-togethers like the MySQL User Group Leaders Summit when they can find the time.

Below you can see some of the pics from the trip. Enjoy!

[Photos from the trip: MySQL User Group Leaders Summit]


MySQL Password Security Changes for PHP Developers

MySQL 5.7 introduced many new facets to password security. The first thing most notice is that you are assigned a random root password at installation time. You then have to search the log file for this random password, use it to login, and then change it. For the examples in this post I am using a fresh install of 5.7.13 on Oracle Linux 7.1, and was provided with the easy to remember password of nLvQRk7wq-NY, which to me looked like I forgot to hit escape when trying to get out of vim. A quick ALTER USER to change the password and you are on your way.
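
For reference, a minimal sketch of that first-login dance; the log file location assumes a default RPM install, and the new password is of course just an example:

shell> sudo grep 'temporary password' /var/log/mysqld.log
shell> mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'N3w-S3cur3-P@ssw0rd';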

Defaults
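
The quickest way to see the defaults discussed below is to ask the server; on this 5.7.13 test system the validate_password plugin reports a minimum length of eight and a MEDIUM policy, as referenced later in this post:

mysql> SHOW VARIABLES LIKE 'validate_password%';
mysql> SHOW VARIABLES LIKE 'default_password_lifetime';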

Password Lifetime and Complexity

5.7.13 now has the default password lifetime set to 0, or 'never expire'. My fresh install shows that the value of mysql.user.password_lifetime is set to NULL, which means use the server default value. The lifetime is measured in days and stored in the password_lifetime column of the mysql.user table, with the date of the last change recorded in the password_last_changed column. If the password is expired, you are put into sandbox mode where the only command you can execute is to change the password. That works great for interactive users. But what about your application? It uses a username/password pair to talk to the database, but it is very unlikely that anyone planned on changing passwords upon expiration. I seriously doubt anyone has set up the exception routine to handle an expired password properly. And if so, how do you notify all involved about this new password --- securely?

What to do

The best thing would be to set the default password lifetime for accounts used by applications to zero. It simply does not expire. QED & out.
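
A sketch of both ways to do that ('myapp' is a hypothetical application account): pin the individual account, or leave the server-wide default at zero so passwords never expire unless an account says otherwise.

mysql> ALTER USER 'myapp'@'10.0.0.%' PASSWORD EXPIRE NEVER;
mysql> SET GLOBAL default_password_lifetime = 0;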

But what if your company wants ALL passwords changed on a regular basis? And they do mean ALL. Earlier there was a listing of the defaults. The test system is set to a password length of eight characters minimum, requires mixed case with at least one upper case letter, requires one special (non-alphanumeric) character, and is of MEDIUM complexity.

MEDIUM complexity means that passwords need one numeric, one lower case, one upper case, and one special character. LOW tests the password length only. And STRONG adds a condition that substrings of four characters or longer do not match entries in a specified dictionary file (used to make sure swear words, common names, etcetera are not part of a password).

Let's create a dummy account.

CREATE USER 'foobar'@'localhost' IDENTIFIED BY 'Foo@Localhost1' PASSWORD EXPIRE;

Checking the entry in the user table, you will find that the account's password is expired. For extra credit, notice what the authentication string is set to. We can't have just a password string, as some authentication tokens or hashes are not really passwords.

So log in as foobar and you will get a notice that the password must be reset before you can do anything else.

ALTER USER 'foobar'@'localhost' IDENTIFIED BY '1NewP@assword';

Corporate Standard

Your corporate rules may require you to rotate passwords every N days and set a corresponding complexity. With MySQL 5.7 you can implement that model. If you do not have a standard and want to create one, be sure to DOCUMENT well what your standard is and make sure that standard is well known.
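
As a hedged example of wiring a 90-day corporate rule into MySQL 5.7 (adjust the interval to whatever your policy actually says):

mysql> SET GLOBAL default_password_lifetime = 90;                        -- server-wide default
mysql> ALTER USER 'foobar'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;  -- per-account override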

There are ways to use packages like PAM or LDAP for authentication but that is for another day.

Planets9s - MySQL on Docker: Building the Container Images, Monitoring MongoDB and more


Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.

MySQL on Docker: Building the Container Image

Building a Docker image for MySQL is essential if you’d like to customize MySQL to suit your needs. In this second post of our ‘MySQL on Docker’ series, we show you two ways to build your own MySQL Docker image: changing a base image and committing it, or using a Dockerfile. We show you how to extend the Docker team’s MySQL image, and add Percona XtraBackup to it.

Read the blog

Sign up for our webinar on Monitoring MongoDB - Tuesday July 12th

MongoDB offers many metrics through various status overviews and commands, and as a MySQL DBA, it might be a little unfamiliar ground to get started with. In this webinar on July 12th, we’ll discuss the most important ones and describe them in plain, ordinary MySQL DBA language. We’ll have a look at the open source tools available for MongoDB monitoring and trending. And we’ll show you how to leverage ClusterControl’s MongoDB metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.

Sign up for the webinar

StreamAMG chooses ClusterControl to support its online European football streaming

This week we’re delighted to announce a new ClusterControl customer, StreamAMG (Advanced Media Group), Europe’s largest player in online video solutions, helping football teams such as Liverpool FC, Aston Villa, Sunderland AFC and the BBC keep fans watching from across the world. StreamAMG replaced its previous environment, based on a master-slave replication topology, with a multi-master Galera Cluster; and Severalnines’ ClusterControl platform was applied to automate operational tasks and provide visibility of uptime and performance through monitoring capabilities.

Read the story

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB



MySQL with Docker – Performance characteristics

Rescuing a crashed pt-online-schema-change with pt-archiver


This article discusses how to salvage a crashed pt-online-schema-change by leveraging pt-archiver and executing queries to ensure that the data gets accurately migrated. I will show you how to continue the data copy process, and how to safely close out the pt-online-schema-change via manual operations such as RENAME TABLE and DROP TRIGGER commands. The normal process to recover from a crashed pt-online-schema-change is to drop the triggers on your original table and drop the new table created by the script. Then you would restart pt-online-schema-change. In this case, this wasn’t possible.

A customer recently needed to add a primary key column to a very busy table (with around 200 million rows). The table only had a unique key on one column (called our_id below). The customer had concerns about slave lag, and wanted to ensure there was little or no lag. This, as well as the fact that you can’t add a primary key as an online DDL in MySQL and Percona Server 5.6, meant the obvious answer was using pt-online-schema-change.

Due to the sensitivity of their environment, they could only afford one short window for the initial metadata locks, and needed to manually do the drop swap that pt-online-schema-change normally does automatically. This is where --no-drop-triggers and --no-swap-tables come in. The triggers will theoretically run indefinitely to keep the new and old tables in sync once pt-online-schema-change is complete. We crafted the following command:

pt-online-schema-change \
--execute \
--alter-foreign-keys-method=auto \
--max-load Threads_running=30 \
--critical-load Threads_running=55 \
--check-slave-lag mysql-slave1,mysql-slave2,mysql-slave3 \
--max-lag=10 \
--chunk-time=0.5 \
--set-vars=lock_timeout=1 \
--tries="create_triggers:10:2,drop_triggers:10:2" \
--no-drop-new-table \
--no-drop-triggers \
--no-swap-tables \
--chunk-index "our_id" \
--alter "ADD newcol BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST" \
D=website,t=largetable \
--nocheck-plan

You can see some of the specifics of other flags and why we used them in the Percona Toolkit Manual.

Once we ran the command the customer got concerned, as their monitoring tools weren’t showing any work done (which is by design, pt-online-schema-change doesn’t want to hurt your running environment). The customer ran strace -p to verify it was working. This wasn’t a great choice as it crashed pt-online-schema-change.

At this point, we knew that the application (and management) would not allow us to take new metadata locks to create triggers on the table, as we had passed our metadata lock window.

So how do we recover?

First, let’s start with a clean slate. We issued the following commands to create a new table, where __largetable_new is the table created by pt-online-schema-change:

CREATE TABLE mynewlargetable LIKE __largetable_new;
RENAME TABLE __largetable_new TO __largetable_old, mynewlargetable TO __largetable_new;
DROP TABLE __largetable_old;

Now the triggers on the original table, largetable, are updating the new empty table that has our new schema.

Now let’s address the issue of actually moving the data that’s already in largetable to __largetable_new. This is where pt-archiver comes in. We crafted the following command:

pt-archiver \
--execute \
--max-lag=10 \
--source D=website,t=largetable,i=our_id \
--dest D=website,t=__largetable_new \
--where "1=1" \
--no-check-charset \
--no-delete \
--no-check-columns \
--txn-size=500 \
--limit=500 \
--ignore \
--statistics

We use pt-archiver to slowly copy records non-destructively to the new table based on our_id and WHERE 1=1 (all records). At this point, we periodically checked the MySQL data directory over the course of a day with ls -l to compare table sizes.

Once the table files were close to the same size, we ran counts on the tables. We noticed something interesting: the new table had thousands more records than the original table.

This concerned us. We wondered if our “hack” was a mistake. At this point we ran some verification queries:

select min(our_id) from __largetable_new;
select max(our_id) from __largetable_new;
select min(our_id) from largetable;
select max(our_id) from largetable;

We learned that there were older records that didn’t exist in the live table. This means that pt-archiver and the DELETE trigger may have missed each other (i.e., pt-archiver was already in a transaction but hadn’t written records to the new table until after the DELETE trigger already fired).

We verified with more queries:

SELECT COUNT(*) FROM largetable l WHERE NOT EXISTS (SELECT our_id FROM __largetable_new n WHERE n.our_id=l.our_id);

They returned nothing.

SELECT COUNT(*) FROM __largetable_new n WHERE NOT EXISTS (SELECT our_id FROM largetable l WHERE n.our_id=l.our_id);

Our result showed 4000 extra records in the new table. This shows that we ended up with extra records that were deleted from the original table. We ran other queries based on their data to verify as well.

This wasn’t a huge issue for our application, and it could have been easily dealt with using a simple DELETE query based on the unique index (i.e., if it doesn’t exist in the original table, delete it from the new one).

Now to complete the pt-online-schema-change actions. All we need to do is the atomic rename or drop swap. This should be done as soon as possible to avoid running in a degraded state, where all writes to the old table are duplicated on the new one.

RENAME TABLE largetable TO __largetable_old , __largetable_new TO largetable;

Then drop the triggers for safety:

DROP TRIGGER pt_osc_website_largetable_ins;
DROP TRIGGER pt_osc_website_largetable_upd;
DROP TRIGGER pt_osc_website_largetable_del;

At this point it is safer to wait for the old table to clear out of the buffer pool before dropping it, just to ensure there is no impact on the server (maybe a week to be safe). You can check information_schema for a more accurate reading on this:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME = '`website`.`__largetable_old`';
+----------+
| count(*) |
+----------+
|   279175 |
+----------+
1 row in set (8.94 sec)

Once this goes to 0 you can issue:

DROP TABLE __largetable_old;



Column Level Privileges in MySQL


Recently I experimented with column-level privileges in MySQL. Column-level privileges are fairly straightforward, but given how infrequently they are used I think there are a few areas worth discussing.

Here are a few high-level observations:

  • Users can execute INSERT and UPDATE statements that affect columns they don't have privileges on, as long as they rely on implicit defaults
  • Since SQL is row-based, it doesn't make sense to support column-level DELETE privileges, thus only SELECT, INSERT, and UPDATE are supported
  • You can grant privileges on multiple columns in one GRANT statement or multiple GRANT statements; the results are cumulative

Read on for more details on each type of column-level privilege, along with example queries.

SELECT

Users may only reference columns that they have explicit privileges on. This applies to the entire SELECT statement, not just the SELECT clause. If you try to reference a column that you do not have privileges on in the WHERE, GROUP BY, HAVING, or ORDER BY clause then you will get an error.

To illustrate this I created a table with two rows of sample data for testing:

```
mysql> create table good_stuff (
    ->   id int unsigned not null auto_increment primary key,
    ->   ctime timestamp default current_timestamp,
    ->   mtime timestamp default current_timestamp on update current_timestamp,
    ->   is_deleted tinyint not null default 0,
    ->   public varchar(255) null,
    ->   protected varchar(255) null,
    ->   private varchar(255) null
    -> ) engine = innodb;
Query OK, 0 rows affected (0.03 sec)

mysql> insert into good_stuff (id,public,protected,private)
    -> values (DEFAULT,'Hello world!','Red','Secret');
Query OK, 1 row affected (0.00 sec)

mysql> insert into good_stuff (id,public,protected,private)
    -> values (DEFAULT,'Hi Scott','Blue','Boo Scott');
Query OK, 1 row affected (0.01 sec)
```

If I grant SELECT privileges on two columns in that table to a user named "scott", then scott may select those two columns:

mysql> -- as root, grant SELECT privileges to scott
mysql> grant select (public,protected) on good_stuff to scott@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> -- as scott, test SELECT privileges
mysql> select public,protected from good_stuff;
+--------------+-----------+
| public       | protected |
+--------------+-----------+
| Hello world! | Red       |
| Hi Scott     | Blue      |
+--------------+-----------+
2 rows in set (0.00 sec)

But scott may not reference another column in the ORDER BY clause:

mysql> -- as scott, test SELECT privileges
mysql> select public from good_stuff order by id;
ERROR 1143 (42000): SELECT command denied to user 'scott'@'localhost' for column 'id' in table 'good_stuff'

Table-level privileges take precedence, so granting table-level SELECT privileges to a user overrides any column-level SELECT privileges they may have.

If you happen to grant SELECT privileges on all columns in a table to a user, then the user is allowed to run SELECT * queries.

INSERT

Users may only explicitly insert data into columns for which they have the INSERT privilege. INSERT privileges do not rely on SELECT privileges, so it is possible to configure a user who may write data but not read it. Default values may be used, but only implicitly. If you try to explicitly reference a default value you will get an error.

If I give scott INSERT privileges on only the public column in my table, he can still insert a row as long as he only references that one column, and default values will be used for other columns (id, ctime, mtime, is_deleted).

mysql> -- as root, grant INSERT privileges to scott
mysql> grant insert (public) on good_stuff to scott@'%';
Query OK, 0 rows affected (0.00 sec)

For example this works:

mysql> -- as scott, test INSERT privileges
mysql> insert into good_stuff (public) values ('Hi everybody');
Query OK, 1 row affected (0.01 sec)

And results in a row like this:

mysql> -- as root, select full row that scott inserted
mysql> select * from good_stuff where public = 'Hi everybody';
+----+---------------------+---------------------+------------+--------------+-----------+---------+
| id | ctime               | mtime               | is_deleted | public       | protected | private |
+----+---------------------+---------------------+------------+--------------+-----------+---------+
|  3 | 2016-06-30 20:37:39 | 2016-06-30 20:37:39 |          0 | Hi everybody | NULL      | NULL    |
+----+---------------------+---------------------+------------+--------------+-----------+---------+

These statements all fail even though they have the same intent, because they are explicitly referencing columns that scott does not have privileges to INSERT:

mysql> -- as scott, test INSERT privileges
mysql> insert into good_stuff (id,public) values (null,'Is it okay if I do this?');
ERROR 1143 (42000): INSERT command denied to user 'scott'@'localhost' for column 'id' in table 'good_stuff'
mysql> insert into good_stuff (id,public) values (DEFAULT,'Is it okay if I do this?');
ERROR 1143 (42000): INSERT command denied to user 'scott'@'localhost' for column 'id' in table 'good_stuff'

I would get the same results if I tried to use null or DEFAULT with the other columns that have default values (ctime, mtime, is_deleted).

UPDATE

UPDATE statements support column-level privileges much the same way as SELECT and INSERT. In order to explicitly update a column a user needs UPDATE privileges on that column, but columns can be set to default values implicitly. If you reference a column in the WHERE clause of an UPDATE, then you need SELECT privileges on that column.

First I grant UPDATE privileges on the public column to scott:

mysql> -- as root, grant UPDATE privileges to scott
mysql> grant update (public) on good_stuff to scott@'%';
Query OK, 0 rows affected (0.01 sec)

This is allowed:

mysql> -- as scott, test UPDATE privileges
mysql> update good_stuff set public = lower(public);
Query OK, 1 row affected (0.00 sec)
Rows matched: 4  Changed: 1  Warnings: 0

This is also allowed:

mysql> -- as scott, test UPDATE privileges
mysql> update good_stuff set public = upper(public) where protected = 'Blue';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Please note that the previous two statements will only work if scott has both SELECT and UPDATE privileges on the public column, since the existing value is being used in the UPDATE, so that requires both a read and a write.

These are not allowed: the first because the WHERE clause references a column for which scott does not have SELECT privileges, and the second because it explicitly sets a column for which he does not have UPDATE privileges:

mysql> -- as scott, test UPDATE privileges
mysql> update good_stuff set public = upper(public) where id = 2;
ERROR 1143 (42000): SELECT command denied to user 'scott'@'localhost' for column 'id' in table 'good_stuff'
mysql> update good_stuff set is_deleted = DEFAULT;
ERROR 1143 (42000): UPDATE command denied to user 'scott'@'localhost' for column 'is_deleted' in table 'good_stuff'

Even though scott does not have privileges to update the mtime column, the on update current_timestamp is still invoked implicitly. To verify this I can select back the "Hi everybody" row I selected earlier to confirm that the mtime value changed:

mysql> -- as root, select full row that scott inserted
mysql> select * from good_stuff where id = 3;
+----+---------------------+---------------------+------------+--------------+-----------+---------+
| id | ctime               | mtime               | is_deleted | public       | protected | private |
+----+---------------------+---------------------+------------+--------------+-----------+---------+
|  3 | 2016-06-30 20:37:39 | 2016-06-30 21:15:03 |          0 | hi everybody | NULL      | NULL    |
+----+---------------------+---------------------+------------+--------------+-----------+---------+

DELETE

As mentioned above, column-level DELETE privileges are not supported in MySQL.
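
One convenient way to review the column-level grants created above is to ask the server directly; a quick sketch:

mysql> -- as root, review scott's column-level privileges
mysql> SHOW GRANTS FOR 'scott'@'%';
mysql> SELECT grantee, table_name, column_name, privilege_type
    ->   FROM information_schema.COLUMN_PRIVILEGES
    ->  WHERE grantee = "'scott'@'%'";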



MySQL 5.7 for production


Over time, MySQL as a database has been getting better in terms of high performance, scalability and security.
MySQL 5.7 new features : http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

As a MySQL user my favorites are from above list:

  • New options for replication:
    Changing replication filters online (including and excluding tables/databases) and enabling GTID transactions online.
  • InnoDB-related changes:
    Online buffer pool resize, plus many defaults changed to more secure and optimized values.
  • Security features:
    Improved user authentication (such as default users), SSL, and data encryption with key capabilities in order to secure the overall database.
  • Monitoring and analysis statistics:
    Improved Performance Schema for live transaction analysis, which we mostly cannot get from the SHOW PROCESSLIST command or the INFORMATION_SCHEMA tables.
  • Very important among all of these is the ability to set configuration variables dynamically while the server is running. This change will save many downtimes and mysqld service restarts (see the sketch after this list).
  • Optimizer:
    New optimizer changes will be the ones doing the magic inside for query performance.
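
As an illustration of the online buffer pool resize and dynamic configuration mentioned above, here is a minimal sketch; the 8GB figure is arbitrary, and the value must be a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances:

mysql> SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- resize online, no restart needed
mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';          -- follow the resize progress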

Something New:

  • Multi-source replication
  • Innodb tablespace encryption using key
  • MySQL X Protocol and the Document Store using JSON datatype capabilities.
  • Optimized and secure default settings for the initial MySQL database setup.

I believe these are the most important MySQL database areas used by MySQL users in production environments.

How many of you are using MySQL 5.7 in production or planning to implement it in the future? Please share your experience with it.




MySQL Connector/NET 7.0.3 m2 development has been released


MySQL Connector/Net 7.0.3 is the second development release of the MySQL Connector/Net 7.0 series.

MySQL Connector/Net 7.0 adds support for the new X DevAPI which enables developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see this User’s Guide. For more information about how the X DevAPI is implemented in Connector/Net, please check the official product documentation.

Please note that the X DevAPI requires at least MySQL Server version 5.7.12 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see this chapter at the reference manual.

Changes in MySQL Connector/Net 7.0.3 (2016-06-20, Milestone 2)

Functionality Added or Changed:

  • Fixed binary collations as strings instead of bytes.
  • Added TLS support for TLSv1.1 and TLSv1.2 when connecting to MySQL Server 5.7.

Bugs Fixed:

  • Added results to the Commit() and Rollback() Session X DevAPI methods, in order to read Warnings. This feature has limitations that will be addressed in a future release.
  • Replaced the use of “@” for “$” in JSON path expressions for X DevAPI usage. This feature has limitations that will be addressed in a future release.
  • Added X DevAPI support for TLSv1.0. This feature has limitations that will be addressed in a future release.

Nuget packages are available at:

https://www.nuget.org/packages/MySql.Data/7.0.3-DMR

https://www.nuget.org/packages/MySql.Data.Entity/7.0.3-DMR

https://www.nuget.org/packages/MySql.Fabric/7.0.3-DMR

https://www.nuget.org/packages/MySql.Web/7.0.3-DMR

We love to hear your thoughts or any comments you have about our product. Please send us your feedback at our forums, file a bug at our community site, or leave us a comment on our social media channels.

Enjoy and thanks for the support!

On behalf of the MySQL Release Team



Amazon RDS and pt-online-schema-change


In this blog post, I discuss some of the insights needed when using Amazon RDS and pt-online-schema-change together.

The pt-online-schema-change tool runs DDL queries (ALTER) online so that the table is not locked for reads and writes. It is a commonly used tool by community users and customers. Using it on Amazon RDS requires knowing about some specific details. First, a high-level explanation of how the tool works.

This is an example from the documentation:

pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor

The tool runs an ALTER on the table “actor” from the database “sakila.” The alter adds a column named “c1” of type “integer.” In the background, the tool creates a new empty table similar to “actor” but with the new column already added. It then creates triggers on the original table to update the corresponding rows in the new table. After, it starts copying rows to the new table (this is the phase that takes the longest amount of time). When the copy is done, the tables are swapped, triggers removed and the old table dropped.

As we can see, it is a tool that uses the basic features of MySQL. You can run it on MySQL, Percona Server, MariaDB, Amazon RDS and so on. But when using Amazon, there is a hidden issue: you don’t have SUPER privileges. This means that if you try to run the tool on an RDS with binary logs enabled, you could get the following error:

DBD::mysql::db do failed: You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable) [for Statement "CREATE TRIGGER `pt_osc_db_table_del` AFTER DELETE ON `db`.`table` FOR EACH ROW DELETE IGNORE FROM `db`.`_table_new` WHERE `db`.`_table_new`.`table_id` <=> OLD.`table_id` AND `db`.`_table_new`.`account_id` <=> OLD.`account_id`"] at /usr/bin/pt-online-schema-change line 10583.

The following documentation page explains the reason for this message:

http://dev.mysql.com/doc/refman/5.7/en/stored-programs-logging.html

The bottom line is creating triggers on a server with binary logs enabled requires a user with SUPER privileges (which is impossible in Amazon RDS). The error message specifies the workaround. We need to enable the variable log_bin_trust_function_creators. Enabling it is like saying to the server:

“I trust regular users’ triggers and functions, and that they won’t cause problems, so allow my users to create them.”

Since the database functionality won’t change, it becomes a matter of trusting your users. log_bin_trust_function_creators is a global variable that can be changed dynamically:

mysql> SET GLOBAL log_bin_trust_function_creators = 1;

Run the tool again. This time, it will work. After the ALTER process is done, you can change the variable back to 0.



MySQL for Visual Studio 2.0.3 has been released


The MySQL Windows Experience Team is proud to announce the release of MySQL for Visual Studio 2.0.3 m2. Note that this is a development preview release and not intended for production usage.

MySQL for Visual Studio 2.0.3 M2 is the second development preview release of the MySQL for Visual Studio 2.0 series.  This series adds support for the new X DevAPI. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information about how the X DevAPI is implemented in MySQL for Visual Studio, and its usage, see http://dev.mysql.com/doc/refman/5.7/en/mysql-shell-visual-studio.html.

Please note that the X DevAPI requires at least MySQL Server version 5.7.12 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/.

MySQL for Visual Studio 2.0.3 m2 can also be downloaded by using the product standalone installer found at http://dev.mysql.com/downloads/windows/visualstudio/, under the tab “Development Releases”.

Changes in MySQL for Visual Studio 2.0.3 m2

Bugs Fixed

  • The “mysqlx” module was not imported properly to execute JavaScript queries.  (Bug #23091964, Bug #81052)
  • After opening a valid MySQL connection and creating a new JavaScript MySQL script, disconnecting then reconnecting to the MySQL Server while changing the port to 33060 would fail.
  • MySQL for Visual Studio now shows a message stating that a SSL connection is required by the MySQL server if the require_secure_transport variable is set.
  • All script editors now display detailed information about the connection used. Before, the information was displayed in the toolbar as labels, but now all information is consolidated in a menu opened where the connection name is displayed. Additional information includes the connection method, host identifier, server version, user, and schema.
  • Output from executing JavaScript and Python commands were not visible unless the Output window was already opened.  The Output window now automatically opens when executing commands.

What’s new in 2.0.3 m2

  • Improved the handling of errors, warnings and execution stats output of X DevAPI statements. All messages are properly handled and displayed after batch or console execution.
  • Added SSL support for MySQL connections that use the X Protocol. SSL support works with PEM files, so SSL connections need to be created through the “MySQL Connections Manager” in MySQL for Visual Studio, or from MySQL Workbench.
  • Added support for the following X DevAPI functions:
    parseUri() and isOpen().
  • A new “MySQL Output” pane was added that contains a results grid view similar to the view found in MySQL Workbench. It contains the following data for executed statements: Success, Execution index, Execution Time, Query Text, Message (output from the server), and Duration / Fetch. This functionality is available for JavaScript and Python queries.
  • Added “Console Mode” support for JavaScript and Python script editors, where query execution mimics the way the MySQL Shell works, meaning X DevAPI statements are executed after hitting “ENTER” and results are displayed inline.
  • Added the ability to switch between “Batch” (execute multiple statements) and “Console” (execute each statement after pressing Enter) modes, from the Query Editor toolbar as a dropdown list.
  • A MySQL connection manager dialog was added to help fully manage MySQL connections. It supports connection sharing with MySQL Workbench, and supports create, edit, configure, and delete actions. MySQL connections created with the connection manager, where the password is securely stored in the system’s password vault, work with the Server Explorer in Visual Studio. The password is extracted from the password vault, and persists in the Server Explorer connections.

Known limitations

  • Some features such as Entity Framework and some Server Explorer functionality like drag & drop elements into a Dataset Designer or Design Tables do not work in this version.

Quick links

Enjoy and thanks for the support!

MySQL for Visual Studio Team.



Performing a Live Upgrade to MySQL 5.7


After studying the differences between MySQL 5.6 and 5.7, and going through a rigorous regression test process, it’s now time to perform the actual upgrade itself. How do we best introduce 5.7 in our live environment? How can we minimize risks? What do we do if something goes wrong? And what tools are available out there to assist us?

The upgrade process

You will most likely perform a rolling upgrade - this means that you will upgrade one slave at a time, taking them out of rotation for the time needed to complete the upgrade. As the binary, in-place upgrade is supported for 5.6 -> 5.7, we can save a lot of time by avoiding long dump and reload operations. This makes the upgrade process prompt and easy to perform.

One of the very important steps while performing a binary upgrade is to disable innodb_fast_shutdown (set it to 0) on the slave before we stop it for the upgrade. This is needed to avoid potential problems with InnoDB incompatibilities. You also have to remember to execute mysql_upgrade - another step required to fix some of the incompatibilities which may otherwise impact your workload.
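
A minimal sketch of that per-slave sequence; the service names and the package installation step depend on your distribution and packaging, so treat this as an outline rather than a recipe:

mysql> SET GLOBAL innodb_fast_shutdown = 0;   -- force a slow, clean shutdown
shell> systemctl stop mysqld                  # or 'service mysql stop' on older systems
shell> # install the MySQL 5.7 packages / binaries here
shell> systemctl start mysqld
shell> mysql_upgrade -u root -p               # fix incompatibilities in system and user tables
shell> systemctl restart mysqld               # restart the server once mysql_upgrade completes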

The switchover process

Once you have upgraded the slaves, you’ll have to execute a switchover and promote one of the slaves as a master. How to do that is up to you. If you use GTID in your replication setup, switchover and subsequent reconfiguration will be easier. Some external tools can also be used to make the switchover process smooth - for example, ClusterControl can do the switchover for you, along with all preparations like reslaving hosts, setting up grants, etc. The only requirement is that GTID is used.

In the proxy layer, you can think about using ProxySQL which, when configured correctly, allows for fully graceful master switches where no error is sent to the application. When combined with ClusterControl, you can execute a fully automated switchover without any rolled-back transactions and with only a very slight impact on the application.

The upgrade process along with how to achieve a graceful switchover is covered in more details in our ebook “Upgrading to MySQL 5.7”.



FTP server with PureFTPd, MariaDB and Virtual Users (incl. Quota and Bandwidth Management) on CentOS 7.2

This document describes how to install a PureFTPd server that uses virtual users from a MariaDB (MySQL compatible) database instead of real system users. This is much more performant and allows you to have thousands of FTP users on a single machine. In addition to that, I will show the use of quota and upload/download bandwidth limits with this setup. Passwords will be stored as MD5 hashes in the database.

MariaDB 10.2.1 Alpha and other releases now available


The MariaDB project is pleased to announce the immediate availability of MariaDB 10.2.1 Alpha, MariaDB Connector/C 2.3.0, MariaDB Galera Cluster 5.5.50, and MariaDB Galera Cluster 10.0.26. See the release notes and changelogs for details on these releases. Download MariaDB 10.2.1 Alpha | Release Notes | Changelog | What is MariaDB 10.2? | MariaDB APT and YUM Repository Configuration Generator […]

The post MariaDB 10.2.1 Alpha and other releases now available appeared first on MariaDB.org.



Develop By Example – New MySQL Document Store Series


Examples are a great way to learn new things. As many of you may or may not know, we’ve added some new things to MySQL Server 5.7.12 and the ecosystem around it, extending it to allow you to use MySQL as a Document Store. Meeting the challenge meant expanding developer interfaces and database tools.

  • Addressing information with both classic and modern data architectures
  • For all types of data – structured, semi-structured, and unstructured
  • Empowering developers – Simpler, Faster, Flexible
  • Leveraging latest NoSQL oriented tools/methods – JavaScript, Node.js, JSON, CRUD, Methods chaining, and more

On the developer side, the MySQL Document Store adds new APIs by introducing a JSON/Document Store oriented API called the MySQL X DevAPI. This programming API provides a new option for accessing MySQL; its design unifies JSON document and table access; and it includes SQL support as well. Since the API features a popular fluent interface style, you will be able to use a NoSQL-like syntax to execute Create, Read, Update, Delete (CRUD) operations against these documents.

This new API is provided to developers in our latest MySQL Connectors. As learning by example is often the best way to get started with new things, we’re also providing an example application, which we’re calling Movie Review, to show you how it’s used in real applications with use cases that likely map to how you’d want to develop.

As there is a range of languages supported by the MySQL Connectors and Drivers – the new Connector/Node.js as well as connectors for Java, Python, .NET, C, C++, and PHP – we plan to do Movie Review example applications across the various languages.
As you might have guessed, Movie Review revolves around developing a web application that allows users to review movies, via a simple application that demonstrates the usage of the new features available in the X DevAPI and connectors.

We will have 2 user types within the application – users and administrators.

The users can:

  • Search for a movie to view its description and any existing reviews
  • Review a movie
  • Update or delete a review they have written

The administrators can:

  • See the movies to view their descriptions and any existing reviews
  • Upload new information to the database manually or from a JSON file
  • View the current data in the collections to edit it or delete it
  • View some simple reports.

With these use cases we hope to quickly and simply take you through the key development concepts using CRUD-type programming.

The Document Store Data Model

Since the application will be kept simple, we’re also including a simple document store database you can easily load. It includes four collections: Actors, Movies, Reviews, and Users. It comes with all of the example data loaded that you’ll need as well. This will help to teach you some of the basics of document store style modeling.

We’ll provide the steps to install, etc in the example blogs and have you up and running and developing with MySQL Document Store in no time.

See you in the next blog post – where we get into the Movie Review application written with Node.js.



Sign up for our webinar on monitoring MongoDB (if you’re really a MySQL DBA)


In this new webinar on July 12th, we’ll discuss the most important metrics MongoDB offers and will describe them in ordinary plain MySQL DBA language. We’ll have a look at the open source tools available for MongoDB monitoring and trending. And finally, we’ll show you how to leverage ClusterControl’s MongoDB metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.

To operate MongoDB efficiently, you need to have insight into database performance. And with that in mind, we’ll dive into monitoring in this second webinar in the ‘Become a MongoDB DBA’ series.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, July 12th at 09:00 BST / 10:00 CEST (Germany, France, Sweden)
Register Now

North America/LatAm

Tuesday, July 12th at 09:00 Pacific Time (US) / 12:00 Eastern Time (US)
Register Now

Agenda

  • How does MongoDB monitoring compare to MySQL
  • Key MongoDB metrics to know about
  • Trending or alerting?
  • Available open source MongoDB monitoring tools
  • How to monitor MongoDB using ClusterControl
  • Demo 

Speaker

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and database expert with over 16 years of experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad view of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop, and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.

We look forward to “seeing” you there!


This session is based upon the experience we have using MongoDB and implementing it for our database infrastructure management solution, ClusterControl. For more details, read through our ‘Become a MongoDB DBA’ blog series.



Server monitoring with Munin and Monit on CentOS 7.2

In this article, I will describe how you can monitor your CentOS 7.2 server with Munin and Monit. Munin produces nifty little graphics about nearly every aspect of your server (load average, memory usage, CPU usage, MySQL throughput, eth0 traffic, etc.) without much configuration, whereas Monit checks the availability of services like Apache, MySQL, Postfix and takes the appropriate action such as a restart if it finds a service is not behaving as expected. The combination of the two gives you full monitoring: graphics that let you recognize current or upcoming problems (like "We need a bigger server soon, our load average is increasing rapidly."), and a watchdog that ensures the availability of the monitored services.

Speaking in July 2016

  • Texas LinuxFest – July 8-9 2016 – Austin, Texas – I’ve never spoken at this event before but have heard great things about it. I’ve got a morning talk about what’s in MariaDB Server 10.1, and what’s coming in 10.2.
  • db tech showcase – July 13-15 2016 – Tokyo, Japan – I’ve regularly spoken at this event and it’s a case of a 100% pure database conference, with a very captive audience. I’ll be talking about the lessons one can learn from other people’s database failures (this is the kind of talk that keeps on changing and getting better as the software improves).
  • The MariaDB Tokyo Meetup – July 21 2016 – Tokyo, Japan – Not the traditional meetup timing, since it’s 1.30pm-7pm; there will be many talks and it’s organised by the folks behind the SPIDER storage engine. It should be fun to see many people, and food is being provided too. In Japanese: MariaDB コミュニティイベント in Tokyo, MariaDB Community Event in TOKYO.


Let's meet at Percona Live Amsterdam

I am very happy that my talk, MySQL Parallel Replication: inventory, use-cases and limitations, is included in the Sneak Peek of Percona Live Amsterdam. As a member of the Conference Committee, I knew this was being discussed, but I refrained from commenting on discussions about my talk and the submissions of my colleagues from Booking.com. Mentioning the conference committee, as you can guess,

