Channel: Planet MySQL

The 10 Things We Built on Twitch in March


This year feels like it's moving too fast! March is done, and it was an eventful month for us here at Scotch.

We got a whole lot done.

Twitch streams have been quite the learning experience for me: between the video and audio gear, and learning how to read chat, code, and stream at the same time while keeping it interesting. Big learning month!

On these Twitch streams, we build out demos and mini-projects, and hang out while having fun.

Let's take a look at the 10 things we built on these Twitch streams in March. You really should check them out. We'll be streaming today if you want to join! https://www.twitch.tv/chrisoncode

1. Building a React Calendar

In this stream, a calendar was built from scratch using React and Styled Components. Each React component was written using Hooks, and a date range can be chosen and displayed as you hover over the calendar.

You have to click through to the demo to see how this one works.

https://codesandbox.io/s/yqoq6y6j3z?runonclick=1&view=preview

2. Building a Pokemon Battle Simulator using Vue and PokeAPI

Here, we built a fun battle simulator with Pokémon attacking each other while you watch their health bars drop. This was built with Vue and utilizes the Vue lifecycle methods to make the app tick. It got trickier than I expected but ended up being a fun one.

https://codesandbox.io/s/jnorz9wjzy?runonclick=1&view=preview

3. Building a Meme Generator in Vue

Chris built a GIF generator using the Vue CLI and the Giphy API. This mini-project also uses CSS Grid. In this application, you can search for a specific GIF using the provided search bar, and you can also see trending GIFs on display.

https://codesandbox.io/s/k57o0j3wov?runonclick=1&view=preview

4. Build a Markdown Parser in React

In this stream, we built a markdown parser that converts markdown text into formatted text as you type. This application was built with React, Styled Components, react-markdown, and PrismJS. The markdown parser handles header text, normal text, links, code blocks, block quotes, and more.

https://codesandbox.io/s/nwm83w9y1l?runonclick=1&view=preview

5. Using Tailwind to Build a Dashboard in React

Tailwind is a fun utility-first framework, in that it gives you low-level classes to customize your site. I've heard everyone talking about and using Tailwind, so I thought we could try it out. This was about my second time using Tailwind.

https://codesandbox.io/s/x55vj11rq?runonclick=1&view=preview

6. Building a TailwindCSS Cheatsheet

While I am really liking Tailwind, I find the docs a little hard to navigate. We made a cheatsheet that will quickly find the right class you want!

We even deployed it to a live Netlify site! https://tailwind-cheatsheet.netlify.com/

https://codesandbox.io/s/yjlvn462wx?runonclick=1&view=preview

7. Build a Modal Component in React

In this stream, a modal component was built in React. The app utilizes React Hooks to handle state and lifecycle actions, and styled-components to handle styling.

https://codesandbox.io/s/yvz4royxp9?runonclick=1&view=preview

8. Build a Trivia App in Vue

This was probably my favorite stream. Building a trivia app automatically gets the chatroom involved and we had a lot of fun answering the random questions that showed up.

https://codesandbox.io/s/n9lz67mlrm?runonclick=1&view=preview

9. Build a Stripe-like Menu Carousel in Vue

This is one that you have to click through and see. The hover and CSS animations are what we focused on, pulled from Stripe.com's website since they always have the best UI tricks.

https://codesandbox.io/s/1yoz2432m7?runonclick=1&view=preview

10. React Infinite Scroll Challenge

This stream is a solution to Scotch code challenge #16. We built an image gallery masonry with infinite scroll using React, the Unsplash API, react-infinite-scroll-component, and Bulma. CSS Grid was also used to create the masonry effect on the gallery.

https://codesandbox.io/s/yvnr3qo109?runonclick=1&view=preview

Let's Keep Going!

We're stoked about these streams, and we'll keep doing them; you can join us weekly here. Got a fun application or mini-project to build out on these streams? Know any way we can make these streams better?

Let us know by mentioning Scotch on Twitter or letting Chris know. Happy keyboard slapping!


Authentication in MariaDB 10.4 — understanding the changes


MariaDB Server 10.4 came with a whole lot of Security related changes. Some of them are merely optimizations (like MDEV-15649), some improve existing features to be more robust (MDEV-15473, MDEV-7598) or convenient (MDEV-12835, MDEV-16266). Some are MySQL compatibility features, requested by our users (MDEV-7597, MDEV-13095). But the first thing any MariaDB Server user, whether an […]

The post Authentication in MariaDB 10.4 — understanding the changes appeared first on MariaDB.org.

SQL Create Table Statement Example | Create Table in SQL Tutorial


SQL Create Table Statement Example | Create Table in SQL Tutorial is today’s topic. A CREATE TABLE statement is used to create a new table in the database. SQL stands for Structured Query Language; it is the standard language for storing, manipulating, and retrieving data in databases, and it lets you access and manipulate databases. SQL is used in MySQL, SQL Server, MS Access, Oracle, Sybase, Informix, Postgres, and other database systems. RDBMS stands for Relational Database Management System. RDBMS is the basis for SQL and for all modern database systems such as MS SQL Server, IBM DB2, Oracle, MySQL, and Microsoft Access.

SQL Create Table Statement Example

Working with SQL for data analysis and manipulation sometimes requires creating new tables. Do you want to store the output of your SQL queries? Do you need to pull new data sources (for example, CSV files) into your data analysis? Do you want to store your transformed data without deleting your original data sets? In all of those scenarios, you first have to know how to create tables in SQL.

Creating a basic table involves naming the table and defining its columns and each column’s data type.

The syntax for creating a table in SQL is the following.

CREATE TABLE table_name (
    column1 datatype,
    column2 datatype,
    column3 datatype,
   ....
);

The column parameters indicate the names of the columns of a table.

The data type parameter specifies the type of data the column can hold (e.g., varchar, integer, date, boolean, etc.).

CREATE TABLE is a SQL keyword. You should always have it at the beginning of your SQL statement.

CREATE TABLE is the keyword telling the DBMS what you want to do. In this case, you want to create the new table. The unique name or identifier for the table follows the CREATE TABLE statement.

Then in brackets comes the list defining each column in the table and what sort of data type it is.

Let’s take the example of creating a table.

CREATE TABLE Apps (
    AppID int,
    AppName varchar(255),
    CreatorName varchar(255),
    AppCategory varchar(255),
    AppPrice int 
);

Now, run the query. I am using Sequel Pro to run the SQL queries, and I have already connected the client to the database. After running the query, the Apps table will be created in the database.

 

SQL CREATE TABLE Example

It has created AppID, AppName, CreatorName, AppCategory, and AppPrice columns.

Create Table Using Another Table

A copy of an existing table can also be created using CREATE TABLE.

The new table gets the same column definitions. All columns or specific columns can be selected based on your requirement.

If you create a new table using the existing one, then a new table will be filled with all the existing values from the old table.

The syntax for creating a table using another table is the following.

CREATE TABLE new_table_name AS
    SELECT column1, column2,...
    FROM existing_table_name
    WHERE ....;

Let’s see the following example.

CREATE TABLE DummyApp AS
SELECT AppID, AppName
FROM Apps;

So, we have created the DummyApp table from the Apps table.
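
To confirm the copy, you can query the new table (an illustrative check, not part of the original tutorial):

SELECT * FROM DummyApp;
-- Returns the AppID and AppName values copied over from Apps.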

Create Table With Extra Parameters

After defining the data type of a column, you can add some extra parameters too. These are optional arguments and mostly technical things, but still, I will highlight the three most important parameters:

  1. NOT NULL: If you add this to your column, it means you can’t add NULL values to the given column.
  2. UNIQUE: If you add this to your column, it means you can’t add the same value to the column twice. It is especially important when you store unique user IDs; in these cases, duplicate values are not allowed.
  3. PRIMARY KEY: Practically speaking, this is the combination of NOT NULL and UNIQUE, but it has some technical advantages as well. You can only have one PRIMARY KEY column per table.

Let’s see the following example by creating a new table with extra parameters.

CREATE TABLE test_results
(
  name         TEXT,
  student_id   INTEGER   PRIMARY KEY,
  birth_date   DATE,
  test_result  DECIMAL   NOT NULL,
  grade        TEXT      NOT NULL,
  passed       BOOLEAN   NOT NULL
);

So, in the above code, we have defined student_id as the PRIMARY KEY, and the test_result, grade, and passed columns have the NOT NULL attribute. This means those columns cannot take NULL values when rows are inserted into the table; if a NULL is found, the statement will throw an error.

The output is the following.

 

Create Table in SQL Tutorial
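
One consequence of those NOT NULL columns is worth seeing in action. Here is a minimal sketch with made-up values; the exact error wording varies by database engine:

-- Succeeds: every NOT NULL column receives a value.
INSERT INTO test_results (name, student_id, birth_date, test_result, grade, passed)
VALUES ('Alice', 1, '2001-05-14', 87.5, 'B', TRUE);

-- Fails: grade and passed are NOT NULL but receive no value.
INSERT INTO test_results (name, student_id, test_result)
VALUES ('Bob', 2, 55.0);
-- Error: Column 'grade' cannot be null (MySQL wording)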

Finally, SQL Create Table Statement Example | Create Table in SQL Tutorial is over.

The post SQL Create Table Statement Example | Create Table in SQL Tutorial appeared first on AppDividend.

Astronomers set to make 'groundbreaking' black hole announcement - CNET

We may be about to see the first ever photo of a black hole.

dbdeployer cookbook - Advanced techniques


In the previous post about the dbdeployer recipes we saw the basics of using the cookbook command and the simpler tutorials that the recipes offer.

Here we will see some more advanced techniques, and more demanding examples.


We saw that the recipe for a single deployment would get a NOTFOUND when no versions were available, or the highest MySQL version when one was found.

$ dbdeployer cookbook  show single | grep version=
version=$1
[ -z "$version" ] && version=8.0.16

But what if we want the latest Percona Server or MariaDB for this recipe? One solution would be to run the script with an argument, but we can ask dbdeployer to find the most recent version for a given flavor and use it in our recipe:

$ dbdeployer cookbook  show single --flavor=percona | grep version=
version=$1
[ -z "$version" ] && version=ps8.0.15

$ dbdeployer cookbook show single --flavor=pxc | grep version=
version=$1
[ -z "$version" ] && version=pxc5.7.25

$ dbdeployer cookbook show single --flavor=mariadb | grep version=
version=$1
[ -z "$version" ] && version=ma10.4.3

This works for all the recipes that don’t require a given flavor. When one is indicated (see dbdeployer cookbook list) you can override it using --flavor, but do that at your own risk. Running the ndb recipe using pxc flavor won’t produce anything usable.


Replication between sandboxes

When I proposed dbdeployer support for NDB, the immediate reaction was that this was good to test cluster-to-cluster replication. Although I did plenty of such topologies in one of my previous jobs, I had limited experience replicating between single or composite sandboxes. Thus, I started thinking about how to do it. In the old MySQL-Sandbox, I had an option --slaveof that allowed a single sandbox to replicate from an existing one. I did not implement the same thing in dbdeployer, because that solution looked limited, and only useful in a few scenarios.

I wanted something more dynamic, and initially I thought of creating a grandiose scheme, involving custom templates and user-defined fillers. While I may end up doing that some day, I quickly realized that it was overkill for this purpose, and that the sandboxes had already all the information needed to replicate from and to every other sandbox. I just had to expose the data in such a way that it can be used to plug one sandbox to the other.

Now every sandbox has a script named replicate_from, and a companion script called metadata. Using a combination of the two (in fact, replicate_from on the would-be replica calls metadata from the donor) we can quickly define the replication command needed for most situations.


Replication between single sandboxes

Before we tackle the most complex one, let’s demonstrate that the system works with a simple case.

There is a recipe named replication_between_single that creates a file named, aptly, ./recipes/replication-between-single.sh.

If you run it, you will see something similar to the following:

$ ./recipes/replication-between-single.sh  5.7.25
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_1 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_1
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_2 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_2
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer sandboxes --full-info
.--------------.--------.---------.---------------.--------.-------.--------.
|     name     |  type  | version |     ports     | flavor | nodes | locked |
+--------------+--------+---------+---------------+--------+-------+--------+
| msb_5_7_25_1 | single | 5.7.25  | [5725 ]       | mysql  |     0 |        |
| msb_5_7_25_2 | single | 5.7.25  | [5726 ]       | mysql  |     0 |        |
'--------------'--------'---------'---------------'--------'-------'--------'
0
+ $HOME/sandboxes/msb_5_7_25_1/replicate_from msb_5_7_25_2
Connecting to $HOME/sandboxes/msb_5_7_25_2
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=5726,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=4089
--------------

--------------
start slave
--------------

Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 4089
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 4089
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00005725-0000-0000-0000-000000005725:1-16
Auto_Position: 0
0
# Inserting data in msb_5_7_25_2
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from msb_5_7_25_1
+ $HOME/sandboxes/msb_5_7_25_1/use -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |      5726 |   5725 |
+----+-----------+--------+

The script deploys two sandboxes of the chosen version, using different directory names (dbdeployer takes care of choosing a free port) and then starts replication between the two using $SANDBOX1/replicate_from $SANDBOX2. Then a quick test shows that the data created in a sandbox can be retrieved in the other.


Replication between group replication clusters

The method used to replicate between two group replications is similar to the one seen for single sandboxes. The script replicate_from on the group top directory delegates the replication task to its first node, which points to the second group.

$ ./recipes/replication-between-groups.sh  5.7.25
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_1
[...]
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_2
[...]
+ dbdeployer sandboxes --full-info
.----------------.---------------------.---------.----------------------------------------.--------.-------.--------.
|      name      |        type         | version |                 ports                  | flavor | nodes | locked |
+----------------+---------------------+---------+----------------------------------------+--------+-------+--------+
| group_5_7_25_1 | group-multi-primary | 5.7.25  | [20226 20351 20227 20352 20228 20353 ] | mysql  |     3 |        |
| group_5_7_25_2 | group-multi-primary | 5.7.25  | [20229 20354 20230 20355 20231 20356 ] | mysql  |     3 |        |
'----------------'---------------------'---------'----------------------------------------'--------'-------'--------'
0
+ $HOME/sandboxes/group_5_7_25_1/replicate_from group_5_7_25_2
Connecting to $HOME/sandboxes/group_5_7_25_2/node1
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=20229,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=1082
--------------

--------------
start slave
--------------

Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1082
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 1082
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00020225-bbbb-cccc-dddd-eeeeeeeeeeee:1-3
Auto_Position: 0
0
# Inserting data in group_5_7_25_2 node1
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from one of group_5_7_25_1 nodes
# At this point, the data was replicated twice
+ $HOME/sandboxes/group_5_7_25_1/n2 -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |     20229 |  20227 |
+----+-----------+--------+

The interesting thing about this recipe is that the sandboxes are created using the option --port-as-server-id. While it was used in the replication between single sandboxes merely as an excess of caution, in this recipe, and in all the recipes involving compound sandboxes, it is a necessity, as replication would fail if the primary and replica servers had the same server_id.

All the work is done by the replicate_from script, which knows how to check whether the target is a single sandbox or a composite one, and where to find the primary server.

Using a similar method, we can run several more recipes along the same lines.


Replication between different things

I won’t reproduce the output of all recipes here. I will just mention what every recipe needs to prepare to ensure a positive outcome.

  • Replication between NDB clusters. Nothing special here, except making sure to use a MySQL Cluster tarball. If you don’t, dbdeployer will detect it and refuse the installation. For the rest, it’s like replication between groups.
  • Replication between master/slave. This is a bit trickier, because the replication data comes to a master, and if we want to propagate it to the master's slaves we need to activate log-slave-updates. The recipe shows how to do it.
  • Replication between group and master/slave. In addition to the trick mentioned in the previous recipe, we need to make sure that the master/slave deployment is using GTID.
  • Replication between master/slave and group. See the previous one.
  • Replication between group and single (and vice versa). We just need to make sure the single sandbox has GTID enabled.

Replication between different versions

This is a simple recipe that comes from a feature request. All you need to do is make sure that the version on the master is lower than the one on the slaves. The recipe script, replication-multi-versions.sh, looks for tarballs of 5.6, 5.7, and 8.0, but you can start it with any three versions you’d like. For example:

./recipes/replication-multi-versions.sh 5.7.23 5.7.24 5.7.25

The first version will be used as the master.


Circular replication

I didn’t want to do this, as I consider ring replication to be weak and difficult to handle. I stated as much in the feature request and in the list of dbdeployer features. But then I saw that, with the latest enhancements, it was so easy that I had to at least make a recipe for it. And there you have it: recipes/circular-replication.sh does what it promises, but the burden of maintenance is still on the user’s shoulders. I suggest looking at it, and then forgetting it.


Upgrade from MySQL 5.5 to 8.0 (through 5.6 and 5.7)

This is one of the most advanced recipes. To enjoy it, you need to have expanded tarballs from 5.5, 5.6, 5.7, and 8.0.

Provided that you do, running this script will do the following:

  1. Deploy MySQL 5.5.
  2. Create a table upgrade_log and insert some data.
  3. Deploy MySQL 5.6.
  4. Run mysql_upgrade (through dbdeployer).
  5. Add data to the log table.
  6. Deploy MySQL 5.7.
  7. Run mysql_upgrade again.
  8. Add data to the log table.
  9. Deploy MySQL 8.0.
  10. Run mysql_upgrade for the last time.
  11. Show the data from the table.

Here’s a full transcript of the operation. It’s interesting to see how the upgrade procedure has changed from older versions to current ones.


$ ./recipes/upgrade.sh

# ****************************************************************************
# Upgrading from 5.5.53 to 5.6.41
# ****************************************************************************
+ dbdeployer deploy single 5.5.53 --master
Database installed in $HOME/sandboxes/msb_5_5_53
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ dbdeployer deploy single 5.6.41 --master
Database installed in $HOME/sandboxes/msb_5_6_41
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_5_53/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_5_53/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_5_53 msb_5_6_41
stop $HOME/sandboxes/msb_5_5_53
stop $HOME/sandboxes/msb_5_6_41
Data directory msb_5_5_53/data moved to msb_5_6_41/data
. sandbox server started
Looking for 'mysql' as: $HOME/opt/mysql/5.6.41/bin/mysql
Looking for 'mysqlcheck' as: $HOME/opt/mysql/5.6.41/bin/mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.servers OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
test.upgrade_log OK
OK

The data directory from msb_5_6_41/data is preserved in msb_5_6_41/data-msb_5_6_41
The data directory from msb_5_5_53/data is now used in msb_5_6_41/data
msb_5_5_53 is not operational and can be deleted
+ dbdeployer delete msb_5_5_53
List of deployed sandboxes:
$HOME/sandboxes/msb_5_5_53
Running $HOME/sandboxes/msb_5_5_53/stop
Running rm -rf $HOME/sandboxes/msb_5_5_53
Directory $HOME/sandboxes/msb_5_5_53 deleted
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_6_41/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The upgraded database is now upgrading from 5.6.41 to 5.7.25
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 --master
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_6_41/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_6_41 msb_5_7_25
stop $HOME/sandboxes/msb_5_6_41
stop $HOME/sandboxes/msb_5_7_25
Data directory msb_5_6_41/data moved to msb_5_7_25/data
.. sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log
error : Table rebuild required. Please do "ALTER TABLE `upgrade_log` FORCE" or dump/reload to fix it!

Repairing tables
`test`.`upgrade_log`
Running : ALTER TABLE `test`.`upgrade_log` FORCE
status : OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_5_7_25/data is preserved in msb_5_7_25/data-msb_5_7_25
The data directory from msb_5_6_41/data is now used in msb_5_7_25/data
msb_5_6_41 is not operational and can be deleted
+ dbdeployer delete msb_5_6_41
List of deployed sandboxes:
$HOME/sandboxes/msb_5_6_41
Running $HOME/sandboxes/msb_5_6_41/stop
Running rm -rf $HOME/sandboxes/msb_5_6_41
Directory $HOME/sandboxes/msb_5_6_41 deleted
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_7_25/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The further upgraded database is now upgrading from 5.7.25 to 8.0.15
# ****************************************************************************
+ dbdeployer deploy single 8.0.15 --master
Database installed in $HOME/sandboxes/msb_8_0_15
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ $HOME/sandboxes/msb_5_7_25/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_7_25 msb_8_0_15
stop $HOME/sandboxes/msb_5_7_25
Attempting normal termination --- kill -15 10357
stop $HOME/sandboxes/msb_8_0_15
Data directory msb_5_7_25/data moved to msb_8_0_15/data
... sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.columns_priv OK
mysql.component OK
mysql.db OK
mysql.default_roles OK
mysql.engine_cost OK
mysql.func OK
mysql.general_log OK
mysql.global_grants OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.password_history OK
mysql.plugin OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.role_edges OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_8_0_15/data is preserved in msb_8_0_15/data-msb_8_0_15
The data directory from msb_5_7_25/data is now used in msb_8_0_15/data
msb_5_7_25 is not operational and can be deleted
+ dbdeployer delete msb_5_7_25
List of deployed sandboxes:
$HOME/sandboxes/msb_5_7_25
Running $HOME/sandboxes/msb_5_7_25/stop
Running rm -rf $HOME/sandboxes/msb_5_7_25
Directory $HOME/sandboxes/msb_5_7_25 deleted
+ $HOME/sandboxes/msb_8_0_15/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_8_0_15/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
|  5 |      5725 | 5.7.25-log | original | 2019-04-01 20:28:07 |
|  6 |      8015 | 8.0.15     | upgraded | 2019-04-01 20:28:20 |
+----+-----------+------------+----------+---------------------+

What else can we do?

The replication recipes seen so far use the same principles. The method used in these recipes doesn’t work for all-masters and fan-in replication, because mixing named channels and nameless ones is not allowed. Also, there are things that don’t respond to replication commands at all, like TiDB. But it should be easy to enhance the current scripts (or to add some more specialized ones) to also cover these exceptions. Given the recent wave of collaboration, I expect it will happen relatively soon.

Simple STONITH with ProxySQL and Orchestrator

3 DC Orchestrator ProxySQL

Distributed systems are hard – I just want to echo that. In MySQL, we have quite a number of options to run highly available systems. However, real fault tolerant systems are difficult to achieve.

Take for example a common use case of multi-DC replication where Orchestrator is responsible for managing the topology, while ProxySQL takes care of the routing/proxying to the correct server, as illustrated below. A rare case you might encounter is that the primary MySQL node01 on DC1 might have a blip of a couple of seconds. Because Orchestrator uses an adaptive health check – not only of the node itself, but it also consults its replicas – it can react really fast and promote the node in DC2.

Why is this problematic?

The problem occurs when node01 resolves its temporary issue. A race condition could occur within ProxySQL that could mark it back as read-write. You can increase the “offline” period within ProxySQL to make sure Orchestrator rediscovers the node first. Hopefully, it will set it to read-only immediately, but what we want is an extra layer of predictable behavior. This normally comes in the form of STONITH – by taking the other node out of action, we practically reduce the risk of conflict to close to zero.

The solution

Orchestrator supports hooks to do this, but we can also do it easily with ProxySQL using its built-in scheduler. In this case, we create a script where Orchestrator is consulted frequently for any nodes recently marked as downtimed, and we also mark them as such in ProxySQL. The script proxy-oc-tool.sh can be found on Github.

What does this script do? In the case of our topology above:

  • If, for any reason, connections to MySQL on node01 fail, Orchestrator will pick node02 as the new primary.
  • Since node01 is unreachable – it cannot modify read_only nor update replication – it will be marked as downtimed with lost-in-recovery as the reason.
  • If node01 comes back online, and ProxySQL sees it before the next Orchestrator check, it can rejoin the pool. Then it’s possible that you have two writeable nodes in the hostgroup.
  • To prevent the condition above, as soon as the node is marked with downtime from Orchestrator, the script proxy-oc-tool.sh will mark it OFFLINE_SOFT so it never rejoins the writer_hostgroup in ProxySQL (see the sketch after this list).
  • Once an operator fixes node01, i.e. reattaches it as a replica and removes the downtimed mark, the script proxy-oc-tool.sh will mark it back ONLINE automatically.
  • Additionally, if DC1 gets completely disconnected from DC2 and AWS, the script will not be able to reach Orchestrator’s raft-leader and will set all nodes to OFFLINE_SOFT, preventing isolated writes on DC1.
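
For reference, marking a backend OFFLINE_SOFT through ProxySQL's admin interface boils down to statements like the following. This is a minimal sketch using the node01 hostname from the example topology; the actual script derives the host list from Orchestrator's downtimed nodes:

-- Run against the ProxySQL admin interface (port 6032 by default).
-- OFFLINE_SOFT lets running queries finish but sends no new traffic.
UPDATE mysql_servers SET status = 'OFFLINE_SOFT' WHERE hostname = 'node01';
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;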

Adding the script to ProxySQL is simple. First, you download it and set permissions. I placed the script in /usr/bin/ – but you can put it anywhere accessible by the ProxySQL process.
wget https://gist.githubusercontent.com/dotmanila/1a78ef67da86473c70c7c55d3f6fda89/raw/b671fed06686803e626c1541b69a2a9d20e6bce5/proxy-oc-tool.sh
chmod 0755 proxy-oc-tool.sh
mv proxy-oc-tool.sh /usr/bin/

Note: you will need to edit some variables in the script, i.e. ORCHESTRATOR_PATH.

Then load into the scheduler:

INSERT INTO scheduler (interval_ms, filename)
  VALUES (5000, '/usr/bin/proxy-oc-tool.sh');
LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;

I’ve set the interval to five seconds since, inside ProxySQL, a shunned node will need about 10 seconds before the next read-only check is done. This way, this script is still ahead of ProxySQL and is able to mark the dead node as OFFLINE_SOFT.

Because this is the simple version, there are obvious additional improvements to be made in the script, like using scheduler args to specify ORCHESTRATOR_PATH and implementing error checking.

How does a relational database execute SQL statements and prepared statements


Introduction In this article, we are going to see how a relational database executes SQL statements and prepared statements. SQL statement lifecycle The main database modules responsible for processing a SQL statement are: the Parser, the Optimizer, the Executor. A SQL statement execution looks like the following diagram. Parser The Parser checks the SQL statement and ensures its validity. The statements are verified both syntactically (the statement keywords must be properly spelled and follow the SQL language guidelines) and semantically (the referenced tables and columns do exist in the database). During... Read More
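
Since the excerpt mentions prepared statements, here is a minimal illustration of what one looks like at the SQL level (MySQL syntax, with a hypothetical post table; not taken from the full article):

-- The statement is parsed and validated once, with a placeholder.
PREPARE stmt FROM 'SELECT id, title FROM post WHERE id = ?';
SET @postId = 1;
-- Execution binds the value without re-sending the statement text.
EXECUTE stmt USING @postId;
DEALLOCATE PREPARE stmt;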

The post How does a relational database execute SQL statements and prepared statements appeared first on Vlad Mihalcea.

MySQL JSON Document Store

MySQL 8.0 provides another way to handle JSON documents, actually in a "Not only SQL" (NoSQL) approach... In other words, if you need/want to manage JSON documents (collections) in a non-relational manner, with CRUD (acronym for Create/Read/Update/Delete) operations then you can use MySQL 8.0! Did you know that?
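
As a taste of what this looks like from plain SQL (a tiny sketch with a made-up table, rather than anything from the post), MySQL 8.0 can store and query JSON documents directly; the full document store with collections and CRUD calls is driven through the X DevAPI on top of this:

CREATE TABLE docs (doc JSON);
INSERT INTO docs VALUES ('{"title": "MySQL Document Store", "tags": ["json", "nosql"]}');
-- ->> extracts a value by path and unquotes it.
SELECT doc->>'$.title' AS title FROM docs;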

Terraform on OCI – Building MySQL On Compute – initial setups

I have written previous blog posts about Oracle Cloud OCI, and this series continues. My post titled IaaS Getting Started was to get us acquainted with important security-focused items like Compartments and network services like NAT and Internet Gateways. Then I posted about building MySQL on Compute with scripting, using a mix of OCI web console navigation… Read More »

Percona XtraDB Cluster Operator 0.3.0 Early Access Release Is Now Available

Percona XtraDB Cluster Operator

Percona announces the release of Percona XtraDB Cluster Operator 0.3.0 early access.

The Percona XtraDB Cluster Operator simplifies the deployment and management of Percona XtraDB Cluster in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

You can install the Percona XtraDB Cluster Operator on Kubernetes or OpenShift. While the operator does not support all the Percona XtraDB Cluster features in this early access release, instructions on how to install and configure it are already available along with the operator source code, hosted in our Github repository.

The Percona XtraDB Cluster Operator is an early access release. Percona doesn’t recommend it for production environments.

New features

Improvements

Fixed Bugs

  • CLOUD-148: Pod Disruption Budget code caused the wrong configuration to be applied for ProxySQL and lacked support for multiple availability zones.
  • CLOUD-138: The restore-backup.sh script was exiting with an error because its code did not take image version numbers into account.
  • CLOUD-118: The backup recovery job was unable to start if Persistent Volume for backup and Persistent Volume for Pod-0 were placed in different availability zones.

Percona XtraDB Cluster is an open source, cost-effective and robust clustering solution for businesses. It integrates Percona Server for MySQL with the Galera replication library to produce a highly-available and scalable MySQL® cluster complete with synchronous multi-master replication, zero data loss and automatic node provisioning using Percona XtraBackup.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

SQL Insert Query Tutorial | SQL INSERT INTO Statement Example

SQL INSERT INTO Statement Example

SQL Insert Query Tutorial | SQL Insert Into Statement Example is today’s topic. If you are not familiar with creating a table in SQL, then please check out my how to create a table in SQL tutorial. The INSERT INTO statement is used to add new records to a table. INSERT INTO can contain the values for some or all of its columns, and it can be combined with a SELECT to insert records from another table.

SQL Insert Query Tutorial

The general syntax of the SQL INSERT INTO statement is the following.

INSERT INTO table-name (column-names) 
VALUES (values)

Here, column-names (column1, column2, column3, …, columnN) are the names of the columns of the table into which you want to insert data.

You do not need to specify the column name(s) in the SQL query if you are adding values for all the columns of the table. But make sure you preserve the order of the columns. Let’s take an example.

Step 1: Create a SQL Table.

I am using macOS and a SQLite client. You can use phpMyAdmin, Oracle, or other database software.

Now, type the following query to create a table.

CREATE TABLE Apps (
    AppID int,
    AppName varchar(255),
    CreatorName varchar(255),
    AppCategory varchar(255),
    AppPrice int 
);

So, we have created a table called Apps which has five columns.

Step 2: Insert the values into the table in SQL

Okay, the next step is to use the SQL INSERT INTO query to add the rows.

INSERT INTO Apps (`AppID`, `AppName`, `CreatorName`, `AppCategory`, `AppPrice`)
VALUES (1, 'AppDividend', 'Krunal', 'Software', 50);

Now, run the query, and you will see that one row has been added to the table.

So, we have successfully inserted data into the Database.

Insert Multiple Values in Database

We can also insert multiple values into the table. See the following query.

INSERT INTO Apps (`AppID`, `AppName`, `CreatorName`, `AppCategory`, `AppPrice`)
VALUES 
(2, 'Escrow', 'LVVM', 'Fashion', 60 ),
(3, 'KGB', 'MJ', 'Music', 70 ),
(4, 'Moscow', 'Mayor', 'Area', 80 ),
(5, 'MoneyControl', 'Mukesh', 'Investment', 90 ),
(6, 'Investing', 'Bill', 'Stocks', 100 );

The output of the above SQL query is the following.

 

SQL Insert Query Tutorial | SQL INSERT INTO Statement Example

If you are adding values for all of the columns of a table, you do not need to specify the column names in the SQL query. However, make sure the order of the values matches the order of the columns in the table.
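
For example, with the Apps table from above, such an insert might look like this (a made-up row for illustration):

-- No column list: values must follow the table's column order
-- (AppID, AppName, CreatorName, AppCategory, AppPrice).
INSERT INTO Apps
VALUES (7, 'NoteKeeper', 'Dana', 'Productivity', 0);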

Populate one table using another table

You can populate one table through a SELECT statement over another table, provided that the other table has the set of fields required to fill the first table. See the following SQL query, where we add data to the DummyApp table.

INSERT INTO DummyApp (`AppID`, `AppName`)
   SELECT AppID, AppName
   FROM Apps;

In the above query, we insert two column values into the DummyApp table using the Apps table.

The DummyApp table has only two columns, AppID and AppName. Our SELECT pulls those same two columns from the Apps table, which is why we can copy the values straight across. If you run the above query, and you have already set up the DummyApp table, then the selected values from the Apps table will be copied into the DummyApp table.

Insert Data Only into Specified Columns

It is also possible to insert data only into specific columns. See the following query.

INSERT INTO Apps (`AppID`, `AppName`)
VALUES (10, 'Stavanger');

So, the above query inserts only one row, with values for two columns.

All the other column values will be NULL for that particular row.

Be careful here: if some column does not allow NULL values, then the SQL engine will throw an error and the data will not be inserted into the database.

Finally, SQL Insert Query Tutorial | SQL INSERT INTO Statement Example is over.

The post SQL Insert Query Tutorial | SQL INSERT INTO Statement Example appeared first on AppDividend.

React Tutorial: Consume a JSON REST API with Fetch and Styling UI with Bootstrap 4

In this tutorial we'll learn how to build a React application that consumes a third-party REST API using the fetch() API. We'll also use Bootstrap 4 to style the UI. We'll consume a third-party API available from this link.

We'll also see some basics of React such as:

  • The state object to hold the state of the app and the setState() method to mutate the state.
  • The componentDidMount() life-cycle method for running code when the component is mounted in the DOM.
  • How to embed JavaScript expressions in JSX using curly braces.
  • How to render lists of data using the map() method and JSX, and how to conditionally render DOM elements using the logical && operator.

React is the most popular UI library for building user interfaces, built and used internally by Facebook. React has many features such as:

  • Declarative: React makes it easy to build interactive UIs by creating views for each state in your application, and lets React render just the right components when the data changes.
  • Component-Based: Build components that have their own state and compose them to build complex UIs.
  • Learn Once, Write Anywhere: React can be rendered on the server using Node and can be used to build native mobile apps using React Native.

Prerequisites

You will need the following prerequisites to successfully complete the steps in this tutorial:

  • Working experience of JavaScript,
  • Basic understanding of REST APIs,
  • Node and NPM installed on your machine.

React is a client-side library, but you need Node.js to run the create-react-app utility that can be used to generate React projects and work with them locally. You can very easily install Node and NPM by getting the binaries for your system from the official website. A better way is to use NVM, or Node Version Manager, to easily install and manage multiple active Node versions.

If you are ready, let's get started!

Installing create-react-app

We'll use the create-react-app utility to generate a React project with best practices and development scripts created by the official team behind React. Open a new terminal and run the following command to install the utility on your system:

$ npm install -g create-react-app

Note: You may need to add sudo before your command on Debian systems and macOS to be able to install npm packages globally. You can also just fix your npm permissions to avoid using sudo. If you installed Node and NPM using NVM, this will be handled automatically for you. At the time of this writing, create-react-app v2.1.8 is installed on our system.

Creating a React Project

After installing create-react-app, let's use it to generate our React project. Head back to your terminal and run the following commands:

$ cd ~
$ npx create-react-app react-fetch-rest-api

We navigated to the home folder and issued the npx create-react-app command to create our project.

Note: You can obviously navigate to any folder you choose for your project. npx is a tool that allows you to run executables from the node_modules folder; you can find more details on the official website.

Wait for the installation process to finish. This may take a while!

Next, navigate to your project's root folder and run the development server using the following commands:

$ cd react-fetch-rest-api
$ npm start

Your local development server will be running from the http://localhost:3000 address, and your web browser will be opened automatically and navigated to your React application.

Since we use a live-reload dev server, you can leave the current terminal window open and start a new one for running the rest of the commands in this tutorial. After any changes, your server will be automatically restarted and your application will be live-reloaded in the browser.

This is a screenshot of our application at this point:

Open the src/App.js file and let's remove the default boilerplate code that we are not using in our example. Simply change the content to the following:

import React, { Component } from 'react';

class App extends Component {
  render() {
    // Your JSX code goes here.
    return null;
  }
}

export default App;

Styling the UI with Bootstrap 4

We'll use Bootstrap 4 for styling the UI. Integrating Bootstrap 4 with React is quite easy. Open the public/index.html file and add the following code:

<head>
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
  <!-- [...] -->

Now, let's display an example todo just to make sure Bootstrap was successfully added. Open src/App.js and replace the content with the following code:

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="container">
        <div className="col-xs-12">
          <h1>My Todos</h1>
          <div className="card">
            <div className="card-body">
              <h5 className="card-title">Example Todo</h5>
              <h6 className="card-subtitle mb-2 text-muted">Completed</h6>
            </div>
          </div>
        </div>
      </div>
    );
  }
}

export default App;

In React we use className instead of class for adding a CSS class to DOM elements.

This is a screenshot of the UI:

Fetching and Displaying the REST API

As we said earlier, we are going to use the browser fetch API to consume JSON data from the todos API endpoint. First, in the src/App.js file, add a state object to hold our todos once we fetch them:

import React, { Component } from 'react';

class App extends Component {
  state = {
    todos: []
  }
  // [...]
}

export default App;

We created a state variable called todos.

Next, add a componentDidMount() life-cycle method in our src/App.js file and add the code to fetch the JSON data inside it. This method is executed when the component is mounted in the DOM, so it's the right place to put our JSON fetching logic:

import React, { Component } from 'react';

class App extends Component {
  state = {
    todos: []
  }

  componentDidMount() {
    fetch('http://jsonplaceholder.typicode.com/todos')
      .then(res => res.json())
      .then((data) => {
        this.setState({ todos: data })
        console.log(this.state.todos)
      })
      .catch(console.log)
  }
  // [...]
}

export default App;

We simply send a GET request to the /todos endpoint. Once the returned Promise is resolved, we use the setState() method to assign the returned data to the todos state variable. If there is an error, we simply display it in the console.

If you open your browser console, you should see the fetched todos displayed as an array of objects. Let's render them in our UI. Update the render() method in the src/App.js file as follows:

render() {
  return (
    <div className="container">
      <div className="col-xs-12">
        <h1>My Todos</h1>
        {this.state.todos.map((todo) => (
          <div className="card">
            <div className="card-body">
              <h5 className="card-title">{todo.title}</h5>
              <h6 className="card-subtitle mb-2 text-muted">
                { todo.completed && <span>Completed</span> }
                { !todo.completed && <span>Pending</span> }
              </h6>
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}

In React, you can build lists of elements and include them in JSX using curly braces {}. In the code, we loop through the state.todos array using the JavaScript map() method, and we return a Bootstrap 4 card element for each todo. We can also embed any expressions in JSX by wrapping them in curly braces.

We used the logical && operator for conditionally including the <span>Completed</span> or <span>Pending</span> element depending on the value of the completed boolean of the todo element. This works because, in JavaScript, true && expression always evaluates to expression, and false && expression always evaluates to false. If the completed variable is true, the element right after && will appear in the output. If it is false, React will ignore and skip it. See: Conditional Rendering.

This will render the todos in the state object. This is a screenshot of our UI:

Conclusion

That's the end of this tutorial. As a recap: we have installed create-react-app and used it to create a React project. Next, we integrated Bootstrap 4 in our React application and used the fetch API to send a GET request to consume JSON data from a third-party REST API in the componentDidMount() life-cycle method. We also used the state object to hold our fetched JSON data, and the setState() method to set the state. Finally, we have seen how to embed JS expressions in JSX and how to render lists of data and conditionally render DOM elements using the logical && operator.

MySQL 8.0 GIS -- Inserting Data & Fun Functions

The last blog entry was very popular, and there were lots of requests for some introductory information on the spatial data types.

Well Known Text Or Binary


I am going to use the GEOMETRY data type over POINT, LINESTRING, or POLYGON, as it can store any of those three, while the other three can only contain data matching their name (so POINT can hold only point data, etc.). The values are stored in an internal geometry format, but it accepts either WKT- or WKB-formatted data.

Those are the Well-Known Text (WKT) and Well-Known Binary (WKB) formats, respectively. I am hoping most of you are better with text than binary, so the following examples demonstrate how to insert geometry values into a table by converting WKT values to the internal geometry format.

So let us start with a simple table.

mysql> create table geom (type text, g geometry);
Query OK, 0 rows affected (0.04 sec)

We can use the ST_GeomFromText function to take some strings and convert into the internal format.

mysql> insert into geom values 
       ('point', st_geomfromtext('point(1 1)'));
Query OK, 1 row affected (0.01 sec)

mysql> insert into geom values 
       ('linestring', st_geomfromtext('linestring(0 0,1 1, 2 2)'));
Query OK, 1 row affected (0.01 sec)

There are type specific functions for POINT, LINESTRING, and POLYGON that we can also take advantage of for this work.

mysql> SET @g = 
     'POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))';
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO geom 
       VALUES ('polygon',ST_PolygonFromText(@g));
Query OK, 1 row affected (0.00 sec)

If you do a SELECT * FROM geom; you will get the g column output in binary.  Thankfully we can use ST_AsText() to provide us with something more readable.


mysql> select type, st_astext(g) from geom;
+------------+----------------------------------------------------------+
| type       | st_astext(g)                                             |
+------------+----------------------------------------------------------+
| point      | POINT(1 1)                                               |
| linestring | LINESTRING(0 0,1 1,2 2)                                  |
| polygon    | POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7,5 5)) |
+------------+----------------------------------------------------------+
3 rows in set (0.00 sec)

mysql>


Put the 'fun' in Function

There are functions that can be used with each of these types to provide information. For instance, there are the X and Y coordinates of a point.

mysql> select type, st_x(g), st_y(g) from geom where type='point';
+-------+---------+---------+
| type  | st_x(g) | st_y(g) |
+-------+---------+---------+
| point |       1 |       1 |
+-------+---------+---------+
1 row in set (0.00 sec)

We can even use our linestring data to get the minimal bounding rectangle for the given coordinates (basically, if we had to put an envelope around the points, the result is the polygon whose corners enclose those x,y coordinates).

mysql> select st_astext(st_envelope(g)) 
       from geom where type='linestring';
+--------------------------------+
| st_astext(st_envelope(g))      |
+--------------------------------+
| POLYGON((0 0,2 0,2 2,0 2,0 0)) |
+--------------------------------+
1 row in set (0.00 sec)


And we can get the area of a polygon too.

mysql> select type, st_area((g)) from geom where type='polygon';
+---------+--------------+
| type    | st_area((g)) |
+---------+--------------+
| polygon |           96 |
+---------+--------------+
1 row in set (0.00 sec)

And find the mathematical center of that polygon.

mysql> select type, 
       st_astext(st_centroid(g)) 
       from geom where type='polygon';
+---------+--------------------------------------------+
| type    | st_astext(st_centroid(g))                  |
+---------+--------------------------------------------+
| polygon | POINT(4.958333333333333 4.958333333333333) |
+---------+--------------------------------------------+
1 row in set (0.00 sec)

Plus we can get the linestring data if we wanted to draw our polygon.

mysql> select type, 
       st_astext(st_exteriorring(g)) 
       from geom where type='polygon';
+---------+-------------------------------------+
| type    | st_astext(st_exteriorring(g))       |
+---------+-------------------------------------+
| polygon | LINESTRING(0 0,10 0,10 10,0 10,0 0) |
+---------+-------------------------------------+
1 row in set (0.00 sec)
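
Another handy function, not shown above but worth a mention, is ST_Distance, which returns the distance between two geometries. A quick sketch with throwaway points (the classic 3-4-5 right triangle):

mysql> select st_distance(
           st_geomfromtext('point(0 0)'),
           st_geomfromtext('point(3 4)')) as dist;
+------+
| dist |
+------+
|    5 |
+------+
1 row in set (0.00 sec)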



The perils of ALTER TABLE in MySQL/MariaDB

Multi-Region AWS Aurora Webinar (4/18/19)


Register now for the live webinar: “Multi-Region MySQL with AWS Aurora” on Thursday, April 18th, 2019.

Compare building a global, multi-region MySQL cloud back-end using AWS Aurora versus Continuent Tungsten. 


Running MySQL / Percona Server in Kubernetes with a Custom Config

As we continue the development of our Percona Operators to simplify database deployment in Kubernetes (Percona Server for MongoDB Operator 0.3.0 and Percona XtraDB Cluster Operator 0.3.0), one very popular question I get is: how does deployment in Kubernetes affect MySQL performance? Is there a big performance penalty? So I plan to look at how to measure and compare the performance of Kubernetes deployments to bare metal deployments. Kubernetes manages a lot of infrastructure resources like network, storage, CPU, and memory, so we need to look individually at different components.

To begin: I plan to run a single MySQL (Percona Server) instance in a Kubernetes deployment, using local storage (a fast NVMe device). I also want to customize my MySQL configuration, as the ones supplied in public images are pretty much all set to defaults.

Let’s take a look at how we can customize it.

  1. We are going to use the public Percona Server docker image “percona:ps-8.0.15-5”, which deploys the latest version (at the time of writing), Percona Server for MySQL 8.0.15.
  2. We will deploy this on a specific node and will assign specific local storage to use for MySQL data.
  3. We’ll set up a custom configuration for MySQL.

Setting up Kubernetes

Here’s an example yaml file:

=====================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: smblade01
      volumes:
        - name: mysql-persistent-storage
          hostPath:
            path: /mnt/fast/mysql
            type: Directory
      containers:
      - image: percona:ps-8.0.15-5
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
===============

There is a lot of typical Kubernetes boilerplate to create a deployment, but the most important parts to note:

  • We choose the node where to deploy with nodeSelector.
  • We allocate the local storage for the MySQL volume with hostPath.

After deploying this, we make sure the Pod is running:

kubectl get pods
NAME                      READY    STATUS     RESTARTS   AGE     IP             NODE         NOMINATED NODE    READINESS GATES
mysql-d74d5d459-b6zrs     1/1      Running    0          3m8s    192.168.1.15   smblade01    <none>            <none>
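As an extra sanity check, we can confirm from any MySQL client that can reach the Pod that the server answers (the exact values will of course depend on the image you deployed):

mysql> select @@version, @@version_comment;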

Set up MySQL to access fast storage and modify the default config for performance

Now that we are running MySQL on a dedicated node with fast storage, we want to customize the MySQL configuration to allocate a big buffer pool and adjust its IO settings.

As I said, a downloaded image will most likely run with default settings, and there is no straightforward way to pass our custom my.cnf to the deployment. I'll show you how to resolve this now.

The default my.cnf contains the directive:

!includedir /etc/my.cnf.d

So the solution for the custom my.cnf is as follows:

  • Create a Kubernetes configmap from our custom my.cnf. Here’s how to achieve that:

kubectl create configmap mysql-config --from-file=my.cnf

  • Define yaml to load the configmap into a volume mounted at /etc/my.cnf.d (note the config-volume entries in the yaml below).

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: smblade01
      volumes:
        - name: mysql-persistent-storage
          hostPath:
            path: /mnt/fast/mysql
            type: Directory
        - name: config-volume
          configMap:
            name: mysql-config
            optional: true
      containers:
      - image: percona:ps-8
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: config-volume
          mountPath: /etc/my.cnf.d

And here’s our example my.cnf:

[mysqld]
skip-log-bin
ssl=0
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=4000
innodb_log_file_size=10G
innodb_log_files_in_group=2
innodb_log_buffer_size=64M
innodb_open_files=4000
innodb_buffer_pool_size= 100G
innodb_buffer_pool_instances=8
innodb_flush_log_at_trx_commit = 1
innodb_doublewrite=1
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors = 0
innodb_use_native_aio=1
join_buffer_size=256K
sort_buffer_size=256K

When we deploy this yaml, we will have a MySQL instance running on a dedicated box with fast local storage, big log files, and 100GB allocated for its InnoDB buffer pool.
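Before we do, it's worth double-checking that the custom my.cnf was actually picked up. A quick sketch of such a check (the expected values simply mirror the config above: roughly 100GB for the buffer pool and O_DIRECT for the flush method):

mysql> select @@innodb_buffer_pool_size / 1024 / 1024 / 1024 as buffer_pool_gb,
              @@innodb_flush_method as flush_method;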

Now we’re set to proceed to our performance measurements. Stay tuned!


Photo by Joseph Barrientos on Unsplash

MySQL Workbench Spatial Viewer or How to See your GEOMETRY Data

The past couple of blog entries have been on Geographic Information Systems and Geometric Data.  Visualizing that data with MySQL Workbench makes it easier for me to see what the results really mean.
Example drawn by MySQL Workbench 8.0.15
Workbench 8.0.15 will draw the polygon with the Spatial View Option

So how do you get there?

Start Workbench, create a new SQL Tab in your favorite scratch schema, and create the table below.

CREATE TABLE `test` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `geom` GEOMETRY NULL,
  PRIMARY KEY (`id`));

Next add some data.

INSERT INTO `test`
  (`geom`)
VALUES
  (st_geomfromtext
  ('polygon((0 0,0 3,3 0, 2 2,0 0),(1 1,1 2,2 1,2 2, 1 1))')
   );

Then run the query.

select geom from test;

However, the result will default to the Result Grid. Look off to the right-hand side of the results window to see a series of stacked icons; the default is the Result Grid. And that 'BLOB' is the result of the query. But that result is not exactly visually stunning.

Output from the query in the results grid view
The 'BLOB' is the result of the query.
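As an aside, if you just need readable output rather than a picture, wrapping the column in ST_AsText() shows the WKT form right in the Result Grid:

select id, st_astext(geom) from test;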

Click on the Form Editor icon. It is directly below the Result Grid icon.

Select the Form Editor Icon
And you should see the image from the top of this blog entry.

Bonus!


Now scroll down below the Form Editor icon and select Spatial View.

Spatial View of the Query




Validating a Login Form With React


For almost every form that you create, you will want some sort of validation. In React, working with and validating forms can be a bit verbose, so in this article we are going to use a package called Formik to help us out!

TLDR

  • Create a React project
  • Add the Formik (and Yup) packages
  • Customize the Formik component with an onSubmit callback and a validate function for error messages
  • then display those error messages to the user

View the final code on CodeSandbox!

Here's a sneak peek at what we are going to create.

https://codesandbox.io/s/4203r4582w

Creating the React Project

For this demo, I'll be using CodeSandbox. You can use CodeSandbox as well or use your local environment. Totally up to you.

Regardless of what you use for this demo, you need to start with a new React app using Create React App. In CodeSandbox, I'm going to choose to do just that.

Installing Necessary Packages

Now that we have our initial project created, we need to install three packages.

  • Formik - makes handling validation, error messages, and form submission easier
  • Email-validator - tiny package to validate emails (I hope this one is self-explanatory : )
  • Yup - schema validator that is commonly used in conjunction with Formik

Formik

In your terminal, you'll need to install Formik.

npm install formik

I'll do the same in the CodeSandbox dependency GUI.

Email-Validator

Now install email-validator.

npm install email-validator

Again installing from the CodeSandbox GUI.

Yup

npm install yup

And again in CodeSandbox.

Creating the Validated Form Component

Now, we can start to stub out our ValidatedLoginForm component. For now, we just want to create the basics and import it into the root file in the app to see it get displayed.

  • Create new functional component
  • Add dummy display content
  • Import in index.js

So, create a new file in your src directory called ValidatedLoginForm.js. Inside of that file, add the basic code for a functional component.

import React from "react";
const ValidatedLoginForm = () => (
  <div>
    <h1>Validated Form Component</h1>
  </div>
);

export default ValidatedLoginForm;

Then, include it in your index.js file.

import React from "react";
import ReactDOM from "react-dom";
import ValidatedLoginForm from "./ValidatedLoginForm";

function App() {
  return (
    <div className="App">
      <ValidatedLoginForm />
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById("root"));

and you should see it displayed.

Now, let's start with the Formik stuff. First, import Formik, Email-Validator, and Yup in your new component.

import { Formik } from "formik";
import * as EmailValidator from "email-validator";
import * as Yup from "yup";

Now, let's stub out the Formik tag with initial values. Think of initial values as setting your state initially.

You'll also need an onSubmit callback. This callback will take two parameters: values and an object that we can destructure. The values parameter represents the input values from your form. I'm adding some dummy code here to simulate an async login call, then logging out what the values are.

In the callback, I'm also calling the setSubmitting function that was destructured from the second parameter. This will allow us to enable/disable the submit button while the asynchronous login call is happening.

<Formik
  initialValues={{ email: "", password: "" }}
  onSubmit={(values, { setSubmitting }) => {
    setTimeout(() => {
      console.log("Logging in", values);
      setSubmitting(false);
    }, 500);
  }}
>
  <h1>Validated Login Form</h1>
</Formik>

Render Props

The Formik component uses render props to supply certain variables and functions to the form that we create. If you're not very familiar with render props, I would take a second to check out Render Props Explained.

In short, render props are used to pass properties to children elements of a component. In this case, Formik will pass properties to our form code, which is the child. Notice that I'm using destructuring to get a reference to several specific variables and functions.

    { props => {
      const {
        values,
        touched,
        errors,
        isSubmitting,
        handleChange,
        handleBlur,
        handleSubmit
      } = props;
      return (
        <div>
          <h1>Validated Login Form</h1>
        </div>
      );
    }}

Display the Form

Now, we can actually start to write the code to display the form. For what it's worth, in the finished CodeSandbox, I also created a LoginForm.js component to show how basic login forms are handled from scratch. You can also use that as a reference for the form we are going to add now.

The form is pretty simple with two inputs (email and password), labels for each, and a submit button.

{ props => {
      const {
        values,
        touched,
        errors,
        isSubmitting,
        handleChange,
        handleBlur,
        handleSubmit
      } = props;
      return (
        <form onSubmit={handleSubmit}>
          <label htmlFor="email">Email</label>
          <input name="email" type="text" placeholder="Enter your email" />

          <label htmlFor="email">Password</label>
          <input
            name="password"
            type="password"
            placeholder="Enter your password"
          />
          <button type="submit" >
            Login
          </button>
        </form>
      );
    }}

Notice that the onSubmit is calling the handleSubmit from the props.

I mentioned earlier that we could disable our submit button while the user is already attempting to login. We can add that small change now by using the isSubmitting property that we destructured from props above.

  <button type="submit" disabled={isSubmitting}>
      Login
  </button>

I would recommend adding the CSS from the finished CodeSandbox as well. Otherwise you won't get the full effect. You can copy the below css into your styles.css file.

.App {
  font-family: sans-serif;
}

h1 {
  text-align: center;
}

form {
  max-width: 500px;
  width: 100%;
  margin: 0 auto;
}

label,
input {
  display: block;
  width: 100%;
}

label {
  margin-bottom: 5px;
  height: 22px;
}

input {
  margin-bottom: 20px;
  padding: 10px;
  border-radius: 3px;
  border: 1px solid #777;
}

input.error {
  border-color: red;
}

.input-feedback {
  color: rgb(235, 54, 54);
  margin-top: -15px;
  font-size: 14px;
  margin-bottom: 20px;
}

button {
  padding: 10px 15px;
  background-color: rgb(70, 153, 179);
  color: white;
  border: 1px solid rgb(70, 153, 179);
  transition: background-color 250ms;
}

button:hover {
  cursor: pointer;
  background-color: white;
  color: rgb(70, 153, 179);
}

Adding Validation Messages Logic

Now we need to figure out how to validate our inputs. The first question is, what constraints do we want to have on our input. Let's start with email. Email input should...

  • Be required
  • Look like a real email

Password input should...

  • Be required
  • Be at least 8 characters long
  • Contain at least one number

We'll cover two ways to create these messages, one using Yup and one doing it yourself. We recommend using Yup and you'll see why shortly.

Doing it Yourself

The first option is creating our validate function. The purpose of the function is to iterate through the values of our form, validate these values in whatever way we see fit, and return an errors object that has key value pairs of value->message.

Inside of the Formik tag, you can add the following code. This will always add an "Invalid email" error for email. We will start with this and go from there.

    validate={values => {
      let errors = {};
      errors.email = "Invalid email";
      return errors;
    }}

Now, we can ensure that the user has input something for the email.

validate={values => {
  let errors = {};
  if (!values.email) {
    errors.email = "Required";
  }
  return errors;
}}

Then, we can check that the email is actually a valid looking email by using the email-validator package. This builds directly on the required check we just wrote.

  validate={values => {
      let errors = {};
      if (!values.email) {
        errors.email = "Required";
      } else if (!EmailValidator.validate(values.email)) {
        errors.email = "Invalid email address";
      }
      return errors;
    }}

That takes care of email, so now for password. We can first check that the user input something.

validate={values => {
  let errors = {};
  if (!values.password) {
    errors.password = "Required";
  }
  return errors;
}}

Now we need to check the length to be at least 8 characters.

validate={values => {
  let errors = {};
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  }
  return errors;
}}

And lastly, that the password contains at least one number. For this, we can use regex.

validate={values => {
  let errors = {};

  const passwordRegex = /(?=.*[0-9])/;
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  } else if (!passwordRegex.test(values.password)) {
    errors.password = "Invalid password. Must contain one number.";
  }

  return errors;
}}

Here's the whole thing.

validate={values => {
  let errors = {};
  if (!values.email) {
    errors.email = "Required";
  } else if (!EmailValidator.validate(values.email)) {
    errors.email = "Invalid email address";
  }

  const passwordRegex = /(?=.*[0-9])/;
  if (!values.password) {
    errors.password = "Required";
  } else if (values.password.length < 8) {
    errors.password = "Password must be 8 characters long.";
  } else if (!passwordRegex.test(values.password)) {
    errors.password = "Invalid password. Must contain one number.";
  }

  return errors;
}}

Using Yup (Recommended)

Ok, you might have noticed that handling the validate logic on our own gets a bit verbose. We have to manually do all of the checks ourselves. It wasn't that bad, I guess, but with the Yup package it gets much easier!

Yup is the recommended way to handle validation messages.

Yup makes input validation a breeze!

When using Yup, we no longer use the validate prop, but instead validationSchema. Let's start with email. Here is the equivalent validation using Yup.

validationSchema={Yup.object().shape({
      email: Yup.string()
        .email()
        .required("Required")
    })}

Much shorter right?! Now, for password.

validationSchema={Yup.object().shape({
  email: Yup.string()
    .email()
    .required("Required"),
  password: Yup.string()
    .required("No password provided.")
    .min(8, "Password is too short - should be 8 chars minimum.")
    .matches(/(?=.*[0-9])/, "Password must contain a number.")
})}

Pretty SWEET!

Displaying Validation/Error Messages

Now that we have the logic for creating error messages, we need to display them. We will need to update the inputs in our form a bit.

We need to update several properties for both email and password inputs.

  • value
  • onChange
  • onBlur
  • className

Email

Let's start by updating value, onChange, and onBlur. Each of these will use properties from the render props.

<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
/>

Then we can add a conditional "error" class if there are any errors. We can check for errors by looking at the errors object (remember how we calculated that object ourselves way back when).

We can also check the touched property, to see whether or not the user has interacted with the email input before showing an error message.

<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.email && touched.email && "error"}
/>

And lastly, if there are errors, we will display them to the user. All in all, email will look like this.

<label htmlFor="email">Email</label>
<input
  name="email"
  type="text"
  placeholder="Enter your email"
  value={values.email}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.email && touched.email && "error"}
/>
{errors.email && touched.email && (
  <div className="input-feedback">{errors.email}</div>
)}

Password

Now we need to do the same with password. I won't walk through each step because they are exactly the same as for email. Here's the final code.

<label htmlFor="email">Password</label>
<input
  name="password"
  type="password"
  placeholder="Enter your password"
  value={values.password}
  onChange={handleChange}
  onBlur={handleBlur}
  className={errors.password && touched.password && "error"}
/>
{errors.password && touched.password && (
  <div className="input-feedback">{errors.password}</div>
)}

Test it Out

Let's try it out! You can start by clicking the button without entering anything. You should see validation messages.

Now, we can get more specific for testing messages. Refresh your page to do this. Click inside of the email input, but don't type anything.

Then, click away from the input. You should see the "Required" message pop up. Notice that this message doesn't pop up automatically when the page loads. We only want to display error messages after the user has interacted with the input.

Now, start to type. You should get a message about not being a valid email.

And lastly, type in a valid looking email, and your error message goes away.

Now, same for password. Click on the input, then away, and you'll get the required message.

Then, start typing and you'll see the length validation.

Then, type 8 or more characters that do not include a number, and you'll see the "must contain a number" message.

And lastly, add a number, and error messages go away.

Conclusion

Whew, that was a long one! Again, validation can be a tricky thing, but with the help of a few packages, it becomes a bit easier. At the end of the day though, I think we've got a pretty legit login form!

Terraform on OCI – Provisioning MySQL for InnoDB Cluster Setups

In my prior blog post on Terraform, I demonstrated building the dependent infrastructure that MySQL implementations need.  Building MySQL isn’t much different, but does identify a need for a Webserver to provide configuration files for Terraform to execute on as was done in my prior MySQL on OCI post, and a Yum Repo Webserver to… Read More »

Fun with Bugs #82 - On MySQL Bug Reports I am Subscribed to, Part XVIII

I've got a few comments to my post on references to MariaDB in MySQL bug reports (not in the blog, but via social media and in personal messages), and all but one comment, from current and former colleagues whose opinion I value a lot, confirmed that this really looks like a kind of attempt to advertise MariaDB. So, from now on I'll try to keep my findings on how tests shared by MySQL bug reporters work in MariaDB to myself, MariaDB JIRA, and this blog (where I can and will advertise whatever makes sense to me), and avoid adding them to MySQL bug reports.

That said, I still think that it's normal to share links to MariaDB bug reports that add something useful (like patches, explanations or better test cases), and I keep insisting that this kind of feedback should not be hidden. Yes, I want to mention Bug #94610 (and related MDEV-15641) again, as a clear example of censorship that is not reasonable and should not be tolerated.

In the meantime, since my previous post in this series I've subscribed to 30 or so new MySQL bug reports. Some of them are listed below, starting from the oldest. This time I am not going to exclude "inactive" reports that were not accepted by Oracle MySQL engineers as valid:
  • Bug #94629 - "no variable can skip a single channel error in mysql replication". This is a request to add support for per-channel options to skip N transactions or specific errors. It is not accepted ("Not a Bug") just because one can stop replication on all channels and start on one to skip transaction(s) there, then resume replication for all channels (see the sketch right after this list). Do you really think this is the right and only way to process such a report?
  • Bug #94647 - "Memory leak in MEMORY table by glibc". This is also not a bug, because one can use something like malloc-lib=jemalloc with mysqld_safe or Environment="LD_PRELOAD=/path/to/jemalloc" with systemd services. There might be some cost related to that in older versions... Note that the similar MDEV-14050 is still open.
  • Bug #94655 - "Some GIS function do not use spatial index anymore". Yet another regression vs MySQL 5.7, reported by Cedric Tabin. It ended up verified as a feature request without a regression tag...
  • Bug #94664 - "Binlog related deadlock leads to all incoming connection choked.". This report from Yanmin Qiao ended up as a duplicate of  Bug #92108 - "Deadlock by concurrent show binlogs, pfs session_variables table & binlog purge" (fixed in MySQL 5.7.25+, thanks Sveta Smirnova for the hint). See also Bug #91941.
  • Bug #94665 - "enabling undo-tablespace encryption doesn't mark tablespace encryption flag". Nice finding by Krunal Bauskar from Percona.
  • Bug #94699 - "Mysql deadlock and bugcheck on aarch64 under stress test". Bug report with a patch contributed by Cai Yibo. The fix is included in upcoming MySQL 8.0.17 and the bug is already closed.
  • Bug #94709 - "Regression behavior for full text index". This regression was reported by Carlos Tutte and properly verified (with regression tag added and all versions checked) by Umesh Shastry. See also detailed analysis of possible reason in the comment from Nikolai Ikhalainen.
  • Bug #94723 - "Incorrect simple query result with func result and FROM table column in where". Michal Vrabel found this interesting case where MySQL 8.0.15 returns wrong results. I've checked the test case on MariaDB 10.3.7 and it is not affected. Feel free to consider this check and statement my lame attempt to advertise MariaDB. I don't mind.
  • Bug #94730 - "Kill slave may cause start slave to report an error.". This bug was declared a duplicate of a nice Bug #93397 - "Replication does not start if restart MySQL after init without start slave." reported by Jean-François Gagné earlier. Both bugs were reported for MySQL 5.7.x, but I do not see any public attempt to verify whether MySQL 5.6 or 8.0 is also affected. In the past it was required to check/verify a bug on all supported GA versions if the test case applied. Nowadays this approach is not followed way too often, even when the bug reporter cared enough to provide an MTR test case.
  • Bug #94737 - "MySQL uses composite hash index when not possible and returns wrong result". Yet another optimizer bug was reported by Simon Banaan. Again, MariaDB 10.3.7 is NOT affected. I can freely and happily state this here if it's inappropriate to state so in the bug report itself. By the way, other MySQL versions were probably not checked. Also, unlike the Oracle engineer who verified the bug, I do not hesitate to copy/paste the entire results of my testing here:
    MariaDB [test]> show create table tmp_projectdays_4\G
    *************************** 1. row ***************************
           Table: tmp_projectdays_4
    Create Table: CREATE TABLE `tmp_projectdays_4` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `project` int(11) NOT NULL,
      `datum` date NOT NULL,
      `voorkomen` tinyint(1) NOT NULL DEFAULT 1,
      `tijden` tinyint(1) NOT NULL DEFAULT 0,
      `personeel` tinyint(1) NOT NULL DEFAULT 0,
      `transport` tinyint(1) NOT NULL DEFAULT 0,
      `materiaal` tinyint(1) NOT NULL DEFAULT 0,
      `materiaaluit` tinyint(1) NOT NULL DEFAULT 0,
      `materiaalin` tinyint(1) NOT NULL DEFAULT 0,
      `voertuigen` varchar(1024) DEFAULT '',
      `medewerkers` varchar(1024) DEFAULT '',
      `personeel_nodig` int(11) DEFAULT 0,
      `personeel_gepland` int(11) DEFAULT 0,
      `voertuigen_nodig` int(11) DEFAULT 0,
      `voertuigen_gepland` int(11) DEFAULT 0,
      `created` datetime DEFAULT NULL,
      `modified` datetime DEFAULT NULL,
      `creator` int(11) DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `project` (`project`,`datum`) USING HASH
    ) ENGINE=MEMORY AUTO_INCREMENT=2545 DEFAULT CHARSET=utf8mb4
    1 row in set (0.001 sec)

    MariaDB [test]> explain SELECT COUNT(1) FROM `tmp_projectdays_4` WHERE `project`
     IN(15409,15911,15929,15936,16004,16005,16007,16029,16031,16052,16054,16040,1248
    5,15892,16035,16060,16066,16093,16057,16027,15988,15440,15996,11457,15232,15704,
    12512,12508,14896,15594,16039,14997,16058,14436,16006,15761,15536,16016,16019,11
    237,13332,16037,14015,15537,15369,15756,12038,14327,13673,11393,14377,15983,1251
    4,12511,13585,12732,14139,14141,12503,15727,15531,15746,15773,15207,13675,15676,
    15663,10412,13677,15528,15530,10032,15535,15693,15532,15533,15534,15529,16056,16
    064,16070,15994,15918,16045,16073,16074,16077,16069,16022,16081,15862,16048,1606
    2,15610,15421,16001,15896,15004,15881,15882,15883,15884,15886,16065,15814,16076,
    16085,16174,15463,15873,15874,15880,15636,16092,15909,16078,15923,16026,16047,16
    094,16111,15914,15919,16041,16063,16068,15971,16080,15961,16038,16096,16127,1564
    1,13295,16146,15762,15811,15937,16150,16152,14438,16086,16156,15593,16147,15910,
    16106,16107,16161,16132,16095,16137,16072,16097,16110,16114,16162,16166,16175,16
    176,16178,15473,16160,15958,16036,16042,16115,16165,16167,16170,16177,16185,1582
    3,16190,16169,15989,16194,16116,16131,16157,16192,16197,16203,16193,16050,16180,
    16209,15522,16148,16205,16201,15990,16158,16216,16033,15974,16112,16133,16181,16
    188,16189,16212,16238,16241,16183,15640,15638,16087,16088,16129,16186,16164,1610
    8,15985,16244,15991,15763,16049,15999,16104,16208,13976,16122,15924,16046,16242,
    16151,16117,16187);

    +------+-------------+-------------------+------+---------------+------+---------+------+------+-------------+
    | id   | select_type | table             | type | possible_keys | key  | key_len | ref  | rows | Extra       |
    +------+-------------+-------------------+------+---------------+------+---------+------+------+-------------+
    |    1 | SIMPLE      | tmp_projectdays_4 | ALL  | project       | NULL | NULL    | NULL | 2544 | Using where |
    +------+-------------+-------------------+------+---------------+------+---------+------+------+-------------+
    1 row in set (0.004 sec)

    MariaDB [test]> SELECT COUNT(1) FROM `tmp_projectdays_4` WHERE `project` IN(1540
    9,15911,15929,15936,16004,16005,16007,16029,16031,16052,16054,16040,12485,15892,
    16035,16060,16066,16093,16057,16027,15988,15440,15996,11457,15232,15704,12512,12
    508,14896,15594,16039,14997,16058,14436,16006,15761,15536,16016,16019,11237,1333
    2,16037,14015,15537,15369,15756,12038,14327,13673,11393,14377,15983,12514,12511,
    13585,12732,14139,14141,12503,15727,15531,15746,15773,15207,13675,15676,15663,10
    412,13677,15528,15530,10032,15535,15693,15532,15533,15534,15529,16056,16064,1607
    0,15994,15918,16045,16073,16074,16077,16069,16022,16081,15862,16048,16062,15610,
    15421,16001,15896,15004,15881,15882,15883,15884,15886,16065,15814,16076,16085,16
    174,15463,15873,15874,15880,15636,16092,15909,16078,15923,16026,16047,16094,1611
    1,15914,15919,16041,16063,16068,15971,16080,15961,16038,16096,16127,15641,13295,
    16146,15762,15811,15937,16150,16152,14438,16086,16156,15593,16147,15910,16106,16
    107,16161,16132,16095,16137,16072,16097,16110,16114,16162,16166,16175,16176,1617
    8,15473,16160,15958,16036,16042,16115,16165,16167,16170,16177,16185,15823,16190,
    16169,15989,16194,16116,16131,16157,16192,16197,16203,16193,16050,16180,16209,15
    522,16148,16205,16201,15990,16158,16216,16033,15974,16112,16133,16181,16188,1618
    9,16212,16238,16241,16183,15640,15638,16087,16088,16129,16186,16164,16108,15985,
    16244,15991,15763,16049,15999,16104,16208,13976,16122,15924,16046,16242,16151,16
    117,16187);

    +----------+
    | COUNT(1) |
    +----------+
    |     2544 |
    +----------+
    1 row in set (0.025 sec)

    MariaDB [test]> select version();
    +--------------------+
    | version()          |
    +--------------------+
    | 10.3.7-MariaDB-log |
    +--------------------+
    1 row in set (0.021 sec)
    When the job was done properly, I see no reason NOT to share the results.
  • Bug #94747 - "4GB Limit on large_pages shared memory set-up". My former colleague Nikolai Ikhalainen from Percona noted this nice undocumented "feature". (Had I forgotten to advertise Percona recently? Sorry about that...) He proved with a C program that one can create shared memory segments on Linux larger than 4GB; one just has to use the proper data type, unsigned long integer, in MySQL's code. Still, this report ended up as a non-critical bug in the "MySQL Server: Documentation" category, or maybe even a feature request internally. What a shame!
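For reference, here is the per-channel skip dance mentioned for Bug #94629 above. This is a rough sketch only: the channel name is hypothetical, and it applies to position-based replication (GTID mode rejects sql_slave_skip_counter):

STOP SLAVE;                               -- stop the SQL threads on all channels
SET GLOBAL sql_slave_skip_counter = 1;    -- the counter is global, hence the dance
START SLAVE FOR CHANNEL 'channel_1';      -- the next channel started consumes it
START SLAVE;                              -- resume the remaining channels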
Spring in Paris is nice, as this photo taken 3 years ago proves. The way MySQL bug reports are handled this spring is not nearly as nice in some cases.

To summarize:

  1. It seems that recently the fact that some limited workaround is already published somewhere is a good enough reason NOT to accept a valid feature request. Noted.
  2. Regression bugs (reports about a drop in performance, or a problem that did not happen with an older version but happens with a recent one) are still sometimes not marked with the regression tag. Moreover, clear performance regressions in MySQL 8.0.x vs MySQL 5.7.x may end up as just feature requests... A request to "Make MySQL Great Again", maybe?
  3. MySQL engineers who verify bugs often do not care to check all major versions and/or share the results of their tests. This is unfortunate.
  4. Some bugs are not classified properly upon verification. The fact that a wrong data type is used is anything but a severity 3 documentation problem, really.