
SQL TOP, LIMIT Or ROWNUM Clause Example Tutorial


SQL TOP, LIMIT Or ROWNUM Clause Example Tutorial is today’s topic. The SQL SELECT TOP clause is used to specify the number of records to be returned. It is useful on large tables with thousands of records, because returning a very large number of records can impact performance. The TOP, LIMIT, and ROWNUM clauses provide almost the same functionality.

SQL TOP, LIMIT Or ROWNUM Clause

In some situations, you may not be interested in all of the rows returned by a query: for example, you may want to retrieve the 10 employees who joined the organization most recently, or the top 3 students by score, or something like that.

To handle such situations, you can use the SQL TOP clause in the SELECT statement. However, the TOP clause is only supported by the SQL Server and MS Access database systems.

Not all database systems support the SELECT TOP clause:

MySQL supports the LIMIT clause to select a limited number of records.

Oracle uses ROWNUM.

#SYNTAX: (For SQL SERVER)

SELECT TOP number|percent column_name(s)
FROM table_name
WHERE condition;

#Parameters

  1. TOP number: The number of records to be retrieved.
  2. TOP percent: Percentage of records to be retrieved.
  3. Table_name: Name of the table.
  4. Condition: Condition to be imposed on the Select statement.

#Syntax: (For Oracle)

SELECT column_name(s)
FROM table_name
WHERE ROWNUM <= number;
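
Note that Oracle assigns ROWNUM before ORDER BY is applied, so filtering on ROWNUM directly does not give the "top N by some column". A common workaround (a sketch against the CUSTOMERS table used below) is to order in a subquery first:

-- Order first in a subquery, then take the first N row numbers
SELECT *
FROM (SELECT * FROM Customers ORDER BY SALARY DESC)
WHERE ROWNUM <= 3;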

See the syntax for MySQL databases.

#Syntax: (For MySQL databases)

SELECT column_name(s)
FROM table_name
WHERE condition
LIMIT number;

Let’s understand all these with examples.

Consider table: CUSTOMERS

ID NAME AGE ADDRESS SALARY 
1 Tom 21 Kolkata 500
2 Karan 22 Allahabad 600
3 Hardik 23 Dhanbad 700
4 Komal 24 Mumbai 800

 

QUERY: (For SQL SERVER)

Select Top 3 * From Customers;

Output:

ID NAME AGE ADDRESS SALARY 
1 Tom 21 Kolkata 500
2 Karan 22 Allahabad 600
3 Hardik 23 Dhanbad 700

 

So here the first three records are displayed, as we have used the TOP number clause.

#SQL TOP PERCENT Example

The following SQL statement selects the first 50% of the records from the “Customers” table.

SELECT TOP 50 PERCENT * FROM Customers;
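
Given the four-row CUSTOMERS table above, 50 percent corresponds to the first two rows:

ID NAME AGE ADDRESS SALARY
1 Tom 21 Kolkata 500
2 Karan 22 Allahabad 600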

#QUERY: (For MySQL databases)

Select * from Customers where ID >= 1 LIMIT 3;

#Output:

ID NAME AGE ADDRESS SALARY 
1 Tom 21 Kolkata 500
2 Karan 22 Allahabad 600
3 Hardik 23 Dhanbad 700

So here, the records whose ID is greater than or equal to 1 are displayed, limited to 3 rows.

#QUERY: (For Oracle)

SELECT * FROM Customers WHERE ROWNUM <= 3;

#Output:

ID NAME AGE ADDRESS SALARY 
1 Tom 21 Kolkata 500
2 Karan 22 Allahabad 600
3 Hardik 23 Dhanbad 700

 

#Using LIMIT along with OFFSET

LIMIT a OFFSET b means skip the first b entries and then return the next a entries.

OFFSET cannot be used on its own; it is a modifier of the LIMIT clause. (In some databases, such as SQL Server, it additionally requires an ORDER BY clause.) An OFFSET value must be greater than or equal to zero; a negative value returns an error.

See the following syntax.

SELECT expressions
FROM tables
[WHERE conditions]
[ORDER BY expression [ ASC | DESC ]]
LIMIT number_rows [ OFFSET offset_value ];

In the above query, we are using the SELECT, WHERE, ORDER BY, and LIMIT Clause.

#Parameters or Arguments

#expressions

The columns that you wish to retrieve.
#tables
The tables that you wish to retrieve the records from. There must be at least one table listed in a FROM clause.
#WHERE conditions
Optional. The conditions that must be met for the records to be returned.
#ORDER BY expression
Optional. It is used in a SELECT LIMIT statement so that you can order the results and target those that you wish to return. ASC means ascending order and DESC means descending order.
#LIMIT number_rows
It specifies the limited number of rows in the result set to be returned based on the number_rows. Let’s say, LIMIT 11 would return the first 11 rows matching the SELECT criteria. This is where the sorting order matters, so you need to be sure to use the ORDER BY clause appropriately.
#OFFSET offset_value
Optional. The first row returned by LIMIT will be determined by offset_value, as shown in the example below.
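
As a concrete illustration against the CUSTOMERS table above, the following query skips the first row and returns the next two, ordered by ID:

SELECT *
FROM Customers
ORDER BY ID
LIMIT 2 OFFSET 1;

This would return the rows for Karan (ID 2) and Hardik (ID 3).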

#Using LIMIT ALL

In databases that support it (PostgreSQL, for example), the LIMIT ALL clause implies no limit. See the following syntax.

SELECT *
FROM Student
LIMIT ALL;

The above query returns all the entries in the table.
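Note that MySQL itself does not accept LIMIT ALL; a common equivalent there (suggested by the MySQL manual for similar cases) is to pass a very large row count, the maximum unsigned 64-bit integer:

-- MySQL equivalent of "no limit"
SELECT *
FROM Student
LIMIT 18446744073709551615;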

Finally, SQL TOP, LIMIT Or ROWNUM Clause Example Tutorial is over.

The post SQL TOP, LIMIT Or ROWNUM Clause Example Tutorial appeared first on AppDividend.


Assessing MySQL Performance Amongst AWS Options – Part Two

Compare Amazon RDS to Percona Server

See part one of this series here

This post is part two of my series “Assessing MySQL Performance Amongst AWS Options”, taking a look at how current Amazon RDS services – Amazon Aurora and Amazon RDS for MySQL – compare with Percona Server with InnoDB and RocksDB engines on EC2 instances. This time around, I am reviewing the total cost of one test run for each database as well as seeing which databases are the most efficient.

First, a quick recap of the evaluation scenario:

The benchmark scripts

For these evaluations, we use the sysbench/tpcc LUA test with a scale factor of 500 warehouses/10 tables. This is the equivalent of 5000 warehouses of the official TPC-C benchmark.

Amazon MySQL Environments

These are the AWS MySQL environments under analysis:

  • Amazon RDS Aurora
  • Amazon RDS for MySQL with the InnoDB storage engine
  • Percona Server for MySQL with the InnoDB storage engine on Amazon EC2
  • Percona Server for MySQL with the RocksDB storage engine on Amazon EC2

Technical Setup – Server

These general notes apply across the board:

  • AWS region us-east-1(N.Virginia) was used for all tests
  • Server and client instances were spawned in the same availability zone
  • All data for tests were prepared in advance, stored as snapshots, and restored before the test
  • Encryption was not used

And we believe that these configuration notes allow for a fair comparison of the different technologies:

  • AWS EBS optimization was enabled for EC2 instances
  • For RDS/Amazon Aurora only a primary DB instance was created and used
  • In the case of RDS/MySQL, a single AZ deployment was used
  • EC2/Percona Server for MySQL tests were run with binary log enabled

Finally, here are the individual server configurations per environment:

Server test #1: Amazon RDS Aurora

  • Database server: Aurora MySQL 5.7
  • DB instances: r5.large, r5.xlarge, r5.2xlarge, r5.4xlarge
  • volume: used ~450GB(>15000 IOPS)

Server test #2: Amazon RDS for MySQL with InnoDB Storage Engine

  • Database server: MySQL Server 5.7.25
  • RDS instances: db.m5.large, db.m5.xlarge, db.m5.2xlarge, db.m5.4xlarge
  • volumes(allocated space):
    • gp2: 5400GB(~16000 IOPs)
    • io1: 700GB(15000 IOPs)

Server test #3: Percona Server for MySQL with InnoDB Storage Engine

  • Database server: Percona Server 5.7.25
  • EC2 instances: m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge
  • volumes(allocated space):
    • gp2: 5400GB(~16000 IOPs)
    • io1: 700GB(15000 IOPs)

Server test #4: Percona Server for MySQL with RocksDB using LZ4 compression

  • Database server: Percona Server 5.7.25
  • EC2 instances: m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge
  • volumes(allocated space):
    • gp2: 5400GB(~16000 IOPs)
    • io1: 350GB(15000 IOPs)

Technical Setup – Client

Common to all tests, we used an EC2 instance: m5.xlarge. And now that we have established the setup, let’s take a look at what we found.

Costs

Now we are getting down to the $’s! First, let’s review the total cost of one test run for each database:

Sorting the costs of one test run in order from cheapest to most expensive we see this order emerge:

  1. EC2/gp2 carrying server tests #3 or #4 featuring Percona Server for MySQL [represents the LEAST cost in $’s]
  2. RDS/gp2 carrying server test #2, RDS/MySQL
  3. EC2/io1 carrying server tests #3 or #4
  4. RDS/io1 carrying server test #2, RDS/MySQL
  5. RDS/Aurora, server test #1  [GREATEST COST IN $’s]

How does that translate to $’s? Let’s find out what the structure of these costs looks like for every database. Before we study that, though, there are some things to bear in mind:

  • Our calculations include only server-side costs
  • Per instance, the price we used as a baseline was RESERVED INSTANCE STANDARD 1-YEAR TERM
  • For RDS/Amazon Aurora the values for volume size and amount of I/O requests represent real data obtained from CloudWatch metrics (VolumeBytesUsed for used volume space and VolumeReadIOPs+VolumeWriteIOPs for IOPs used) after the test run
  • In the case of Percona Server/RocksDB due to LZ4 compression, the database on disk is 5x smaller, so we used a half-sized io1 volume – 350GB vs 700GB for either Percona Server with InnoDB or RDS/MySQL. This still complies with the requirement for io1 volumes to deliver 50 IOPS per GB.
  • The duration set for the test run is 30 mins

Our total cost formulas

These are the formulas we used in calculating these costs:

  • EC2/gp2, EC2/io1, RDS/gp2, RDS/io1
    • total cost = server instance size cost + allocated volume size cost + requested amount of IOPS cost
  • RDS/Amazon Aurora
    • total cost = server instance size cost + allocated volume size cost + actually used amount of I/O cost

The results

Here are our calculations in chart form:

One interesting observation here is that, as you can see from the cost structure chart, the most significant part of the costs is IO provisioning: either the requested amount of IOPS (EC2/io1 or RDS/io1) or the actually used amount of I/O (RDS/Aurora). In the former case, the cost is a function of time, and in the latter case, costs depend only on the amount of I/O requests actually issued.

Let’s check how these costs might look if we provision EC2/io1 and RDS/io1 volumes and RDS/Aurora storage for one month. From the cost structure, it’s clear that in the case of RDS/Aurora the 4xlarge DB instance performed 51M I/O requests in half an hour. So we effectively got 51,000,000 (I/O requests) / 1,800 (seconds) ≈ 28,000 IOPS.

EC2/io1:    28000 IOPS * $0.065 per IOPS-month                         =  $1,820 per month
RDS/io1:    28000 IOPS * $0.10 per IOPS-month                          =  $2,800 per month
RDS/Aurora: 102M I/O per hour * $0.20 per 1M I/O * 24 hours * 30 days  = $14,688 per month
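
As a quick sanity check, these monthly figures can be reproduced with plain SQL arithmetic (a sketch only; the constants are the per-IOPS-month and per-1M-I/O rates used above, not an official pricing calculator):

SELECT 28000 * 0.065        AS ec2_io1_usd_month,    -- =  1820
       28000 * 0.10         AS rds_io1_usd_month,    -- =  2800
       102 * 0.20 * 24 * 30 AS aurora_io_usd_month;  -- = 14688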

In this way, provisioning 28,000 IOPS costs 8x less on EC2/io1 and 5x less on RDS/io1 than paying for the same I/O on RDS/Aurora. That means that to be cost-efficient, the throughput of RDS/Aurora should be at least 5x or even 8x better than that of EC2 or RDS with an io1 volume.

Conclusion: the IO provisioning factor should be taken into account when planning deployments with io1 volumes or RDS/Aurora.

Efficiency

Now it’s time to review which databases perform the most efficiently by analyzing their transaction/cost ratio:

Below you can find the minimum and maximum prices for 1000 transactions for each of the database servers in our tests, again running from cheapest to most expensive in $ terms:

Server                                      Min $’s per 1000 TX   Server Config   Max $’s per 1000 TX   Server Config
Server test #4 EC2#Percona Server/RocksDB   0.42                  4xlarge/io1     1.93                  large/io1
Server test #3 EC2#Percona Server/InnoDB    1.66                  4xlarge/gp2     12.11                 large/io1
Server test #2 RDS#MySQL/InnoDB             2.23                  4xlarge/gp2     22.3                  large/io1
Server test #1 RDS#Amazon Aurora            8.29                  4xlarge         13.31                 xlarge

Some concluding thoughts

  • EC2#Percona Server/RocksDB offers the lowest price per 1000 transactions – $0.42 on m5.4xlarge instance with 350GB io1 volume/15000 IOPs
  • RDS/MySQL looked to be the most expensive in this evaluation – $22.3 for 1000 transactions – db.m5.large with 700GB io1 volume/15000 IOPs
  • Lowest price for each database was obtained on 4xlarge instances, most expensive on large instances.
  • IO provisioning is a key factor that impacts run costs
  • For both EC2 and RDS, gp2/5400GB (~16000 IOPS) is the cost-wise choice
  • RDS/Aurora – the lowest price per 1000 transactions is $8.29, but that is 4x more expensive than the best price per 1000 transactions for RDS/MySQL, 5x more expensive than for EC2#Percona/InnoDB, and 20x more expensive than for EC2#Percona/RocksDB. That means that despite the fact that Amazon Aurora shows very good throughput (actually the best among InnoDB-like engines), it may not be as cost-effective as other options.

One Final Note

When estimating your expenses, you will need to keep in mind that each company is different in terms of what they offer, how they build and manage those offerings, and of course their pricing structure and cost per transaction. For AWS, you need to weigh the expense of building and managing yourself the things that AWS would otherwise handle for you, since those are built into AWS’s cost. We can see, however, that in these examples MyRocks is definitely a cost-effective solution when comparing direct costs.

Shinguz: Who else is using my memory - File System Cache analysis



When we do analysis of MariaDB Database servers we also check the memory (RAM and Swap) available:

# free --kilo --wide
              total        used        free      shared     buffers       cache   available
Mem:       16106252     4329952      703356      199008      307872    10765072    11042748
Swap:      31250428      528684    30721744

The values for buffers and especially for cache can sometimes be quite big. In this case they use about 10 GiB. So let us have a look at what these things called buffers and cache, which use our valuable RAM, actually are... When we check the man pages of free we find:

# man free
...
buffers    Memory used by kernel buffers (Buffers in /proc/meminfo)
cache      Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)
buff/cache Sum of buffers and cache

So let us check the more fine-grained information in /proc/meminfo, which is an interface to the kernel data structures:

# cat /proc/meminfo | grep -e ^Cached -e Slab -e Buffers
Buffers:          307872 kB
Cached:         10155156 kB
Slab:             609916 kB

Same values! Then let us have a look at the man pages of proc to see what we can find about these values:

# man proc
...
Buffers     Relatively temporary storage for raw disk blocks that shouldn't get tremendously large (20MB or so).
Cached      In-memory cache for files read from the disk (the page cache).  Doesn't include SwapCached.
Slab        In-kernel data structures cache.

So it looks like we have a raw I/O Cache (called Buffer Cache) and a File System I/O Cache (called Page Cache). So how does this work? What is a raw I/O? And is a File System I/O cached once (Cached) or twice (Cached and Buffers)?

When we dig a bit deeper we find that prior to Linux kernel 2.4 the two caches were distinct, which was a waste of memory (RAM). It seems that today this is no longer the case [1], [2], [3], and that the man pages are a bit out of date or at least not very precise.

Analysing the Linux Page Cache

A very good source when it comes to Linux Performance Tuning and Measuring is Brendan Gregg's Website. To measure Linux Page Cache Hit Ratio he provides a tool called cachestat which is part of the perf-tools collection on GitHub.

With cachestat we get a per second statistics of the Buffer Cache and the Page Cache (without Slabs), Cache Hits, Cache Misses, Dirty Buffer Entries in the Cache and a Cache Hit Ratio:

# sudo cachestat 
Counting cache functions... Output every 1 seconds.
    HITS   MISSES  DIRTIES    RATIO   BUFFERS_MB   CACHE_MB
    1419        8        0    99.4%          338       9406
    1368        0        0   100.0%          338       9406
    1391        0        0   100.0%          338       9406
    8558        0       29   100.0%          338       9406
   31870        0      163   100.0%          338       9406
    1374        0       24   100.0%          338       9406
    1388        0        0   100.0%          338       9406
    1370        0        0   100.0%          338       9406
    1388        0        0   100.0%          338       9406

Brendan Gregg also mentions a tool called pcstat (on GitHub) by Al Tobey which gets Page Cache statistics for files. Unfortunately I had some problems building it on my Ubuntu 16.04 with Go version 1.6. So I built it on an Ubuntu 18.04 (Go 1.10) and copied it over to the Ubuntu 16.04:

# export GOPATH=/tmp/
# cd $GOPATH
# go get golang.org/x/sys/unix
# go get github.com/tobert/pcstat/pcstat
# bin/pcstat $GOPATH/bin/pcstat 

Then I tried pcstat out against a MariaDB 10.4 instance. In the output we can see how big the files are in bytes, how many 4 KiB pages this corresponds to, how many of these pages are cached, and the percentage of pages cached:

# pcstat /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test*
+------------------------------------------------------+----------------+------------+-----------+---------+
| Name                                                 | Size (bytes)   | Pages      | Cached    | Percent |
|------------------------------------------------------+----------------+------------+-----------+---------|
| /home/mysql/database/mariadb-104/data/ib_buffer_pool | 14642          | 4          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibdata1        | 79691776       | 19456      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ib_logfile0    | 268435456      | 65536      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ib_logfile1    | 268435456      | 65536      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibtmp1         | 12582912       | 3072       | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.frm  | 1097           | 1          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.ibd  | 13631488       | 3328       | 0         | 000.000 |
+------------------------------------------------------+----------------+------------+-----------+---------+

When we run pcstat over time with the famous watch command we can even see how the Page Cache is heating up:

# watch -d -n 1 'pcstat /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test* ; free -w'
+------------------------------------------------------+----------------+------------+-----------+---------+
| Name                                                 | Size (bytes)   | Pages      | Cached    | Percent |
|------------------------------------------------------+----------------+------------+-----------+---------|
| /home/mysql/database/mariadb-104/data/ib_buffer_pool | 14642          | 4          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibdata1        | 79691776       | 19456      | 2416      | 012.418 |
| /home/mysql/database/mariadb-104/data/ib_logfile0    | 268435456      | 65536      | 3165      | 004.829 |
| /home/mysql/database/mariadb-104/data/ib_logfile1    | 268435456      | 65536      | 5890      | 008.987 |
| /home/mysql/database/mariadb-104/data/ibtmp1         | 12582912       | 3072       | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.frm  | 1097           | 1          | 1         | 100.000 |
| /home/mysql/database/mariadb-104/data/test/test.ibd  | 13631488       | 3328       | 1164      | 034.976 |
+------------------------------------------------------+----------------+------------+-----------+---------+
              total        used        free      shared     buffers       cache   available
Mem:       16106252     4329952      703356      199008      307872    10765072    11042748
Swap:      31250428      528684    30721744

Another tool discussed on Brendan Gregg's website is vmtouch - the Virtual Memory Toucher (on GitHub, Documentation). With vmtouch we can see, for example, how much of the directory /home/mysql/database/mariadb-104/data (datadir) is currently in cache:

# vmtouch -f /home/mysql/database/mariadb-104/data
           Files: 503
     Directories: 9
  Resident Pages: 29356/231060  114M/902M  12.7%
         Elapsed: 0.009668 seconds

Or, more fine-grained, how much of the InnoDB system files is currently in memory:

# vmtouch -f -v /home/mysql/database/mariadb-104/data/ib*
/home/mysql/database/mariadb-104/data/ib_buffer_pool   [    ] 0/4
/home/mysql/database/mariadb-104/data/ibdata1          [oOooooo      ooooo                                          ] 2416/19456
/home/mysql/database/mariadb-104/data/ib_logfile0      [o                                                        oOO] 3165/65536
/home/mysql/database/mariadb-104/data/ib_logfile1      [OOOOOOOOOOOOOOOOOOOOOo                                      ] 23192/65536
/home/mysql/database/mariadb-104/data/ibtmp1           [                                                            ] 0/3072

           Files: 5
     Directories: 0
  Resident Pages: 28773/153604  112M/600M  18.7%
         Elapsed: 0.005499 seconds

A further question to answer is: Can I see all files cached in the Page Cache? It seems this is not easily possible:

There is no efficient search mechanism for doing the reverse - getting a file name belonging to a data block would require reading all inodes and indirect blocks on the file system. If you need to know about every single file's blocks stored in the page cache, you would need to supply a list of all files on your file system(s) to fincore. But that again is likely to spoil the measurement as a large amount of data would be read traversing the directories and getting all inodes and indirect blocks - putting them into the page cache and evicting the very page cache data you were trying to examine. [5]

Also in this article we can read about the Linux File Tools (linux-ftools) by Google. They seem to be a bit more complicated to make work, so I let it be.

How is the Page Cache related to MariaDB

After all this technical O/S discussion, how is the Linux Page Cache related to your MariaDB Database? Your MariaDB Database caches data and indexes as well: for the InnoDB Storage Engine this is the InnoDB Buffer Pool, and for the Aria Storage Engine this is the Aria Page Cache Buffer. So if your MariaDB Database caches pages and your Linux O/S caches pages, the probability is high that they cache the same data twice and thus waste valuable RAM! Fortunately InnoDB can be configured so that it does NOT cache InnoDB files in the Page Cache. This is controlled with the InnoDB server variable innodb_flush_method.
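
A quick way to check what a running instance is using (innodb_flush_method is read-only at runtime; to change it, set it in the [mysqld] section of the configuration file and restart the server):

-- Shows the flush method currently in effect (default: fsync)
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';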

When we look at InnoDB Files which were opened in a "normal" way (default: innodb_flush_method = fsync) we get the following information about how the files were opened (man 2 open and [6]):

# lsof +fg ./ib*
COMMAND  PID  USER   FD   TYPE     FILE-FLAG DEVICE  SIZE/OFF    NODE NAME
mysqld  2098 mysql    7uW  REG RW,LG,0x80000    8,1  79691776 9175185 ./ibdata1
mysqld  2098 mysql   11uW  REG RW,LG,0x80000    8,1 268435456 9175186 ./ib_logfile0
mysqld  2098 mysql   12uW  REG RW,LG,0x80000    8,1 268435456 9175187 ./ib_logfile1
mysqld  2098 mysql   13uW  REG RW,LG,0x80000    8,1  12582912 9175280 ./ibtmp1

The interesting column here is the FILE-FLAG column which indicates (man lsof):

# man lsof
       FILE-FLAG  when g or G has been specified to +f, this field contains the contents of the f_flag[s] member of the kernel file structure  and  the
                  kernel's  per-process  open  file flags (if available); `G' causes them to be displayed in hexadecimal; `g', as short-hand names; two
                  lists may be displayed with entries separated by commas, the lists separated by  a  semicolon  (`;');  the  first  list  may  contain
                  short-hand names for f_flag[s] values from the following table:

                       DIR       direct
                       LG        large file
                       RW        read and write access

The output is not yet completely clear or understandable, thus we want to have the open file flags in hexadecimal notation:

# lsof +fG ./ib*
COMMAND  PID  USER   FD   TYPE   FILE-FLAG DEVICE  SIZE/OFF    NODE NAME
mysqld  2098 mysql    7uW  REG 0x88002;0x0    8,1  79691776 9175185 ./ibdata1
mysqld  2098 mysql   11uW  REG 0x88002;0x0    8,1 268435456 9175186 ./ib_logfile0
mysqld  2098 mysql   12uW  REG 0x88002;0x0    8,1 268435456 9175187 ./ib_logfile1
mysqld  2098 mysql   13uW  REG 0x88002;0x0    8,1  12582912 9175280 ./ibtmp1

The Linux Kernel open file flags can be found here: fcntl.h. I have extracted the most relevant open file flags for our examination:

#define O_RDWR       00000002 (oct, 0x00002)
#define O_DIRECT     00040000 (oct, 0x04000)   /* direct disk access hint */
#define O_LARGEFILE  00100000 (oct, 0x08000)
#define O_CLOEXEC    02000000 (oct, 0x80000)   /* set close_on_exec */

So we can see that these 4 InnoDB files were opened with O_RDWR (RW), O_LARGEFILE (LG) and O_CLOEXEC (not available (yet?) in the lsof short-hand translation output).

Now let us start the MariaDB Database with the server variable set to innodb_flush_method = O_DIRECT and check how the files were opened:

# lsof +fg ./ib*
COMMAND  PID  USER   FD   TYPE         FILE-FLAG DEVICE  SIZE/OFF    NODE NAME
mysqld  2098 mysql    7uW  REG RW,DIR,LG,0x80000    8,1  79691776 9175185 ./ibdata1
mysqld  2098 mysql   11uW  REG     RW,LG,0x80000    8,1 268435456 9175186 ./ib_logfile0
mysqld  2098 mysql   12uW  REG     RW,LG,0x80000    8,1 268435456 9175187 ./ib_logfile1
mysqld  2098 mysql   13uW  REG RW,DIR,LG,0x80000    8,1  12582912 9175280 ./ibtmp1

# lsof +fG ./ib*
COMMAND  PID  USER   FD   TYPE   FILE-FLAG DEVICE  SIZE/OFF    NODE NAME
mysqld  2098 mysql    7uW  REG 0x8c002;0x0    8,1  79691776 9175185 ./ibdata1
mysqld  2098 mysql   11uW  REG 0x88002;0x0    8,1 268435456 9175186 ./ib_logfile0
mysqld  2098 mysql   12uW  REG 0x88002;0x0    8,1 268435456 9175187 ./ib_logfile1
mysqld  2098 mysql   13uW  REG 0x8c002;0x0    8,1  12582912 9175280 ./ibtmp1

We can see a new flag DIR or 0x04000 which means the files were opened with O_DIRECT, but only for the InnoDB System Tablespace and the InnoDB Temporary Table Tablespace, not for the two InnoDB Transaction Logs.

Translation of hex to oct: 0x8c002 = 02140002, which decomposes into O_CLOEXEC (0x80000) | O_LARGEFILE (0x08000) | O_DIRECT (0x04000) | O_RDWR (0x00002).

But what does O_DIRECT mean? Looking at the open(2) man pages we can find:

# man 2 open
O_DIRECT (since Linux 2.4.10)
       Try  to  minimize  cache effects of the I/O to and from this file.  In general this will degrade performance, but it is useful in special
       situations, such as when applications do their own caching.  File I/O is done directly to/from user-space buffers.  The O_DIRECT flag  on
       its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary meta‐
       data are transferred.  To guarantee synchronous I/O, O_SYNC must be used in addition to O_DIRECT.

So O_DIRECT is exactly what we want in this case: bypassing the File System Page Cache so that the database blocks are not cached twice!

To verify the impact we run pcstat again:

# pcstat /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test*
+------------------------------------------------------+----------------+------------+-----------+---------+
| Name                                                 | Size (bytes)   | Pages      | Cached    | Percent |
|------------------------------------------------------+----------------+------------+-----------+---------|
| /home/mysql/database/mariadb-104/data/ib_buffer_pool | 16020          | 4          | 4         | 100.000 |
| /home/mysql/database/mariadb-104/data/ibdata1        | 79691776       | 19456      | 140       | 000.720 |
| /home/mysql/database/mariadb-104/data/ib_logfile0    | 268435456      | 65536      | 36844     | 056.219 |
| /home/mysql/database/mariadb-104/data/ib_logfile1    | 268435456      | 65536      | 65536     | 100.000 |
| /home/mysql/database/mariadb-104/data/ibtmp1         | 12582912       | 3072       | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.frm  | 1097           | 1          | 1         | 100.000 |
| /home/mysql/database/mariadb-104/data/test/test.ibd  | 67108864       | 16384      | 13400     | 081.787 |
+------------------------------------------------------+----------------+------------+-----------+---------+

But... part of the InnoDB Tablespace files is still cached! Also checking the total amount of Buffers and Cache shows the same:

# free
              total        used        free      shared  buff/cache   available
Mem:       16106252     4401788      368200      456716    11336264    10691792
Swap:      31250428     1348440    29901988

So restarting the MariaDB database does not purge the Page Cache! Note: This is important to know because bypassing the Page Cache helps avoid wasting valuable RAM, but it makes a database restart much more costly because the Page Cache no longer helps warm up the InnoDB Buffer Pool!

Then let us clear the Linux Page Cache and check the result:

# echo 1 > /proc/sys/vm/drop_caches

# free -w
              total        used        free      shared     buffers       cache   available
Mem:       16106252     4395892    10539864      441708         696     1169800    10882984
Swap:      31250428     1348428    29902000

Checking with pcstat shows now that all InnoDB pages are wiped out of the Page Cache:

# pcstat /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test*
+------------------------------------------------------+----------------+------------+-----------+---------+
| Name                                                 | Size (bytes)   | Pages      | Cached    | Percent |
|------------------------------------------------------+----------------+------------+-----------+---------|
| /home/mysql/database/mariadb-104/data/ib_buffer_pool | 16020          | 4          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibdata1        | 79691776       | 19456      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ib_logfile0    | 268435456      | 65536      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ib_logfile1    | 268435456      | 65536      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibtmp1         | 12582912       | 3072       | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.frm  | 1097           | 1          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.ibd  | 67108864       | 16384      | 0         | 000.000 |
+------------------------------------------------------+----------------+------------+-----------+---------+

And after a while running traffic on the test table we can see that InnoDB Transaction Log Files are cached again in the Page Cache but NOT the InnoDB Tablespace files:

# pcstat /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test*
+------------------------------------------------------+----------------+------------+-----------+---------+
| Name                                                 | Size (bytes)   | Pages      | Cached    | Percent |
|------------------------------------------------------+----------------+------------+-----------+---------|
| /home/mysql/database/mariadb-104/data/ib_buffer_pool | 16020          | 4          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibdata1        | 79691776       | 19456      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ib_logfile0    | 268435456      | 65536      | 3012      | 004.596 |
| /home/mysql/database/mariadb-104/data/ib_logfile1    | 268435456      | 65536      | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/ibtmp1         | 12582912       | 3072       | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.frm  | 1097           | 1          | 0         | 000.000 |
| /home/mysql/database/mariadb-104/data/test/test.ibd  | 71303168       | 17408      | 0         | 000.000 |
+------------------------------------------------------+----------------+------------+-----------+---------+

Also with vmtouch we can see the difference:

./vmtouch -f -v /home/mysql/database/mariadb-104/data/ib* /home/mysql/database/mariadb-104/data/test/test*
/home/mysql/database/mariadb-104/data/ib_buffer_pool  [    ] 0/4
/home/mysql/database/mariadb-104/data/ibdata1         [                                                            ] 0/19456
/home/mysql/database/mariadb-104/data/ib_logfile0     [o                                oOOOo                      ] 4252/65536
/home/mysql/database/mariadb-104/data/ib_logfile1     [                                                            ] 0/65536
/home/mysql/database/mariadb-104/data/ibtmp1          [                                                            ] 0/3072
/home/mysql/database/mariadb-104/data/test/test.frm   [ ] 0/1
/home/mysql/database/mariadb-104/data/test/test.ibd   [                                                            ] 0/17408

           Files: 7
     Directories: 0
  Resident Pages: 4252/171013  16M/668M  2.49%
         Elapsed: 0.003264 seconds

And also cachestat shows the effect of a flushed Buffer Cache and Page Cache:

# ./cachestat 
Counting cache functions... Output every 1 seconds.
    HITS   MISSES  DIRTIES    RATIO   BUFFERS_MB   CACHE_MB
  677882       19      740   100.0%           67       1087
  679213       10      700   100.0%           67       1087
  677236        0      732   100.0%           67       1087
  685673       11      932   100.0%           67       1088
  677933        5      703   100.0%           67       1088

Caution: Depending on your underlying I/O system it may nevertheless make sense to run your MariaDB Database with innodb_flush_method = fsync in certain cases! See also PostgreSQL behaviour.

Note: This information could also be interesting for PostgreSQL DBAs because they do redundant buffering with their shared_buffers (why plural? It is just one!?!) and the O/S Page Cache as well!

What is Slab

Besides the Buffer Cache and the Page Cache itself we have a third thing listed in the /proc/meminfo statistics: Slabs. So what are Slabs? Slab is a specific kernel memory management (allocation) mechanism used for frequently used objects in the Linux kernel (buffer heads, inodes, dentries, etc.) [7-15]. So it contains something like the other Linux kernel buffers and kernel caches.

What kinds of other Linux kernel buffers and caches exist can be found with the following command:

# sudo cat /proc/slabinfo 
slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_1     14183  15275    320   25    2 : tunables    0    0    0 : slabdata    611    611      0
ext4_groupinfo_4k   8575   8596    144   28    1 : tunables    0    0    0 : slabdata    307    307      0
i915_gem_vma         523    950    320   25    2 : tunables    0    0    0 : slabdata     38     38      0
UDPv6                120    120   1088   30    8 : tunables    0    0    0 : slabdata      4      4      0
tw_sock_TCPv6       2668   2668    280   29    2 : tunables    0    0    0 : slabdata     92     92      0
request_sock_TCPv6    24     72    328   24    2 : tunables    0    0    0 : slabdata      3      3      0
TCPv6                 68    105   2112   15    8 : tunables    0    0    0 : slabdata      7      7      0
cfq_queue            391    442    232   17    1 : tunables    0    0    0 : slabdata     26     26      0
mqueue_inode_cache    72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
fuse_request          20     40    400   20    2 : tunables    0    0    0 : slabdata      2      2      0
fuse_inode             1     21    768   21    4 : tunables    0    0    0 : slabdata      1      1      0
fat_cache            102    408     40  102    1 : tunables    0    0    0 : slabdata      4      4      0
hugetlbfs_inode_cache 28     84    584   28    4 : tunables    0    0    0 : slabdata      3      3      0
squashfs_inode_cache  25     50    640   25    4 : tunables    0    0    0 : slabdata      2      2      0
jbd2_journal_handle  340    340     48   85    1 : tunables    0    0    0 : slabdata      4      4      0
jbd2_journal_head   2040   2040    120   34    1 : tunables    0    0    0 : slabdata     60     60      0
jbd2_revoke_table_s  260    512     16  256    1 : tunables    0    0    0 : slabdata      2      2      0
jbd2_revoke_record_s1152   1408     32  128    1 : tunables    0    0    0 : slabdata     11     11      0
ext4_inode_cache  208751 210840   1072   30    8 : tunables    0    0    0 : slabdata   7028   7028      0
ext4_free_data       320    448     64   64    1 : tunables    0    0    0 : slabdata      7      7      0
ext4_allocation_cont 128    128    128   32    1 : tunables    0    0    0 : slabdata      4      4      0
ext4_io_end          392    560     72   56    1 : tunables    0    0    0 : slabdata     10     10      0
ext4_extent_status 64412  77928     40  102    1 : tunables    0    0    0 : slabdata    764    764      0
dquot                144    160    256   16    1 : tunables    0    0    0 : slabdata     10     10      0
mbcache              226    292     56   73    1 : tunables    0    0    0 : slabdata      4      4      0
dio                  273    350    640   25    4 : tunables    0    0    0 : slabdata     14     14      0
pid_namespace         42     42   2224   14    8 : tunables    0    0    0 : slabdata      3      3      0
ip4-frags             32     64    248   16    1 : tunables    0    0    0 : slabdata      4      4      0
RAW                  396    396    896   18    4 : tunables    0    0    0 : slabdata     22     22      0
UDP                   68     68    960   17    4 : tunables    0    0    0 : slabdata      4      4      0
tw_sock_TCP        10750  11136    280   29    2 : tunables    0    0    0 : slabdata    384    384      0
request_sock_TCP      96     96    328   24    2 : tunables    0    0    0 : slabdata      4      4      0
TCP                  119    136   1920   17    8 : tunables    0    0    0 : slabdata      8      8      0
blkdev_queue          27     48   1336   24    8 : tunables    0    0    0 : slabdata      2      2      0
blkdev_requests      394    506    368   22    2 : tunables    0    0    0 : slabdata     23     23      0
blkdev_ioc           516    546    104   39    1 : tunables    0    0    0 : slabdata     14     14      0
user_namespace       104    104    304   26    2 : tunables    0    0    0 : slabdata      4      4      0
dmaengine-unmap-256   15     15   2112   15    8 : tunables    0    0    0 : slabdata      1      1      0
sock_inode_cache    1707   1950    640   25    4 : tunables    0    0    0 : slabdata     78     78      0
file_lock_cache      665    665    208   19    1 : tunables    0    0    0 : slabdata     35     35      0
net_namespace         40     40   7296    4    8 : tunables    0    0    0 : slabdata     10     10      0
shmem_inode_cache   3315   3432    656   24    4 : tunables    0    0    0 : slabdata    143    143      0
taskstats             96     96    328   24    2 : tunables    0    0    0 : slabdata      4      4      0
proc_inode_cache    6895   7072    624   26    4 : tunables    0    0    0 : slabdata    272    272      0
sigqueue             100    100    160   25    1 : tunables    0    0    0 : slabdata      4      4      0
bdev_cache            29     76    832   19    4 : tunables    0    0    0 : slabdata      4      4      0
kernfs_node_cache  43625  44982    120   34    1 : tunables    0    0    0 : slabdata   1323   1323      0
mnt_cache            518    546    384   21    2 : tunables    0    0    0 : slabdata     26     26      0
inode_cache        17519  17668    568   28    4 : tunables    0    0    0 : slabdata    631    631      0
dentry            424185 439992    192   21    1 : tunables    0    0    0 : slabdata  20952  20952      0
buffer_head     1112865 1112865    104   39    1 : tunables    0    0    0 : slabdata  28535  28535      0
vm_area_struct     53945  55300    200   20    1 : tunables    0    0    0 : slabdata   2765   2765      0
files_cache          260    299    704   23    4 : tunables    0    0    0 : slabdata     13     13      0
signal_cache         509    630   1088   30    8 : tunables    0    0    0 : slabdata     21     21      0
sighand_cache        346    405   2112   15    8 : tunables    0    0    0 : slabdata     27     27      0
task_struct         1189   1269   3584    9    8 : tunables    0    0    0 : slabdata    141    141      0
Acpi-Operand        5703   5824     72   56    1 : tunables    0    0    0 : slabdata    104    104      0
Acpi-Parse          1314   1314     56   73    1 : tunables    0    0    0 : slabdata     18     18      0
Acpi-State           204    204     80   51    1 : tunables    0    0    0 : slabdata      4      4      0
Acpi-Namespace      4077   4182     40  102    1 : tunables    0    0    0 : slabdata     41     41      0
anon_vma           19831  21522     80   51    1 : tunables    0    0    0 : slabdata    422    422      0
numa_policy          170    170     24  170    1 : tunables    0    0    0 : slabdata      1      1      0
radix_tree_node   321937 327740    584   28    4 : tunables    0    0    0 : slabdata  11705  11705      0
trace_event_file    3985   4002     88   46    1 : tunables    0    0    0 : slabdata     87     87      0
ftrace_event_field 86541  88570     48   85    1 : tunables    0    0    0 : slabdata   1042   1042      0
idr_layer_cache      533    555   2096   15    8 : tunables    0    0    0 : slabdata     37     37      0
kmalloc-8192        1246   1246   8192    4    8 : tunables    0    0    0 : slabdata    502    502      0
kmalloc-4096         658    720   4096    8    8 : tunables    0    0    0 : slabdata     90     90      0
kmalloc-2048        1955   2144   2048   16    8 : tunables    0    0    0 : slabdata    134    134      0
kmalloc-1024       44217  44384   1024   16    4 : tunables    0    0    0 : slabdata   2774   2774      0
kmalloc-512         3037   3808    512   16    2 : tunables    0    0    0 : slabdata    238    238      0
kmalloc-256        17465  20384    256   16    1 : tunables    0    0    0 : slabdata   1274   1274      0
kmalloc-192        27708  28665    192   21    1 : tunables    0    0    0 : slabdata   1365   1365      0
kmalloc-128       140581 143744    128   32    1 : tunables    0    0    0 : slabdata   4492   4492      0
kmalloc-96        168044 168378     96   42    1 : tunables    0    0    0 : slabdata   4009   4009      0
kmalloc-64        117533 123264     64   64    1 : tunables    0    0    0 : slabdata   1926   1926      0
kmalloc-32         80425  90368     32  128    1 : tunables    0    0    0 : slabdata    706    706      0
kmalloc-16          9513  11264     16  256    1 : tunables    0    0    0 : slabdata     44     44      0
kmalloc-8           6616   7168      8  512    1 : tunables    0    0    0 : slabdata     14     14      0
kmem_cache_node      320    320     64   64    1 : tunables    0    0    0 : slabdata      5      5      0
kmem_cache           208    208    256   16    1 : tunables    0    0    0 : slabdata     13     13      0

If you want to see the most frequently used (hottest) Slabs you can see them top-like with slabtop. If you press c you can sort the Slabs by CACHE_SIZE:

# sudo slabtop
 Active / Total Objects (% used)    : 2249113 / 2280136 (98.6%)
 Active / Total Slabs (% used)      : 70256 / 70256 (100.0%)
 Active / Total Caches (% used)     : 86 / 121 (71.1%)
 Active / Total Size (% used)       : 597547.86K / 605445.30K (98.7%)
 Minimum / Average / Maximum Object : 0.01K / 0.26K / 18.56K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
294308 289889  98%    0.57K  10511       28    168176K radix_tree_node
105030 104435  99%    1.05K   3501       30    112032K ext4_inode_cache
745446 745446 100%    0.10K  19114       39     76456K buffer_head
 59984  59909  99%    1.00K   3749       16     59984K ecryptfs_inode_cache
 47520  47157  99%    1.00K   2970       16     47520K kmalloc-1024
215166 214987  99%    0.19K  10246       21     40984K dentry
139744 138452  99%    0.12K   4367       32     17468K kmalloc-128
179508 179011  99%    0.09K   4274       42     17096K kmalloc-96
 47140  45768  97%    0.20K   2357       20      9428K vm_area_struct
 14700  14700 100%    0.55K    525       28      8400K inode_cache
...

Literature

How can I tell which Tungsten Connector mode I am using: Bridge, Proxy/Direct or Proxy/SmartScale?


Overview

The Skinny

Part of the power of Tungsten Clustering for MySQL / MariaDB is its intelligent MySQL proxy, known as the Tungsten Connector. The Tungsten Connector has three main modes, and depending on the type of operations you are performing (for example, whether you need read/write splitting), we help you choose which mode is best.


The Question

Recently, a customer asked us:

How can I tell which Tungsten Connector mode I am using: Bridge, Proxy/Direct or Proxy/SmartScale?


The Answer

Connect and Observe

You may log in through the Connector to tell the difference between Bridge mode and Proxy mode (either Direct or SmartScale):

In Proxy mode, you will see the -tungsten tag appended to the Server version string:

tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41 # tpm connector
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 34
Server version: 5.7.26-log-tungsten MySQL Community Server (GPL)

Once logged into the Connector in Proxy mode, you have the full set of interactive tungsten commands available:

mysql> tungsten help;
+---------------------------------------------------------------------------------------------------------------------------------+
| Message                                                                                                                         |
+---------------------------------------------------------------------------------------------------------------------------------+
| tungsten connection status:                 display information about the connection used for the last request ran              |
| tungsten connection count:                  gives the count of current connections to each one of the cluster datasources       |
| tungsten cluster status:                    prints detailed information about the cluster view this connector has               |
| tungsten show [full] processlist:           list all running queries handled by this connector instance                         |
| tungsten show variables [like '<string>']:  list connector configuration options in use. The <string> may contain '%' wildcards |
| tungsten flush privileges:                  reload user.map and refresh user credentials                                        |
| tungsten mem info:                          display memory information about current JVM                                        |
| tungsten gc:                                calls garbage collector                                                             |
| tungsten help:                              display this help message                                                           |
+---------------------------------------------------------------------------------------------------------------------------------+

For more information about the Connector’s command-line interface, please visit http://docs.continuent.com/tungsten-clustering-6.0/connector-inline.html

For Bridge mode, you will not see that:

tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41 # tpm connector
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 34
Server version: 5.7.26-log MySQL Community Server (GPL)

In Bridge mode, the tungsten commands do not work:

mysql> tungsten help;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tungsten help' at line 1

The Library

Please read the docs!

For more information about the Tungsten Connector:

For more documentation about Tungsten software, please visit https://docs.continuent.com


Summary

The Wrap-Up

In this blog post we discussed how one can tell which Tungsten Connector mode is in use: Bridge, Proxy/Direct or Proxy/SmartScale.

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us

SQL Aliases Example | Alias In SQL Tutorial


SQL Aliases Example | Alias In SQL Tutorial is today’s topic. Aliases are temporary names that exist only for the duration of a particular query. They are used for better readability and to reduce the effort of writing long column names. An alias is temporary: renaming a column while retrieving it does not change anything in the original database.

SQL Aliases Example

An Alias is a shorthand for a table or column name. Aliases reduce the amount of typing required to write a query, and complex queries with aliases are generally easier to read. Aliases are useful with JOINs and SQL aggregates: SUM, COUNT, etc. An SQL Alias only exists for the duration of that query.

Aliases are useful in these scenarios:

  1. When more than one table is involved in the query, i.e., when joins are used (these are known as table aliases).
  2. When functions are used in the query.
  3. When column names are long or not very readable.
  4. When two or more columns are combined together.

SYNTAX: (For column alias)

Select column as alias_name FROM table_name;

PARAMETERS:

  1. Column: Fields in the table.
  2. Alias_name: Temporary names that are to be used apart from the original column names.
  3. Table_name: Name of the table.

SYNTAX: (For Table Alias)

Select column from table_name as alias_name;

PARAMETERS:

  1. Column: Fields in the table.
  2. Table_name: Name of the table.
  3. Alias_name: Temporary names that are to be used apart from the original table names.

NOTE:

  1. The alias name should be enclosed in quotes if it contains whitespace (see the example after this list).
  2. The alias name is valid within the scope of the SQL statement.
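
For example (a minimal sketch against the Students table used below), an alias containing whitespace must be quoted:

-- MySQL accepts single quotes or backticks around such an alias;
-- standard SQL uses double quotes
SELECT NAME AS 'Student Name' FROM Students;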

Let’s understand the above syntaxes with the help of an example.

EXAMPLE:

Consider two tables:

Students

ID NAME CITY
1 Shubh Kolkata
2 Karan Allahabad
3 Suraj Kota
4 Akash Vizag

 

Marks:

ID NAME MARKS Age
1 Shubh 90 21
2 Rounak 91 21
3 Suraj 92 22
4 Akash 93 22

 

Let’s begin with a column alias:

QUERY:

Select ID AS Roll, Name from Students where city like 'K%';

Output:

Roll NAME
1 Shubh
3 Suraj

 

Explanation:

So, in the above query, we used the AS keyword to rename the ID column to Roll, and displayed the details of those students whose city name starts with K.

For more information about showing the information in this format, refer to SQL WILDCARD OPERATORS.

In the above example, we have also filtered the data using the WHERE clause, selecting ID (as Roll) and NAME from the table.

Now, let’s discuss an example of a table alias.

Query:

Select S.ID, S.NAME, S.CITY, M.MARKS from Students AS S, Marks AS M where S.ID=M.ID;

Output:

ID NAME CITY MARKS
1 Shubh Kolkata 90
2 Karan Allahabad 91
3 Suraj Kota 92
4 Akash Vizag 93

 

Explanation:

So, in the above query, we displayed the marks of those students whose ID is the same in the Students and Marks tables; here every ID in Students has a match in Marks, so all four rows are returned.
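
The same query can also be written with explicit JOIN syntax, which many find easier to read as the number of tables grows:

SELECT S.ID, S.NAME, S.CITY, M.MARKS
FROM Students AS S
JOIN Marks AS M ON S.ID = M.ID;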

Finally, SQL Aliases Example | Alias In SQL Tutorial is over.

The post SQL Aliases Example | Alias In SQL Tutorial appeared first on AppDividend.

Advanced WordPress Search using WpSolr and ElasticSearch


Finding good content on your website is really important. A search feature improves user interaction and helps you build a readership on your website. WordPress uses its default MySQL database to perform searches, which is not great: MySQL is not built for search, and if you are serious about building an authority website on WordPress, then search is a module you need to pay close attention to.

I have already implemented WordPress search with ElasticSearch here. In this article, I am going to use and review the amazing product called ‘WPSOLR’ built for search.

What is WPSOLR

WpSolr is an advanced WordPress search plugin that can work with Apache Solr and Elasticsearch.

WPSOLR provides an out-of-the-box search solution with the following features:

  • Built-in language, synonyms, stop words.
  • Search filter using checkbox, radio box, sliders, and other UI elements.
  • Unlimited posts search (thousands, millions, hundreds of millions….)
  • Boosting the results.
  • The live suggestion in the search box.
  • SEO friendly.

How WPSOLR Works

WPSOLR works with Apache Solr and Elasticsearch. Assuming you want to index your data in Elasticsearch, the WPSOLR plugin will index the data from the MySQL database of WordPress into Elasticsearch.

The default WordPress search will be replaced with WPSOLR Search once the data is indexed and search is configured.

Check out this image for graphical representation.

WPSOLR process

How to use WPSOLR plugin

In order to use the WPSOLR plugin, you need to have Elasticsearch installed, either locally or deployed on a server.

You can follow this tutorial to install Elasticsearch on the OpenShift platform.

Once Elasticsearch is installed, you can install the WPSOLR WordPress plugin.

Once the plugin is installed, we need to connect it to our Elasticsearch. Make sure you have an Elasticsearch URL ready.

Open the WPSOLR plugin admin panel.

Index images

Click on the “Connect to your Elasticsearch / Apache Solr” button. This will open up another window.

configure results

In this screen, select “Elasticsearch” from the search engine dropdown. Give your index a name; if the index does not exist yet, WPSOLR will create it for you.

Paste the Elasticsearch URL in the server host text box and provide the port. You can use the key and secret provided by WPSOLR.

Once done, click the button provided at the bottom of the screen.

Now you need to replace the default WordPress search. Go to the results options screen to do that.

replace results

How much does it cost?

WPSOLR costs 29 euros a month, or 199 euros a year. However, the team has come up with a coupon for you guys for 15% off on your order. You can use the coupon code codeforgeek15 at checkout to avail of it.

You need to install Solr or Elasticsearch on your server before purchasing the plugin.

Conclusion

There are search-as-a-service solutions that will cost you more than $10,000 for a million records. WPSOLR lets you index and search as much data as you want for a flat price. This is really amazing and needed by the developer community. Kudos to the team for such an awesome product.

The MySQL 8.0.17 Maintenance Release is Generally Available


The MySQL Development team is very happy to announce that MySQL 8.0.17 is now available for download at dev.mysql.com. In addition to bug fixes there are a few new features added in this release. Please download 8.0.17 from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories.…

MySQL Shell 8.0.17 – What’s New?


The MySQL Development team is proud to announce a new version of the MySQL Shell in which the following new features can be highlighted:

  • MySQL Shell Plugins
  • Parallel table import
  • In InnoDB Cluster:
    • Automatic instance provisioning through cloning
    • Automatic server version compatibility handling
    • Simplification of internal recovery accounts

The following enhancements were also introduced:

  • On the X DevAPI area:
    • Support for array indexes in collections
    • Support for overlaps operator in expressions
  • Uniform SQL execution API in classic and X protocol
  • Support for connection attributes
  • New utility functions:
    • shell.unparseUri(…)
    • shell.dumpRows(…)
  • Support for --verbose output
  • Upgrade Checker: Addition of checks for variables on the configuration files

MySQL Shell Plugins

The MySQL Shell now supports user extensions through MySQL Shell Plugins, which includes User Defined Reports (Introduced in 8.0.16) as well as the new Extension Objects introduced in 8.0.17.…


MySQL InnoDB Cluster – What’s new in Shell AdminAPI 8.0.17 release


The MySQL Development Team is very excited and proud to announce a new 8.0 Maintenance Release of InnoDB Cluster – 8.0.17!

In addition to important bug fixes and improvements, 8.0.17 brings a game-changer feature!

This blog post will cover MySQL Shell and the AdminAPI; for detailed information about what’s new in MySQL Router, stay tuned for an upcoming blog post!…

MySQL 8.0.17 Replication Enhancements


MySQL 8.0.17 is out. In addition to fixing a few bugs here and there, we also have a couple of new replication features that I would like to present. So allow me to give you a quick summary. As usual, there shall be follow-up blog posts providing details, so stay tuned.…

MySQL InnoDB Cluster – Automatic Node Provisioning


The MySQL Development Team is very excited and proud of what was achieved in this 8.0.17 GA release!

The spotlight is on… A game-changer feature – Automatic Node provisioning!

This has been an extremely desired and important feature, and it has been accomplished, once again, with tight integration and cooperation of MySQL Components:

  • The new MySQL Clone Plugin: To take a physical snapshot of the database and transfer it over the network to provision a server, all integrated into the server, using regular MySQL connections.

MySQL Connector/J 8.0.17 has been released


Dear MySQL users,

MySQL Connector/J 8.0.17 is the latest General Availability
release of the MySQL Connector/J 8.0 series.  It is suitable
for use with MySQL Server versions 8.0, 5.7, and 5.6.
It supports the Java Database Connectivity (JDBC) 4.2 API,
and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-17.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.17 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

———————————————————————–

Changes in MySQL Connector/J 8.0.17 (2019-07-22, General
Availability)

Functionality Added or Changed

     * X DevAPI: The following methods have been deprecated:

          + Collection.find().where()

          + Collection.modify().where()

          + Collection.remove().where()

     * X DevAPI: Two new operators for JSON objects and arrays,
       overlaps and not_overlaps, are now supported. See the X
       DevAPI User Guide
       (https://dev.mysql.com/doc/x-devapi-userguide/en/) for
       details.

     * X DevAPI: Indexing for array fields is now supported. See
       Indexing Array Fields
       (https://dev.mysql.com/doc/x-devapi-userguide/en/collecti
       on-indexing.html#collection-indexing-array) in the X
       DevAPI User Guide
       (https://dev.mysql.com/doc/x-devapi-userguide/en/) for
       details.

     * The README and LICENSE files are now included inside the
       Connector/J JAR archive delivered in the
       platform-independent tarballs and zip files. (Bug
       #29591275)

     * A number of private parameters of ProfilerEvents (for
       example, hostname) had no getters for accessing them from
       outside of the class instance. Getter methods have now
       been added for all the parameters of the class. (Bug
       #20010454, Bug #74690)

     * A new connection property, databaseTerm, sets which of
       the two terms is used in an application to refer to a
       database. The property takes one of the two values
       CATALOG or SCHEMA and uses it to determine which
       Connection methods can be used to set/get the current
       database, which arguments can be used within the various
       DatabaseMetaData methods to filter results, and which
       fields in the ResultSet returned by DatabaseMetaData
       methods contain the database identification information.
       See the entry for databaseTerm in Configuration
       Properties
(https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html)
       for details. Also, the connection property
       nullCatalogMeansCurrent has been renamed to
       nullDatabaseMeansCurrent. The old name remains an alias
       for the connection property. Thanks to Harald Aamot for
       contributing to the patch.
       (Bug #11891000, Bug #27356869, Bug #89133)

     * A new CONTRIBUTING file has been added to the Connector/J
       repository on GitHub
       (https://github.com/mysql/mysql-connector-j), which
       provides guidelines for code contribution and bug
       reporting.

     * The MySQL Connector/J X DevAPI Reference can now be
       generated from the Connector/J source code as an Ant
       target, xdevapi-docs.

     * Added support for host names that are longer than 60
       characters (up to 255 characters), as they are now
       supported by MySQL Server 8.0.17.

     * Added support for the utf8mb4_0900_bin collation, which
       is now supported by MySQL Server 8.0.17.

     * A cached server-side prepared statement can no longer be
       effectively closed by calling Statement.close() twice. To
       close and de-cache the statement, do one of the
       following:

          + Close the connection (assuming the connection is
            tracking all open resources).

          + Use the implementation-specific method
            JdbcPreparedStatement.realClose().

          + Set the statement as non-poolable by calling the
            method Statement.setPoolable(false) before or after
            closing it.

Bugs Fixed

     * X DevAPI: The IN operator in X DevAPI expressions, when
       followed by a square bracket ([), got mapped onto the
       wrong operation in X Protocol. (Bug #29821029)

     * When using a replication connection, retrieving data from
       BlobFromLocator resulted in a ClassCastException. It was
       due to some wrong and unnecessary casting, which has been
       removed by this fix. (Bug #29807741, Bug #95210)

     * ResultSetMetaData.getTableName() returned null when no
       applicable results could be returned for a column.
       However, the JDBC documentation specified an empty string
       to be returned in that case. This fix makes the method
       behave as documented. The same correction has been made
       for getCatalogName() and getSchemaName(). (Bug #29452669,
       Bug #94585)

     * ResultSetImpl.getObject(), when autoboxing a value of a
       primitive type retrieved from a column, returned a
       non-null object when the retrieved value was null. (Bug
       #29446100, Bug #94533)

     * ResultSetImpl.getDouble() was very inefficient because it
       called FloatingPointBoundsEnforcer.createFromBigDecimal,
       which needlessly recreated BigDecimal objects for the
       fixed minimum and maximum bounds. With this fix, the
       objects BigDecimal.valueOf(min) and
       BigDecimal.valueOf(max) are cached after they are first
       created, thus avoiding their recreations. (Bug #29446059,
       Bug #94442)

     * Enabling logSlowQueries resulted in many unnecessary
       calls of LogUtils.findCallingClassAndMethod(). With this
       fix, LogUtils.findCallingClassAndMethod() is called only
       when profileSQL is true and even in that case, the number
       of calls are reduced to a minimal to avoid the excessive
       stack trace data the function used to generate. Thanks to
       Florian Agsteiner for contributing to the fix. (Bug
       #29277648, Bug #94101, Bug #17640628, Bug #70677)

     * Characters returned in a ResultSet were garbled when a
       server-side PreparedStatement was used, and the query
       involved concatenation of a number and a string with
       multi-byte characters. That was due to an issue with the
       number-to-string conversion involved, which has been
       corrected by this fix. (Bug #27453692)

     * Calling ProfilerEvent.pack() resulted in an
       ArrayIndexOutOfBoundsException. It was due to a
       mishandling of data types, which has been corrected by
       this fix. (Bug #11750577, Bug #41172)

Enjoy and thanks for the support!

On Behalf of MySQL/ORACLE RE Team
Sreedhar S

MySQL InnoDB Cluster from scratch – even more easy since 8.0.17


Creating a MySQL InnoDB Cluster using MySQL 8.0 has always been very easy, certainly thanks to MySQL Shell and server enhancements like SET PERSIST and the RESTART statement (see this post).

The most complicated part to deal with was existing and non-existing data. In fact, GTID sets must be compatible.

Let me explain that with some examples:

Example 1 – empty servers

If you have empty servers with GTID enabled, manually creating the credentials used to connect to each MySQL instance will generate GTIDs that will prevent nodes from joining the cluster. Before 8.0.17, you had to explicitly avoid writing those users to the binary log.
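
A minimal sketch of that pre-8.0.17 workaround (the account name and password here are hypothetical): disable binary logging for the session while creating the user, so no local GTIDs are generated.

SET SQL_LOG_BIN=0;
-- hypothetical admin account used to manage the cluster
CREATE USER 'clusteradmin'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON *.* TO 'clusteradmin'@'%' WITH GRANT OPTION;
SET SQL_LOG_BIN=1;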

Example 2 – servers with data but purged binary logs

When you want to add a new server to a cluster before 8.0.17, when the new server (joiner) wants to join the cluster, this is a very high-level summary of what they say:

- joiner: hello, I don't have any GTID
- group: ok, we are now at trx (gtid sequence) 1000,
         you will then need from 1 to 1000, let's see
         if somebody part of the group has those trx
- member1: no, I don't, I've purged my initial binlogs, mine start from 500
- member2: no, I don't, the first binlog I have starts with gtid 600
- joiner: damn! I can't join then, bye!

And it is the same when you prepare a new member from a backup: you need to be sure that the next GTID after the backup was made is still available on at least one of the members.
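
A quick way to check that, assuming you can run SQL on both sides, is to compare the joiner's executed GTID set with what each member has already purged from its binary logs:

-- On the joiner: which transactions has it already applied?
SELECT @@GLOBAL.gtid_executed;

-- On each member: which transactions are gone from its binary logs?
-- A member can feed the joiner only if its gtid_purged set is
-- covered by the joiner's gtid_executed set.
SELECT @@GLOBAL.gtid_purged;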

My colleague Ivan blogged about that too.

Clone Plugin

Since MySQL 8.0.17, all this is no longer necessary: the clone plugin can handle the provisioning automatically. If we take the second example above, instead of saying bye, the joiner will copy all the data (clone) from another member directly, without calling an external shell script or program, but right inside the server using the clone plugin!
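
Under the hood this relies on the new CLONE SQL statement. For reference, here is a hedged sketch of what a manual clone looks like outside of InnoDB Cluster (the donor host name and credentials are made up; within a cluster, the AdminAPI drives this for you):

-- On the joiner:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
SET GLOBAL clone_valid_donor_list = 'member1.example.com:3306';
CLONE INSTANCE FROM 'recovery_user'@'member1.example.com':3306
      IDENTIFIED BY 'secret';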

10 minutes to install MySQL and setup an InnoDB Cluster

Let’s see this in action:

Since MySQL InnoDB Cluster came out, we have received a lot of very good feedback, but the main feature request was always the same: automatic provisioning! This is now a reality! Wooohoooo \o/

And of course it’s all integrated directly into MySQL.

Three New JSON Functions in MySQL 8.0.17

MySQL 8.0.17 adds three new functions to the JSON repertoire.  All three can take advantage of the new Multi-Value Index feature or can be used on JSON arrays.

JSON_CONTAINS(target, candidate[, path])


This function indicates with a 1 or 0 whether a candidate document is contained in the target document. The optional path argument lets you seek information in embedded documents. And please note the 'haystack' comes before the 'needle' for this function.

mysql> SELECT JSON_CONTAINS('{"Moe": 1, "Larry": 2}','{"Moe": 1}');
+------------------------------------------------------+
| JSON_CONTAINS('{"Moe": 1, "Larry": 2}','{"Moe": 1}') |
+------------------------------------------------------+
|                                                    1 |
+------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT JSON_CONTAINS('{"Moe": 1, "Larry": 2}','{"Shemp": 1}');
+--------------------------------------------------------+
| JSON_CONTAINS('{"Moe": 1, "Larry": 2}','{"Shemp": 1}') |
+--------------------------------------------------------+
|                                                      0 |
+--------------------------------------------------------+
1 row in set (0.00 sec)


Objects must match both key and value. Be careful: an array is considered to be contained in a target array only if every element in the candidate is contained in some element of the target. So JSON_CONTAINS("[1,2,3]","[2,3]") will return a '1' while JSON_CONTAINS("[1,2,3]","[3,4]") will return a '0'.
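
A quick check of those two array claims, with the results as stated above:

mysql> SELECT JSON_CONTAINS("[1,2,3]","[2,3]") AS contained,
    ->        JSON_CONTAINS("[1,2,3]","[3,4]") AS not_contained;
+-----------+---------------+
| contained | not_contained |
+-----------+---------------+
|         1 |             0 |
+-----------+---------------+
1 row in set (0.00 sec)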

You can always use JSON_CONTAINS_PATH() to test if any matches exist on the entire path and JSON_CONTAINS() for a simple match.
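
And a small sketch of the optional path argument (the document here is made up): with a path, the candidate is searched for within the value found at that path.

mysql> SELECT JSON_CONTAINS('{"Moe": 1, "Larry": 2}', '1', '$.Moe');
+-------------------------------------------------------+
| JSON_CONTAINS('{"Moe": 1, "Larry": 2}', '1', '$.Moe') |
+-------------------------------------------------------+
|                                                     1 |
+-------------------------------------------------------+
1 row in set (0.00 sec)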

JSON_OVERLAPS(document1, document2)


This function compares two JSON documents and returns 1 if they have any key/value pairs or array elements in common.

mysql> SELECT JSON_OVERLAPS("[1,3,5,7]","[2,3,4,5]");
+----------------------------------------+
| JSON_OVERLAPS("[1,3,5,7]","[2,3,4,5]") |
+----------------------------------------+
|                                      1 |
+----------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT JSON_OVERLAPS("[1,3,5,7]","[2,4,6]");
+--------------------------------------+
| JSON_OVERLAPS("[1,3,5,7]","[2,4,6]") |
+--------------------------------------+
|                                    0 |
+--------------------------------------+
1 row in set (0.00 sec)


So what is the difference between these two new functions? JSON_CONTAINS() requires ALL elements of the array searched for to be present, while JSON_OVERLAPS() looks for any matches. So think of JSON_CONTAINS() as the AND operation on keys, while JSON_OVERLAPS() is the OR operator.

mysql> SELECT JSON_OVERLAPS("[1,3,5,7]","[1,3,5,9]");
+----------------------------------------+
| JSON_OVERLAPS("[1,3,5,7]","[1,3,5,9]") |
+----------------------------------------+
|                                      1 |
+----------------------------------------+
1 row in set (0.00 sec)


mysql> SELECT JSON_CONTAINS("[1,3,5,7]","[1,3,5,9]");
+----------------------------------------+
| JSON_CONTAINS("[1,3,5,7]","[1,3,5,9]") |
+----------------------------------------+
|                                      0 |
+----------------------------------------+
1 row in set (0.00 sec)


value MEMBER OF(json_array)


This function returns a 1 if the value is an element of the json_array.


mysql> SELECT 3 MEMBER OF('[1, 3, 5, 7, "Moe"]');
+------------------------------------+
| 3 MEMBER OF('[1, 3, 5, 7, "Moe"]') |
+------------------------------------+
|                                  1 |
+------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT 2 MEMBER OF('[1, 3, 5, 7, "Moe"]');
+------------------------------------+
| 2 MEMBER OF('[1, 3, 5, 7, "Moe"]') |
+------------------------------------+
|                                  0 |
+------------------------------------+
1 row in set (0.00 sec)


This function does not convert to and from strings for you, so do not try something like this.

mysql> SELECT "3" MEMBER OF('[1, 3, 5, 7, "Moe"]');
+--------------------------------------+
| "3" MEMBER OF('[1, 3, 5, 7, "Moe"]') |
+--------------------------------------+
|                                    0 |
+--------------------------------------+


So "3" is not equal to 3.  And you may have to explicitly cast the value as an array or use JSON_ARRAY().

mysql> SELECT CAST('[3,4]' AS JSON) MEMBER OF ('[[1,2],[3,4]]');
+---------------------------------------------------+
| CAST('[3,4]' AS JSON) MEMBER OF ('[[1,2],[3,4]]') |
+---------------------------------------------------+
|                                                 1 |
+---------------------------------------------------+
1 row in set (0.00 sec)


mysql> SELECT JSON_ARRAY(3,4) MEMBER OF ('[[1,2],[3,4]]');
+---------------------------------------------+
| JSON_ARRAY(3,4) MEMBER OF ('[[1,2],[3,4]]') |
+---------------------------------------------+
|                                           1 |
+---------------------------------------------+
1 row in set (0.00 sec)


Use with Multi-Value Indexes

Queries using JSON_CONTAINS(), JSON_OVERLAPS(), or MEMBER OF() on JSON columns of an InnoDB table can be optimized to use Multi-Valued Indexes.  More on MVIs in another blog post!
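
For a taste of what that looks like, here is a hedged sketch (the table, column, and data are hypothetical) of a multi-valued index over a JSON array field and two queries that are candidates for it:

CREATE TABLE customers_json (
  id  INT PRIMARY KEY AUTO_INCREMENT,
  doc JSON,
  -- the multi-valued index is defined with CAST(... AS ... ARRAY)
  INDEX zips ((CAST(doc->'$.zipcodes' AS UNSIGNED ARRAY)))
);

SELECT id FROM customers_json
WHERE 94582 MEMBER OF (doc->'$.zipcodes');

SELECT id FROM customers_json
WHERE JSON_OVERLAPS(doc->'$.zipcodes', CAST('[94582, 94568]' AS JSON));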

JSON Schema Validation with MySQL 8.0.17

JSON has become the standard document interchange format over the last several years.  MySQL 5.7 added a native JSON data type, and it has been greatly enhanced in version 8.0.  But many in the relational world have complained that the NoSQL approach does not allow you to impose rigor on your data, that is, to make sure an integer value is really an integer and within specified ranges, or that a string is of the proper length. And there was no way to make sure that email addresses are not listed under a combination of E-mail, e-mail, eMail, and eMAIL.  JSON is great for many things, but traditional, normalized data was better for making certain that your data matched what was specified.

If only there were a way to enforce some rigor on JSON data! Or a way to annotate (pronounced 'document') your JSON data. Well, there is. MySQL 8.0.17 has added the ability to validate JSON documents against a schema following the guidelines of JSON-Schema.org's fourth draft standard. You can find both the manual page 12.17.7 JSON Schema Validation Functions and the JSON Schema information online.


Valid JSON and Really Valid JSON


As you are probably already aware, MySQL will reject an invalid JSON document when using the JSON data type.  But there is a difference between being syntactically valid and validating against a schema. With schema validation you can define how the data should be formatted, which helps with automated testing and helps ensure the quality of your data.


Overly Simple Example


Let's create a simple document schema that looks at a key named 'myage' and sets up the rules that the minimum value is 28 and the maximum value is 99.

set @s='{"type": "object",
     "properties": {
       "myage": {
       "type" : "number",
       "minimum": 28,
       "maximum": 99
   }
}
}';

And here is our test document, where we use a value for 'myage' that is between the minimum and the maximum.

set @d='{  "myage": 33}';

Now we use JSON_SCHEMA_VALID() to test if the test document passes the validation test, with 1 or true as a pass and 0 or false as a fail.

select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        1 |
+--------------------------+
1 row in set (0.00 sec)


Now try with a non-numeric value.

set @d='{  "myage": "foo"}';
Query OK, 0 rows affected (0.00 sec)

mysql> select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        0 |
+--------------------------+

And a value below the minimum.

mysql> set @d='{  "myage": 16}';
Query OK, 0 rows affected (0.00 sec)

mysql> select JSON_SCHEMA_VALID(@s,@d);
+--------------------------+
| JSON_SCHEMA_VALID(@s,@d) |
+--------------------------+
|                        0 |
+--------------------------+
1 row in set (0.00 sec)

Validity Report

We can use JSON_SCHEMA_VALIDATION_REPORT() to get more information on why a document is failing with JSON_SCHEMA_VALID().

mysql> select JSON_SCHEMA_VALIDATION_REPORT(@s,@d)\G
*************************** 1. row ***************************
JSON_SCHEMA_VALIDATION_REPORT(@s,@d): {"valid": false, "reason": "The JSON document location '#/myage' failed requirement 'minimum' at JSON Schema location '#/properties/myage'", "schema-location": "#/properties/myage", "document-location": "#/myage", "schema-failed-keyword": "minimum"}
1 row in set (0.00 sec)

Note that the response is in JSON format, and you can neaten the output up by wrapping JSON_PRETTY() around the above query.


select JSON_PRETTY(JSON_SCHEMA_VALIDATION_REPORT(@s,@d))\G
*************************** 1. row ***************************
JSON_PRETTY(JSON_SCHEMA_VALIDATION_REPORT(@s,@d)): {
  "valid": false,
  "reason": "The JSON document location '#/myage' failed requirement 'minimum' at JSON Schema location '#/properties/myage'",
  "schema-location": "#/properties/myage",
  "document-location": "#/myage",
  "schema-failed-keyword": "minimum"
}

Required Keys


If you want to make sure certain keys are included in a document, you can use the required option in your schema definition.  So if you are working with GIS information, you can require longitude and latitude.

""required": ["latitude", "longitude"]


So we can now have required fields and specify their value ranges. And we can verify, BEFORE committing the JSON document to the MySQL server, that the data conforms to our schema.
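
Putting the pieces together, here is a sketch of such a GIS-style schema (the ranges are illustrative) and a document that fails because a required key is missing:

set @s='{"type": "object",
  "required": ["latitude", "longitude"],
  "properties": {
    "latitude":  {"type": "number", "minimum": -90,  "maximum": 90},
    "longitude": {"type": "number", "minimum": -180, "maximum": 180}
  }
}';

set @d='{"latitude": 59.3}';

select JSON_SCHEMA_VALID(@s,@d);  -- returns 0: "longitude" is required but absent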

Using JSON SCHEMA Validation with Check Constraint

So the next logical step is to use the CONSTRAINT CHECK option on table creation to ensure that we are not only getting a valid JSON document but a verified JSON document.

 
CREATE TABLE `testx` (
  `col` JSON,
  CONSTRAINT `myage_inRange`
    CHECK (JSON_SCHEMA_VALID('{"type": "object",
      "properties": {
        "myage": {
          "type" : "number",
          "minimum": 28,
          "maximum": 99
        }
      },
      "required": ["myage"]
    }', `col`) = 1)
);


And the proof that it works.

mysql> insert into testx values('{"myage":27}');
ERROR 3819 (HY000): Check constraint 'myage_inRange' is violated.
mysql> insert into testx values('{"myage":97}');
Query OK, 1 row affected (0.02 sec)
 


So two of the big criticisms of using JSON in a relational database are now gone: we can add rigor and value checks.  While not as easy to do as with normalized relational data, this is a huge win for those using JSON.
 


More on JSON Schema


I highly recommend going through the basics of JSON Schema, as there is a lot of material that cannot be covered in a simple blog.

MySQL Connector/Python 8.0.17 has been released


Dear MySQL users,

MySQL Connector/Python 8.0.17 is the latest GA release version of the
MySQL Connector Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see

http://dev.mysql.com/doc/x-devapi-userguide/en/

For more information about how the X DevAPI is implemented in MySQL
Connector/Python, and its usage, see

http://dev.mysql.com/doc/dev/connector-python

Please note that the X DevAPI requires at least MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/Python 8.0.17, see the “Generally Available
(GA) Releases” tab at

http://dev.mysql.com/downloads/connector/python/

Enjoy!

======================================================================

Changes in MySQL Connector/Python 8.0.17 (2019-07-22)

Functionality Added or Changed

     * Prepared statement support was added to the C extension’s
       (use_pure=False) implementation.
       (Bug #27364973, Bug #21670979, Bug #77780)

     * Added connection attribute support for the classic
       connector; new connection attributes can be passed in
       with the “conn_attrs” connection argument. Thanks to
       Daniël van Eeden for the patch. Example usage:

           test_config = {'user': 'myuser', 'port': 3306, 'host': 'localhost'}
           test_config['conn_attrs'] = {"foo": "bar", "_baz": "qux", "hello": "world"}
           _ = connect(**test_config)

       Default connection attributes are set for both the pure
       and c-ext implementations, but these attributes are
       different due to limitations of the client library.
       For general information about connection attributes, see
       Performance Schema Connection Attribute Tables
(https://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).
       (Bug #21072758, Bug #77003)

     * Document fields containing arrays can now be indexed by
       setting array to true in an index fields definition.

     * Added support for the OVERLAPS and NOT OVERLAPS
       operators, which are equivalent to the SQL JSON_OVERLAPS()
       function.
       These binary operators are used with a general
       "expression operator expression" syntax, and the
       expressions return a JSON array or object. Example usage:

         ["A", "B", "C"] overlaps $.field

     * Added support for the “utf8mb4_0900_bin” collation added in
       MySQL Server 8.0.17.

     * Added “CONTRIBUTING.rst” and replaced “README.txt” with
       “README.rst”.

Bugs Fixed

     * Executing a Collection.find() without first fetching
       results would raise an AttributeError with an unclear
       message. (Bug #29327931)

     * An error was generated when using the combination of
       MySQL 5.7, Python 3, and the C extension enabled.
       (Bug #28568665)

Enjoy and thanks for the support!

Galera Cluster with new Galera Replication Library 3.27 and MySQL 5.6.44, MySQL 5.7.26 is GA


Codership is pleased to announce a new Generally Available (GA) release of Galera Cluster for MySQL 5.6 and 5.7, consisting of MySQL-wsrep 5.6.44-25.26 and MySQL-wsrep 5.7.26-25.18 with a new Galera Replication library 3.27 (release notes, download), implementing wsrep API version 25. This release incorporates all changes into MySQL 5.6.44 (release notes, download) and MySQL 5.7.26 (release notes, download) respectively.

Compared to the previous 3.26 release, the Galera Replication library has a few fixes, including one to prevent a protocol downgrade upon a rolling upgrade, and improvements for GCache page storage on NVMFS devices.

One point of note is that this is also the last release for SuSE Linux Enterprise Server 11, as upstream has also put that release into End-of-Life (EOL) status.

You can get the latest release of Galera Cluster from http://www.galeracluster.com. There are package repositories for Debian, Ubuntu, CentOS, RHEL, OpenSUSE and SLES. The latest versions are also available via the FreeBSD Ports Collection.

MySQL Connector/C++ 8.0.17 has been released


Dear MySQL users,

MySQL Connector/C++ 8.0.17 is a new release version of the MySQL
Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document
Store or in a traditional way, using SQL queries. It allows writing
both C++ and plain C applications using X DevAPI and X DevAPI for C.
It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see
“X DevAPI User Guide” at

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL
as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.17, see the “Generally Available (GA)
Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/

==================================================

Changes in MySQL Connector/C++ 8.0.17 (2019-07-22, General
Availability)


     * Character Set Support

     * Compilation Notes

     * Configuration Notes

     * Function and Operator Notes

     * X DevAPI Notes

     * Functionality Added or Changed

     * Bugs Fixed

Character Set Support


     * Connector/C++ now supports the new utf8mb4_0900_bin
       collation added for the utf8mb4 Unicode character set in
       MySQL 8.0.17. For more information about this collation,
       see Unicode Character Sets
(https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-sets.html).

Compilation Notes


     * Connector/C++ now compiles cleanly using the C++14
       compiler. This includes MSVC 2017. Binary distributions
       from Oracle are still built in C++11 mode using MSVC 2015
       for compatibility reasons.

Configuration Notes


     * The maximum permitted length of host names throughout
       Connector/C++ has been raised to 255 ASCII characters, up
       from the previous limit of 60 characters. Applications
       that expect host names to be a maximum of 60 characters
       should be adjusted to account for this change.

Function and Operator Notes


     * Connector/C++ now supports the OVERLAPS and NOT OVERLAPS
       operators for expressions on JSON arrays or objects:
expr OVERLAPS expr
expr NOT OVERLAPS expr

       Suppose that a collection has these contents:
[{
   "_id": "1",
   "list": [1, 4]
 }, {
   "_id": "2",
   "list": [4, 7]
}]

       This operation:
auto res = collection.find("[1, 2, 3] OVERLAPS $.list").fields("_id").execute();
res.fetchAll();

       Should return:
[{ "_id": "1" }]

       This operation:
auto res = collection.find("$.list OVERLAPS [4]").fields("_id").execute();
res.fetchAll();

       Should return:
[{ "_id": "1" }, { "_id": "2" }]

       An error occurs if an application uses either operator
       and the server does not support it.

X DevAPI Notes


     * For index specifications passed to the
       Collection::createIndex() method (for X DevAPI
       applications) or the mysqlx_collection_create_index()
       function (for X DevAPI for C applications), Connector/C++
       now supports indexing array fields. A single index field
       description can contain a new member name array that
       takes a Boolean value. If set to true, the field is
       assumed to contain arrays of elements of the given type.
       For example:
coll.createIndex("idx",
  R"({ "fields": [{ "field": "foo", "type": "INT", "array": true }] })"
);

       In addition, the set of possible index field data types
       (used as values of member type in index field
       descriptions) is extended with type CHAR(N), where the
       length N is mandatory. For example:
coll.createIndex("idx",
  R"({ "fields": [{ "field": "foo", "type": "CHAR(10)" }] })"
);

Functionality Added or Changed


     * Previously, Connector/C++ reported INT in result set
       metadata for all integer result set columns, which
       required applications to check column lengths to
       determine particular integer types. The metadata now
       reports the more-specific TINYINT, SMALLINT, MEDIUMINT,
        INT, or BIGINT types for integer columns. (Bug
       #29525077)

Bugs Fixed


     * Calling a method such as .fields() or .sort() on existing
       objects did not overwrite the effects of any previous
       call. (Bug #29402358)

     * When Connector/C++ applications reported connection
       attributes to the server upon establishing a new
       connection, some attributes were taken from the host on
       which Connector/C++ was built, not the host on which the
       application was being run. Now application host
       attributes are sent. (Bug #29394723)

     * Assignments of the following form on CollectionFind
       objects invoked a copy assignment operator, which was
       nonoptimal and prevented potential re-execution of
       statements using prepared statements:
find = find.limit(1);

       (Bug #29390170)

     * Legal constructs of this form failed to compile:
for (string id : res.getGeneratedIds()) { … }

       (Bug #29355100)

     * During build configuration, CMake could report an
       incorrect OpenSSL version. (Bug #29282948)

Enjoy and thanks for the support!

Announcing MySQL Server 8.0.17, 5.7.27 and 5.6.45

MySQL Server 8.0.17, 5.7.27 and 5.6.45, new versions of the popular Open Source Database Management System, have been released. These releases are recommended for use on production systems. For an overview of what’s new, please see:

http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

For information on installing the release on new servers, please see the MySQL installation documentation at:

http://dev.mysql.com/doc/refman/8.0/en/installing.html
http://dev.mysql.com/doc/refman/5.7/en/installing.html
http://dev.mysql.com/doc/refman/5.6/en/installing.html

These […]

MySQL Shell 8.0.17 for MySQL Server 8.0 and 5.7 has been released

$
0
0

Dear MySQL users,

MySQL Shell 8.0.17 is a maintenance release of MySQL Shell 8.0 Series (a
component of the MySQL Server). The MySQL Shell is provided under
Oracle’s dual-license.

MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and
5.7. Please upgrade to MySQL Shell 8.0.17.

MySQL Shell is an interactive JavaScript, Python and SQL console
interface, supporting development and administration for the MySQL
Server. It provides APIs implemented in JavaScript and Python that
enable you to work with MySQL InnoDB cluster and use MySQL as a document
store.

The AdminAPI enables you to work with MySQL InnoDB cluster, providing an
integrated solution for high availability and scalability using InnoDB
based MySQL databases, without requiring advanced MySQL expertise. For
more information about how to configure and work with MySQL InnoDB
cluster see

https://dev.mysql.com/doc/refman/en/mysql-innodb-cluster-userguide.html

The X DevAPI enables you to create “schema-less” JSON document
collections and perform Create, Update, Read, Delete (CRUD) operations
on those collections from your favorite scripting language.  For more
information about how to use MySQL Shell and the MySQL Document Store
support see

https://dev.mysql.com/doc/refman/en/document-store.html

For more information about the X DevAPI see

https://dev.mysql.com/doc/x-devapi-userguide/en/

If you want to write applications that use the CRUD-based X DevAPI
you can also use the latest MySQL Connectors for your language of
choice. For more information about Connectors see

https://dev.mysql.com/doc/index-connectors.html

For more information on the APIs provided with MySQL Shell see

https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/

and

https://dev.mysql.com/doc/dev/mysqlsh-api-python/8.0/

Using MySQL Shell’s SQL mode you can communicate with servers using the
legacy MySQL protocol. Additionally, MySQL Shell provides partial
compatibility with the mysql client by supporting many of the same
command line options.

For full documentation on MySQL Server, MySQL Shell and related topics,
see

https://dev.mysql.com/doc/mysql-shell/8.0/en/

For more information about how to download MySQL Shell 8.0.17, see the
“Generally Available (GA) Releases” tab at

http://dev.mysql.com/downloads/shell/

We welcome and appreciate your feedback and bug reports, see

http://bugs.mysql.com/

Enjoy and thanks for the support!


Changes in MySQL Shell 8.0.17 (2019-07-22, General Availability)


InnoDB Cluster Added or Changed Functionality


     * Important Change: The handling of internal recovery
       accounts created by InnoDB cluster has been changed so that by
       default accounts are always created as
       “mysql_innodb_cluster_server_id@%”, where server_id is instance
       specific. This generated recovery account name is stored in the
       InnoDB cluster metadata, to ensure the correct account is always
       removed if the instance is removed from the cluster.  The
       previous behavior where multiple accounts would be created if
       ipWhitelist was given has been removed. In addition
       Cluster.removeInstance() no longer removes all recovery accounts
       on the instance being removed. It now removes the recovery
       account of the instance being removed on the primary and waits
       for the changes to be replicated before actually removing the
       instance from the group. Similarly, Cluster.rejoinInstance() no
       longer drops any recovery accounts. It only creates the recovery
       account of the instance being rejoined if it no longer exists on
       the primary (which it should in normal circumstances). If the
       recovery account already exists, it is reused by
       Cluster.rejoinInstance().  When a cluster is adopted from an
       existing Group Replication deployment, new recovery accounts are
       created and set for each member. Pre-existing accounts configured
       by the user are left unchanged and not dropped, unless they have
       the “mysql_innodb_cluster_” prefix.  As part of this work, the
       behavior of dba.createCluster() and
       Cluster.rebootClusterFromCompleteOutage() operations has been
       changed. Now, if these operations encounter an instance which has
       super_read_only=ON, it is disabled automatically. Therefore the
       clearReadOnly option has been deprecated for these operations.
       References: See also: Bug #29629121, Bug #29559303.

     * The dba.createCluster() operation has been improved, and
       as part of this work the order in which some steps of the
       operation are executed was changed. Now, the creation of the
       recovery (replication) user and updates to the Metadata are
       performed after bootstrapping the Group Replication group. As
       part of this work, the dba.createCluster() operation has been
       updated to support the interactive option, which is a boolean
       value that controls the wizards provided. When interactive is
       true, prompts and confirmations are displayed by the operation.
       The default value of interactive is equal to useWizards option.

     * The compatibility policies that Group Replication
       implements for member versions in groups now consider the patch
       version of a member’s MySQL Server release. Previously, when
       combining instances running different MySQL versions, only the
       major version was considered.  InnoDB cluster has been updated to
       support cluster operations where these compatibility policies
       have an impact. Using the patch version ensures better
       replication safety for mixed version groups during group
       reconfiguration and upgrade procedures. As part of this work the
       information provided about instances has been extended.  The
       following InnoDB cluster changes have been made to support the
       compatibility policies:

          + The Cluster.addInstance() operation now detects
            incompatibilities due to MySQL versions and in the
            event of an incompatibility aborts with an
            informative error.

          + The Cluster.status() attribute mode now considers
            the value of super_read_only and whether the cluster
            has quorum.

          + The Cluster.status() output now includes the boolean
            attribute autoRejoinRunning, which is displayed per
            instance belonging to the cluster and is true when
            automatic rejoin is running.

          + The extended option has been changed to accept
            integer or Boolean values. This makes the behavior
            similar to the queryMembers option, so that option
            has now been deprecated.
       References: See also: Bug #29557250.

     * InnoDB cluster supports the new MySQL Clone plugin on
       instances running 8.0.17 and later. When an InnoDB cluster is
       configured to use MySQL Clone, instances which join the cluster
       choose whether to use Group Replication’s distributed recovery or
       MySQL Clone to recover the transactions processed by the cluster.
       You can optionally configure this behavior, for example to force
       cloning, which replaces any transactions already processed. You
       can also configure how Cluster.addInstance() behaves, letting
       cloning operations proceed in the background or showing different
       levels of progress in MySQL Shell. This enables you to
       automatically provision instances in the most efficient way. In
       addition, the output of Cluster.status() for members in
       RECOVERING state has been extended to include recovery progress
       information to enable you to easily monitor recovery operations,
       whether they be using MySQL Clone or distributed recovery.

InnoDB Cluster Bugs Fixed


     * Important Change: The sandboxes deployed using the
       AdminAPI did not support the RESTART statement. Now, the wrapper
       scripts call mysqld in a loop so that there is a monitoring
       process which ensures that RESTART is supported. (Bug #29725222)

     * The Cluster.addInstance() operation did not validate if
       the server_id of the joining instance was not unique among all
       cluster members. Although the use of a unique server_id is not
       mandatory for Group Replication to work properly (because all
       internal replication channels use –replicate-same-server-id=ON),
       it was recommended that all instances in a replication stream
       have a unique server_id. Now, this recommendation is a
       requirement for InnoDB cluster, and when you use the
       Cluster.addInstance() operation if the server_id is already used
       by an instance in the cluster then the operation fails with an
       error. (Bug #29809560)

     * InnoDB clusters do not support instances that have binary
       log filters configured, but replication filters were being
       allowed. Now, instances with replication filters are also blocked
       from InnoDB cluster usage. (Bug #29756457) References: See also:
       Bug #28064729, Bug #29361352.

     * On instances running version 8.0.16, the
       Cluster.rejoinInstance() operation failed when one or more
       cluster members were in RECOVERING state, because the Group
       Replication communication protocol could not be obtained. More
       specifically, the group_replication_get_communication_protocol()
       User-Defined function (UDF) failed because it could only be
       executed if all members were ONLINE. Now, in the event of the UDF
       failing when rejoining an instance a warning is displayed and
       AdminAPI proceeds with the execution of the operation.  Starting
       from MySQL 8.0.17, the
       group_replication_get_communication_protocol() UDF no longer
       issues an error if a member is RECOVERING. (Bug #29754915)

     * On Debian-based hosts, hostname resolves to the IP
       address 127.0.1.1 by default, which does not match a real network
       interface. This is not supported by Group Replication, which made
       sandboxes deployed on such hosts unusable unless a manual change
       to the configuration file was made. Now, the sandbox
       configuration files created by MySQL Shell contain the following
       additional line:

       report_host = 127.0.0.1

       In other words, the report_host variable is set to the loopback
       IP address. This ensures that sandbox instances can be used on
       Debian-based hosts without any additional manual changes.
       (Bug #29634828)

     * If the binary logs had been purged from all cluster
       instances, Cluster.checkInstanceState() lacked the ability to
       check the instance’s state, resulting in erroneous output values.
       Now, Cluster.checkInstanceState() validates the value of
       GTID_PURGED on all cluster instances and provides the correct
       output and also an informative message mentioning the possible
       actions to be taken. In addition, Cluster.addInstance() and
       Cluster.rejoinInstance() were not using the checks performed by
       Cluster.checkInstanceState() in order to verify the GTID status
       of the target instance in relation to the cluster.  In the event
       of all cluster instances having their binary logs purged, the
       Cluster.addInstance() command would succeed but the instance
       would never be able to join the cluster as distributed recovery
       failed to execute. Now, both operations make use of the checks
       performed by Cluster.checkInstanceState() and provide informative
       error messages. (Bug #29630591, Bug #29790569)

     * When using the dba.configureLocalInstance() operation in
       interactive mode, if you provided the path to an option file it
       was ignored. (Bug #29554251)

     * Calling cluster.removeInstance() on an instance that did
       not exist, for example due to a typo or because it was already
       removed, resulted in a prompt asking whether the instance should
       be removed anyway, and the operation then failing.
       (Bug #29540529)

     * To use an instance for InnoDB cluster, whether it is to
       create a cluster on it or add it to an existing cluster, requires
       that the instance is not already serving as a slave in
       asynchronous (master-slave) replication. Previously,
       dba.checkInstanceConfiguration() incorrectly reported that a
       target instance which was running as an asynchronous replication
       slave as valid for InnoDB cluster usage. As a consequence,
       attempting to use such instances with operations such as
       dba.createCluster() and Cluster.addInstance() failed without
       informative errors.  Now, dba.checkInstanceConfiguration()
       verifies if the target instance is already configured as a slave
       using asynchronous replication and generates an error if that is
       the case. Similarly, the dba.createCluster(),
       Cluster.addInstance(), and Cluster.rejoinInstance() operations
       detect such instances and block them from InnoDB cluster usage.
       Note that this does not prevent instances which belong to a
       cluster also functioning as the master in asynchronous
       replication. (Bug #29305551)

     * The dba.createCluster() operation was allowed on a target
       instance that already had a populated Metadata schema, when the
       instance was already in that Metadata. The Metadata present on
       the target instance was being overridden, which was unexpected.
       Now, in such a situation the dba.createCluster() throws an
       exception and you can choose to either drop the Metadata schema
       or reboot the cluster. (Bug #29271400)

     * When a sandbox instance of MySQL had been successfully
       started from MySQL Shell using dba.startSandboxInstance(),
       pressing Ctrl+C in the same console window terminated the sandbox
       instance. Sandbox instances are now launched in a new process
       group so that they are not affected by the interrupt.
       (Bug #29270460)

     * During the creation of a cluster using the AdminAPI, some
       internal replication users are created with user names which
       start with “mysql_innodb_cluster”. However, if the MySQL server
       had a global password expiration policy defined, for example if
       default_password_lifetime was set to a value other than zero,
       then the passwords for the internal users expired after reaching
       the specified period. Now, the internal user accounts are created
       by the AdminAPI with password expiration disabled.
       (Bug #28855764)

     * The dba.checkInstanceConfiguration() and
       dba.configureInstance() operations were not checking the validity
       of persisted configurations, which can be different from the
       corresponding system variable value, in particular when changed
       with SET PERSIST_ONLY. This could lead these operations to report
       wrong or inaccurate results, for example reporting that the
       instance configuration is correct when in reality the persisted
       configuration was invalid and wrong settings could be applied
       after a restart of the server, or inaccurately reporting that a
       server update was needed when only a restart was required. (Bug
       #28727505) References: See also: Bug #29765093.

     * When you removed an instance’s metadata from a cluster
       without removing the metadata from the instance itself (for
       example because of wrong authentication or when the instance was
       unreachable) the instance could not be added again to the
       cluster. Now, another validation has been added to
       Cluster.addInstance() to verify if the instance already belongs
       to the cluster’s underlying group but is not in the InnoDB
       cluster metadata, issuing an error if it already belongs to the
       ReplicaSet. Similarly, an error is issued when the default port
       automatically set for the local address is invalid (out of range)
       instead of using a random port. (Bug #28056944)

     * When issuing dba.configureInstance() in interactive mode
       and after selecting option number 2 “Create a new admin account
       for InnoDB cluster with minimal required grants” it was not
       possible to enter a password for the new user.

Functionality Added or Changed


     * MySQL Shell has a new function for SQL query execution
       for X Protocol sessions that works in the same way as the
       function for SQL query execution in classic MySQL protocol
       sessions. The new function, Session.runSql(), can be used in
       MySQL Shell only as an alternative to X Protocol’s Session.sql()
       to create a script that is independent of the protocol used for
       connecting to the MySQL server. Note that Session.runSql() is
       exclusive to MySQL Shell and is not part of the standard X
       DevAPI. As part of this change, the ClassicSession.query function
       for SQL query execution, which is a synonym of
       ClassicSession.runSQL(), is now deprecated.  A new function
       fetchOneObject() is also provided for the classic MySQL protocol
       and X Protocol to return the next result as a scripting object.
       Column names are used as keys in the dictionary (and as object
       attributes if they are valid identifiers), and row values are
       used as attribute values in the dictionary. This function enables
       the query results to be browsed and used in protocol-independent
       scripts. Updates made to the returned object are not persisted on
       the database.

     * MySQL Shell’s new parallel table import utility provides
       rapid data import to a MySQL relational table for large data
       files. The utility analyzes an input data file, divides it into
       chunks, and uploads the chunks to the target MySQL server using
       parallel connections. The utility is capable of completing a
       large data import many times faster than a standard
       single-threaded upload using a LOAD DATA statement.  When you
       invoke the parallel table import utility, you specify the mapping
       between the fields in the data file and the columns in the MySQL
       table. You can set field- and line-handling options as for the
       LOAD DATA command to handle data files in arbitrary formats. The
       default dialect for the utility maps to a file created using a
       SELECT … INTO OUTFILE statement with the default settings for
       that statement. The utility also has preset dialects that map to
       the standard data formats for CSV files (created on DOS or UNIX
       systems), TSV files, and JSON, and you can customize these using
       the field- and line-handling options as necessary.

     * MySQL Shell has a number of new display options for query
       results:

          + The shell.dumpRows() function can format a result
            set returned by a query in any of the output formats
            supported by MySQL Shell, and dump it to the
            console. Note that the result set is consumed by the
            function. This function can be used in MySQL Shell
            to display the results of queries run by scripts to
            the user in the same ways as the interactive SQL
            mode can.

          + The new MySQL Shell output format json/array
            produces raw JSON output wrapped in a JSON array.
            The output format ndjson is added as a synonym for
            json/raw, and both those output formats produce raw
            JSON output delimited by newlines. You can select
            MySQL Shell output formats by starting MySQL Shell
            with the –result-format=[value] command line
            option, or setting the MySQL Shell configuration
            option resultFormat.
       A new function shell.unparseUri() is also added, which converts a
       dictionary of URI components and connection options into a valid
       URI string for connecting to MySQL.

     * You can now extend MySQL Shell with plugins that are
       loaded at startup. MySQL Shell plugins can be written in either
       JavaScript or Python, and the functions they contain are
       available in MySQL Shell in both JavaScript and Python modes. The
       plugins can be used to contain functions that are registered as
       MySQL Shell reports, and functions that are members of extension
       objects that are made available as part of user-defined MySQL
       Shell global objects.  You can create a MySQL Shell plugin by
       storing code in a subfolder of the plugins folder in the MySQL
       Shell user configuration path, with an initialization file that
       MySQL Shell locates and executes at startup. You can structure a
       plugin group, with a collection of related plugins that can share
       common code, by placing the subfolders for multiple plugins in a
       containing folder under the plugins folder.

     * You can now extend the base functionality of MySQL Shell
       by defining extension objects and making them available as part
       of additional MySQL Shell global objects. Extension objects can
       be written in JavaScript or Python.  When you create and register
       an extension object, it is available in MySQL Shell in both
       JavaScript and Python modes. You construct and register extension
       objects using functions provided by the built-in global object
       shell.

     * You can now configure MySQL Shell to send logging
       information to the console, in addition to sending it to the
       application log. The –verbose command-line option and the
       verbose MySQL Shell configuration option activate this function.
       By default, when the option is set, internal error, error,
       warning, and informational messages are sent to the console,
       which is the equivalent to a logging level of 5 for the
       application log. You can add three further levels of debug
       messages, up to the highest level of detail.

     * MySQL Shell’s upgrade checker utility (the
       util.checkForServerUpgrade() operation) carries out two new
       checks. When checking for upgrade from any MySQL 5.7 release to
       any MySQL 8.0 release, the utility identifies partitioned tables
       that use storage engines other than InnoDB or NDB and therefore
       rely on generic partitioning support from the MySQL server, which
       is no longer provided. When checking for upgrade from any release
       to MySQL 8.0.17, the utility identifies circular directory
       references in tablespace data file paths, which are no longer
       permitted.

     * X DevAPI now supports indexing array fields. A single
       index field description can contain a new member name array that
       takes a Boolean value. If set to true, the field is assumed to
       contain arrays of elements of the given type. In addition, the
       set of possible index field data types (used as values of member
       type in index field descriptions) is extended with type CHAR(N),
       where the length N is mandatory.

     * MySQL Shell now supports the ability to send connection
       attributes (key-value pairs that application programs can pass to
       the server at connect time). MySQL Shell defines a default set of
       attributes, which can be disabled or enabled. In addition,
       applications can specify attributes to be passed in addition to
       the default attributes. The default behavior is to send the
       default attribute set.  You specify connection attributes as a
       connection-attributes parameter in a connection string.  The
       connection-attributes parameter value must be empty (the same as
       specifying true), a Boolean value (true or false to enable or
       disable the default attribute set), or a list of zero or more
       key=value specifiers separated by commas (to be sent in addition
       to the default attribute set). Within a list, a missing key value
       evaluates as an empty string. Examples:
       "mysqlx://user@host?connection-attributes"
       "mysqlx://user@host?connection-attributes=true"
       "mysqlx://user@host?connection-attributes=false"
       "mysqlx://user@host?connection-attributes=[attr1=val1,attr2,attr3=]"
       "mysqlx://user@host?connection-attributes=[]"

       You can specify connection attributes for both X Protocol
       connections and MySQL classic protocol connections. The
       default attributes set by MySQL Shell are:
> \sql SELECT ATTR_NAME, ATTR_VALUE FROM performance_schema.session_account_connect_attrs;
+-----------------+------------+
| ATTR_NAME       | ATTR_VALUE |
+-----------------+------------+
| _pid            | 28451      |
| _platform       | x86_64     |
| _os             | Linux      |
| _client_name    | libmysql   |
| _client_version | 8.0.17     |
| program_name    | mysqlsh    |
+-----------------+------------+

       Application-defined attribute names cannot begin with _ because
       such names are reserved for internal attributes.  If connection
       attributes are not specified in a valid way, an error occurs and
       the connection attempt fails.  For general information about
       connection attributes, see Performance Schema Connection
       Attribute Tables
 (https://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

     * MySQL Shell now supports the OVERLAPS and NOT OVERLAPS
       operators for expressions on JSON arrays or objects:
expr OVERLAPS expr
expr NOT OVERLAPS expr

       These operators behave in a similar way to the
       JSON_OVERLAPS() function. Suppose that a collection has
       these contents:
mysql-js> myCollection.add([{ "_id": "1", "list": [1, 4] }, { "_id": "2", "list": [4, 7] }])

       This operation:
mysql-js> var res = myCollection.find("[1, 2, 3] OVERLAPS $.list").fields("_id").execute();
mysql-js> res

       Should return:
{
"_id": "1"
}
1 document in set (0.0046 sec)

       This operation:
mysql-js> var res = myCollection.find("$.list OVERLAPS [4]").fields("_id").execute();
mysql-js> res

       Should return:
{
"_id": "1"
}
{
"_id": "2"
}
2 documents in set (0.0031 sec)

       An error occurs if an application uses either operator
       and the server does not support it.

Bugs Fixed


     * With MySQL Shell in Python mode, using auto-completion on
       a native MySQL Shell object caused informational messages about
       unknown attributes to be written to the application log file.
       (Bug #29907200)

     * The execution time for statements issued in MySQL Shell
       in multiple-line mode has been reduced by reparsing the code only
       after the delimiter is found. (Bug #29864587)

     * Python’s sys.argv array was only initialized when MySQL
       Shell was started in batch mode, and was not initialized when
       MySQL Shell was started in interactive mode. (Bug #29811021)

     * MySQL Shell incorrectly encoded the CAST operation as a
       function call rather than a binary operator, resulting in SQL
       syntax errors. (Bug #29807711)

     * MySQL Shell now supports the unquoting extraction
       operator ->> for JSON. (Bug #29794340)
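
        For illustration, the operator can now be encoded in expression
        strings such as the following (the table and column names are
        placeholders):

mysql-js> db.getTable("mytable").select().where("doc->>'$.name' = 'Fred'").execute()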

     * Handling of empty lines in scripts processed by MySQL
       Shell in batch mode has been improved. (Bug #29771369)

     * On Windows, when a MySQL Shell report was displayed using
       the \watch command, pressing Ctrl+C to interrupt execution of the
       command did not take effect until the end of the refresh interval
       specified with the command.  The interrupt now takes effect
       immediately. Also, any queries executed by reports run using the
       \show or \watch commands are now automatically cancelled when
       Ctrl+C is pressed. (Bug #29707077)

     * In Python mode, native dictionary objects created by
       MySQL Shell did not validate whether they contained a requested
       key, which could result in random values being returned or in a
       SystemError exception being thrown. Key validation has now been
       added, and a KeyError exception is thrown if an invalid key is
       requested. (Bug #29702627)

     * When using MySQL Shell in interactive mode, if raw JSON
       output was being displayed from a source other than a terminal
       (for example a file or a pipe), in some circumstances the prompt
       was shown on the same line as the last line of the output. The
       issue has now been corrected, and a new line is printed before
       the prompt message if the last line of the output did not end
       with one. (Bug #29699640)

     * The MySQL Shell \sql command, which executes a single SQL
       statement while another language is active, now supports the \G
       statement delimiter to print result sets vertically.
       (Bug #29693853)
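
        For example, from JavaScript mode:

mysql-js> \sql SELECT @@version\G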

     * Some inconsistencies in MySQL Shell’s choice of stdout or
       stderr for output have been corrected, so that only expected
       output that is intended to be processed by other programs goes to
       stdout, and all informational messages, warnings, and errors go
       to stderr. (Bug #29688637)

     * When MySQL Shell was started with the option
        --quiet-start=2 to print only error messages, warning messages
       about the operation of the upgrade checker utility
       checkForServerUpgrade() were still printed. (Bug #29620947)

     * In Python mode, native dictionary objects created by
       MySQL Shell did not provide an iterator, so it was not possible
       to iterate over them or use them with the in keyword.
       Functionality to provide Python’s iterator has now been added.
       (Bug #29599261)

     * When a MySQL Shell report was displayed using the \watch
       command, the screen was cleared before the report was rerun. With
       a report that executed a slow query, this resulted in a blank
       screen being displayed for noticeable periods of time. The screen
       is now cleared just before the report generates its first text
       output. (Bug #29593246)

     * MySQL Shell’s upgrade checker utility
       checkForServerUpgrade() returned incorrect error text for each
       removed system variable that was detected in the configuration
       file. (Bug #29508599)

     * MySQL Shell would hang when attempting to handle output
       from a stored procedure that produced results repeatedly from a
        single statement. The issue has now been corrected. (Bug
       #29451154, Bug #94577)

     * You can now specify the command-line option --json to
        activate JSON wrapping when you start MySQL Shell to use the
        upgrade checker utility. In this case, JSON output is returned as
        the default, and you can choose raw JSON format by specifying
        --json=raw. Also, warning and error messages relating to running
       the utility have been removed from the JSON output.
       (Bug #29416162)
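
        A possible invocation, assuming the shell’s command-line
        integration for the utility (the account is a placeholder):

> mysqlsh --json=raw -- util checkForServerUpgrade user@localhost:3306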

     * In SQL mode, when MySQL Shell was configured to use an
       external pager tool to display output, the pager was invoked
       whether or not the query result was valid. For an invalid query,
       this resulted in the pager displaying an empty page, and the
       error message was only visible after quitting the pager. The
       pager tool is now only invoked when a query returns a valid
       result, otherwise the error message is displayed.
       (Bug #29408598, Bug #94393)

     * MySQL Shell did not take the ANSI_QUOTES SQL mode into
       account when parsing quote characters. (Bug #27959072)

     * Prompt theme files for MySQL Shell that were created on
       Windows could not be used on other platforms. The issue, which
       was caused by the parser handling the carriage return character
       incorrectly, has now been fixed. (Bug #26597468)

     * The use of the mysqlsh command-line option --execute (-e)
        followed by --file (-f) when starting MySQL Shell is now
        disallowed, as these options are mutually exclusive. If the
        options are specified in that order, an error is returned. Note
        that if --file is specified first, --execute is treated as an
        argument of the processed file, so no error is returned.
       (Bug #25686324)

     * Syntax errors returned by MySQL Shell’s JavaScript
       expression parser have been improved to provide context and
       clarify the position of the error. (Bug #24916806)

On Behalf of Oracle/MySQL Release Engineering Team,
Nawaz Nazeer Ahamed
