Channel: Planet MySQL



MySQL Shell 8.0.16: Built-in Reports


Readers of my blog know that I like how MySQL Shell allows you to customize it and use its Python and JavaScript support to create custom libraries with tools that help with your daily tasks, and even create auto-refreshing reports. Lefred has even taken this a step further and started to port Innotop to MySQL Shell.

One disadvantage of my example of auto-refreshing reports and the Innotop port is that they both rely on the curses Python module to refresh the screen. While avoiding reinventing the wheel is usually a good thing, and the curses library is both powerful and easy to use, it is not well supported on Microsoft Windows. The good news is that in MySQL 8.0.16 and later, you can also get auto-refreshing reports with a new built-in reporting framework in MySQL Shell. This blog shows how this framework works.

Example of using the \watch command to generate an auto-refreshing report.

Built-In Features

The great thing with the built-in framework is that you can start using it even without coding as it comes with a pre-configured report. The framework consists of three parts:

  • \show: This is the most basic command which runs a report once and displays the result.
  • \watch: This is similar to the watch command on Linux, where a command (report in this case) is repeatedly executed with the screen refreshed to show the new result.
  • shell.registerReport(): This method can be used to register custom reports. The details of custom reports will be saved for a later blog.

The \show command is a good place to start.

The \show Command

You can get more information about how the \show command works and reports in general using the built-in help system:

mysql-js> \h \show
NAME
      \show - Executes the given report with provided options and arguments.

SYNTAX
      \show <report_name> [options] [arguments]

DESCRIPTION
      The report name accepted by the \show command is case-insensitive, '-'
      and '_' characters can be used interchangeably.

      Common options:

      - --help - Display help of the given report.
      - --vertical, -E - For 'list' type reports, display records vertically.

      The output format of \show command depends on the type of report:

      - 'list' - displays records in tabular form (or vertically, if --vertical
        is used),
      - 'report' - displays a YAML text report,
      - 'print' - does not display anything, report is responsible for text
        output.

      If executed without the report name, lists available reports.

      Note: user-defined reports can be registered with shell.registerReport()
      method.

EXAMPLES
      \show
            Lists available reports, both built-in and user-defined.

      \show query show session status like 'Uptime%'
            Executes 'query' report with the provided SQL statement.

      \show query --vertical show session status like 'Uptime%'
            As above, but results are displayed in vertical form.

      \show query --help
            Displays help for the 'query' report.

SEE ALSO

Additional entries were found matching \show

The following topics were found at the SQL Syntax category:

- SHOW

For help on a specific topic use: \? <topic>

e.g.: \? SHOW

This already gives a lot of information, not only about the \show command, but also about reports. Reports can be in one of three formats (more on that in a later blog). If a report uses the list format (which the query report discussed below does), you can get the output in tabular format (the default) or vertically using the --vertical or -E option. Finally, you can get more information about a known report by running it with the --help option, and you can get a list of known reports by running \show without arguments:

mysql-js> \show
Available reports: query.

Let’s take a closer look at the query report.

The Query Report

The query report is a very simple report that takes a query and runs it. You can get its help text by executing \show query --help:

mysql-js> \show query --help
query - Executes the SQL statement given as arguments.

Usage:
       \show query [OPTIONS] [ARGUMENTS]
       \watch query [OPTIONS] [ARGUMENTS]

Options:
  --help                        Display this help and exit.
  --vertical, -E                Display records vertically.

Arguments:
  This report accepts 1-* arguments.

So, to run it, you simply provide the query as an argument – either as is or as a quoted string. Let’s say you want to use the following query for the report:

SELECT conn_id,
       sys.format_statement(current_statement) AS statement,
       format_pico_time(statement_latency) AS latency
  FROM sys.x$session
 ORDER BY statement_latency DESC
 LIMIT 10

This will show the longest running queries, limited to 10 queries. Note that it uses the new format_pico_time() function, which replaces the sys.format_time() function in MySQL 8.0.16. Newlines are not allowed in the query when generating the report, so the command becomes:

mysql-js> \show query SELECT conn_id, sys.format_statement(current_statement) AS statement, format_pico_time(statement_latency) AS latency FROM sys.x$session ORDER BY statement_latency DESC LIMIT 10
+---------+-------------------------------------------------------------------+----------+
| conn_id | statement                                                         | latency  |
+---------+-------------------------------------------------------------------+----------+
| 8       | SELECT conn_id, sys.format_sta ... tatement_latency DESC LIMIT 10 | 33.34 ms |
| 4       | NULL                                                              |   0 ps   |
+---------+-------------------------------------------------------------------+----------+

Granted, this is not particularly useful – you could just have executed the query on its own. However, when combined with the \watch command, it becomes more useful.

Tip

The \show command is most useful for reports that do more than execute a single query, or that execute a complex query where the report functions as a stored query.

The \watch Command

The \watch command supports two additional arguments of its own:

  • --interval=float, -i float: The amount of time in seconds to wait between each execution of the report. Valid values are 0.1 second to 86400 seconds (one day).
  • --nocls: Do not clear the screen between iterations of the report, so each new output is displayed below the previous one. This can, for example, be useful for reports returning a single line of output, as you then keep the history of the report on the screen.

Reports may also add options of their own. The query report, for example, accepts one argument, which is the query to execute. Other reports may accept other arguments.

Otherwise, you start the report the same way as when using \show. For example, to run the query every five seconds:

mysql-js> \watch query --interval=5 SELECT conn_id, sys.format_statement(current_statement) AS statement, format_pico_time(statement_latency) AS latency FROM sys.x$session ORDER BY statement_latency DESC LIMIT 10

That’s it. If you want to stop the report again, press CTRL+C and the report will stop after the next refresh.

Conclusion

The report framework in MySQL Shell 8.0.16 provides a nice starting point for generating reports. The built-in query report may not be the fanciest you can think of, but it is a very easy way to quickly make a query run repeatedly at set intervals. However, the real power of the report framework is that you now have a framework for creating cross-platform custom reports. That will be the topic of a later blog.

MySQL InnoDB Cluster – What’s new in the 8.0.16 release


The MySQL Development Team is very happy to announce a new 8.0 GA Maintenance Release of InnoDB Cluster – 8.0.16!

In addition to important bug fixes, 8.0.16 brings very useful new features!

This blog post will cover MySQL Shell and the AdminAPI; for detailed information about what’s new in MySQL Router, stay tuned for an upcoming blog post!…

Bye Bye to mysql_upgrade, change to skip_grant_tables, and One Year of MySQL 8.0 GA

The MySQL 8.0.16 Release Notes are very interesting and sadly not read enough. One thing that may have escaped attention is that you no longer have to run mysql_upgrade after updating the binaries.

Let me repeat: you no longer have to run mysql_upgrade after updating the binaries. 

From the release notes:
Previously, after installation of a new version of MySQL, the MySQL server automatically upgrades the data dictionary tables at the next startup, after which the DBA is expected to invoke mysql_upgrade manually to upgrade the system tables in the mysql schema, as well as objects in other schemas such as the sys schema and user schemas.

Starting with 8.0.16, the server does the work previously done by mysql_upgrade itself, and mysql_upgrade is deprecated.

I have had to help too many folks who either forgot to run mysql_upgrade after an upgrade or did not know they could not run it properly due to a lack of permissions from their unprivileged user account.

One Year of 8.0

And speaking of MySQL 8.0, it has been out for OVER one year now.  Woot!

Skip_grant_tables

Another change to note concerns the much abused skip_grant_tables option.

Previously, if the grant tables were corrupted, the MySQL server wrote a message to the error log but continued as if the --skip-grant-tables option had been specified. This resulted in the server operating in an unexpected state unless --skip-grant-tables had in fact been specified. Now, the server stops after writing a message to the error log unless started with --skip-grant-tables.








MySQL Shell 8.0.16: User Defined Reports


In my blog yesterday, I wrote about the new reporting framework in MySQL Shell. It is part of the 8.0.16 release. I also noted that it includes the possibility to create your own custom reports and use them with the \show and \watch commands. This blog explores how you can create a report and register it, so that it is automatically available when you start MySQL Shell.

The help text for the example sessions report.

Background

You can write the code that generates the report in either JavaScript or Python. The reports can be used from either language mode – even SQL – irrespective of which language you choose, so go with what you are most comfortable with.

Once you have written your code, you save it in the init.d folder (it does not exist by default) inside the user configuration folder. By default, this is at the following location, depending on whether you are on Microsoft Windows or Linux:

  • Microsoft Windows: %AppData%\MySQL\mysqlsh
  • Linux: ~/.mysqlsh

You can overwrite this path with the MYSQLSH_USER_CONFIG_HOME environment variable.
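For example, on Linux the folder can be prepared like this (a sketch; the directory name under /tmp is just for illustration):

```shell
# Point MySQL Shell at a custom configuration folder and create the
# init.d sub-folder where report files are picked up at startup.
export MYSQLSH_USER_CONFIG_HOME="${TMPDIR:-/tmp}/mysqlsh-demo"
mkdir -p "$MYSQLSH_USER_CONFIG_HOME/init.d"
# A report file would then be copied in, e.g.:
#   cp sessions.py "$MYSQLSH_USER_CONFIG_HOME/init.d/"
```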

You are free to choose any file name, but a good rule is to name the file after the report. However, you must use .py as the file name extension if you wrote the report in Python and .js if you used JavaScript.

At that point, you need to register the report, so you can use it through the reporting framework. You do that using the shell.registerReport() method from inside the same file that contains the report code. It takes four arguments: the name of the report, the report type, the function generating the report (as a function object), and optionally a dictionary with the description. I will not go into the details of these arguments here beyond providing an example of using them. The manual has a quite detailed section on registering your report, including what the arguments are.

One thing that is worth discussing a bit, as it influences how the report content should be formatted, is the report type. It can have one of three values:

  • list: The content of the report is returned as a list of lists constituting the rows of a table. The \show and \watch commands can then show the data either in the familiar tabular format or vertically. The decision of which display format to use can be made when running the report.
  • report: The report content is returned in YAML.
  • print: The report code prints the output directly.

The report and print types are more flexible, but the list type works well with query results.
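As an illustrative sketch, three hypothetical report functions could return their content as follows (the function names and data are made up; only the return shapes follow the descriptions above):

```python
# Hypothetical report functions illustrating the three return shapes.
# The framework calls each report with (session, args, options); the
# arguments are not used in these minimal examples.

def list_report(session, args, options):
    # 'list': the first element is the header row, the rest are data rows
    return {'report': [['conn_id', 'latency'], [8, '33.34 ms'], [4, '0 ps']]}

def yaml_report(session, args, options):
    # 'report': a single string containing the YAML document
    return {'report': ['uptime: 42\nconnections: 7']}

def print_report(session, args, options):
    # 'print': the report prints its output directly; the list stays empty
    print('8 connections, uptime 42s')
    return {'report': []}
```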

This can all feel very abstract. The best way to actually understand how it works is to write an example report to go through the steps.

Example Custom Report

The custom report I will create is based on the one in the reference manual, but modified to allow you to choose what to sort by. The example should help make it clearer how to create your own reports.

The example is quite simple and could be generated using the built-in query report, but it serves as a good starting point for understanding the mechanics of custom reports, and even simple reports like this provide a way to have your report logic saved in one place and easily accessible from within MySQL Shell. The example is written in Python, but a report generating the same result written in JavaScript would look similar (although not identical).

Download the Source

You do not need to copy and paste all the code snippets if you want to try this example. You can download the sessions.zip file below and extract the file with the report source code.

The Report Function

The first thing is to define the report itself. This report is called sessions, so the function with the code is also called sessions. This is not required, but it is best practice:

sort_allowed = {
    'thread': 'thd_id',
    'connection': 'conn_id',
    'user': 'user',
    'db': 'db',
    'latency': 'statement_latency',
    'memory': 'current_memory',
}

def sessions(session, args, options):

First, a dictionary is defined with the keys specifying the allowed values for the --sort option and the values specifying what will actually be used for the ordering. Then the reporting function itself is defined. The function takes three arguments:

  • session: A MySQL Shell session object. This gives you access to all of the session properties and methods when you create the report.
  • args: A list of any additional arguments passed to the report. This is what the query report uses to get the query to execute. This report does not use any such arguments, so anything passed this way is ignored.
  • options: This is a dictionary with named options. This report supports two such named options:
    • --limit or -l, which sets the maximum number of rows to retrieve. The option uses the limit key in the dictionary. The default is not to impose any limit.
    • --sort or -s, which chooses what to sort by. The option uses the sort key in the dictionary. The report supports ordering by thread, connection, user, db, latency, and memory. The default is to sort by latency.

You can choose different names for the arguments if you prefer.

The next thing is to define the query that retrieves the result used in the report. You can do this in several ways. If you want to execute a raw SQL statement, you can use session.sql() (where session is the name of the session object in your argument list). However, it is simpler to build the query using the X DevAPI, as that makes it trivial to customize the query, for example with the limit option and what to order by.

    sys = session.get_schema('sys')
    session_view = sys.get_table('x$session')
    query = session_view.select(
        'thd_id', 'conn_id', 'user', 'db',
        'sys.format_statement(current_statement) AS statement',
        'sys.format_time(statement_latency) AS latency',
        'format_bytes(current_memory) AS memory')

The statement queries the sys.x$session view. This is the non-formatted version of sys.session. The reason for using it is to allow custom sorting of the result set according to the --sort option. The view object is obtained by first using the session.get_schema() method to get a schema object for the sys schema, then the get_table() method of the schema object.

The query can then be defined from the table (view in this case) object by using the select() method. The arguments are the columns that should be included in the result. As you can see, it is possible to manipulate the columns and rename them.

Want to Learn More?

If you want to learn more about the MySQL X DevAPI and how to use the Python version of it, then I have written MySQL Connector/Python Revealed published by Apress. The book is available from Apress (print and DRM free ePub+PDF), Amazon (print and Kindle), Barnes & Noble (print), and others.

The X DevAPI makes it trivial to modify the query with the options the report is invoked with. First, handle the --sort option:

    # Set what to sort the rows by (--sort)
    try:
        order_by = options['sort']
    except SystemError:
        order_by = 'latency'

    if order_by not in sort_allowed:
        raise ValueError(
            'Unknown sort value: "{0}". Supported values: {1}'
            .format(order_by, sort_allowed.keys()))

    if order_by in ('latency', 'memory'):
        direction = 'DESC'
    else:
        direction = 'ASC'
    query.order_by('{0} {1}'.format(
        sort_allowed[order_by], direction))

    # If ordering by latency, ignore those statements with a NULL latency
    # (they are not active)
    if order_by == 'latency':
        query.where('statement_latency IS NOT NULL')

If the --sort option is not provided, a SystemError exception is raised. The first part of the snippet handles this and ensures that the report defaults to ordering by latency. Then it is checked whether the provided value is one of the supported values.

The next step is to decide whether to sort in descending or ascending order. You can of course add another option for this, but here the logic is contained within the report choosing descending when sorting by latency or memory usage; otherwise ascending.

The final step is to tell MySQL what to order by, which is done by invoking the order_by() method. This is where the programmatic approach of the X DevAPI makes it easier to gradually put the query together compared to working directly with the SQL statement.

This report adds a little extra logic to the query: if the result is ordered by latency, only statements that are currently executing (those where the latency IS NOT NULL) are included. This is one of the advantages of creating a custom report rather than writing the query ad hoc, as you can include logic like that.

The --limit option is handled in a similar way:

    # Set the maximum number of rows to retrieve if --limit is set.
    try:
        limit = options['limit']
    except SystemError:
        limit = 0
    if limit > 0:
        query.limit(limit)

There is not much to note about this code snippet. The limit is applied (if the value is greater than 0) by invoking the limit() method. Finally, the query can be executed and the report generated:

    result = query.execute()
    report = [result.get_column_names()]
    for row in result.fetch_all():
        report.append(list(row))

    return {'report': report}

The execute() method is used to tell MySQL that the query can be executed. It returns a result object. The get_column_names() method of the result object can be used to get the column names. Then the rows are added by iterating over them. As you can see, there is only one report list: the first element is a list with the column headers, and the remaining elements are the row values.
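Outside MySQL Shell, the same structure can be sketched in plain Python (the column names and rows here are made up for illustration):

```python
# Build the {'report': [...]} structure the framework expects for a
# 'list' type report: the header row first, then one list per data row.
def build_report(column_names, rows):
    report = [list(column_names)]
    for row in rows:
        report.append(list(row))
    return {'report': report}

result = build_report(('conn_id', 'latency'), [(8, '33.34 ms'), (4, '0 ps')])
# result['report'] -> [['conn_id', 'latency'], [8, '33.34 ms'], [4, '0 ps']]
```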

Tip

The first element in the report list contains the column headers. The remaining elements contain the values.

Finally, the result is returned as a dictionary. That is it for generating the report, but it should also be registered.

Registering the Report

The registration of the report is done in the same file as where the report function was defined. You perform the registration by calling the shell.register_report() method:

shell.register_report(
    'sessions',
    'list',
    sessions,
    {
        'brief': 'Shows which sessions exist.',
        'details': ['You need the SELECT privilege on sys.session view and the '
                    + 'underlying tables and functions used by it.'],
        'options': [
            {
                'name': 'limit',
                'brief': 'The maximum number of rows to return.',
                'shortcut': 'l',
                'type': 'integer'
            },
            {
                'name': 'sort',
                'brief': 'The field to sort by. Allowed values are: {0}'.format(
                    sort_allowed.keys()),
                'shortcut': 's',
                'type': 'string'
            }
        ],
        'argc': '0'
    }
)

The first argument is the name of the report, ‘sessions’, then the report type. The third argument is the function itself. Then comes a dictionary describing the report.

There are two parts to the dictionary: first a description of the report – a short (brief) description followed by more details – then a list of the options that the report supports. The final element is the number of additional arguments the report accepts.

Now, you are ready to test the report.

Testing the Report

First, the report must be installed. If you do not already have the init.d directory, create it under %AppData%\MySQL\mysqlsh if you are on Microsoft Windows or under ~/.mysqlsh if you are on Linux. Then copy sessions.py into the directory.

Now, start MySQL Shell and the report is ready to be used:

mysql-js> \show
Available reports: query, sessions.

mysql-js> \show sessions --help
sessions - Shows which sessions exist.

You need the SELECT privilege on sys.session view and the underlying tables and
functions used by it.

Usage:
       \show sessions [OPTIONS]
       \watch sessions [OPTIONS]

Options:
  --help                        Display this help and exit.
  --vertical, -E                Display records vertically.
  --limit=integer, -l           The maximum number of rows to return.
  --sort=string, -s             The field to sort by. Allowed values are:
                                ['latency', 'thread', 'db', 'connection',
                                'user', 'memory']

mysql-js> \show sessions
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+
| thd_id | conn_id | user          | db   | statement                                                         | latency  | memory     |
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+
| 65     | 28      | mysqlx/worker | NULL | SELECT `thd_id`,`conn_id`,`use ... ER BY `statement_latency` DESC | 38.09 ms | 965.58 KiB |
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+

mysql-js> \show sessions -E
*************************** 1. row ***************************
   thd_id: 65
  conn_id: 28
     user: mysqlx/worker
       db: NULL
statement: SELECT `thd_id`,`conn_id`,`use ... ER BY `statement_latency` DESC
  latency: 35.49 ms
   memory: 968.88 KiB

Notice how the help text has been generated from the information that was provided when the report was registered, and how the -E option can be used to turn the tabular output format into the vertical format. Note also that the report is invoked from JavaScript mode and still works even though it is written in Python – MySQL Shell automatically handles that for you and ensures the report is executed using the correct interpreter.

It is left as an exercise for the reader to add the --sort and --limit options and to use the report with the \watch command.

Note

On Microsoft Windows, it sometimes happens that when an option is not explicitly passed to the report, the options dictionary is still set with a value. You can avoid that by providing the options explicitly.
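One defensive approach (a sketch; the safe_limit helper is hypothetical and uses a plain Python dictionary, whereas the options dictionary inside MySQL Shell may behave slightly differently) is to validate the option value before using it:

```python
# Hypothetical helper: coerce the 'limit' option to a positive integer,
# falling back to "no limit" (None) when the value is missing or unusable.
def safe_limit(options):
    try:
        limit = int(options.get('limit') or 0)
    except (TypeError, ValueError):
        limit = 0
    return limit if limit > 0 else None

safe_limit({})             # no limit given
safe_limit({'limit': 10})  # a usable limit
```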

One related feature that is worth covering before finishing is the shell.reports object.

The shell.reports Object

So far the \show and \watch commands have been used to invoke the reports, but there is a lower-level way to do it – using the shell.reports object. It is also a very useful way to explore which reports are available.

Let’s start with the latter – exploring reports – as that also shows how the shell.reports object works. As usual in MySQL Shell, it has built-in help:

mysql-py> shell.reports.help()
NAME
      reports - Gives access to built-in and user-defined reports.

SYNTAX
      shell.reports

DESCRIPTION
      The 'reports' object provides access to built-in reports.

      All user-defined reports registered using the shell.register_report()
      method are also available here.

      The reports are provided as methods of this object, with names
      corresponding to the names of the available reports.

      All methods have the same signature: Dict report(Session session, List
      argv, Dict options), where:

      - session - Session object used by the report to obtain the data.
      - argv (optional) - Array of strings representing additional arguments.
      - options (optional) - Dictionary with values for various report-specific
        options.

      Each report returns a dictionary with the following keys:

      - report (required) - List of JSON objects containing the report. The
        number and types of items in this list depend on type of the report.

      For more information on a report use: shell.reports.help('report_name').

FUNCTIONS
      help([member])
            Provides help about this object and it's members

      query(session, argv)
            Executes the SQL statement given as arguments.

      sessions(session[, argv][, options])
            Shows which sessions exist.

This includes a list of the available functions – and notice that the two existing reports, query and sessions, are among them. You can also use the help() function with the report name as a string argument to get the report-specific help.

If you invoke one of the report functions, you execute the report. This is much like invoking the report using the \show command, but the raw report result is returned. Let’s try it for both the query and sessions reports:

mysql-py> shell.reports.query(shell.get_session(), ["SELECT NOW()"])
{
    "report": [
        [
            "NOW()"
        ],
        [
            "2019-04-27 15:53:21"
        ]
    ]
}

mysql-py> shell.reports.sessions(shell.get_session(), [], {'limit': 10, 'sort': 'latency'})
{
    "report": [
        [
            "thd_id",
            "conn_id",
            "user",
            "db",
            "statement",
            "latency",
            "memory"
        ],
        [
            66,
            29,
            "mysqlx/worker",
            null,
            "SELECT `thd_id`,`conn_id`,`use ... ment_latency` DESC LIMIT 0, 10",
            "39.76 ms",
            "886.99 KiB"
        ]
    ]
}

It is not often this is needed, but in case you want to manually manipulate the output, it can be useful.
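For example, the raw dictionary can be fed straight into Python’s csv module (a sketch with made-up data):

```python
import csv
import io

# Convert a raw report dictionary into CSV text; the header row is the
# first element of the 'report' list, so writerows() handles everything.
def report_to_csv(raw):
    buf = io.StringIO()
    csv.writer(buf).writerows(raw['report'])
    return buf.getvalue()

raw = {'report': [['conn_id', 'latency'], [8, '33.34 ms']]}
csv_text = report_to_csv(raw)
# csv_text -> 'conn_id,latency\r\n8,33.34 ms\r\n'
```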

Tip

If you use JavaScript mode, then use shell.getSession() instead of shell.get_session() to get a session object to pass to the report.

That is all. Now over to you to create your own reports.

MySQL Shell 8.0.16 – What’s New?


The MySQL Development team is proud to announce a new version of the MySQL Shell which includes the following features:

  • Addition of a reporting framework:
    • API to register custom reports.
    • Shell command to display a specific report (\show).
    • Shell command to monitor a specific report (\watch).

Fun with Bugs #84 - On Some Public Bugs Fixed in MySQL 5.7.26

Oracle released minor MySQL Server versions in all supported branches on April 25, 2019. MySQL 5.7.26 is just one of them, but recently I prefer to ignore MySQL 8 releases (after checking that I can build them from source code at least somewhere, even if it takes 18G+ of disk space, and that they work in basic tests), as there are more chances for MySQL 5.7 bug fixes to affect me (and customers I care about) directly.

So, in this yet another boring blog post (that would never be a reason for any award) I plan to concentrate on bugs reported in the public MySQL bugs database and fixed in MySQL 5.7.26. As usual, I name bug reporters explicitly and give links to their remaining currently active bug reports, if any. This time the list is short enough that I do not even split it by categories:
  • Bug #93164 - "Memory leak in innochecksum utility detected by ASan". This bug was reported by Yura Sorokin from Percona, who also had contributed a patch (for some reason this is not mentioned in the official release notes).
  • Bug #90402 - "innodb async io error handling in io_event". Wei Zhao found yet another case where the wrong data type was used in the code and an I/O error was not handled, which could even lead to crashes. He had submitted a patch.
  • Bug #89126 - "create table panic on innobase_parse_hint_from_comment". Nice bug report with a patch from Yan Huang. Note also detailed analysis and test case provided by Marcelo Altmann in the comment. It's a great example of cooperation of all sides: Oracle MySQL developers, bugs verification team, bug reporter and other community users.
  • Bug #92241 - "alter partitioned table add auto_increment diff result depending on algorithm". Yet another great finding from Shane Bester himself!
  • Bug #94247 - "Contribution: Fix fractional timeout values used with WAIT_FOR_EXECUTED_GTI ...". This bug report was created based on pull request from Dirkjan Bussink, who had suggested a patch to fix the problem. Note the comment from Shlomi Noach that refers to Bug #94311 (still private).
  • Bug #85158 - "heartbeats/fakerotate cause a forced sync_master_info". Note MTR test case contributed by Sveta Smirnova and code analysis in a comment from Vlad Lesin (both from Percona at that time) in this bug report from Trey Raymond.
  • Bug #92690 - "Group Replication split brain with faulty network". I do not care about group replication (I have enough Galera in my life instead), but I could not skip this report by Przemyslaw Malkowski from Percona, with detailed steps on how to reproduce. Note comments from other community members. Yet another case to show that good bug reports attract community feedback and are fixed relatively fast.
  • Bug #93750 - "Escaping of column names for GRANT statements does not persist in binary logs". Clear and simple bug report from Andrii Ustymenko. I wonder why it was not found by internal testing/QA. A quick test shows that MariaDB 10.3.7, for example, is not affected:
    c:\Program Files\MariaDB 10.3\bin>mysql -uroot -proot -P3316 test
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 9
    Server version: 10.3.7-MariaDB-log mariadb.org binary distribution

    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [test]> create table t_from(id int primary key, `from` int, c1 int);
    Query OK, 0 rows affected (0.582 sec)

    MariaDB [test]> create user 'user01'@'%' identified by 'user01';
    Query OK, 0 rows affected (0.003 sec)

    MariaDB [test]> grant select (`id`,`from`) on `test`.`t_from` to 'user01'@'%';
    Query OK, 0 rows affected (0.054 sec)

    MariaDB [test]> show master status;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | pc-PC-bin.000007 |      852 |              |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.030 sec)

    MariaDB [test]> show binlog events in 'pc-PC-bin.000007';
    +------------------+-----+-------------------+-----------+-------------+---------------------------------------------------------------------------+
    | Log_name         | Pos | Event_type        | Server_id | End_log_pos | Info                                                                      |
    +------------------+-----+-------------------+-----------+-------------+---------------------------------------------------------------------------+
    | pc-PC-bin.000007 |   4 | Format_desc       |         1 |         256 | Server ver: 10.3.7-MariaDB-log, Binlog ver: 4                             |
    | pc-PC-bin.000007 | 256 | Gtid_list         |         1 |         299 | [0-1-42]                                                                  |
    | pc-PC-bin.000007 | 299 | Binlog_checkpoint |         1 |         342 | pc-PC-bin.000007                                                          |
    ...
    | pc-PC-bin.000007 | 708 | Query             |         1 |         852 | use `test`; grant select (`id`,`from`) on `test`.`t_from` to 'user01'@'%' |
    +------------------+-----+-------------------+-----------+-------------+---------------------------------------------------------------------------+
    9 rows in set (0.123 sec)
  • Bug #73936 - "If the storage engine supports RBR, unsafe SQL statements end up in binlog". Nice bug report with an MTR test case by Santosh Praneeth Banda. Note that the last comment about the fix mentions only MySQL 8.0.15, not a single word about the fix in MySQL 5.7.26 (or anything about MySQL 5.6.x, while the bug was reported for 5.6).
  • Bug #93341 - "Check for tirpc needs improvement". The need for improvement of the CMake check was noted by Terje Røsten.
  • Bug #91803 - "mysqladmin shutdown does not wait for MySQL to shut down anymore". This regression bug (without a "regression" tag) was reported by Christian Roser.
  • Bug #91541 - ""Flush status" statement adds twice to global values". Yura Sorokin contributed a detailed analysis, an MTR test case and a patch to this bug reported by Carlos Tutte.
  • Bug #90351 - "GLOBAL STATUS variables drift after rollback". Zsolt Parragi contributed a patch to this bug found and reported by Iwo P. For some reason this contribution is not highlighted in the release notes.
  • Bug #81441 - "Warning about localhost when using skip-name-resolve". One of many bug reports from Monty Solomon in which he (and other community members like Jean-François Gagné) had to spend a lot of effort and fight with a member of the bugs verification team to get the bug accepted as a real code bug and then get it fixed in all versions affected.
  • Bug #90902 - "Select Query With Complex Joins Leaks File Handles". This bug was reported by James Wilson. I still wonder if MySQL 5.6 was affected. The bug report says nothing about this (while I expect all supported GA versions to be checked when a bug is verified, and the results of such checks clearly documented).
The future looks bright for MySQL 5.7
To summarize:
  1. Consider upgrading to 5.7.26 if you use complex joins, partitioned tables with auto_increment columns, or rely on InnoDB or replication a lot.
  2. It's good to see crashing bugs that do not end up as hidden/"security", maybe because they are reported with patches...
  3. It's good to see examples of cooperation of several community users contributing to the same bug report!
  4. Percona engineers contribute a lot to MySQL, both in the form of bug reports and patches, and by helping other community users to make their point and get their bugs fixed fast.
  5. There are still things to improve in the way Oracle engineers handle bug verification, IMHO.
  6. It's also a bit strange to see only one optimizer-related fix in this release. It means that either the MySQL optimizer is already near perfect and there are no bugs to fix (check yourself, but I see 123 bugs here), or that nobody cares that much about the MySQL optimizer in 5.7 these days.
  7. It seems that for some bugs fixed in a previous MySQL 8.0.x minor release there are no extra checks/updates in the public comments about the versions with the fix when it is released in MySQL 5.6 or 5.7.

React & Axios JWT Authentication Tutorial with PHP & MySQL Server: Signup, Login and Logout

In this tutorial, we'll learn how to use React to build login, signup and logout functionality, and Axios to send API calls and handle JWT tokens. For building the PHP application that implements the JWT-protected REST API, check out PHP JWT Authentication Tutorial. We'll be using the application built in that tutorial as the backend for the React application we'll be building here.

Prerequisites

You will need the following prerequisites to follow this tutorial step by step:

  • Knowledge of JavaScript,
  • Knowledge of React,
  • Knowledge of PHP,
  • PHP, Composer and MySQL installed on your development machine,
  • Node.js and NPM installed on your system.

That's it. Let's get started!

Cloning the PHP JWT App

Our example application implements JWT authentication. It exposes three endpoints:

  • api/login.php
  • api/register.php
  • api/protected.php

How to Run the PHP App

First clone the GitHub repository:

$ git clone https://github.com/techiediaries/php-jwt-authentication-example.git

Next, navigate inside the project's folder and run the following commands to install the PHP dependencies and start the development server:

$ cd php-jwt-authentication-example
$ composer install
$ php -S 127.0.0.1:8000

Enabling CORS

Since we'll be using two apps - the React/Webpack development server and the PHP server - which run on two different ports on our local machine (considered two different domains), we'll need to enable CORS in our PHP app. Open the api/register.php, api/login.php and api/protected.php files and add the following CORS header to allow any domain to send HTTP requests to these endpoints:

<?php header("Access-Control-Allow-Origin: *"); ?>

Installing create-react-app

Let's start by installing the create-react-app tool which will be used to create the React project. Open a new terminal and run the following command:

$ npm install -g create-react-app

create-react-app is the official tool created by the React team to quickly start developing React apps.

Creating a React Project

Let's now generate our React project. In your terminal, run the following command:

$ create-react-app php-react-jwt-app

This will generate a React project with a minimal directory structure.

Installing Axios & Consuming the JWT REST API

We'll be using Axios for sending HTTP requests to our PHP JWT REST API, so we'll need to install it first. Go back to your terminal and run the following commands to install Axios from npm:

$ cd php-react-jwt-app
$ npm install axios --save

As of this writing, this will install axios v0.18.0.

Next, let's create a component that encapsulates the code for communicating with the JWT REST API. In the src/ folder, create a utils folder, then create a JWTAuth.js file inside of it:

$ mkdir utils
$ touch utils/JWTAuth.js

Open the src/utils/JWTAuth.js file and add the following code:

import axios from 'axios';
const SERVER_URL = "http://127.0.0.1:8000";

We import axios and define the SERVER_URL variable that contains the URL of the JWT authentication server.

Next, define the login() method which will be used to log users in:

const login = async (data) => {
  const LOGIN_ENDPOINT = `${SERVER_URL}/api/login.php`;

  try {
    let response = await axios.post(LOGIN_ENDPOINT, data);

    if(response.status === 200 && response.data.jwt && response.data.expireAt){
      let jwt = response.data.jwt;
      let expire_at = response.data.expireAt;

      localStorage.setItem("access_token", jwt);
      localStorage.setItem("expire_at", expire_at);
    }
  } catch(e){
    console.log(e);
  }
}

First, we construct the endpoint by concatenating the server URL with the /api/login.php path. Next, we send a POST request to the login endpoint with the data passed as a parameter to the login() method. Finally, if the response is successful, we store the JWT token and expiration date in local storage.

Note: Since Axios returns a Promise, we use the async/await syntax to make our code look synchronous.

Next, define the register() method which creates a new user in the database:

const register = async (data) => {
  const SIGNUP_ENDPOINT = `${SERVER_URL}/api/register.php`;

  try {
    let response = await axios({
      method: 'post',
      responseType: 'json',
      url: SIGNUP_ENDPOINT,
      data: data
    });
  } catch(e){
    console.log(e);
  }
}

We first construct the endpoint by concatenating the server URL with the /api/register.php path. Next, we use Axios to send a POST request to the register endpoint with the data passed as a parameter to the method.

Note: We use the async/await syntax to avoid working with Promises.

Finally, let's define the logout() method which simply removes the JWT access token and expiration date from local storage:

const logout = () => {
  localStorage.removeItem("access_token");
  localStorage.removeItem("expire_at");
}

We use the removeItem() method of localStorage to remove the access_token and expire_at keys.

Now, we need to export these methods so they can be imported from the other React components:

export { login, register, logout }

Calling the JWTAuth Methods in the React Component

Let's now make sure our login system works as expected. Open the src/App.js file and import the login(), register() and logout() methods from the src/utils/JWTAuth.js file:

import { login, register, logout } from "./utils/JWTAuth.js";

Next, define a login() method in the App component as follows:

class App extends Component {
  async login(){
    let info = {
      email: "kaima.abbes@email.com",
      password: "123456789"
    };
    await login(info);
  }

This method simply calls the login() method of JWTAuth.js with hardcoded user information to log the user in.

Next, define the register() method as follows:

  async register(){
    let info = {
      first_name: "kaima",
      last_name: "Abbes",
      email: "kaima.abbes@email.com",
      password: "123456789"
    };
    await register(info);
  }

Note: We don't need to wrap the logout() method since we don't have to pass any parameters to the method.

Finally, update the render() method to create the buttons for login, register and logout:

  render() {
    return (
      <div className="container">
        <div className="row">
          <h1>React JWT Authentication Example</h1>
          <button className="btn btn-primary" onClick = { this.register }>Sign up</button>
          <button className="btn btn-primary" onClick = { this.login }>Log in</button>
          <button className="btn btn-primary" onClick = { logout }>Log out</button>
        </div>
      </div>
    );
  }

You should be able to use these buttons to test the register(), login() and logout() methods.

Note: We used Bootstrap for styling the UI.

In the next tutorial, we'll build the actual login and register UIs with forms to get the user's information and submit them to the PHP JWT authentication server.

Conclusion

In this tutorial, we've seen how to implement JWT authentication in React with Axios, PHP and MySQL.
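As an aside, once the token and expiry are in local storage, components often need to decide whether the user is still logged in. Here is a minimal sketch of such a check; it assumes expire_at was stored as a Unix timestamp in seconds and that the storage object exposes getItem() like localStorage does - neither detail is guaranteed by the backend above, so adjust as needed:

```javascript
// Sketch: check whether a stored JWT is still valid.
// Assumes "expire_at" was stored as a Unix timestamp in seconds
// (adjust if your backend returns milliseconds or a date string).
const isLoggedIn = (storage, nowSeconds = Date.now() / 1000) => {
  const expireAt = Number(storage.getItem("expire_at"));
  return Number.isFinite(expireAt) && expireAt > nowSeconds;
};
```

For example, calling isLoggedIn(localStorage) inside render() could toggle which of the three buttons are shown.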

SQL Update Query Example | SQL Update Statement Tutorial

SQL Update Query Example

SQL Update Query Example | SQL Update Statement Tutorial is today’s topic. The SQL UPDATE statement is used to modify the existing records in a table. You need to be very careful when updating records in a table. The SQL WHERE clause in the UPDATE statement specifies which record(s) should be updated. If you omit the WHERE clause completely, then all records in the table will be updated!

SQL Update Query Example

You need to specify which record needs to be updated via the WHERE clause, otherwise all the rows would be affected. We can update a single column as well as multiple columns using the UPDATE statement, as per our requirement.

SQL Update Syntax

The syntax is the following.

UPDATE table_name
SET column1 = value1, column2 = value2, ...
WHERE condition;

In the above query, the SET clause is used to set the new values for particular columns, and the WHERE clause condition is used to select the rows whose columns need to be updated.

See the following example.

I have the following table called Apps. 

select * from Apps

The output is the following.

So, we have a table with six rows. Now, we will write an update query to modify one or multiple rows.

If you do not know how to create a table, then check out SQL Create Table and SQL Insert Query tutorials.

Update one row in SQL

Now, let’s write an update query that can affect only one row.

UPDATE Apps
SET CreatorName = 'Tony Stark', AppName= 'IronSpider'
WHERE AppID = 1

Here, we are updating the row whose AppID = 1. We are updating the CreatorName and AppName.

It will not return anything. So, if we need to check the output, write the following query.

select * from Apps

See the below output.

See, we have updated the row whose AppID = 1.

Update multiple rows in SQL

Let’s write a query where we update multiple rows.

UPDATE Apps
SET CreatorName = 'CaptainAmerica', AppName= 'Shield'
WHERE AppID IN('4', '5');

The above query will update the rows whose AppIDs are 4 and 5.

Now, check the output using the below query.

select * from Apps

You can see that AppID 4 and 5 have updated values.

Update All records in SQL

We can write an SQL Update query in which we update all the rows if we do not specify the WHERE condition. See the below query.

UPDATE Apps
SET CreatorName = 'Thor', AppName= 'Asguard'

It will update the CreatorName and AppName for all six rows. Verify the output with the following query.

select * from Apps

This concludes the SQL Update Query Example | SQL Update Statement Tutorial.

The post SQL Update Query Example | SQL Update Statement Tutorial appeared first on AppDividend.

MySQL 8.0.16 Replication Enhancements


MySQL 8.0.16 was released last Thursday. In it, you can find some new replication features. Here is a quick summary. Follow-up blog posts will provide details about these features.

  • Large Messages Fragmentation Layer for Group Replication. Tiago Vale’s work introduces message fragmentation to the Group Communication Framework.

Rotating binary log master key online


Starting with version 8.0.16, the MySQL server introduces a new command that allows for binary log master key rotation, online!

When binary log encryption is enabled, the binary log master key can be rotated online by using the following new command:

ALTER INSTANCE ROTATE BINLOG MASTER KEY;

This new command can be used to rotate the binary log master key periodically or whenever you suspect that a key might have been compromised.…

2019 MySQL Community Contributor Award Program

Building the Vitess community has been our pride and joy. Being able to contribute to the MySQL community, even more so. Vitess’ Sugu Sougoumarane has been nominated by the MySQL group for Oracle’s 2019 MySQL Community Contributor Award Program, where he joins folks like Shlomi Noach, Peter Zaitsev, Gabriela D’Ávila Ferrara, Giuseppe Maxia and many other active MySQL community members recognised for their contributions to MySQL. Criteria for the nominations included: most active code contributor, most active bug reporter, most active MySQL blogger, people who play an active role in translating or documenting MySQL articles, people who provide feedback on DMR releases, Labs releases, or change proposals, as well as anyone in the community who did really useful work that ought to be thanked publicly.

MySQL 8.0 GIS Units of Measure - Meter, foot, Clarke's yard, or Indian Foot

The ST_DISTANCE function has been upgraded in MySQL 8.0.16 to allow you to specify the unit of measure between two locations. Previously you had to convert from meters to what you desired, but now you can use the INFORMATION_SCHEMA.ST_UNITS_OF_MEASURE table to help you get many of the more popular measurements (foot, yard, statute mile, nautical mile, fathom) and some that are new to me (chain, link, various feet). However, some measures are omitted (furlong, smoot) that may have some relevance in your life.

select * from information_schema.ST_UNITS_OF_MEASURE;
Fetching table and column names from `mysql` for auto-completion... Press ^C to stop.
+--------------------------------------+-----------+---------------------+-------------+
| UNIT_NAME                            | UNIT_TYPE | CONVERSION_FACTOR   | DESCRIPTION |
+--------------------------------------+-----------+---------------------+-------------+
| metre                                | LINEAR    |                   1 |             |
| millimetre                           | LINEAR    |               0.001 |             |
| centimetre                           | LINEAR    |                0.01 |             |
| German legal metre                   | LINEAR    |        1.0000135965 |             |
| foot                                 | LINEAR    |              0.3048 |             |
| US survey foot                       | LINEAR    | 0.30480060960121924 |             |
| Clarke's yard                        | LINEAR    |        0.9143917962 |             |
| Clarke's foot                        | LINEAR    |        0.3047972654 |             |
| British link (Sears 1922 truncated)  | LINEAR    |          0.20116756 |             |
| nautical mile                        | LINEAR    |                1852 |             |
| fathom                               | LINEAR    |              1.8288 |             |
| US survey chain                      | LINEAR    |   20.11684023368047 |             |
| US survey link                       | LINEAR    |  0.2011684023368047 |             |
| US survey mile                       | LINEAR    |  1609.3472186944375 |             |
| Indian yard                          | LINEAR    |  0.9143985307444408 |             |
| kilometre                            | LINEAR    |                1000 |             |
| Clarke's chain                       | LINEAR    |       20.1166195164 |             |
| Clarke's link                        | LINEAR    |      0.201166195164 |             |
| British yard (Benoit 1895 A)         | LINEAR    |           0.9143992 |             |
| British yard (Sears 1922)            | LINEAR    |  0.9143984146160288 |             |
| British foot (Sears 1922)            | LINEAR    |  0.3047994715386762 |             |
| Gold Coast foot                      | LINEAR    |  0.3047997101815088 |             |
| British chain (Sears 1922)           | LINEAR    |  20.116765121552632 |             |
| yard                                 | LINEAR    |              0.9144 |             |
| British link (Sears 1922)            | LINEAR    |  0.2011676512155263 |             |
| British foot (Benoit 1895 A)         | LINEAR    |  0.3047997333333333 |             |
| Indian foot (1962)                   | LINEAR    |           0.3047996 |             |
| British chain (Benoit 1895 A)        | LINEAR    |          20.1167824 |             |
| chain                                | LINEAR    |             20.1168 |             |
| British link (Benoit 1895 A)         | LINEAR    |         0.201167824 |             |
| British yard (Benoit 1895 B)         | LINEAR    |  0.9143992042898124 |             |
| British foot (Benoit 1895 B)         | LINEAR    | 0.30479973476327077 |             |
| British chain (Benoit 1895 B)        | LINEAR    |  20.116782494375872 |             |
| British link (Benoit 1895 B)         | LINEAR    |  0.2011678249437587 |             |
| British foot (1865)                  | LINEAR    | 0.30480083333333335 |             |
| Indian foot                          | LINEAR    | 0.30479951024814694 |             |
| Indian foot (1937)                   | LINEAR    |          0.30479841 |             |
| Indian foot (1975)                   | LINEAR    |           0.3047995 |             |
| British foot (1936)                  | LINEAR    |        0.3048007491 |             |
| Indian yard (1937)                   | LINEAR    |          0.91439523 |             |
| Indian yard (1962)                   | LINEAR    |           0.9143988 |             |
| Indian yard (1975)                   | LINEAR    |           0.9143985 |             |
| Statute mile                         | LINEAR    |            1609.344 |             |
| link                                 | LINEAR    |            0.201168 |             |
| British yard (Sears 1922 truncated)  | LINEAR    |            0.914398 |             |
| British foot (Sears 1922 truncated)  | LINEAR    | 0.30479933333333337 |             |
| British chain (Sears 1922 truncated) | LINEAR    |           20.116756 |             |
+--------------------------------------+-----------+---------------------+-------------+
47 rows in set (0.0019 sec)
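The CONVERSION_FACTOR column is expressed in metres per unit, so the metre value that ST_DISTANCE returns by default can be converted to any listed unit with a single division. A quick sketch of that arithmetic in Python (the factors are copied from the rows above; the function name is mine, not part of MySQL):

```python
# Convert a distance in metres to another linear unit, using the
# CONVERSION_FACTOR (metres per unit) values from
# INFORMATION_SCHEMA.ST_UNITS_OF_MEASURE.
FACTORS = {
    "foot": 0.3048,
    "nautical mile": 1852.0,
    "Statute mile": 1609.344,
}

def metres_to(unit, metres):
    # e.g. 3704 m / 1852 (m per nautical mile) = 2 nautical miles
    return metres / FACTORS[unit]
```

This is, in effect, the conversion you get when passing a unit name as the third argument to ST_DISTANCE.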



MySQL Connector/ODBC 5.3.13 has been released


Dear MySQL users,

MySQL Connector/ODBC 5.3.13, a new version of the ODBC driver for the
MySQL database management system, has been released.

The available downloads include both a Unicode driver and an ANSI
driver based on the same modern codebase. Please select the driver
type you need based on the type of your application – Unicode or ANSI.
Server-side prepared statements are enabled by default. It is suitable
for use with any MySQL version from 5.6.

This is the sixth release of the MySQL ODBC driver conforming to the
ODBC 3.8 specification. It contains implementations of key 3.8
features, including self-identification as an ODBC 3.8 driver,
streaming of output parameters (supported for binary types only), and
support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of
platforms from our download pages at

http://dev.mysql.com/downloads/connector/odbc/5.3.html

For information on installing, please see the documentation at

http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Changes in MySQL Connector/ODBC 5.3.13 (2019-04-29, General Availability)

Bugs Fixed

* Connector/ODBC 5.3 is now built with MySQL client library
5.7.26, which includes OpenSSL 1.0.2R. Issues fixed in
the new OpenSSL version are described at
http://www.openssl.org/news/vulnerabilities.html. (Bug
#29489006)

* An exception was emitted when fetching the contents of
BLOB/TEXT records after executing a statement as a
server-side prepared statement with a bound parameter.
The workaround is not using parameters or specifying
NO_SSPS=1 in the connection string; this allows the
driver to fetch the data. (Bug #29282638, Bug #29512548,
Bug #28790708, Bug #93895, Bug #94545, Bug #92078)

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL 8.0.16: mysql_upgrade is going away


As of 8.0.16, the mysql_upgrade binary is deprecated, but its functionality is moved into the server. Let’s call this functionality the “server upgrade”. This is added alongside the Data Dictionary upgrade (DD Upgrade), which is a process to update the data dictionary table definitions.


What configuration settings did I change on my MySQL Server ?


This post is just a reminder on how to find which settings have been set on MySQL Server.

If you have modified some settings from a configuration file or during runtime (persisted or not), these two queries will show you what the values are and how they were set. Even if the value is the same as the default (COMPILED) in MySQL, if you have set it somewhere you will be able to see where you did it.

Global Variables

First, let’s list all the GLOBAL variables that we have configured in our server:

SELECT t1.VARIABLE_NAME, VARIABLE_VALUE, VARIABLE_SOURCE
FROM performance_schema.variables_info t1
JOIN performance_schema.global_variables t2
ON t2.VARIABLE_NAME=t1.VARIABLE_NAME
WHERE t1.VARIABLE_SOURCE != 'COMPILED';

This is an example of the output:

Session Variables

And now the same query for the session variables:

SELECT t1.VARIABLE_NAME, VARIABLE_VALUE, VARIABLE_SOURCE
FROM performance_schema.variables_info t1
JOIN performance_schema.session_variables t2
ON t2.VARIABLE_NAME=t1.VARIABLE_NAME
WHERE t1.VARIABLE_SOURCE = 'DYNAMIC';

And an example:

You can also find some more info in this previous post. If you are interested in the default values of different MySQL versions, I also invite you to visit Tomita Mashiro‘s online tool: https://tmtm.github.io/mysql-params/

In case you submit bugs to MySQL, I invite you to also add the output of these two queries.

Shinguz: FromDual Ops Center for MariaDB and MySQL 0.9 has been released


FromDual has the pleasure to announce the release of the new version 0.9 of its popular FromDual Ops Center for MariaDB and MySQL focmm.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and System Administrators manage MariaDB and MySQL database farms. Ops Center makes DBAs' and Admins' lives easier!

The main task of Ops Center is to support you in your daily MySQL and MariaDB operation tasks. You can find more information about FromDual Ops Center here.

Download

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from here. How to install and use focmm is documented in the Ops Center User Guide.

In the inconceivable case that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Installation of Ops Center 0.9

You can find a complete guide on how to install FromDual Ops Center in the Ops Center User Guide.

Upgrade from 0.3 to 0.9

Upgrade from 0.3 to 0.9 should happen automatically. Please do a backup of your Ops Center instance before you upgrade! Please also check Upgrading.

Changes in Ops Center 0.9

Everything has changed!


Flashback Recovery in MariaDB/MySQL/Percona


In this blog, we will see how to do flashback recovery, i.e. roll back data, in MariaDB, MySQL and Percona.

As the saying goes, “All humans make mistakes”. In a database environment, data can be deleted or updated either intentionally or accidentally.

To recover lost data, we have multiple options:

  • The data can be recovered from the latest full backup or an incremental backup, but when the data size is huge it could take hours to restore it.
  • From a backup of the binlogs.
  • Data can also be recovered from delayed slaves; this is helpful when the mistake is found immediately, within the period of the delay.

Any of the above can help recover the lost data, but what really matters is: how long does it take to roll back or recover the data, and how much downtime is needed to get back to the initial state?

To overcome such disasters, mysqlbinlog has a very useful option, --flashback, which ships with the MariaDB server binary. Although it comes with the MariaDB server, it works well with Oracle MySQL servers and Percona's flavour of MySQL.

What is Flashback?

Restoring the data to a previous snapshot in a MySQL database or in a table is called flashback.

Flashback options help us to undo executed row changes (DML events).

For instance, it can change DELETE events to INSERTs and vice versa, and it will swap the WHERE and SET parts of UPDATE events.
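The core idea can be modelled in a few lines. The sketch below is purely illustrative: a toy representation of row events of my own, not mysqlbinlog's actual event format.

```python
# Toy model of what --flashback does to row events.
# An event is ("DELETE", row), ("INSERT", row) or ("UPDATE", before, after),
# where each row/image is a dict of column values.
def flashback(event):
    kind = event[0]
    if kind == "DELETE":
        return ("INSERT",) + event[1:]   # re-insert the deleted row
    if kind == "INSERT":
        return ("DELETE",) + event[1:]   # delete the inserted row
    if kind == "UPDATE":
        _, before, after = event
        return ("UPDATE", after, before) # swap WHERE (before) and SET (after)
    raise ValueError("DDL and other events cannot be flashed back")
```

Applying the flashed-back events in reverse order restores the table to its earlier snapshot.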

Prerequisites for using flashback:

  • binlog_format = ROW
  • binlog_row_image = FULL
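In the server configuration, these prerequisites would look like this (a minimal my.cnf sketch; the log_bin name is just an example, and binary logging must of course be enabled):

```ini
[mysqld]
log_bin          = mysql-bin   # binary logging enabled (example name)
binlog_format    = ROW
binlog_row_image = FULL
```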

Let us simulate a few test cases where flashback comes as a boon for recovering data.

For simulating the test cases I am using the employees table and MariaDB version 10.2.

MariaDB [employees]> select @@version;
+---------------------+
| @@version           |
+---------------------+
| 10.2.23-MariaDB-log |
+---------------------+
1 row in set (0.02 sec)

Table structure :

MariaDB [employees]> show create table employees\G
*************************** 1. row ***************************
       Table: employees
Create Table: CREATE TABLE `employees` (
  `emp_no` int(11) NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(14) NOT NULL,
  `last_name` varchar(16) NOT NULL,
  `gender` enum('M','F') NOT NULL,
  `hire_date` date NOT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Case 1: Rolling back deleted data.

Consider that data was deleted from the employees table where first_name = 'Chirstian'.

MariaDB [employees]> select COUNT(*) from employees where first_name ='Chirstian';
+----------+
| COUNT(*) |
+----------+
|      226 |
+----------+
1 row in set (0.07 sec)

MariaDB [employees]> delete from employees where first_name ='Chirstian';
Query OK, 226 rows affected (0.15 sec)

To revert the data to the initial state, we need to decode the binlog and get the start and stop positions of the delete event that happened to the employees table.

It is necessary to take the proper start and stop positions. The start position should be taken exactly after BEGIN, and the stop position before the final COMMIT.

[root@vm3 vagrant]# mysqlbinlog -v --base64-output=DECODE-ROWS /var/lib/mysql/mysql-bin.000007 > mysql-bin.000007.txt
BEGIN
/*!*/;
# at 427
# at 501
#190417 17:49:49 server id 1  end_log_pos 501 CRC32 0xc7f1c84b  Annotate_rows:
#Q> delete from employees where first_name ='Chirstian'
#190417 17:49:49 server id 1  end_log_pos 569 CRC32 0x6b1b5c98  Table_map: `employees`.`employees` mapped to number 29
# at 569
#190417 17:49:49 server id 1  end_log_pos 7401 CRC32 0x6795a972         Delete_rows: table id 29 flags: STMT_END_F
### DELETE FROM `employees`.`employees`
### WHERE
###   @1=10004
###   @2='1954:05:01'
###   @3='Chirstian'
###   @4='Koblick'
###   @5=1
###   @6='1986:12:01'
# at 23733
#190417 17:49:49 server id 1  end_log_pos 23764 CRC32 0xf9ed5c3e        Xid = 455
### DELETE FROM `employees`.`employees`
### WHERE
### @1=498513
### @2='1964:10:01'
### @3='Chirstian'
### @4='Mahmud'
### @5=1
### @6='1992:06:03'
# at 7401
COMMIT/*!*/;
# at 23764
#190417 17:49:49 server id 1  end_log_pos 23811 CRC32 0x60dfac86        Rotate to mysql-bin.000008  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;

Once the start and stop positions are verified, we can prepare the data file or the .sql file using flashback as below:

[root@vm3 vagrant]# mysqlbinlog  -v --flashback --start-position=427 --stop-position=7401 /var/lib/mysql/mysql-bin.000007  > insert.sql

Below is a comparison of the conversion from DELETE to INSERT for a single record:

### DELETE FROM `employees`.`employees`
### WHERE
### @1=498513
### @2='1964:10:01'
### @3='Chirstian'
### @4='Mahmud'
### @5=1
### @6='1992:06:03'

### INSERT INTO `employees`.`employees`
### SET
### @1=498513
### @2='1964:10:01'
### @3='Chirstian'
### @4='Mahmud'
### @5=1
### @6='1992:06:03'
MariaDB [employees]> source insert.sql
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)

And the count is verified after the data load.

MariaDB [employees]> select COUNT(*) from employees where first_name ='Chirstian';
+----------+
| COUNT(*) |
+----------+
|      226 |
+----------+
1 row in set (0.06 sec)

Case 2: Rolling back updated data.

The data was updated based on the conditions below:

MariaDB [employees]> select COUNT(*) from employees where first_name ='Chirstian' and gender='M';
+----------+
| COUNT(*) |
+----------+
|      129 |
+----------+
1 row in set (0.14 sec)

MariaDB [employees]> update employees set gender='F' where first_name ='Chirstian' and gender='M';
Query OK, 129 rows affected (0.16 sec)
Rows matched: 129  Changed: 129  Warnings: 0

MariaDB [employees]> select COUNT(*) from employees where first_name ='Chirstian' and gender='M';
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.07 sec)

To revert the updated data, follow the same steps as in Case 1.

[root@vm3 vagrant]# mysqlbinlog -v --flashback --start-position=427 --stop-position=8380 /var/lib/mysql/mysql-bin.000008 > update.sql

MariaDB [employees]> source update.sql
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)

MariaDB [employees]> select COUNT(*) from employees where first_name ='Chirstian' and gender='M';
+----------+
| COUNT(*) |
+----------+
|      129 |
+----------+
1 row in set (0.06 sec)

In the above two cases, the flashback option let us convert DELETE events into INSERT events, and reverse UPDATE events by swapping their SET and WHERE parts.

There are a few limitations of flashback:

  • It doesn't support DDL (DROP, TRUNCATE, or other DDL statements)
  • It doesn't support encrypted binlogs
  • It doesn't support compressed binlogs

Key Takeaways:

  • Mishandled operations can be reversed directly from the binary logs.
  • There is no need to stop the server to carry out this operation.
  • When the amount of data to revert is small, flashback is much faster than recovering from a full backup.
  • Point-in-time recovery (PITR) becomes easy.

Photo by Jiyeon Park on Unsplash

Exposing MyRocks Internals Via System Variables: Part 2, Initial Data Flushing


In this blog post, we continue our series exploring MyRocks mechanics by looking at configurable server variables and column family options. In our last post, I explained at a high level how data first enters memory space; in this post, we pick up where we left off and talk about how the flush from immutable memtable to disk occurs. We also talk about how newly created secondary indexes on existing tables get written to disk.

We already know from our previous post in the series that a flush can be prompted by one of several events, the most common of which would be when an active memtable is filled to its maximum capacity and is rotated into immutable status.

When your immutable memtables are ready to flush, MyRocks calls a background thread to collect the data from memory and write it to disk. Depending on how often your records are updated, multiple versions of the same record may exist across the immutable memtables, so the flush thread has to check for and remove any record duplication so that only the latest record state gets added to the data file.

Once deduplication is complete, the contents of the immutable memtable are written to disk in a series of data blocks (4 KB by default) that make up the data file, which you can find in your MyRocks data directory with the extension '.sst'. The size of the data file will be roughly the same as that of the immutable memtable(s). Remember, the size of your memtable is set by the column family option write_buffer_size.

There is also metadata added to the file including checksums to ensure that there have been no issues with storage between the time the data file was written and the next time it’s read. Top level index information is also written in order to speed up the time it takes to locate the record you’re seeking within the file itself. Other forms of metadata within the data file will be addressed later in the series when we cover bloom filters.

Variables and CF_OPTIONS

Now that we know a little bit about how data is flushed from memory to disk, let’s take a more detailed look at the mechanics associated with the flushing process as well as the variables and options that are associated with them.

Rocksdb_max_background_jobs

In my last post, we mentioned when flushing would occur based on memtable capacity, the size of the write buffer, etc. Once the system determines that a flush is required it will use a background thread to take the data from an immutable memtable and write it to disk.

The variable rocksdb_max_background_jobs allows you to specify how many threads will sit in the background to support flushing. Keep in mind that this pool of background threads are used to support both flushing and compaction (we’ll discuss compaction in the next post in the series). In previous versions, the number of threads to support memtable to disk flushes was defined by the variable rocksdb_max_background_flushes; however, this is no longer the case as rocksdb_max_background_jobs replaced this variable and similar variables used to define the number of threads that would support compaction. Now all of these have been grouped together and the number of threads that will be used for memtable to disk flushes versus compaction will be automatically decided by MyRocks.

Default: 2

The value of 2 isn’t all that surprising considering that in previous versions the value of rocksdb_max_background_compactions and rocksdb_max_background_flushes were both 1, meaning there was one thread for flushing and one for compaction, two threads total. We still have two threads, but now MyRocks will decide which process those threads are allocated to.
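
As a sketch of how you might inspect and adjust the shared pool, the statements below assume a recent MyRocks build where rocksdb_max_background_jobs can be changed at runtime; verify that against your own version before relying on it:

```sql
-- Inspect the current size of the shared flush/compaction thread pool.
SHOW GLOBAL VARIABLES LIKE 'rocksdb_max_background_jobs';

-- Illustrative only: raise the pool to 4 threads and let MyRocks decide
-- how to split them between flushing and compaction.
SET GLOBAL rocksdb_max_background_jobs = 4;
```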

Rocksdb_datadir

When MyRocks flushes data from immutable memtable to disk with the extension ‘.sst’, it will add data files to the MyRocks data directory. You can specify the location of this directory using the system variable rocksdb_datadir.

Default: ./.rocksdb

The default indicates that a directory called ‘.rocksdb’ will be created in the MySQL data directory. The MySQL data directory is defined in system variable datadir, which has a default value of /var/lib/mysql. Assuming both datadir and rocksdb_datadir are their default values, the location of the MyRocks data directory would be /var/lib/mysql/.rocksdb.

I mentioned in my first post in this series that it’s not uncommon for users to want to separate sequential and random I/O, which is why you may want to put your write-ahead logs on one set of disks and your data directory on another. You can take this a step further by isolating your logs, data, and operating system. This is becoming less common as SSDs and similar technologies become the database storage of choice, but if you want to go the route of isolating your OS, data, and logs, you can leverage the rocksdb_datadir system variable for that purpose.
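
A minimal my.cnf sketch of that layout follows; the mount points are illustrative assumptions, not defaults, and rocksdb_wal_dir is shown only to contrast the two locations:

```ini
[mysqld]
# Data files (.sst) on one set of disks...
rocksdb_datadir = /data/myrocks
# ...write-ahead logs on another, to separate sequential and random I/O.
rocksdb_wal_dir = /wal/myrocks
```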

Rocksdb_block_size

Data from immutable memtables is flushed into data files in its entirety after deduplication, but is further broken down into data blocks. These blocks serve as a way to partition the data within the data file in order to speed up lookups when you are looking for a specific record by its key. In each data file there is also what's called a "top-level index" that provides a key name and offset for each block, based on the last key found in each block. If you're looking for a specific key, the top-level index and a block handle tell you which data block it's in.

You can specify how large each data block will be using the system variable rocksdb_block_size.

Default: 4096 (4 KB)
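
As a back-of-envelope illustration (assuming a 64 MB write_buffer_size, which is not necessarily your configured value), a single flushed data file would contain roughly this many blocks:

```sql
-- 64 MiB memtable / 4 KiB blocks = 16384 data blocks per SST file
SELECT (64 * 1024 * 1024) / 4096 AS approx_blocks_per_sst;
```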

Rocksdb_rate_limiter_bytes_per_sec

When the background thread is called upon to do a flush from memtable to disk, it will limit its disk activity based on the value of this variable. Keep in mind that this is the total amount of disk throughput allowed for all background threads, and as we stated earlier, background threads include memtable flushing and compaction.

Default: 0

The default of 0 means that no limit is imposed on disk activity. It may be worth setting this to a non-zero value if you want to make sure disk activity from flushing and compaction doesn't consume all of your I/O capacity, or if you want to reserve capacity for other processes such as reads. Keep in mind that if you slow down the process of writing memtable data to disk, then under significant load you could theoretically stall writes entirely once you hit the maximum number of immutable memtables.

For those of you that are familiar with InnoDB you may want to think of this as acting like innodb_io_capacity_max.

One other important thing to note is that according to the documentation, this variable should be able to be changed dynamically; however, my testing has shown that changing this variable to/from a zero value requires a restart. I have created a bug with Percona to bring it to their attention.
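
As an illustrative sketch, capping combined flush and compaction throughput at roughly 100 MB/s would look like the statement below; per the caveat above, moving to or from a zero value may require a restart on some builds:

```sql
-- 100 * 1024 * 1024 = 104857600 bytes/s, shared by all background threads
SET GLOBAL rocksdb_rate_limiter_bytes_per_sec = 104857600;
```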

Rocksdb_force_flush_memtable_now

In the case that you would like to flush all the data in memtables to disk immediately, you can toggle this variable. A flush will occur, and then the variable will go back to its original value of ‘OFF’.

One important thing to note is that during this flush process, all writes will be blocked until the flush has completed.

Another important point to raise is that during my testing, I found that you could set this variable to any value, including 'false' and 'OFF', and it would still flush data to disk. Exercise caution when working with this variable, as setting it even to its default value will still force a flush. In short, don't set this variable to anything unless you want a flush to occur. I have opened a bug with Percona to bring this to their attention.

Default: OFF
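
A minimal example follows; given the behavior described above, run this only when you actually want a flush, since writes are blocked until it completes:

```sql
-- Force an immediate flush of all memtables; the variable reverts to OFF afterwards.
SET GLOBAL rocksdb_force_flush_memtable_now = ON;
```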

Rocksdb_checksums_pct

When data is written to disk, a checksum covering a percentage of the file's contents is written to the SST file. Much like the InnoDB variable innodb_checksum_algorithm, for those of you who are familiar, the purpose of this is to ensure that the checksum can be verified when a data file is later retrieved from disk, to confirm that storage issues such as bit rot didn't corrupt the data between the time it was written and the time it was read.

Default: 100

You may be able to increase overall performance by reducing the amount of data that is read to support the checksum, but I would recommend against it as you want to have that assurance that data being read is the same as when it was written.

Rocksdb_paranoid_checks

In addition to checksumming, there is another fault-tolerance measure you can take to ensure the accuracy of written data, called "paranoid checking". With paranoid checking enabled, data files are read immediately after they are written in order to verify the accuracy of the data.

Default: ON

I would be inclined to leave this enabled as I prefer to do all possible checking in order to make sure data is written with the highest degree of accuracy.

Rocksdb_use_fsync

When data files are written to disk, they are typically done using fdatasync() which utilizes caching in Linux, but doesn’t offer assurances that data is actually on disk at the end of the call. In order to get that assurance, you can use the variable rocksdb_use_fsync to specify that you would rather have MyRocks call fsync() which will assure a disk sync at the time that the request to write data is complete.

Default: OFF

The most likely reason that this is disabled is to allow for the performance gains achieved by the asynchronous data writing nature of fdatasync(). Potential data loss of data sitting in the OS cache but not on disk during a full system crash may or may not be acceptable for your workload, so you may want to consider adjusting this variable.

Rocksdb_use_direct_io_for_flush_and_compaction

If you would rather bypass the OS caching used by fdatasync() or fsync() for writes to data files via memtable flushes or compaction, you have the option to do so by enabling the variable rocksdb_use_direct_io_for_flush_and_compaction. For flushing and compaction, this overrides the value of rocksdb_use_fsync and instead has MyRocks open the data files with O_DIRECT, writing straight to disk.

Default: OFF

In the wiki for RocksDB, you will see that performance gains from using O_DIRECT are dependent on your use case and are not assured. This is true of all storage engines, and testing should always be performed before adjusting a variable like this.

Keep in mind that I have recommended O_DIRECT in the past for InnoDB, but that doesn’t apply here as MyRocks is a very different engine and there really isn’t enough data out there to say what is the best write method for most use cases so far. Exercise caution when changing your write method.

Rocksdb_bytes_per_sync

Another thing that is important to understand about syncing to disk is knowing how often it occurs, and that’s where rocksdb_bytes_per_sync comes into play. This variable controls how often a call is made to sync data during the process while data is being written to disk, specifically after how many bytes have been written. Keep in mind that write-ahead logs have their own variable, rocksdb_wal_bytes_per_sync, so rocksdb_bytes_per_sync is just for data files. Also, be aware that depending on what syncing function is called (see above for rocksdb_use_fsync and rocksdb_use_direct_io_for_flush_and_compaction) this may be an asynchronous request for a disk sync.

Default: 0

With the default value of 0, MyRocks will disable the feature of requesting syncs after the designated number of bytes and instead will rely on the OS to determine when syncing should occur.

It is recommended that users of MyRocks not rely on this feature as a persistence guarantee.

Rocksdb_merge_buf_size

Typically, when new data gets flushed into persisted space, it ends up in the first (topmost) compaction layer, L0. This will be explained in more detail in the next blog post. There is one exception to this rule: when a new secondary index is added to an existing table, it skips this process and gets written to the bottom-most compaction level available, which in MyRocks is L6 by default. Think of this as a way for secondary index data to get to its final destination faster. It does this by doing a merge sort of existing data to support the secondary index.

In order to better understand merge sort processes, I would recommend reading this blog post on hackernoon.

There is a memory cache that is used to support the merge sort process specifically for secondary index creation and it’s called the ‘merge buffer’. The rocksdb_merge_buf_size determines how large this buffer will be.

Default: 64 MB

Rocksdb_merge_combine_read_size

If you checked out the blog post on hackernoon that I mentioned, you’ll know that sorting eventually requires combining the smaller broken down sub-arrays back into the full sorted list. In the case of MyRocks, this uses a completely separate buffer called the “merge combine buffer”. The variable rocksdb_merge_combine_read_size determines how large the merge combine buffer will be.

Default: 1 GB

You'll see in the next variable we cover (rocksdb_merge_tmp_file_removal_delay_ms) that MyRocks creates merge files on disk to help support the process of creating new secondary indexes, so I/O can occur; but with larger memory buffers you will see less I/O.

My take would be not to change the global value of this variable, but instead to change it only within the session used to create the secondary index. Keep in mind the tradeoff: you're using more memory to speed up index creation. If you set the global value to a large size and forget about it, that memory may be allocated when you don't expect it, consuming more resources than anticipated and potentially leading to issues such as OOM.
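
A session-scoped sketch of that approach is below; the buffer sizes and index name are illustrative assumptions, not recommendations:

```sql
-- Enlarge the merge buffers only for this session's index build.
SET SESSION rocksdb_merge_buf_size = 268435456;            -- 256 MB
SET SESSION rocksdb_merge_combine_read_size = 2147483648;  -- 2 GB
ALTER TABLE employees.employees ADD INDEX idx_hire_date (hire_date);
```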

Rocksdb_merge_tmp_file_removal_delay_ms

In addition to the in-memory resources used for the merge sort process of creating new secondary indexes, you may also get merge files created on disk. These are temporary files that you will find in the MyRocks data directory with the .tmp extension. Once the secondary index creation process is complete, these files are deleted immediately. For storage solutions like flash, removing large amounts of data can cause trim stalls. This variable allows you to apply a rate-limiting delay to the removal in order to help prevent this issue.

Default: 0 (no delay)

I wouldn’t change the value of this variable unless you have flash storage. If you do use flash storage, you can test by creating and removing indexes to determine what value would be best for this variable. Given that there are no other implications to setting this variable, I would recommend setting the variable globally, including an addition to the my.cnf.

Associated Metrics

Here are some of the metrics you should be paying attention to when it comes to initial data flushing.

You can find the following information using system status variables

  • Rocksdb_flush_write_bytes: Shows the amount of data that has been written to disk as part of a flush, in bytes, since the last MySQL restart.
  • Rocksdb_number_sst_entry_delete: The number of record delete markers written by MyRocks to a data file since the last MySQL restart.
  • Rocksdb_number_sst_entry_singledelete: The number of record single delete markers written by MyRocks to a data file since the last MySQL restart. This will make a bit more sense after we cover SingleDelete() markers in the next post in the series.
  • Rocksdb_number_deletes_filtered: Shows the number of times a deleted record was not persisted to disk because it referenced a key that does not exist, since the last MySQL restart.
  • Rocksdb_stall_memtable_limit_slowdowns: The number of slowdowns that have occurred due to MyRocks getting close to the maximum number of allowed memtables since the last MySQL restart.
  • Rocksdb_stall_memtable_limit_stops: The number of write stalls that have occurred due to MyRocks hitting the maximum number of allowed memtables since the last MySQL restart.
  • Rocksdb_stall_total_slowdowns: The total number of slowdowns that have occurred in the MyRocks engine since the last MySQL restart.
  • Rocksdb_stall_total_stops: The total number of write stalls that have occurred in the MyRocks engine since the last MySQL restart.
  • Rocksdb_stall_micros: How long the data writer had to wait for a flush to finish since the last restart of MySQL.
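
A quick way to pull these counters is sketched below:

```sql
-- Flush volume since the last restart
SHOW GLOBAL STATUS LIKE 'rocksdb_flush_write_bytes';

-- All stall-related counters at once
SHOW GLOBAL STATUS LIKE 'rocksdb_stall%';
```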

In the information_schema.ROCKSDB_CFSTATS table, you can find the following information about each column family.

  • MEM_TABLE_FLUSH_PENDING: Shows you if there is a pending operation to flush an immutable memtable to disk.
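
For example, a sketch of checking pending flushes per column family:

```sql
SELECT CF_NAME, STAT_TYPE, VALUE
  FROM information_schema.ROCKSDB_CFSTATS
 WHERE STAT_TYPE = 'MEM_TABLE_FLUSH_PENDING';
```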

In the performance_schema, you may find the following setup instrument to be helpful.

  • wait/synch/mutex/rocksdb/sst commit: Shows the mutex wait time during the SST (data file) commit process.

Conclusion

In this post, we talked about the mechanics that are involved in flushing data from immutable memtables to disk. We also mentioned a few things about compaction layers, but just enough to help illustrate what’s going on with that initial flush from immutable memtable to disk. Stay tuned for my next post where we’ll do a deeper dive into the mechanics surrounding compaction.

MySQL is Ready for Fedora 30

Fedora 30 is out today. We congratulate the Fedora community on another rev of their favorite Linux distro. As usual, we support the latest Fedora from day one, and we have added the following MySQL products to our official MySQL yum repos: MySQL Server (8.0.16 and 5.7.26) Connector C++ 8.0.16 Connector Python 8.0.16 Connector ODBC […]