Exploring the Kubernetes Application Lifecycle With Percona
Where can you find MySQL during March - May 2024
MySQL HeatWave Day in Zurich
Backing up and Restoring to AWS S3 With Percona Kubernetes Operators
MySQL Shell for VS Code – Bastion Host & Invalid fingerprint detected
If you use MySQL Shell for Visual Studio Code, a bastion host is the easiest way to connect to a MySQL HeatWave DB Instance on OCI.
If you already have a connection set up through a bastion host, you may run into the same problem I did: MySQL Shell complains about an invalid fingerprint detected:
This error has nothing to do with the fingerprint of your OCI user key. The problem is the SSH host key of your bastion host, as you can see in the output window:
This happens, for example, if you have recreated your bastion host.
To resolve the problem, remove the current SSH host key for the bastion host stored in your known_hosts file:
$ ssh-keygen -R "host.bastion.us-ashburn-1.oci.oraclecloud.com"
Use the name of your own bastion host, of course.
Once that is done, the problem is fixed and you can reconnect to your MySQL HeatWave DB Instance using MySQL Shell for Visual Studio Code.
Using the Oracle Cloud TypeScript SDK Part 4 - Listing MySQL HeatWave Backups
2nd Round MySQL Workshop at HK Association for Computer Education (HKACE) - Recap.
MySQL Keyring Component Installation for TDE
Plugins have been used extensively with MySQL, and they are gradually being superseded by COMPONENT deployment. This article shares the steps for installing the MySQL Keyring component.
MySQL Enterprise Edition includes the encrypted-file keyring component, which provides a more secure way to store the master key for TDE.
Keyring components must be loaded statically at server startup via manifest files rather than with the SQL command "INSTALL COMPONENT". There are two scopes for component installation:
Global vs Local
With a Global component installation, the configuration lives in the MySQL installation folder.
With a Local component installation, the global configuration merely redirects the server to a local configuration stored in the datadir.
Global Configuration
Assuming the server was installed from a package under /usr, the mysqld binary is located in /usr/sbin. The plugin folder can be found with the SQL command:
mysql> show variables like 'plugin_dir';
for example :
mysqld folder : /usr/sbin
plugin_dir : /usr/lib64/mysql/plugin
There are 2 configuration files.
1. mysqld.my : This manifest file declares which components mysqld loads. It is located in the same folder as the 'mysqld' binary.
# cat mysqld.my
{
"components": "file://component_keyring_encrypted_file"
}
The file must be readable (R) by the OS user that starts mysqld.
2. component_keyring_encrypted_file.cnf : This is the configuration file for the component declared in mysqld.my. It is located in the plugin folder.
# cat /usr/lib64/mysql/plugin/component_keyring_encrypted_file.cnf
{
"path": "/var/lib/mysql-keyring/component_keyring_encrypted_file",
"password": "password",
"read_only": false
}
The "path" setting determines the location of the encrypted key file. The folder must exist and be readable and writable (RW) by the OS user that starts mysqld.
Local Configuration
Assuming the server was installed from a package under /usr, the mysqld binary is located in /usr/sbin. The plugin folder can be found with the SQL command:
mysql> show variables like 'plugin_dir';
for example :
mysqld folder : /usr/sbin
plugin_dir : /usr/lib64/mysql/plugin
There are 2 configuration files.
1. mysqld.my : This manifest file declares which components mysqld loads. It is located in the same folder as the 'mysqld' binary. The entry "read_local_manifest": true tells the server to read the component list from a second mysqld.my located in the DATADIR at startup.
# cat /usr/sbin/mysqld.my
{"read_local_manifest": true}
# cat $DATADIR/mysqld.my
{
"components": "file://component_keyring_encrypted_file"
}
The file must be readable (R) by the OS user that starts mysqld.
2. component_keyring_encrypted_file.cnf : This is the configuration file for the component declared in mysqld.my. It is located in the plugin folder. The entry "read_local_config": true redirects the server to the configuration file of the same name located in the $DATADIR.
# cat /usr/lib64/mysql/plugin/component_keyring_encrypted_file.cnf
{
"read_local_config": true
}
# cat $DATADIR/component_keyring_encrypted_file.cnf
{
"path": "/var/lib/mysql-keyring/component_keyring_encrypted_file",
"password": "password",
"read_only": false
}
The "path" setting determines the location of the encrypted key file. The folder must exist and be readable and writable (RW) by the OS user that starts mysqld.
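For repeatable provisioning, the two-file local layout above can be generated with a small script. The following Python sketch (the helper itself is illustrative and not part of MySQL; the file names and JSON keys mirror the layout shown above) writes the redirect stub into the plugin folder and the real settings into the datadir:

```python
import json
from pathlib import Path

def write_local_keyring_config(plugin_dir, datadir, keyfile, password):
    """Write the plugin-folder redirect stub and the datadir settings file.

    Illustrative helper: the JSON keys and file names mirror the local
    keyring configuration layout described above.
    """
    stub = {"read_local_config": True}
    settings = {"path": keyfile, "password": password, "read_only": False}
    name = "component_keyring_encrypted_file.cnf"
    (Path(plugin_dir) / name).write_text(json.dumps(stub, indent=1))
    (Path(datadir) / name).write_text(json.dumps(settings, indent=1))
```

Remember to restrict the permissions of the resulting files to the OS user that starts mysqld, as described above.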
Once the configuration is done, starting the MySQL server creates the encrypted file at the location given by "path".
To validate the installation, the following SQL command shows the keyring component status:
mysql> SELECT * FROM performance_schema.keyring_component_status;
+---------------------+---------------------------------------------------------+
| STATUS_KEY | STATUS_VALUE |
+---------------------+---------------------------------------------------------+
| Component_name | component_keyring_encrypted_file |
| Author | Oracle Corporation |
| License | PROPRIETARY |
| Implementation_name | component_keyring_encrypted_file |
| Version | 1.0 |
| Component_status | Active |
| Data_file | /var/lib/mysql-keyring/component_keyring_encrypted_file |
| Read_only | No |
+---------------------+---------------------------------------------------------+
8 rows in set (0.00 sec)
If the result is empty, check the MySQL error log for more information.
A known issue is the privilege setting on these files: if they are not accessible by the OS user that starts mysqld, the server may throw an error.
Reference
https://dev.mysql.com/doc/refman/8.0/en/keyring-component-installation.html
https://dev.mysql.com/doc/refman/8.0/en/keyring-encrypted-file-component.html
Webinar recording: Mastering Galera Cluster, Best Practices and New Features
This exclusive webinar is tailored for database administrators and IT professionals aiming to enhance their systems’ efficiency and reliability using Galera Cluster. This session focuses on practical best practices, showcases new features, and provides an extended platform for your queries.
What You Will Learn:
* Core Best Practices: Dive into essential practices, from employing primary keys and leveraging InnoDB to deciding whether to optimise read/write splits and managing AUTO_INCREMENT settings.
* Advanced Configuration: Uncover advanced techniques for error monitoring, configuring Galera across networks, and fine-tuning the gcache for optimal performance.
* Innovative Features: Stay ahead with insights on implementing Non-Blocking Operations for seamless schema changes, coordinating distributed transactions with XA transactions, and securing your GCache through encryption.
* Protocol and Network Enhancements: Discover the latest advancements in handling unstable networks, protocol improvements, and explore new options to elevate your cluster operations.
Have in-depth questions or faced intricate production challenges? This extended Q&A session is your opportunity to seek advice, clarify doubts, and engage directly with Galera Cluster experts.
Parametric Queries
In 2021, I wrote a MySQL example for my class on the usefulness of Common Table Expressions (CTEs). When discussing the original post, I would comment on how you could extend the last example to build a parametric reporting table.
Somebody finally asked for a concrete example. So, this post explains how to build a sample MySQL parametric query by leveraging a filtered cross join, and tests the parameter use with a Python script.
You can build this in any database you prefer but I used a studentdb database with the sakila sample database installed. I’ve granted privileges to both databases to the student user. The following SQL is required for the example:
-- Conditionally drop the levels table.
DROP TABLE IF EXISTS levels;
-- Create the levels list.
CREATE TABLE levels
( level_id int unsigned primary key auto_increment
, parameter_set enum('Three','Five')
, description varchar(20)
, min_roles int
, max_roles int );
-- Insert values into the list table.
INSERT INTO levels
( parameter_set
, description
, min_roles
, max_roles )
VALUES
('Three','Hollywood Star', 30, 99999)
,('Three','Prolific Actor', 20, 29)
,('Three','Newcomer',1,19)
,('Five','Newcomer',1,9)
,('Five','Junior Actor',10,19)
,('Five','Professional Actor',20,29)
,('Five','Major Actor',30,39)
,('Five','Hollywood Star',40,99999);
The sample lets you choose between the three- and five-level label sets while filtering on any partial full_name value, as in the query below:
-- Query the data.
WITH actors AS
(SELECT a.actor_id
, a.first_name
, a.last_name
, COUNT(*) AS num_roles
FROM sakila.actor a INNER JOIN sakila.film_actor fa
ON a.actor_id = fa.actor_id
GROUP BY actor_id)
SELECT CONCAT(a.last_name,', ',a.first_name) full_name
, l.description
, a.num_roles
FROM actors a CROSS JOIN levels l
WHERE a.num_roles BETWEEN l.min_roles AND l.max_roles
AND l.parameter_set = 'Five'
AND a.last_name LIKE CONCAT('H','%')
ORDER BY a.last_name
, a.first_name;
It extends a concept exercise found in Chapter 9 on subqueries in Alan Beaulieu's Learning SQL book.
This is the parametric Python program, which embeds the function locally (to make it easier for those who don’t write a lot of Python). You could set the PYTHONPATH to a relative src directory and import your function if you prefer.
#!/usr/bin/python
# Import the libraries.
import sys
import mysql.connector
from mysql.connector import errorcode

# ============================================================
# Define function to check and replace arguments.
# ============================================================
def check_replace(argv):
    # Set defaults for incorrect parameter values.
    defaults = ("Three", "_")

    # Declare empty list variables.
    inputs = []
    args = ()

    # Check whether or not parameters exist after the file name.
    if isinstance(argv, list) and len(argv) != 0:
        # Check whether there are at least two parameters.
        if len(argv) >= 2:
            # Loop through available command-line arguments.
            for element in argv:
                # Check first of two parameter values and substitute
                # default value if input value is an invalid option.
                if len(inputs) == 0 and (element in ('Three', 'Five')) or \
                   len(inputs) == 1 and (isinstance(element, str)):
                    inputs.append(element)
                elif len(inputs) == 0:
                    inputs.append(defaults[0])
                elif len(inputs) == 1:
                    inputs.append(defaults[1])
            # Assign arguments to parameters as a tuple.
            args = tuple(inputs)
        # Check whether only one parameter value exists.
        elif len(argv) == 1 and (argv[0] in ('Three', 'Five')):
            args = (argv[0], "_")
        # Assume only one parameter is valid and substitute an
        # empty wildcard as the second parameter.
        else:
            args = (defaults[0], "_")
    # Substitute defaults when missing parameters.
    else:
        args = defaults
    # Return parameters as a tuple.
    return args

# ============================================================
# Assign command-line argument list to variable by removing
# the program file name.
# ============================================================
params = check_replace(sys.argv[1:])

# ============================================================
# Attempt the query.
# ============================================================
# Use a try-catch block to manage the connection.
# ============================================================
try:
    # Open connection.
    cnx = mysql.connector.connect(user='student', password='student',
                                  host='127.0.0.1',
                                  database='studentdb')
    # Create cursor.
    cursor = cnx.cursor()

    # Set the query statement. Note the doubled '%%', which escapes a
    # literal percent sign in a query that also uses %s placeholders.
    query = ("WITH actors AS "
             "(SELECT a.first_name "
             " ,      a.last_name "
             " ,      COUNT(*) AS num_roles "
             " FROM   sakila.actor a INNER JOIN sakila.film_actor fa "
             " ON     a.actor_id = fa.actor_id "
             " GROUP BY a.first_name "
             " ,        a.last_name) "
             "SELECT CONCAT(a.last_name,', ',a.first_name) AS full_name "
             ",      l.description "
             ",      a.num_roles "
             "FROM   actors a CROSS JOIN levels l "
             "WHERE  a.num_roles BETWEEN l.min_roles AND l.max_roles "
             "AND    l.parameter_set = %s "
             "AND    a.last_name LIKE CONCAT(%s,'%%') "
             "ORDER BY a.last_name "
             ",        a.first_name")

    # Execute cursor.
    cursor.execute(query, params)

    # Display the rows returned by the query.
    for (full_name, description, num_roles) in cursor:
        print('{0} is a {1} with {2} films.'.format(full_name.title(),
                                                    description.title(),
                                                    num_roles))
    # Close cursor.
    cursor.close()

# ------------------------------------------------------------
# Handle exception and close connection.
# ------------------------------------------------------------
except mysql.connector.Error as e:
    if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Something is wrong with your user name or password")
    elif e.errno == errorcode.ER_BAD_DB_ERROR:
        print("Database does not exist")
    else:
        print("Error code:", e.errno)          # error number
        print("SQLSTATE value:", e.sqlstate)   # SQLSTATE value
        print("Error message:", e.msg)         # error message

# Close the connection when the try block completes.
else:
    cnx.close()
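To see the defaulting behavior of check_replace in isolation, here is a compact re-statement of the same rules with a few example calls (a sketch for illustration; the full function above is the one the program uses):

```python
def check_replace(argv):
    """Return ('Three'|'Five', name_filter), defaulting invalid input."""
    defaults = ("Three", "_")
    if not argv:
        return defaults
    level = argv[0] if argv[0] in ("Three", "Five") else defaults[0]
    name = argv[1] if len(argv) > 1 and isinstance(argv[1], str) else "_"
    return (level, name)

print(check_replace([]))               # ('Three', '_')
print(check_replace(["Five"]))         # ('Five', '_')
print(check_replace(["Five", "H"]))    # ('Five', 'H')
print(check_replace(["Bogus", "H"]))   # ('Three', 'H')
```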
As always, I hope this helps those trying to understand how CTEs can solve problems that would otherwise be coded in external imperative languages like Python.
How to avoid data loss in MySQL Primary Key change
Primary keys are the backbone of efficient data access and maintaining data consistency in your MySQL databases. However, altering them requires careful planning and execution, as incorrect procedures can lead…
The post How to avoid data loss in MySQL Primary Key change first appeared on Change Is Inevitable.
How to Change a Column Type in MySQL
This article will walk you through the essentials of MySQL, shedding light on the intricacies of column types, and explore scenarios where altering column types becomes a necessity, exemplified through the lens of dbForge Studio for MySQL.
The post How to Change a Column Type in MySQL appeared first on Devart Blog.
MySQL install ‘n’ config one-liners
Back again, now with MySQL installs. And this means using the MySQL repository this time around.
I’ve been installing and configuring InnoDB Clusters and ClusterSets and thinking about the Ansible and Terraform users amongst us, maybe one-liners might help someone out there.
So, what about if I share how to install the MySQL repo, install the MySQL instance, create an InnoDB Cluster, add a MySQL Router, create a ClusterSet, make sure the Router is ClusterSet-aware, and then test it out. And all via one-liners.
First up, obrigado Miguel for https://github.com/miguelaraujo/ClusterSet-Demo.
To simplify the command execution sequence, these sections summarize the technical commands required to create the whole platform, using the default path and port configuration to ease operational deployment for all those 000's of installs and subsequent admin & ops tasks.
First, download the MySQL repo package for your environment from https://dev.mysql.com/downloads/
On all servers / nodes:
yum -y localinstall ./mysql80-community-release-el8-9.noarch.rpm
On the database nodes only:
yum install -y mysql-community-server-0:8.0.36-1.el8.x86_64
yum install -y mysql-shell-0:8.0.36-1.el8.x86_64
On the MySQL Router dedicated nodes only:
yum install -y mysql-router-community-0:8.0.36-1.el8.x86_64
On each of the database nodes (be VERY careful of security requirements and where you store passwords please!):
systemctl start mysqld
systemctl enable mysqld
pswd=$(grep -oP '(?<= A temporary password is generated for root@localhost: ).*' /var/log/mysqld.log | tail -1)
mysql -uroot -p"$pswd" -S /var/lib/mysql/mysql.sock
alter user 'root'@'localhost' identified by 'Contr4sen!A';
flush privileges;
SET sql_log_bin = OFF;
create user 'icadmin'@'localhost' identified by 'Contr4sen!A';
grant all on *.* to 'icadmin'@'localhost' with grant option;
create user 'icadmin'@'%' identified by 'Contr4sen!A';
grant all on *.* to 'icadmin'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;
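As a sanity check, the temporary-password extraction above can be tried against a sample log line (the message text is the standard mysqld notice; the password value here is made up), so no running MySQL instance is needed:

```shell
# Demonstrate the lookbehind grep used above on a fake log line.
log='2024-03-01T00:00:00.000000Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: Abc!1234xyz'
pswd=$(printf '%s\n' "$log" | grep -oP '(?<= A temporary password is generated for root@localhost: ).*' | tail -1)
echo "$pswd"   # prints: Abc!1234xyz
```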
And now for MySQL Shell’s turn:
mysqlsh icadmin:'Contr4sen!A'@localhost:3306 -- dba check-instance-configuration
mysqlsh icadmin:'Contr4sen!A'@localhost:3306 -- dba configure-local-instance --restart=true --interactive=FALSE
On just one of the database nodes:
mysqlsh icadmin:'Contr4sen!A'@dbnode01:3306 -- dba create-cluster VLC
mysqlsh icadmin:'Contr4sen!A'@dbnode01:3306 -- cluster status --extended=0
mysqlsh icadmin:'Contr4sen!A'@dbnode01:3306 -- cluster status --extended=1
Continuing on the same database node:
mysqlsh icadmin@dbnode01:3306 -- cluster add-instance icadmin:'Contr4sen!A'@dbnode02:3306 --recoveryMethod=clone
mysqlsh icadmin@dbnode01:3306 -- cluster add-instance icadmin:'Contr4sen!A'@dbnode03:3306 --recoveryMethod=clone
mysqlsh icadmin@dbnode01:3306 -- cluster status
mysqlsh icadmin@dbnode01:3306 -- cluster describe
On one of the database nodes:
mysqlsh icadmin@dbnode01:3306 -- cluster setup-router-account 'routerAdmin' --password='Contr4sen!A'
On the first router node, rtnode01:
mysqlrouter --bootstrap icadmin:'Contr4sen!A'@dbnode02:3306 \
--name="router_VLC01" --account='routerAdmin' \
--conf-base-port=3306 --report-host=rtnode01 -u mysqlrouter
systemctl start mysqlrouter
On the 2nd router node, rtnode02:
mysqlrouter --bootstrap icadmin:'Contr4sen!A'@dbnode02:3306 \
--name="router_VLC02" --account='routerAdmin' \
--conf-base-port=3306 --report-host=rtnode02 -u mysqlrouter
systemctl start mysqlrouter
On one of the database nodes, connected directly or via any router:
mysqlsh icadmin@dbnode01:3306 -- cluster create-cluster-set csVLC
mysqlsh icadmin@dbnode01:3306 -- clusterset status
On router server rtnode01:
systemctl stop mysqlrouter
mysqlrouter --bootstrap icadmin:'Contr4sen!A'@dbnode02:3306 \
--name="router_VLC01" --account='routerAdmin' \
--conf-base-port=3306 --report-host=rtnode01 --force -u mysqlrouter
systemctl start mysqlrouter
On router server rtnode02:
systemctl stop mysqlrouter
mysqlrouter --bootstrap icadmin:'Contr4sen!A'@dbnode02:3306 \
--name="router_VLC02" --account='routerAdmin' \
--conf-base-port=3306 --report-host=rtnode02 --force -u mysqlrouter
systemctl start mysqlrouter
Validation:
mysqlsh icadmin@rtnode01:3306 -- clusterset routing-options
All yours! Thanks for getting through it all.
What’s New in MySQL 8.3: Feature Overview
The latest version of MySQL Server, 8.3, has been available as a General Availability (GA) release for a while. In case you have missed it, here is a brief recap of the newly available features and enhancements.
The post What’s New in MySQL 8.3: Feature Overview appeared first on Devart Blog.
Help Us Improve MySQL Usability and Double Win!
Announcing Vitess 19
Announcing Vitess 19
New Features to MySQL Enterprise Data Masking and De-Identification
MySQL HeatWave Inbound Replication - On-Premises to MySQL HeatWave DB System
Percona Operator for MySQL Now Supports Automated Volume Expansion in Technical Preview
Master MySQL Point in Time Recovery
Data loss or corruption can be daunting. With MySQL point-in-time recovery, you can restore your database to the moment before the problem occurred.
This article delivers a practical roadmap for using backups and binary logs to achieve accurate MySQL recovery, detailed steps for setting up your server, and tips for managing recovery and backups effectively without overwhelming you with complexity.
Key Takeaways
- MySQL Point in Time Recovery (PITR) enables restoration to a specific point after a full backup. It relies heavily on binary log files to record the incremental changes needed for recovery.
- Preparation for PITR is crucial and involves enabling binary logging and creating a full database backup. Monitoring and managing binary log activities, such as file retention, are essential for effective recovery.
- Executing PITR requires restoring from the full backup and then applying binary log events in sequence up to the desired point in time, with advanced techniques and third-party tools available to optimize large dataset handling and automate the recovery process.
Understanding MySQL Point in Time Recovery
Effective database management requires understanding the significance of point-in-time recovery (PITR). PITR in MySQL enables you to:
- Revert your database to an exact moment after conducting a full backup
- Address issues stemming from data losses
- Diminish the extent of data loss and ensure resilience against unforeseen incidents in your database
Imagine possessing a time machine that could rewind your database to its condition before an unanticipated occurrence impacted it. In such scenarios, PITR acts as that indispensable recovery mechanism.
The restoration capability is primarily dependent on binary log files. These logs meticulously document every modification executed within the database in the data directory, providing essential incremental updates that facilitate time-specific recovery efforts.
The Role of Binary Logs in Point-in-Time Recovery
The binary log is a crucial component for point-in-time recovery (PITR). It operates behind the scenes to capture all alterations applied to the database, including, but not limited to, table creation, data manipulation, and schema modifications.
By harnessing MySQL’s binary logging feature, every event is recorded in real-time. This recording occurs promptly after completing any given SQL statement or transaction execution while maintaining lock conditions intact. Through this meticulous process, commits are accurately logged in their respective sequences.
Three distinct modes are available within MySQL when setting up your binary logs’ format. Each caters to specific needs.
- STATEMENT level – at which only SQL statements causing changes in data are documented succinctly.
- ROW level – where details down to individual row modifications get captured exhaustively, potentially leading to increased file sizes.
- MIXED mode – combines elements from both statement-level and row-level documentation strategies, thus striking an optimal balance between comprehensiveness and resource utilization.
Administrators managing MySQL servers with enabled binary logging functionality should carefully assess these options against their unique system demands. They should ensure alignment with operational requirements while choosing the most appropriate log format configuration to efficiently manage server activities directly or indirectly related to potential data transformations on databases under stewardship.
Preparing for Point-in-Time Recovery
"By failing to prepare, you are preparing to fail" rings especially true in the context of point-in-time recovery (PITR). Preparation begins with a thorough database backup.
This can be achieved through ‘mysqldump’, which effectively captures the entire state of the database at any particular time. Activating binary logging is essential as it meticulously logs all modifications made within the database and facilitates recovery until an exact instant.
Setting Up Your MySQL Server for Recovery Success
Having gained an understanding of point-in-time recovery (PITR) and its essential requirements, let's discuss how to configure your MySQL server for a successful recovery.
This isn't merely about taking a full backup or enabling binary logging. The process also includes automating server configuration and guaranteeing consistent settings to speed up the recovery procedure.
By automating the point-in-time recovery setup on your MySQL server, you simplify activating binary logging and streamline backup creation. These measures improve efficiency when creating backups and effectiveness when restoring data.
Enabling Binary Logging on Your Server
The binary log is crucial for successful point-in-time recovery on your MySQL server. To enable this feature, you must:
- Set the 'log_bin' system variable to 'ON' (it is not dynamic, so this is done at startup).
- Insert the 'log-bin' directive into your server's configuration file (my.cnf or my.ini).
- Use the '--log-bin' option to designate a base name for the binary log files; they will automatically receive numeric extensions, producing an ongoing sequence of logs.
Restarting the MySQL service is necessary to activate binary logging after it has been set up on your server. Following this restart, you can confirm that binary logging has been enabled by executing the ‘SHOW MASTER STATUS;’ command.
Binary logging serves as a mechanism for point-in-time data recovery and enhances database integrity. When binary logging is turned on, every alteration made to data is systematically documented.
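Pulled together, a minimal server configuration enabling binary logging might look like the fragment below (the base name, server id, and format are illustrative choices, not defaults you must use):

```ini
# my.cnf / my.ini fragment (illustrative values)
[mysqld]
log-bin       = mysql-bin   # base name; files get numeric extensions
server-id     = 1           # conventionally set alongside binary logging
binlog_format = ROW         # STATEMENT | ROW | MIXED, as discussed above
```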
Taking a Baseline Full Backup
The cornerstone of Point-in-Time Recovery (PITR) is a thorough full backup, which enables the complete restoration of all tables and databases as the first step in recovery.
To create a backup within MySQL, you may employ the mysqldump utility to produce an SQL file crucial for server data revival. To establish a reliable PITR foundation, using mysqldump should incorporate several key command options that manage binary log positioning and clear logs.
- –all-databases
- –flush-logs
- –master-data=2
- –delete-master-logs
These specific instructions are essential for handling tasks related to binary logs.
Even though many rely on mysqldump for generating full backups due to its popularity and effectiveness, alternative methods exist, like MySQL Enterprise Backup or Percona XtraBackup, offering different capabilities, including incremental or partial backups suited for supporting PITR operations.
Regardless of which tool you choose, it must fully cover system and user databases to provide comprehensive recovery potential. Following creation, best practices dictate carefully transferring the secure backup file and its associated binary log SQL files to your designated recovery host.
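As a sketch of how those options fit together, this small Python helper (hypothetical, not a real MySQL tool) assembles the mysqldump invocation described above into a single command string:

```python
import shlex

def baseline_backup_command(user="root"):
    """Assemble the PITR baseline mysqldump command with the
    binary-log options listed above (illustrative helper)."""
    return ["mysqldump", "--all-databases", "--flush-logs",
            "--master-data=2", "--delete-master-logs",
            f"--user={user}", "--password"]

print(shlex.join(baseline_backup_command()))
# mysqldump --all-databases --flush-logs --master-data=2 --delete-master-logs --user=root --password
```

Redirect the output of the real command to a file, e.g. > full_backup.sql, and keep it together with its recorded binary log coordinates on the recovery host.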
Capturing Changes with Binary Log Files
Upon configuring your server and establishing a full backup, the next step involves documenting subsequent data modifications using binary log files.
To create incremental backups that record these changes, periodically issue the FLUSH LOGS statement or run mysqladmin flush-logs. This routine produces a structured series of updates constituting the incremental backups essential for the restoration process.
To restore your database to a precise point, you must methodically apply binary log files through a solitary connection with the MySQL server. Alternatively, you may direct those logs into a file that can be processed by employing the MySQL client tool for restoration.
Monitoring Current Binary Log File Activity
Keeping track of binary log files is crucial for backup management. By executing the Show Master Status Command, you can pinpoint the current active binary log, revealing its name and location. For an overview of all existing binary logs in your system, issue the SHOW BINARY LOGS command, which will provide a comprehensive list.
For those needing to scrutinize what’s inside these binary logs more closely—whether it’s assessing event timings or positions—the mysqlbinlog utility comes into play. It grants access to peer into these logs’ contents.
By directing this output from the mysqlbinlog utility either through a pagination tool or into another file, one can facilitate extended examination and scrutiny of log data.
Managing Binary Log Retention
Maintaining binary log files for an appropriate length of time is a key component of Point-in-Time Recovery (PITR). The duration that these logs are retained is governed by the MySQL system variable ‘expire_logs_days’, which specifies the timeframe after which binary logs will be automatically expunged. To ascertain the existing retention period, utilize the command ‘SHOW VARIABLES LIKE “expire_logs_days”;’, and to modify this period, use ‘SET global expire_logs_days = number_of_days’.
When purging binary log files from your server, execute the command ‘PURGE BINARY LOGS’. This procedure eliminates specific log files while correctly refreshing the corresponding index file. Nevertheless, it’s vital not to remove any old binary logs required by replicas for their synchronization processes, as this might interfere with replication.
Vigilant management of binary log retention is crucial: if logs are allowed to accumulate unchecked, they can consume substantial disk space and may also hurt performance.
To ensure effective PITR outcomes, it is essential to maintain a complete and unbroken chain of binary log files from immediately after the last full backup to just before the recovery start point. This allows full data restoration to the specified target point within the desired timeline when necessary.
Executing Point-in-Time Data Restoration
Having addressed the foundational aspects of configuring your server and handling binary log files, it is time to tackle how one executes a point-in-time data recovery. Initially, this entails reconstituting the database from the full backup file and setting up a starting reference point for Recovery.
After restoring from the full backup, you incrementally apply an SQL file converted from binary log files. This helps piece together transaction events within the database leading up to your intended historical moment. At this stage, confirming that after applying these binary log events accurately, your database mirrors exactly its state at that specific snapshot before any data mishap is crucial.
Restoring from the Full Backup
In the initial phase of the recovery procedure, create a fresh, empty database on your intended recovery host as the destination for the full backup restoration. Whether on Linux or Windows, apply the backed-up baseline state with a command such as mysql -u root -p < backup.sql.
After laying down this groundwork by restoring a full backup, the point-in-time recovery itself begins. The idea is straightforward: your full backup captures a comprehensive snapshot of your MySQL database at one specific moment, and all subsequent modifications from that precise juncture forward are recorded in the binary logs, allowing incremental changes past that initial checkpoint to be accurately replayed during restoration.
Applying Binary Log Events
Upon restoring the full backup on your MySQL server, you must apply events from the binary log for point-in-time recovery. The mysqlbinlog utility translates these binary log files into SQL commands. It uses the stop-datetime parameter to identify when exactly you wish to stop the recovery process.
These SQL commands generated from the binary logs can either be saved to a file and applied with a command like mysql -u root -p < inc_backup.sql, or streamed directly through a pipe, as in mysqlbinlog binlog_files | mysql -u root -p.
It should be emphasized that precision in this recovery procedure is achievable by utilizing options such as start-position and stop-position provided within the mysqlbinlog tool. This method might surpass datetime parameters in reliability due to its ability to avoid overlooking critical events found within binary logs.
When handling more than one file associated with binary logging, they must be executed against your MySQL database sequentially. This ensures accuracy during point-in-time recoveries. It can be achieved by directing all files simultaneously via a single connection or combining them into one consolidated file before initiating their execution sequence.
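The sequencing requirement can be made concrete with a small helper (hypothetical, for illustration only) that builds the mysqlbinlog pipeline with the files in rotation order and an optional stop point:

```python
import shlex

def replay_command(binlogs, stop_datetime=None, user="root"):
    """Build a `mysqlbinlog ... | mysql` pipeline string.

    All binlog files go into a single mysqlbinlog invocation, in
    sequence, so events replay in order (illustrative helper).
    """
    cmd = ["mysqlbinlog"]
    if stop_datetime:
        cmd.append(f"--stop-datetime={stop_datetime}")
    # Names like binlog.000001 sort lexicographically in rotation
    # order, so a plain sort yields the replay sequence.
    cmd += sorted(binlogs)
    return shlex.join(cmd) + f" | mysql -u {user} -p"

print(replay_command(["binlog.000002", "binlog.000001"],
                     stop_datetime="2024-03-01 12:00:00"))
```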
Finalizing the Recovery Process
Great job on progressing so far! Confirm that the restored data is integral and consistent to complete the recovery procedure. Reference the ‘backup_variables.txt’ file to cross-check whether the server’s binary log position matches your intended recovery point.
As you prepare for this final phase, it’s crucial to eliminate any pre-existing databases with identical names to those you’re restoring.
Doing this mitigates conflict when applying new data. Consider moderating the pace of restoration, especially when dealing with voluminous datasets. A measured approach can prevent overburdening your server and help keep its performance stable.
Advanced Point-in-Time Recovery Techniques
As with most operational practices, there is always room for refinement. In the context of point-in-time recovery (PITR), more sophisticated methods include:
- Managing substantial data volumes throughout the recovery process
- Implementing automated processes for time recovery
- Utilizing capabilities like row-level locking and transactional operations within the InnoDB storage engine
Employing these strategies bolsters the dependability of your point-in-time recovery process.
Handling Large Datasets During Recovery
Handling large datasets during recovery can be daunting, but strategies such as compression and parallel processing help. For instance, LZ4 compression can accelerate the backup and recovery of large datasets while reducing storage and I/O demands.
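As a sketch of the compression approach (database and file names are illustrative, and the lz4 tool must be installed):

```shell
# Compress a streamed logical backup with LZ4 on the way to disk
mysqldump --single-transaction shop | lz4 > shop.sql.lz4

# Decompress on the fly during restore, so no intermediate
# uncompressed file ever touches disk
lz4 -dc shop.sql.lz4 | mysql -u root -p shop
```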
Another strategy is partitioning large tables and implementing recovery on partitioned subsets. This facilitates a more efficient restoration process for datasets containing millions of rows.
You can also enhance the recovery architecture by restoring different database segments on separate servers and using read replicas for point-in-time recovery. Utilizing these strategies ensures a robust and efficient approach to handling large datasets during MySQL point-in-time recovery.
Automating Recovery Procedures
Automation has revolutionized how we approach many operational tasks, and point-in-time recovery (PITR) is one area where it has had a significant impact.
Automating the PITR process is advisable, as it reduces errors caused by manual intervention while speeding up the recovery operation. A typical enhancement is writing shell scripts that collect the binary log files and apply them to a restored backup.
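As a minimal sketch of such a script, the collect-and-replay step can be wrapped in a shell function (the function name and the BINLOG_CMD/MYSQL_CMD override hooks are illustrative, not part of MySQL itself; the overrides let you rehearse the pipeline without touching a server):

```shell
#!/bin/sh
# Sketch: replay binary log files, in order, up to a target timestamp.
# BINLOG_CMD and MYSQL_CMD default to the real tools; override them
# (e.g. BINLOG_CMD=echo MYSQL_CMD=cat) for a dry run.
apply_binlogs() {
    stop_time="$1"; shift              # first argument: recovery target
    # One invocation over all files keeps them in a single stream,
    # so the logs are applied sequentially through one connection.
    ${BINLOG_CMD:-mysqlbinlog} --stop-datetime="$stop_time" "$@" \
        | ${MYSQL_CMD:-mysql -u root -p}
}
```

Called as `apply_binlogs "2024-05-01 12:00:00" binlog.000041 binlog.000042`, the function applies both files through a single connection, stopping at the given timestamp.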
Automating commonly used settings within the mysqlbackup command can simplify processes further. By predefining these options in the [mysqlbackup] section located within MySQL’s configuration file, you streamline command execution.
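For example, frequently used options can be predefined in the [mysqlbackup] group of the configuration file. The paths and user name below are illustrative; the option names are standard mysqlbackup options:

```ini
[mysqlbackup]
backup-dir=/backups/mysql
with-timestamp
compress
user=backupadmin
```

With these defaults in place, a backup run needs far fewer command-line arguments.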
It’s also advantageous to regularly conduct automated test recoveries against backups. This validation measure for your backup integrity should preferably be carried out within an isolated testing environment.
Tools and Utilities for Enhanced Recovery
There’s no need to tackle Point-in-Time Recovery (PITR) alone, as various tools and utilities can improve the recovery workflow.
By compressing binary logs, for instance, you can streamline the recovery process by lessening the amount of data storage needed. Smaller binary log files result in faster transfer rates, which is essential for expediting automated recovery operations.
Cloud-based platforms such as SqlBak make managing backups more straightforward. They offer user-friendly tools and interfaces dedicated to enhancing efficiency in the backup process.
Leveraging the mysqlbinlog Utility
The mysqlbinlog utility is a command-line tool designed to aid recovery by reading MySQL binary log files and converting them into a human-readable format. Using mysqlbinlog, you can perform several tasks, including:
- Transforming binary logs for a chosen time window into an easily read SQL file, using the --start-datetime and --stop-datetime options.
- Retrieving particular events by their offset (position) within the log, via --start-position and --stop-position.
- Tailoring the output so it only includes events from one specified database, with the --database option.
This utility also facilitates more complex recovery endeavors by offering options to include or exclude GTIDs (Global Transaction Identifiers), as well as a --disable-log-bin option that ensures the statements produced in the output are not logged again on your server, preventing unnecessary duplication of events.
Starting with MySQL version 8.0.14, mysqlbinlog has enhanced its capabilities further. It now seamlessly handles encrypted binary log files and has automatic functionality for decompressing and decrypting event data when transaction compression is turned on within the system.
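A few illustrative invocations of these filters (file names, database name, and the GTID set are placeholders):

```shell
# Extract a time window into replayable SQL
mysqlbinlog --start-datetime="2024-05-01 00:00:00" \
            --stop-datetime="2024-05-01 12:00:00" binlog.000042 > window.sql

# Extract by byte position within the log
mysqlbinlog --start-position=4 --stop-position=1570 binlog.000042

# Keep only events for one database
mysqlbinlog --database=shop binlog.000042

# Skip already-applied transactions and make sure the replay is not
# written to the server's own binary log again
mysqlbinlog --exclude-gtids="3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5" \
            --disable-log-bin binlog.000042 | mysql -u root -p
```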
Third-Party Tools for Simplified Recovery
Outside the standard MySQL utilities, a range of external tools can ease the process of data recovery on your database server. Tools such as SQLBak and SQLBackupAndFTP offer intuitive interfaces and automated systems for executing point-in-time recoveries and managing backups across various platforms.
Take Zmanda Recovery Manager for MySQL as an example. It includes features like:
- The capability for point-in-time data recovery
- Automation of backup schedules
- Systems monitoring
- Detailed reporting
It serves as an all-encompassing solution tailored to recovering MySQL databases.
Some tools like MyDumper and MyLoader employ multithreading to significantly improve the speed at which logical backups are created and subsequently restored within a MySQL environment.
By creating dump files more efficiently, these tools make your server’s backup and point-in-time recovery strategy both faster and simpler.
Best Practices for Point-in-Time Recovery
Understanding the nuances of Point-In-Time Recovery (PITR) and familiarizing yourself with the related tools is crucial. As you apply this knowledge, consider these essential best practices:
- Synchronize your server clocks to ensure consistency
- Consistently perform backups of your binary log files
- Keep detailed records of recovery protocols
- Comply with applicable data protection regulations
Following these guidelines will help you reduce both downtime and the risk of losing important data, thereby supporting uninterrupted business operations.
Regular Testing of Recovery Procedures
It is imperative to routinely test your recovery processes, ensuring that data can be precisely recovered to a designated point in time should unintended deletions or damage occur. Rehearsing the point-in-time recovery method in a controlled staging environment before running it against a production database solidifies your comprehension of the process and confirms that it works.
To avoid disrupting live databases with test transactions, it’s recommended to utilize an independent server dedicated solely to evaluating recovery scenarios.
Automated systems designed for conducting tests within these environments can also help affirm the dependability of backups and their capacity to be successfully restored when necessary.
Maintaining a Secure Backup Environment
Ensuring a secure backup environment is crucial for any recovery strategy. This includes protecting and observing binary logs to block unauthorized access and identify potential security risks. To safeguard the sensitive information in binary log files, you can activate encryption by setting binlog_encryption to ON.
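Binary log encryption can be enabled at runtime and persisted across restarts. Note that a configured keyring is required, and only log files created after the change are encrypted:

```sql
SET PERSIST binlog_encryption = ON;

-- Verify the setting took effect
SHOW VARIABLES LIKE 'binlog_encryption';
```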
It’s recommended that you:
- Retain backups on an independent server or in cloud storage, to mitigate data loss from server malfunctions and to improve protection against unauthorized access.
- Apply encryption methods for all backups.
- Establish procedures for handling failure scenarios, such as repeated transfer attempts.
Taking these steps enhances the dependability of your incremental backup framework.
Having navigated the procedure of MySQL Point in Time Recovery, we’ve seen its essential role in database administration. Grasping the significance of binary logs and getting your server ready paves the way for implementing recovery measures and for tapping into the more sophisticated methodologies and tools.
Mastering PITR is indispensable for any database manager striving to rebound effectively from unpredictable incidents, limit data loss, and guarantee uninterrupted business operations. Always remember that a time-based recovery capability such as point-in-time recovery can radically transform your approach to safeguarding your databases against loss or damage.
Frequently Asked Questions
How do you do point-in-time recovery?
To perform a point-in-time recovery, you restore a backup taken before the target time and then replay the subsequent changes up to a chosen timestamp, often into a fresh database.
Such recovery techniques are often employed as a strategy for dealing with problems related to data corruption.
What is point-in-time recovery SQL?
SQL point-in-time recovery enables the restoration of a database to a precise moment, recovering all data modifications made up to that point. This feature is beneficial when you need a copy of the database as it existed at an earlier time.
How do binary logs contribute to PITR?
Binary logs are critical in facilitating point-in-time recovery (PITR) for databases. They maintain a record of all modifications to the database, enabling incremental recovery to any needed point in time.
<p>The post Master MySQL Point in Time Recovery first appeared on ScaleGrid.</p>
MySQL Shorts - Episode #57 is Released
MySQL Rockstars 2023
MySQL Shorts - Episode #58 is Released
MySQL Asynchronous Connectivity with MySQL Connector/Python
Using the Oracle Cloud TypeScript SDK Part 5 - Creating a MySQL HeatWave Backup
Want to become an AI developer? Here’s how with MySQL HeatWave
Analysts react to MySQL HeatWave Gen AI and vector store innovations
Percona XtraBackup 8.0.28 Supports Encrypted Table Backups with AWS KMS
Champs Recap: Devart’s Award-Winning Products in Q1 2024
Last season was quite fruitful in terms of awards scored by our products on independent review platforms. And today, we've got a lot more of those to share. Without further ado, let's get started!
The post Champs Recap: Devart’s Award-Winning Products in Q1 2024 appeared first on Devart Blog.
MySQL InnoDB’s Instant Schema Changes: What DBAs Should Know
In MySQL 8.0.12, we introduced a new algorithm for DDLs that won’t block the table when changing its definition. The first instant operation was adding a column at the end of a table; this was a contribution from Tencent Games.
Then in MySQL 8.0.29 we added the possibility to add (or remove) a column anywhere in the table.
For more information, please check these articles from Mayank Prasad : [1], [2]
In this article, I want to focus on some dangers that can arise when using this feature blindly.
Default Algorithm
Since MySQL 8.0.12, for any supported DDL, the default algorithm is INSTANT. This means that the ALTER statement will only modify the table’s metadata in the data dictionary. No exclusive metadata locks are taken on the table during the preparation and execution phases of the operation, and table data is unaffected, making the operations instantaneous.
The other two algorithms are COPY and INPLACE; see the manual for the online DDL operations.
However, there is a limitation even when the operation supports INSTANT: a table can accumulate at most 64 instant changes. After reaching that limit, the table needs to be rebuilt.
If the algorithm is not specified in the ALTER statement (DDL operation), the appropriate algorithm is chosen silently. Of course, this can lead to a nightmare situation in production if not expected.
Always specify the ALGORITHM
So the first recommendation is always to specify the algorithm when performing DDLs, even if it’s the default one. When the algorithm is specified and MySQL is not able to use it, an error is thrown instead of silently executing the operation with another algorithm:
SQL > ALTER TABLE t1 DROP col1, ALGORITHM=INSTANT;
ERROR: 4092 (HY000): Maximum row versions reached for table test/t1.
No more columns can be added or dropped instantly. Please use COPY/INPLACE.
Monitor the instant changes
The second recommendation is also to monitor the number of instant changes performed on the tables.
MySQL keeps the row versions in INFORMATION_SCHEMA:
SQL > SELECT NAME, TOTAL_ROW_VERSIONS
FROM INFORMATION_SCHEMA.INNODB_TABLES WHERE NAME LIKE 'test/t1';
+---------+--------------------+
| NAME    | TOTAL_ROW_VERSIONS |
+---------+--------------------+
| test/t1 |                 63 |
+---------+--------------------+
In the example above, the DBA can perform one more INSTANT DDL operation; after that one, the table must be rebuilt before MySQL can perform another.
As a DBA, it’s good practice to monitor all tables and decide when a table needs to be rebuilt (to reset that counter).
This is an example of a recommended query to add to your monitoring tool:
SQL > SELECT NAME, TOTAL_ROW_VERSIONS,
             64 - TOTAL_ROW_VERSIONS AS "REMAINING_INSTANT_DDLs",
             ROUND(TOTAL_ROW_VERSIONS/64 * 100, 2) AS "DDLs %"
        FROM INFORMATION_SCHEMA.INNODB_TABLES
       WHERE TOTAL_ROW_VERSIONS > 0 ORDER BY 2 DESC;
+--------------------------+--------------------+------------------------+--------+
| NAME                     | TOTAL_ROW_VERSIONS | REMAINING_INSTANT_DDLs | DDLs % |
+--------------------------+--------------------+------------------------+--------+
| test/t1                  |                 63 |                      1 |  98.44 |
| test/t                   |                  4 |                     60 |   6.25 |
| test2/t1                 |                  3 |                     61 |   4.69 |
| sbtest/sbtest1           |                  2 |                     62 |   3.13 |
| test/deprecation_warning |                  1 |                     63 |   1.56 |
+--------------------------+--------------------+------------------------+--------+
To reset the counter and rebuild the table, you can use OPTIMIZE TABLE <table> or ALTER TABLE <table> ENGINE=InnoDB.
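For example, using the test/t1 table from above, a rebuild resets the counter:

```sql
SQL > OPTIMIZE TABLE test.t1;

SQL > SELECT NAME, TOTAL_ROW_VERSIONS
        FROM INFORMATION_SCHEMA.INNODB_TABLES
       WHERE NAME = 'test/t1';
-- TOTAL_ROW_VERSIONS is back to 0 after the rebuild
```

Note that on InnoDB, OPTIMIZE TABLE is mapped to a table recreate, which is why it resets the counter just like ALTER TABLE ... ENGINE=InnoDB.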
Conclusion
In conclusion, MySQL 8.0’s introduction of the INSTANT algorithm for DDL operations has revolutionized schema changes by making them non-blocking. However, with the limit of 64 instant changes before a table rebuild is required, it’s crucial to specify the algorithm explicitly in ALTER statements to avoid unexpected behavior. Monitoring the number of instant changes through INFORMATION_SCHEMA is also recommended, so you are not caught unaware by the limit and can plan table rebuilds carefully.
Enjoy MySQL!