Channel: Planet MySQL

Is Galera trx_commit=2 and sync_binlog=0 evil?


It has been almost five years since I posted on my personal MySQL related blog. In the past few years I worked for Severalnines, and blogging both on their corporate blog and here would have been confusing. After that I forgot and neglected this blog a bit, but it’s time to revive it!

Speaking at Percona Live Europe – Amsterdam 2019

Why? I will be presenting at Percona Live Europe soon, and this blog and its upcoming content form the more in-depth part of some background stories in my talk on benchmarking: Benchmarking should never be optional. The talk mainly covers why you should always benchmark your servers, clusters and entire systems.

See me speak at Percona Live Europe 2019

If you wish to see me present, you can get a 20% discount using this code: CMESPEAK-ART. Now let’s move on to the real content of this post!

Innodb_flush_log_at_trx_commit=2 and sync_binlog=0

At one of my previous employers we ran a Galera cluster of 3 nodes to store all shopping carts of their webshop. Any cart operation (adding a product to the basket, removing a product from the basket or increasing/decreasing the number of items) would end up as a database transaction. With such important information stored in this database, in a traditional MySQL asynchronous replication setup it would be essential to ensure all transactions are retained at all times. To be fully ACID compliant the master would have both innodb_flush_log_at_trx_commit set to 1 and sync_binlog set to 1, to ensure every transaction is written to the logs and flushed to disk on commit. When every transaction has to wait for data to be written to the logs and flushed to disk, this limits the number of cart operations you can do.
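For reference, the two durability extremes discussed here look like this in my.cnf:

```ini
[mysqld]
# Fully durable (ACID): write and flush the InnoDB log and sync the
# binary log on every commit.
innodb_flush_log_at_trx_commit = 1
sync_binlog                    = 1

# Relaxed (faster, crash-unsafe): flush the InnoDB log roughly once per
# second and let the OS decide when to sync the binary log.
# innodb_flush_log_at_trx_commit = 2
# sync_binlog                    = 0
```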

Somewhere in a dark past the company exceeded the number of cart operations possible on this host, and one of the engineers found a Stack Overflow post describing how to improve the performance of MySQL by “tuning” the combination of these two variables. Naturally this solved the immediate capacity problem, but sacrificed consistency at the same time. As Jean-François Gagné pointed out in a blog post, you can lose transactions in MySQL when you suffer an OS crash. This was bound to happen some day, and when that day arrived a new solution had become available: Galera!

Galera and being crash-unsafe

Galera offers virtually synchronous, certification-based replication to ensure your transaction has been replicated to the other nodes in the cluster. You just spread your cluster over your entire infrastructure on multiple hosts in multiple racks. When a node crashes it will recover when rejoining and Galera will fix itself, right?

Why would you care about crash-unsafe situations?

The answer is a bit more complicated than a simple yes or no. When an OS crash happens (or a kill -9), InnoDB can be ahead of the data written to the binary logs. But Galera doesn’t use binary logs by default, right? No, it doesn’t, but it uses the GCache instead: this ring-buffer file stores all committed write-sets, so it acts similarly to the binary logs and is affected by these two variables in a similar way. Also, if you have asynchronous slaves attached to Galera nodes, Galera writes to both the GCache and the binary logs simultaneously. In other words: you could create a transaction gap with a crash-unsafe Galera node.

However, Galera keeps the state of the last UUID and sequence number in the grastate.dat file in the MySQL data directory. While Galera is running the file contains seqno: -1, and only upon a normal shutdown is the real position written out. So when Galera reads the grastate.dat file on startup and finds seqno: -1, it assumes an unclean shutdown happened, and if the node is joining an existing cluster (becoming part of the primary component) it forces a State Snapshot Transfer (SST) from a donor. This wipes all data on the broken node, copies all data over and makes sure the joining node has the same dataset.
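For illustration, a grastate.dat after a clean shutdown looks roughly like this (the exact fields vary per Galera version; the UUID and seqno here are taken from the recovery output shown later in this post):

```ini
# GALERA saved state
version: 2.1
uuid:    8bcf4a34-aedb-14e5-bcc3-d3e36277729f
seqno:   114428
safe_to_bootstrap: 0
```

After a crash (or while the node is running) the seqno field reads -1 instead.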

Apart from the fact that an unclean shutdown always triggers an SST (bad if your dataset is large, but more on that in a future post), Galera pretty much recovers itself and is not much affected by being crash-unsafe. So what’s the problem?

It’s not a problem until all nodes crash at the same time.

Full Galera cluster crash

Suppose all nodes crash at the same time: none of the nodes would have been shut down properly and all nodes would have seqno: -1 in grastate.dat. In this case a full cluster recovery has to be performed, where MySQL is started with the --wsrep-recover option. This opens the InnoDB header files, shuts down immediately and returns the last known state for that particular node.

$ mysqld --wsrep-recover
...
2019-09-09 13:22:27 36311 [Note] InnoDB: Database was not shutdown normally!
2019-09-09 13:22:27 36311 [Note] InnoDB: Starting crash recovery.
...
2019-09-09 13:22:28 36311 [Note] WSREP: Recovered position: 8bcf4a34-aedb-14e5-bcc3-d3e36277729f:114428
...

Now we have three independent Galera nodes that each suffered an unclean shutdown. This means each of them may have lost transactions from up to one second before crashing. Even though the transactions committed within the cluster are theoretically the same, as the cluster crashed at the same moment in time, this doesn’t mean all three nodes have the same number of transactions flushed to disk. Most probably all three nodes have a different last UUID and sequence number, and even within a node there could be gaps, as transactions are executed in parallel. Are we back to eeny-meeny-miny-moe, just picking one of these nodes?
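You don’t have to guess: run mysqld --wsrep-recover on every node, collect the reported positions and bootstrap from the most advanced one. A minimal sketch (the node names and seqnos below are made up for illustration):

```shell
#!/bin/sh
# Recovered positions, as reported by `mysqld --wsrep-recover` on each node.
# Format per line: <node> <uuid>:<seqno>  (illustrative values)
positions="node1 8bcf4a34-aedb-14e5-bcc3-d3e36277729f:114428
node2 8bcf4a34-aedb-14e5-bcc3-d3e36277729f:114431
node3 8bcf4a34-aedb-14e5-bcc3-d3e36277729f:114430"

# Split on spaces and the colon; field 3 is the sequence number.
# Keep the node with the highest seqno.
best=$(echo "$positions" | awk -F'[ :]' '$3 > max { max = $3; node = $1 } END { print node }')
echo "Bootstrap the cluster from: $best"
```

Bootstrap that node first (e.g. with --wsrep-new-cluster) and let the other nodes rejoin via SST.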

Can we consider Galera with trx_commit=2 and sync_binlog=0 to be evil?

Yes and no… Yes, because we have potentially lost a few transactions, so it’s bad for consistency. No, because the entire cart functionality became unavailable anyway and carts were abandoned in all sorts of states. As the entire cluster crashed, customers couldn’t perform any actions on their carts and had to wait until service was restored. Even if a customer had just finished a payment, the next step in the cart could not have been saved due to the unavailability of the database. This means carts were abandoned and some may actually have been paid for. Even without the lost transactions we would need to recover these carts and payments manually.

So to be honest: I think it doesn’t matter that much, as long as you handle cases like this properly. If you design your application right, you catch the (database) error after returning from the payment screen and create a ticket for customer support to pick this up. Even better would be to trigger a circuit breaker and ensure your customers can’t re-use their carts until the database has been recovered. Another approach would be to scavenge data from various sources and double-check the integrity of your system.

The background story

Now why is this the background to my talk, when it doesn’t have anything to do with benchmarking? The actual story in my presentation is about a particular problem with hyperconverging an (existing) infrastructure. A hyperconverged infrastructure syncs every write to disk to at least one other hypervisor in the infrastructure (via the network), to ensure that if a hypervisor dies, you can quickly spin up a new node on a different hypervisor. As we have learned above: the data on a crashed Galera node is unrecoverable and will be deleted during the joining process (SST). This means it’s useless to sync Galera data to another hypervisor in a hyperconverged infrastructure. And guess what the risk is if you hyper-converge your entire infrastructure into a single rack? 😆

I’ll write more about the issues with Galera on a hyperconverged infrastructure in the next post!


Setting up multi-source replication (MySQL 8.0) with GTID based replication

Last week I posted a blog on how to set up multi-source replication for MySQL 5.7 using GTID based replication; in this blog we will do the same using MySQL 8.0.

Most of the procedure is the same; the only two things we need to change are:
1) the procedure for creating the replication user, and
2) fetching the GTID value from the MySQL dump file.

1) Create replication user for MySQL 8
MySQL 8 uses a new default authentication plugin, so creating the replication user differs from MySQL 5.7. The procedure for 5.7 was:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
With MySQL 8 you need to create the user like:
mysql> CREATE USER 'repl_user'@'slave-host' IDENTIFIED WITH sha256_password BY 'repl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'slave-host';
Another option is to configure MySQL to use the old authentication method; then you do not have to modify the CREATE statement above. This can be done by setting the configuration variable default_authentication_plugin to "mysql_native_password".
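If you go that route, the setting would look like this in my.cnf (a server restart is required):

```ini
[mysqld]
# Revert to the pre-8.0 default so CREATE USER ... IDENTIFIED BY
# produces a mysql_native_password account again.
default_authentication_plugin = mysql_native_password
```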

2) Changed format of the GTID_PURGED information in the dumpfile
The GTID_PURGED information in the dumpfile now includes a version comment, like:
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ 'dfa44283-d466-11e9-80ec-ec21e522bf21:1-6';

To fetch the GTID you now have to use the command below, which also removes the new comment:
cat dumpM1.sql | grep GTID_PURGED | perl -p0 -e 's#/\*.*?\*/##sg' | cut -f2 -d'=' | cut -f2 -d$'\''

The GitHub page and the scripts to create a test environment have also been updated!

Cassandra on Fedora 30


The first thing to do on Fedora 30 is to check which parts of Apache Cassandra are installed. You can use the following rpm command to determine that:

rpm -qa | grep cassandra

My Fedora 30 returned the following values:

cassandra-java-libs-3.11.1-12.fc30.x86_64
cassandra-python2-cqlshlib-3.11.1-12.fc30.x86_64
cassandra-3.11.1-12.fc30.x86_64
python2-cassandra-driver-3.18.0-1.fc30.x86_64

Notably missing from the rpm list is the cassandra-server package. You install cassandra-server with the dnf utility:

dnf install -y cassandra-server

You should see dnf print an installation log as it installs the cassandra-server package and its dependencies.

Fedora Magazine has a great Get Started with Apache Cassandra on Fedora article on all the steps required to setup clusters. This article only covers creating and enabling the Cassandra service, and setting up a single node Cassandra instance.

You start Cassandra with the following command as the root user:

systemctl start cassandra

You enable Cassandra with the following command as the root user:

systemctl enable cassandra

It creates the following symlink:

Created symlink /etc/systemd/system/multi-user.target.wants/cassandra.service → /usr/lib/systemd/system/cassandra.service.

You can connect to the Test Cluster with the following command:

cqlsh

You should see the following:

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.

You can see the options by typing the help command:

Documented shell commands:
===========================
CAPTURE  CLS          COPY  DESCRIBE  EXPAND  LOGIN   SERIAL  SOURCE   UNICODE
CLEAR    CONSISTENCY  DESC  EXIT      HELP    PAGING  SHOW    TRACING

CQL help topics:
================
AGGREGATES               CREATE_KEYSPACE           DROP_TRIGGER      TEXT     
ALTER_KEYSPACE           CREATE_MATERIALIZED_VIEW  DROP_TYPE         TIME     
ALTER_MATERIALIZED_VIEW  CREATE_ROLE               DROP_USER         TIMESTAMP
ALTER_TABLE              CREATE_TABLE              FUNCTIONS         TRUNCATE 
ALTER_TYPE               CREATE_TRIGGER            GRANT             TYPES    
ALTER_USER               CREATE_TYPE               INSERT            UPDATE   
APPLY                    CREATE_USER               INSERT_JSON       USE      
ASCII                    DATE                      INT               UUID     
BATCH                    DELETE                    JSON            
BEGIN                    DROP_AGGREGATE            KEYWORDS        
BLOB                     DROP_COLUMNFAMILY         LIST_PERMISSIONS
BOOLEAN                  DROP_FUNCTION             LIST_ROLES      
COUNTER                  DROP_INDEX                LIST_USERS      
CREATE_AGGREGATE         DROP_KEYSPACE             PERMISSIONS     
CREATE_COLUMNFAMILY      DROP_MATERIALIZED_VIEW    REVOKE          
CREATE_FUNCTION          DROP_ROLE                 SELECT          
CREATE_INDEX             DROP_TABLE                SELECT_JSON     

Here’s my script that creates a Cassandra keyspace, which is more or less a database. You use the USE command to connect to the keyspace, like you would connect to a database in MySQL. Cassandra does not have sequences, because they’re not a good fit for a distributed architecture, and it does not support a native procedural extension like relational databases do. You must create user-defined functions (UDFs) by embedding the logic in Java.

This script does the following:

  • Creates a keyspace
  • Uses the keyspace
  • Conditionally drops tables and functions
  • Creates two tables
  • Inserts data into the two tables
  • Queries data from the tables

I also included a call to a UDF inside a query in two of the examples. One of the queries demonstrates how to return a JSON structure from a query. To simplify things and clarify the script’s behavior, the details are outlined below.

  • The first segment of the script creates the keyspace, changes the scope to use the keyspace, conditionally drops tables, creates tables, and inserts values into the tables:

    /* Create a keyspace in Cassandra, which is like a database
       in MySQL or a schema in Oracle. */
    CREATE KEYSPACE IF NOT EXISTS student
      WITH REPLICATION = {
         'class':'SimpleStrategy'
        ,'replication_factor': 1 }
      AND DURABLE_WRITES = true;
    
    /* Use the keyspace or connect to the database. */
    USE student;
    
    /* Drop the member table from the student keyspace. */
    DROP TABLE IF EXISTS member;
    
    /* Create a member table in the student keyspace. */
    CREATE TABLE member
    ( member_number       VARCHAR
    , member_type         VARCHAR
    , credit_card_number  VARCHAR
    , credit_card_type    VARCHAR
    , PRIMARY KEY ( member_number ));
    
    /* Conditionally drop the contact table from the student keyspace. */
    DROP TABLE IF EXISTS contact;
    
    /* Create a contact table in the student keyspace. */
    CREATE TABLE contact
    ( contact_number      VARCHAR
    , contact_type        VARCHAR
    , first_name          VARCHAR
    , middle_name         VARCHAR
    , last_name           VARCHAR
    , member_number       VARCHAR
    , PRIMARY KEY ( contact_number ));
    
    /* Insert a row into the member table. */
    INSERT INTO member
    ( member_number, member_type, credit_card_number, credit_card_type )
    VALUES
    ('SFO-12345','GROUP','2222-4444-5555-6666','VISA');
    
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00001','FAMILY','Barry', NULL,'Allen','SFO-12345');
    
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00002','FAMILY','Iris', NULL,'West-Allen','SFO-12345');
    
    /* Insert a row into the member table. */
    INSERT INTO member
    ( member_number, member_type, credit_card_number, credit_card_type )
    VALUES
    ('SFO-12346','GROUP','3333-8888-9999-2222','VISA');
    
    /* Insert a row into the contact table. */
    INSERT INTO contact
    ( contact_number, contact_type, first_name, middle_name, last_name, member_number )
    VALUES
    ('CUS_00003','FAMILY','Caitlin','Marie','Snow','SFO-12346');
    

    The following queries the member table:

    /* Select all columns from the member table. */
    SELECT * FROM member;
    

    It returns the following:

     member_number | credit_card_number  | credit_card_type | member_type
    ---------------+---------------------+------------------+-------------
         SFO-12345 | 2222-4444-5555-6666 |             VISA |       GROUP
         SFO-12346 | 3333-8888-9999-2222 |             VISA |       GROUP
    
  • Create a concatenate user-defined function (UDF) for Cassandra. The first step requires you to edit the cassandra.yaml file, which you find in the /etc/cassandra/default.conf directory. There is a single parameter that you need to edit: the enable_user_defined_functions parameter. By default the parameter is set to false, and you need to enable it to create UDFs.

    If you open the cassandra.yaml file as the root user, you should find the parameter on line 987, like:

    # If unset, all GC Pauses greater than gc_log_threshold_in_ms will log at
    # INFO level
    # UDFs (user defined functions) are disabled by default.
    # As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
    enable_user_defined_functions: false
    

    After you make the edit, the cassandra.yaml file should look like this:

    # If unset, all GC Pauses greater than gc_log_threshold_in_ms will log at
    # INFO level
    # UDFs (user defined functions) are disabled by default.
    # As of Cassandra 3.0 there is a sandbox in place that should prevent execution of evil code.
    enable_user_defined_functions: true
    

    After you make the change, you can create your own UDF. The following UDF formats the first, middle, and last name so there’s only one whitespace between the first and last name when the middle name value is null.

    This type of function must use a CALLED ON NULL INPUT clause in lieu of a RETURNS NULL ON NULL INPUT clause. The latter would force the function to return a null value if any one of the parameters were null.

    /* Drop the concatenate function because a replace disallows changing a
       RETURNS NULL ON NULL INPUT with a CALLED ON NULL INPUT without raising
       an "89: InvalidRequest" exception. */
    DROP FUNCTION concatenate;
    
    /* Create a user-defined function to concatenate names. */
    CREATE OR REPLACE FUNCTION concatenate (first_name VARCHAR, middle_name VARCHAR, last_name VARCHAR)
    CALLED ON NULL INPUT
    RETURNS VARCHAR
    LANGUAGE java
    AS $$
      /* Concatenate first and last names when middle name is null, and
         first, middle, and last names when middle name is not null. */
      String name;
    
      /* Check for null middle name. */
      if (middle_name == null) {
        name = first_name + " " + last_name; }
      else {
        name = first_name + " " + middle_name + " " + last_name; }
    
      return name;
    $$;
    
  • Query the values from the contact table with the UDF function in the SELECT-list:

/* Query the contact information. */
SELECT member_number
,      contact_number
,      contact_type
,      concatenate(first_name, middle_name, last_name) AS full_name
FROM   contact;

It returns the following:

 member_number | contact_number | contact_type | full_name
---------------+----------------+--------------+--------------------
     SFO-12345 |      CUS_00001 |       FAMILY |        Barry Allen
     SFO-12345 |      CUS_00002 |       FAMILY |    Iris West-Allen
     SFO-12346 |      CUS_00003 |       FAMILY | Caitlin Marie Snow

Query the values from the contact table with a JSON format:

/* Query the contact information and return in a JSON format. */
SELECT JSON
       contact_number
,      contact_type
,      concatenate(first_name, middle_name, last_name) AS full_name
FROM   contact;

It returns the following:

 [json]
-------------------------------------------------------------------------------------------------
{"contact_number": "CUS_00001", "contact_type": "FAMILY", "full_name": "Barry Allen"}
{"contact_number": "CUS_00002", "contact_type": "FAMILY", "full_name": "Iris West-Allen"}
{"contact_number": "CUS_00003", "contact_type": "FAMILY", "full_name": "Caitlin Marie Snow"}

Laravel 6 CRUD Example | Laravel 6 Tutorial For Beginners


Laravel 6 CRUD Example | Laravel 6 Tutorial For Beginners is today’s topic. You can upgrade to Laravel 6 by following the official upgrade guide. Laravel 6 continues the improvements made in Laravel 5.8 by introducing the following features.

  1. Semantic versioning
  2. Compatibility with Laravel Vapor,
  3. Improved authorization responses,
  4. Job middleware,
  5. Lazy collections,
  6. Sub-query improvements,
  7. The extraction of frontend scaffolding to the laravel/ui Composer package

#Server Requirements For Laravel 6

See the following server requirements to run Laravel 6.

  1. PHP >= 7.2.0
  2. BCMath PHP Extension
  3. Ctype PHP Extension
  4. JSON PHP Extension
  5. Mbstring PHP Extension
  6. OpenSSL PHP Extension
  7. PDO PHP Extension
  8. Tokenizer PHP Extension
  9. XML PHP Extension

#Prerequisites

For this Laravel 6 CRUD project, I am using Visual Studio Code as the editor. If you do not know how to set up PHP in VS Code, we have a tutorial on this blog on how to configure Visual Studio Code for PHP developers.

Laravel 6 CRUD Tutorial With Example

On this blog, we have already written Laravel 5.6 CRUD, Laravel 5.7 CRUD, and Laravel 5.8 CRUD tutorials.

You can install Laravel 6 via the global installer or using the Composer create-project command.

composer create-project --prefer-dist laravel/laravel laravel6

Now, go inside the laravel6 folder. You need to install the frontend dependencies using the following command.

npm install

Step 1: Configure the MySQL Database

I have created the MySQL database called laravel6 and now write the MySQL credentials inside a .env file.

Before creating migrations, we need to set up the MySQL database; I assume you know how to create a database using phpMyAdmin.

After creating a database, we will add database credentials in our application. Laravel has the .env environment file which will have all the sensitive data like database details, mail driver details, etc.

It is not recommended to save such information directly inside the code (environment files are not limited to PHP; they are used in other frameworks as well).

Values inside a .env file are loaded inside the files from the config directory. A .env file is located at the root of our laravel project.

Whenever you make changes to the .env file, don’t forget to restart the server (if you are using the Laravel dev server). If you are using a virtual host and the changes don’t seem to take effect, run php artisan config:clear (this command clears the configuration cache) in your terminal.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel6
DB_USERNAME=root
DB_PASSWORD=root

So now you will be able to connect to the MySQL database.

Laravel always ships with migration files, so you can generate the tables in the database using the following command.

php artisan migrate

You will see that three tables will be created inside the MySQL database.

In Laravel 6, you will also see one more migration called create_failed_jobs_table, which creates the failed_jobs table.


We will create a CRUD operation on Netflix shows. So a user can create, read, update, and delete the shows from a database. So, let’s create the model and migration files.

Step 2: Create the model and migration files

We have already set up our database; now let’s look at database migrations. A migration stores the details about a database table and its properties, so you don’t have to manually create all of the tables by going to a database interface like phpMyAdmin.

We can create migrations using artisan with the "make:migration" command.

Type the following command to create a model and migration files.

php artisan make:model Show -m

In Laravel, the name of a model has to be singular and the name of the migration plural, so Laravel can automatically derive the table name.

You can find these migration files inside the database/migrations directory.

Laravel comes with three migrations, namely users, failed_jobs, and the password_resets (all migration files are prepended with a timestamp), which are used for authentication.

If you want to create the migration but have a different table name in mind, then you can explicitly define the table name with the --create flag.

It will create the Show.php file and [timestamp]create_shows_table.php migration file.

Now, open a migration file inside the database >> migrations >> [timestamp]create_shows_table file and add the following schema inside it.

When you open the [timestamp]create_shows_table.php file, you will see two methods, up() and down().

The up() function is used for creating/updating tables, columns, and indexes. The down() function reverses the operations done by the up() method.

Inside the up() function, we have the Schema::create('table_name', callback) method, which is used for creating a new table.

Inside the callback, we have $table->bigIncrements('id'), which creates an auto-increment integer column with a primary key; the argument 'id' is the name of the column.

Second is $table->timestamps(), which creates two timestamp columns, created_at and updated_at. The created_at column is filled when the row is created and updated_at when the row is updated.

Now, add the following columns.

public function up()
{
        Schema::create('shows', function (Blueprint $table) {
            $table->bigIncrements('id');
            $table->string('show_name');
            $table->string('genre');
            $table->float('imdb_rating');
            $table->string('lead_actor');
            $table->timestamps();
        });
}

Now, create the table in the database using the following command.

php artisan migrate

The command will run the migrations and create defined tables. It will execute the up() function.

If you need to reverse the migrations, you can use the migrate:rollback command, which executes the down() function: php artisan migrate:rollback.

Now, add the fillable property inside Show.php file.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Show extends Model
{
    protected $fillable = ['show_name', 'genre', 'imdb_rating', 'lead_actor'];
}

We can specify all the properties to modify the behavior of the model.

We can write a $table property, which determines the name of the table that this model will interact with in future operations.

By default, It will define a table name as the plural of the model name, e.g., shows table for Show model and users table for User model.

When you don’t need timestamps on your table, you also have to set the $timestamps property to false in your model, because Laravel expects your table to have the created_at and updated_at timestamp columns.

Step 3: Create routes and controller

First, create the ShowController using the following command.

php artisan make:controller ShowController --resource

Note that we have added the --resource flag, which will define the following seven methods inside the ShowController:

  1. Index (used for displaying a list of Shows)
  2. Create (will show the view with a form for creating a Show)
  3. Store (used for creating a Show inside the database. Note: create method submits to store method)
  4. Show (will display a specified Show)
  5. Edit (will show the form for editing a Show. Form will be filled with the existing Show data)
  6. Update (Used for updating a Show inside the database. Note: edit submits to update method)
  7. Destroy (used for deleting a specified Show)

Now, inside the routes >> web.php file, add the following line of code.

<?php

// ShowController.php

Route::get('/', function () {
    return view('welcome');
});

Route::resource('shows', 'ShowController');

We can pass dynamic parameters with {} brackets, and you might have noticed that show, update, and destroy have the same URL but different HTTP methods, so that is legitimate.

Just like the --resource flag, Laravel has a Route::resource() method that generates all of the above routes. You can use that method instead of specifying them individually.

Actually, by adding that single line, we have registered multiple routes for our application. We can check them using the following command.

php artisan route:list

Step 4: Configure Bootstrap 4

Right now there is an issue: somehow I do not see any code inside the public >> css >> app.css file. I have already compiled the CSS and JS files with the npm run dev command, but the app.css file is still empty.

One possible solution is to copy the code from the previous Laravel version’s app.css file and paste it inside the Laravel 6 folder’s public >> css >> app.css file. I have put the link to the previous CSS file below.

https://raw.githubusercontent.com/KrunalLathiya/Laravel58CRUD/master/public/css/app.css

The second possible solution is to use the new frontend scaffolding, which is only available in Laravel 6 and not in earlier versions like Laravel 5.8 or 5.7.

While Laravel 6 does not dictate which JavaScript or CSS pre-processors you use, it does provide the essential starting point using Bootstrap and Vue that will be helpful for many projects.

By default, the Laravel uses the NPM to install both of these frontend packages.

The Bootstrap and Vue scaffolding provided by Laravel is located in the laravel/ui Composer package, which you can install using Composer via the following command:

composer require laravel/ui --dev

 


Once the laravel/ui package has been installed, you may install the frontend scaffolding using the ui Artisan command:

// Generate basic scaffolding...
php artisan ui vue
php artisan ui react

// Generate login / registration scaffolding...
php artisan ui vue --auth
php artisan ui react --auth

Step 5: Create the views

Inside the resources >> views folder, create the following three view files.

  1. create.blade.php
  2. edit.blade.php
  3. index.blade.php

Inside the views folder, we also need to create the layout file.

So create a file inside the views folder called layout.blade.php and add the following code to it.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Laravel 6 CRUD Example</title>
  <link href="{{ asset('css/app.css') }}" rel="stylesheet" type="text/css" />
</head>
<body>
  <div class="container">
    @yield('content')
  </div>
  <script src="{{ asset('js/app.js') }}" type="text/javascript"></script>
</body>
</html>

So basically, this is our main template file, and all the other view files will extend it.

Here, we have already included Bootstrap 4 by adding app.css.

The next step is to code the create.blade.php file, so write the following code inside it.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="card uper">
  <div class="card-header">
    Add Shows
  </div>
  <div class="card-body">
    @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
              <li>{{ $error }}</li>
            @endforeach
        </ul>
      </div><br />
    @endif
      <form method="post" action="{{ route('shows.store') }}">
          <div class="form-group">
              @csrf
              <label for="name">Show Name:</label>
              <input type="text" class="form-control" name="show_name"/>
          </div>
          <div class="form-group">
              <label for="genre">Show Genre :</label>
              <input type="text" class="form-control" name="genre"/>
          </div>
          <div class="form-group">
              <label for="imdb_rating">Show IMDB Rating :</label>
              <input type="text" class="form-control" name="imdb_rating"/>
          </div>
          <div class="form-group">
              <label for="lead_actor">Show Lead Actor :</label>
              <input type="text" class="form-control" name="lead_actor"/>
          </div>
          <button type="submit" class="btn btn-primary">Create Show</button>
      </form>
  </div>
</div>
@endsection

Okay, now we need to open the ShowController.php file; in the create() function, we need to return the create.blade.php view.

// ShowController.php

public function create()
{
   return view('create');
}

Go to http://localhost:8000/shows/create or http://laravel6.test/shows/create

You will see the form for creating a new show.

Step 6: Add Validation rules and save data

In this step, we will add a Laravel Validation.

Now, the first step inside the ShowController.php file is to import the namespace of the Show model.

<?php

// ShowController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Show;

Now, write the following code inside the ShowController.php file’s store() function.

public function store(Request $request)
{
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'genre' => 'required|max:255',
            'imdb_rating' => 'required|numeric',
            'lead_actor' => 'required|max:255',
        ]);
        $show = Show::create($validatedData);
   
        return redirect('/shows')->with('success', 'Show is successfully saved');
 }
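
Note that Show::create($validatedData) relies on Eloquent mass assignment, so the Show model must whitelist these fields in its $fillable property. If you have not already done this, a minimal Show model would look like the following sketch (the field names are taken from our migration):

```php
<?php

// app/Show.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Show extends Model
{
    // Columns that may be mass assigned via create() / update()
    protected $fillable = ['show_name', 'genre', 'imdb_rating', 'lead_actor'];
}
```

Without $fillable (or $guarded), create() would throw a MassAssignmentException.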

Here, we first validate all four fields of the form.

If the incoming data fails any of the rules, the request is redirected back to the form with the error messages.

The store() method receives the $request object as a parameter, which is used to access the form data.

The first thing you want to do is validate the form data. We can use the $request->validate() function for validation, which receives an array of validation rules.

The validation rules are an associative array: the key is the field_name, and the value is the validation rules for that field. The second, optional parameter is an array of custom validation messages.

Rules are separated with the pipe sign "|". We are using the most basic validation rules.

The first is "required", which means the field_name must not be empty (the "nullable" rule is the opposite). "string" means the value must be a string, "min" is the minimum number of characters allowed in an input field, and "max" is the maximum. "unique:table,column" checks that the same value does not already exist in the database (which comes in handy for storing emails or any other unique data).
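
To illustrate the optional second parameter mentioned above, here is a hypothetical sketch passing custom messages for two of our rules (the message keys follow Laravel's field.rule convention):

```php
// Hypothetical illustration only: custom error messages for two rules.
$validatedData = $request->validate(
    [
        'show_name'   => 'required|max:255',
        'imdb_rating' => 'required|numeric',
    ],
    [
        'show_name.required'  => 'Please give the show a name.',
        'imdb_rating.numeric' => 'The IMDB rating must be a number.',
    ]
);
```

If a custom message is not supplied for a rule, Laravel falls back to its default message.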

If the validation fails, it redirects us back to the form. After the validation passes, we create a new show and save it in the database.

We loop through these error messages inside the create.blade.php file, which we have already done.

If you leave all the form fields empty, the validation error messages will be displayed above the form.

Now, if you fill the form fields correctly, it will create a new row in the database. I have created a new Show.

Step 7: Display the data

Now, we need to write the ShowController's index() function to return the index view with the data fetched from the database. Write the following code inside the index() function.

// ShowController.php

public function index()
{
     $shows = Show::all();

     return view('index', compact('shows'));
}

Okay, now create a file called index.blade.php inside the views folder and add the following code.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="uper">
  @if(session()->get('success'))
    <div class="alert alert-success">
      {{ session()->get('success') }}  
    </div><br />
  @endif
  <table class="table table-striped">
    <thead>
        <tr>
          <td>ID</td>
          <td>Show Name</td>
          <td>Show Genre</td>
          <td>Show IMDB Rating</td>
          <td>Lead Actor</td>
          <td colspan="2">Action</td>
        </tr>
    </thead>
    <tbody>
        @foreach($shows as $show)
        <tr>
            <td>{{$show->id}}</td>
            <td>{{$show->show_name}}</td>
            <td>{{$show->genre}}</td>
            <td>{{number_format($show->imdb_rating,2)}}</td>
            <td>{{$show->lead_actor}}</td>
            <td><a href="{{ route('shows.edit', $show->id)}}" class="btn btn-primary">Edit</a></td>
            <td>
                <form action="{{ route('shows.destroy', $show->id)}}" method="post">
                  @csrf
                  @method('DELETE')
                  <button class="btn btn-danger" type="submit">Delete</button>
                </form>
            </td>
        </tr>
        @endforeach
    </tbody>
  </table>
</div>
@endsection

We have used the PHP number_format() function to format the IMDB rating float value.

Here, we have looped through the shows array and displayed the data in table format.

We have also added two buttons for the edit and delete operations.
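
number_format() is plain PHP rather than a Laravel helper; it rounds to the requested number of decimals and inserts a thousands separator by default:

```php
<?php

// Round to 2 decimal places; "," is the default thousands separator
echo number_format(8.4567, 2);      // 8.46
echo number_format(1234567.891, 2); // 1,234,567.89
```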

Step 8: Create Edit and Update Operation

First, we need to add the following piece of code inside the ShowController.php file’s edit function.

// ShowController.php

public function edit($id)
{
     $show = Show::findOrFail($id);

     return view('edit', compact('show'));
}

Now, create the new file inside the views folder called edit.blade.php and add the following code.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="card uper">
  <div class="card-header">
    Update Shows
  </div>
  <div class="card-body">
    @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
              <li>{{ $error }}</li>
            @endforeach
        </ul>
      </div><br />
    @endif
    <form method="post" action="{{ route('shows.update', $show->id) }}">
          <div class="form-group">
              @csrf
              @method('PATCH')
              <label for="show_name">Show Name:</label>
              <input type="text" class="form-control" name="show_name" value="{{ $show->show_name }}"/>
          </div>
          <div class="form-group">
              <label for="genre">Show Genre :</label>
              <input type="text" class="form-control" name="genre" value="{{ $show->genre }}"/>
          </div>
          <div class="form-group">
              <label for="imdb_rating">Show IMDB Rating :</label>
              <input type="text" class="form-control" name="imdb_rating" value="{{ number_format($show->imdb_rating, 2) }}"/>
          </div>
          <div class="form-group">
              <label for="lead_actor">Show Lead Actor :</label>
              <input type="text" class="form-control" name="lead_actor" value="{{ $show->lead_actor }}"/>
          </div>
          <button type="submit" class="btn btn-primary">Update Show</button>
      </form>
  </div>
</div>
@endsection

In this file, we display the values of the particular row, fetched by its unique id, inside the form fields.

So, when you hit the URL http://localhost:8000/shows/1/edit or http://laravel6.test/shows/1/edit, you will see the edit form prefilled with that show's current values.

Now, add the following code inside the ShowController’s update() function.

// ShowController.php

public function update(Request $request, $id)
{
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'genre' => 'required|max:255',
            'imdb_rating' => 'required|numeric',
            'lead_actor' => 'required|max:255',
        ]);
        Show::whereId($id)->update($validatedData);

        return redirect('/shows')->with('success', 'Show is successfully updated');
}

So now, you can edit and update all the data into the database successfully.

Step 9: Create Delete Functionality

Write the following code inside the ShowController’s destroy function.

// ShowController.php

public function destroy($id)
{
        $show = Show::findOrFail($id);
        $show->delete();

        return redirect('/shows')->with('success', 'Show is successfully deleted');
}

Go to the URL: http://localhost:8000/shows and try to remove the show.

You can see that you have successfully deleted the Show.

So, our complete ShowController.php code looks like below.

<?php

// ShowController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Show;

class ShowController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        $shows = Show::all();

        return view('index', compact('shows'));
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        return view('create');
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'genre' => 'required|max:255',
            'imdb_rating' => 'required|numeric',
            'lead_actor' => 'required|max:255',
        ]);
        $show = Show::create($validatedData);
   
        return redirect('/shows')->with('success', 'Show is successfully saved');
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        $show = Show::findOrFail($id);

        return view('edit', compact('show'));
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'genre' => 'required|max:255',
            'imdb_rating' => 'required|numeric',
            'lead_actor' => 'required|max:255',
        ]);
        Show::whereId($id)->update($validatedData);

        return redirect('/shows')->with('success', 'Show is successfully updated');
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        $show = Show::findOrFail($id);
        $show->delete();

        return redirect('/shows')->with('success', 'Show is successfully deleted');
    }
}

So, we have completed the Laravel 6 CRUD operations tutorial with an example from scratch.

If you are interested in a frontend framework like Vue.js with Laravel or Angular with Laravel, then check out my Vue Laravel CRUD Example and Angular Laravel Tutorial Example.

I have put this code on Github so you can check it out as well.

Github Code

The post Laravel 6 CRUD Example | Laravel 6 Tutorial For Beginners appeared first on AppDividend.

Handling Bi-Directional Replication between Tungsten Clusters and AWS Aurora


Overview

The Skinny

In this blog post, we explore the correct way to implement bi-directional Tungsten Replication between AWS Aurora and Tungsten Clustering for MySQL databases.

Background

The Story

When we are approached by a prospect interested in using our solutions, we are proud of our pre-sales process, by which we engage at a very deep technical level to ensure that we provide the best possible solution to meet the prospect's requirements. This involves an in-depth, hands-on POC, in addition to the significant time and effort we spend building and testing the solution architectures in our lab environment as part of the proposal process.

From time to time, we are presented with requirements that are not always quite so straight forward. Just recently we faced such a situation. A prospect that is currently a heavy Amazon Aurora user is interested in converting to a Tungsten Composite MultiMaster solution, across multiple regions both in Amazon and across Google Cloud. This is quite a common request as customers realise that whilst Amazon Aurora does have many benefits, there are many occasions where it falls short.

The basic requirement to convert from Aurora to Tungsten Clustering is normally straightforward and is something we have done on many occasions, so no issue there.

Next, cross-cloud replication is one of our strengths – the ability to have instance-based clusters in both Amazon Web Services and in Google Cloud Platform, all replicating to each other – this is one of our many unique selling points.

What made this requirement a little more challenging was the need to be able to provide bi-directional replication between Aurora and the new Tungsten clusters. This was to be a temporary solution to allow the business to migrate and test the applications, and also provide an easy rollback should any issues arise.

So why was this challenging? Surely if we provide MultiMaster clustering, then this should be a no-brainer, right? Well, not quite.

When we replicate, we extract transactions from the MySQL binary logs. If we replicate bi-directionally, we need to be sure that we do not create a circular replication loop by re-applying events we just sent to the other side.

When we do this between native (non-RDS/Aurora) MySQL sources and targets, the cross-cluster replicator can control this by bypassing the writes to the binary logs for transactions that the replicator applies.

With Amazon Aurora (and RDS) it’s not possible to bypass this step, because set session sql_log_bin=0 simply isn’t allowed. While there are perfectly valid reasons for this (i.e. you are using replicas), this means that with a default install for our Tungsten Replicator, a change made to the Tungsten-managed databases would replicate to Aurora (Good), hit the binlog in Aurora and then be replicated back (Bad).

OK, so that’s the background covered; let’s take a look at how we can resolve this. Fortunately, Tungsten Replicator is both extremely configurable and powerful, so actually resolving this challenge was pretty straightforward – it just required a few careful steps, a bit of testing, and a good understanding of all of the various properties!

Procedure Summary

Understand the Steps First

This is what is needed to create bi-directional Tungsten replication streams between AWS Aurora and one or more Tungsten Clusters:

From Aurora to the Cluster(s)

  • Aurora Extractor – An extractor configured to read from Aurora
  • Aurora Applier to Cluster Connector – A single applier that would write the Aurora changes, via a connector into the one cluster only

From the Cluster(s) back to Aurora

  • Cluster Config Change – A single change to every cluster’s config to provide the replicators a little bit of extra detail
  • Configure Cluster-slave(s) – A single Cluster-Slave for EACH Tungsten cluster to replicate changes back to Aurora

Procedure Details

In the Weeds
  • Aurora Extractor
    This is a fairly straight forward configuration, the extractor is configured in the same way as any regular standalone MySQL extractor, but with two very important additional parameters:
    svc-extractor-filters=dropcatalogdata
    property=replicator.service.comments=true
    The first is a filter that ensures the tracking schema doesn’t replicate, because normally the creation of this schema would bypass the binary logs – we don’t want this replicating!

    The second ensures that any transaction we extract from the local binary logs is tagged with the name of the service that the extractor is running, in my case I called my service aws2cluster

  • Aurora Applier to Cluster Connector
    Again, this is a fairly simple and straightforward applier service. First of all, we configure this applier to write through a connector. This means we can have a single applier that doesn’t need to be tied to the master in the cluster: by going through the connector, we know we will always reach the master, and we never need to reconfigure if the master switches.

    Additionally, we include the following properties:

    property=local.service.name=localcluster
    property=replicator.service.type=remote
    svc-applier-filters=bidiSlave
    log-slave-updates=true

    The first, local.service.name, sets a service name that we associate with our target. The replicator uses this to stop transactions from being applied if they are tagged with a service name other than the service name the replicator is running as; in this case, it ensures we only apply transactions tagged with “aws2cluster” as the OriginatingService.

    On its own, this property won’t do a lot, but when it’s combined with the next two properties – service.type=remote and the bidiSlave filter – that’s when the replicator starts to get interesting.

    Finally, log-slave-updates will ensure that anything this replicator writes, goes into the binary log of the master, so that this then propagates to all of the nodes in all the clusters.

  • Cluster Config Change
    All the nodes in all the clusters also need one very small change – the addition of:
    property=replicator.service.comments=true

    Like the Aurora extractor, this ensures anything written in the local clusters are tagged with their service names.

  • Configure Cluster-Slave(s)
    We now configure one or more cluster-slave replicators. There is a single replicator associated with each cluster, which will read THL generated by that cluster and apply it back to the Aurora instance. This configuration also has the same additional properties as the applier into the cluster, specifically:
    property=local.service.name=aws2cluster
    property=replicator.service.type=remote
    svc-applier-filters=bidiSlave
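
Put together, the cluster-slave options above might appear in a tungsten.ini fragment roughly like the one below. This is a hypothetical sketch only – the section name and the remaining standard installation options (topology, members, hostnames, and so on) are omitted and will differ per environment:

```ini
# Hypothetical fragment – standard cluster-slave options omitted
[aws2cluster]
# Tag applied transactions with our service name for loop detection
property=local.service.name=aws2cluster
property=replicator.service.type=remote
# Discard transactions whose originating service is not ours
svc-applier-filters=bidiSlave
```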

So let’s take a quick look at what this does to THL

Insert into AWS that we want to replicate – Note the source id and service tag:

MyService = aws2cluster
OriginatingService = aws2cluster
TargetLocalService = localcluster
Result = APPLY

SEQ# = 100 / FRAG# = 0 (last frag)
- FILE = thl.data.0000000001
- TIME = 2019-09-08 11:48:31.0
- EPOCH# = 0
- EVENTID = mysql-bin-changelog.000002:0000000000143042;-1
- SOURCEID = p28test.cluster-cw8gilzabv20.eu-west-1.rds.amazonaws.com
- METADATA = [mysql_server_id=1395894048;dbms_type=mysql;tz_aware=true;service=aws2cluster;shard=demo]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = UTF-8]
- SQL(0) =
 - ACTION = INSERT
 - SCHEMA = demo

This means the transaction will replicate, via the connector, into the cluster. Because log-slave-updates ensures all the nodes in the cluster are updated, the event is also picked up by the cluster-slaves; but because the service name is different, it won’t get replicated back.

Now, insert into a cluster node that we want to replicate to Aurora, again note Source ID and service:

MyService = nyc
OriginatingService = nyc
TargetLocalService = aws2cluster
Result = APPLY

SEQ# = 52 / FRAG# = 0 (last frag)
- FILE = thl.data.0000000001
- TIME = 2019-09-08 11:52:33.0
- EPOCH# = 42
- EVENTID = mysql-bin.000002:0000000000031207;-1
- SOURCEID = db1
- METADATA = [mysql_server_id=940;dbms_type=mysql;tz_aware=true;service=nyc;shard=demo]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = ISO-8859-1]
- SQL(0) =
 - ACTION = INSERT
 - SCHEMA = demo
 - TABLE = regions

Because we can’t bypass writing to the Aurora binlogs, this transaction ends up being extracted again. But because its service name differs from the service name of the extractor process itself, and because of the bidiSlave filter on the applier, we know that this is something that came in from another master, and therefore we discard it:

MyService = aws2cluster
OriginatingService = nyc
TargetLocalService = localcluster
Result = DISCARD

SEQ# = 101 / FRAG# = 0 (last frag)
- FILE = thl.data.0000000001
- TIME = 2019-09-08 10:59:08.0
- EPOCH# = 0
- EVENTID = mysql-bin-changelog.000002:0000000000143935;-1
- SOURCEID = p28test.cluster-cw8gilzabv20.eu-west-1.rds.amazonaws.com
- METADATA = [mysql_server_id=1395894048;dbms_type=mysql;tz_aware=true;is_metadata=true;service=nyc;shard=demo]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = UTF-8]
- SQL(0) =
 - ACTION = INSERT
 - SCHEMA = demo
 - TABLE = regions

So there we go: by using some of the more advanced features of the replicator, we can safely set up bi-directional replication between Aurora and a Tungsten Cluster.


The Library

Please read the docs!

For more information about Tungsten clusters, please visit https://docs.continuent.com


Summary

The Wrap-Up

In this blog post we discussed the correct way to implement bi-directional Tungsten Replication between AWS Aurora and Tungsten Clustering for MySQL databases.

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

InnoDB Cluster, Managing Async Integration

In MySQL 8.0.17 there have been a lot of updates to the MySQL set of offerings. We’ve introduced Cloning into InnoDB Cluster 8.0.17, advances with the MySQL Router in 8.0.17, and MySQL continues to expand its collection of automation-managed features. When Group Replication was first introduced in MySQL 5.7.17, there was considerably less to manage…

MySQL 8.0.17+: Cloning is now much easier

If you use replication with MySQL, if you need a backup, if you need a spare copy of a system for testing, or for many other reasons, you need a way to make a copy of your MySQL system. In the past you could make a copy in various ways: using a cold file system…

How To Export Data In Excel and CSV In Laravel 6 | maatwebsite/excel 3.1


In this tutorial, we will see how to export data in Excel and CSV in Laravel 6 with maatwebsite/excel 3.1. If you want to get up and running with basic Laravel functionality, then go to my other article on this blog called Laravel 6 CRUD Example From Scratch. If you want to generate PDFs in Laravel, then check out Laravel 6 Generate PDF From View Example. For this example, we use the package called maatwebsite/excel, version 3.1, together with Laravel 6.

Export Data In Excel and CSV in Laravel 6

When using the package in your application, it’s good to understand how the package functions behind the scenes. Knowing what happens behind the scenes will make you feel more comfortable and confident using the maximum potential of the tool.

Laravel Excel 3.1

🚀 Laravel Excel is intended to be a Laravel-flavoured PhpSpreadsheet: a simple, but elegant, wrapper around PhpSpreadsheet to simplify exports and imports.

🔥 PhpSpreadsheet is a library written in pure PHP that provides a set of classes allowing us to read from and write to different spreadsheet file formats, like Excel and LibreOffice Calc.

Laravel Excel Features

  1. We can easily export collections to Excel.
  2. We can export queries with automatic chunking for better performance.
  3. We can queue exports for better performance.
  4. We can easily export Blade views to Excel.
  5. We can easily import to collections.
  6. We can read the Excel file in chunks.
  7. We can handle the import inserts in batches.

Requirements

  1. PHP: ^7.0
  2. Laravel: ^5.5
  3. PhpSpreadsheet: ^1.6
  4. PHP extension php_zip enabled
  5. PHP extension php_xml enabled
  6. PHP extension php_gd2 enabled

Step 1: Installation

Require the following package in the composer.json of your Laravel 6 project. The following command will download the package and PhpSpreadsheet.

composer require maatwebsite/excel
➜  laravel6 git:(master) ✗ composer require maatwebsite/excel
Using version ^3.1 for maatwebsite/excel
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 4 installs, 0 updates, 0 removals
  - Installing markbaker/matrix (1.1.4): Downloading (100%)
  - Installing markbaker/complex (1.4.7): Downloading (100%)
  - Installing phpoffice/phpspreadsheet (1.9.0): Downloading (100%)
  - Installing maatwebsite/excel (3.1.17): Downloading (100%)
phpoffice/phpspreadsheet suggests installing mpdf/mpdf (Option for rendering PDF with PDF Writer)
phpoffice/phpspreadsheet suggests installing tecnickcom/tcpdf (Option for rendering PDF with PDF Writer)
phpoffice/phpspreadsheet suggests installing jpgraph/jpgraph (Option for rendering charts, or including charts with PDF or HTML Writers)
Writing lock file
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: barryvdh/laravel-dompdf
Discovered Package: facade/ignition
Discovered Package: fideloper/proxy
Discovered Package: laravel/tinker
Discovered Package: laravel/ui
Discovered Package: maatwebsite/excel
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Package manifest generated successfully.
➜  laravel6 git:(master) ✗

Step 2: Configure package

The Maatwebsite\Excel\ExcelServiceProvider is auto-discovered and registered by default.

If you want to register by yourself, then add the ServiceProvider in config/app.php:

'providers' => [
    /*
     * Package Service Providers...
     */
    Maatwebsite\Excel\ExcelServiceProvider::class,
]

Excel facade is auto-discovered.

If you want to add it manually, add a Facade in config/app.php:

'aliases' => [
    ...
    'Excel' => Maatwebsite\Excel\Facades\Excel::class,
]

If you want to publish a config, run the vendor publish command:

php artisan vendor:publish --provider="Maatwebsite\Excel\ExcelServiceProvider"
➜  laravel6 git:(master) ✗ php artisan vendor:publish --provider="Maatwebsite\Excel\ExcelServiceProvider"
Copied File [/vendor/maatwebsite/excel/config/excel.php] To [/config/excel.php]
Publishing complete.
➜  laravel6 git:(master) ✗

This will create the new config file named config/excel.php.

Step 3: Create model and migration files

Type the following command.

php artisan make:model Disneyplus -m

Now, go to the [timestamp].create_disneypluses_table.php file and add the columns.

public function up()
{
        Schema::create('disneypluses', function (Blueprint $table) {
            $table->bigIncrements('id');
            $table->string('show_name');
            $table->string('series');
            $table->string('lead_actor');
            $table->timestamps();
        });
}

Now, migrate the database using the following command.

php artisan migrate

Step 4: Create a controller and routes

Next step is to create a DisneyplusController.php file.

php artisan make:controller DisneyplusController

Now, add the two routes inside the routes >> web.php file.

// web.php

Route::get('disneyplus', 'DisneyplusController@create')->name('disneyplus.create');
Route::post('disneyplus', 'DisneyplusController@store')->name('disneyplus.store');

Now, create two methods inside the DisneyplusController.php file.

<?php

// DisneyplusController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Disneyplus;

class DisneyplusController extends Controller
{
    public function create()
    {

    }

    public function store()
    {
        
    }
}

Step 5: Create a form blade file to input the data

Now, inside the views folder, create one file called form.blade.php file. Add the following code.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="card uper">
  <div class="card-header">
    Add Disneyplus Shows
  </div>
  <div class="card-body">
    @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
              <li>{{ $error }}</li>
            @endforeach
        </ul>
      </div><br />
    @endif
      <form method="post" action="{{ route('disneyplus.store') }}">
          <div class="form-group">
              @csrf
              <label for="show_name">Show Name:</label>
              <input type="text" class="form-control" name="show_name"/>
          </div>
          <div class="form-group">
              <label for="series">Series :</label>
              <input type="text" class="form-control" name="series"/>
          </div>
          <div class="form-group">
              <label for="lead_actor">Show Lead Actor :</label>
              <input type="text" class="form-control" name="lead_actor"/>
          </div>
          <button type="submit" class="btn btn-primary">Create Show</button>
      </form>
  </div>
</div>
@endsection


Step 6: Store data in the database

Now, we will write the two functions inside the DisneyplusController.php file.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Disneyplus;

class DisneyplusController extends Controller
{
    public function create()
    {
        return view('form');
    }

    public function store(Request $request)
    {
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'series' => 'required|max:255',
            'lead_actor' => 'required|max:255',
        ]);
        Disneyplus::create($validatedData);
   
        return redirect('/disneyplus')->with('success', 'Disney Plus Show is successfully saved');
    }
}

So, in the above file, we first show the form, and then, inside the store() function, we validate the input and store the data in the database.

Also, add the fillable fields inside the Disneyplus.php model file.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Disneyplus extends Model
{
    protected $fillable = ['show_name', 'series', 'lead_actor'];
}

Now, go to this route: http://laravel6.test/disneyplus or http://localhost:8000/disneyplus

You will see one form. Try to save the data, and if everything in the code is right, then, you will see one entry in the database.

Step 7: Create a view file to display the data

Before we create a view file, we need to add one route inside the web.php.

// web.php

Route::get('disneyplus/list', 'DisneyplusController@index')->name('disneyplus.index');

Now, create a view file called list.blade.php file. Add the following code.

@extends('layout')
@section('content')
<table class="table table-striped">
  <thead>
    <tr>
      <th>ID</th>
      <th>Show Name</th>
      <th>Series</th>
      <th>Lead Actor</th>
      <th>Action</th>
    </tr>
  </thead>
  <tbody>
    @foreach($shows as $show)
    <tr>
      <td>{{$show->id}}</td>
      <td>{{$show->show_name}}</td>
      <td>{{$show->series}}</td>
      <td>{{$show->lead_actor}}</td>
    </tr>
    @endforeach
  </tbody>
</table>
@endsection

Now, add the code inside the index() function of DisneyplusController.php file.

public function index()
{
        $shows = Disneyplus::all();

        return view('list', compact('shows'));
}

Now, go to the http://laravel6.test/disneyplus/list or http://localhost:8000/disneyplus/list

You will see the listing of the shows.

Step 8: Create Exports class

You may do this by using the make:export command.

php artisan make:export DisneyplusExport --model=Disneyplus

The file can be found in app/Exports directory.

The generated DisneyplusExport.php file looks like the following.

<?php

namespace App\Exports;

use App\Disneyplus;
use Maatwebsite\Excel\Concerns\FromCollection;

class DisneyplusExport implements FromCollection
{
    /**
    * @return \Illuminate\Support\Collection
    */
    public function collection()
    {
        return Disneyplus::all();
    }
}

If you prefer not to use the generator, you can also create the export class manually inside the app/Exports directory.

Step 9: Write the export function

Inside the DisneyplusController.php file, add the following code.

// DisneyplusController.php

use App\Disneyplus;
use App\Exports\DisneyplusExport;
use Maatwebsite\Excel\Facades\Excel;

public function export() 
{
        return Excel::download(new DisneyplusExport, 'disney.xlsx');
}

So, our final file looks like below.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Disneyplus;
use App\Exports\DisneyplusExport;
use Maatwebsite\Excel\Facades\Excel;

class DisneyplusController extends Controller
{
    public function create()
    {
        return view('form');
    }

    public function store(Request $request)
    {
        $validatedData = $request->validate([
            'show_name' => 'required|max:255',
            'series' => 'required|max:255',
            'lead_actor' => 'required|max:255',
        ]);
        Disneyplus::create($validatedData);
   
        return redirect('/disneyplus')->with('success', 'Disney Plus Show is successfully saved');
    }

    public function index()
    {
        $shows = Disneyplus::all();

        return view('list', compact('shows'));
    }

    public function export() 
    {
        return Excel::download(new DisneyplusExport, 'disney.xlsx');
    }
}

Finally, add the route to be able to access the export:

// web.php

Route::get('export', 'DisneyplusController@export');

Also, add the link to the Export inside the list.blade.php file.

@foreach($shows as $show)
    <tr>
      <td>{{$show->id}}</td>
      <td>{{$show->show_name}}</td>
      <td>{{$show->series}}</td>
      <td>{{$show->lead_actor}}</td>
      <td><a href="{{action('DisneyplusController@export')}}">Export</a></td>
    </tr>
@endforeach

Okay, now go to http://laravel6.test/disneyplus/list, and you will see a link called Export.

Click the Export link, and the disney.xlsx file will appear in your Downloads folder.

Exporting collections in CSV in Laravel 6

By default, the export format is determined by the extension of the file.

public function export() 
{
        return Excel::download(new DisneyplusExport, 'disney.csv');
}

It will download the CSV file.

If you want to set the export format explicitly, you can pass the writer type as an additional parameter to the download call.

You can find more details about export in different formats on this link.
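As a sketch, an explicit writer type could look like this, assuming maatwebsite/excel 3.1, where the writer-type constants are defined on the Maatwebsite\Excel\Excel class:

```php
// DisneyplusController.php
// Hedged sketch: force the CSV writer explicitly, independent of the
// file extension (assumes maatwebsite/excel 3.1).
use App\Exports\DisneyplusExport;
use Maatwebsite\Excel\Facades\Excel;

public function export()
{
    return Excel::download(
        new DisneyplusExport,
        'disney.csv',
        \Maatwebsite\Excel\Excel::CSV  // explicit writer type
    );
}
```

This is a framework fragment, so it runs inside the controller shown earlier rather than standalone.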

Finally, How To Export Data In Excel and CSV In Laravel 6 | maatwebsite/excel 3.1 is over.

The post How To Export Data In Excel and CSV In Laravel 6 | maatwebsite/excel 3.1 appeared first on AppDividend.


Entity Framework 6.3 and .NET Core 3 Support

.NET Core was introduced by Microsoft in 2016, but its 1.x versions had a limited set of features compared to the full .NET Framework. Since then, .NET Core has been drastically improved. .NET Core 2.0 covers a significant part of the full .NET Framework's features and adds new functionality and significant performance optimizations. This year, […]

Percona Toolkit 3.1.0 Is Now Available


Percona announces the release of Percona Toolkit 3.1.0 on September 13, 2019.

Percona Toolkit is a collection of advanced open-source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB®, PostgreSQL® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL, MariaDB, PostgreSQL, Percona Server for MongoDB, and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New features and improvements:

  • PT-1696: the new pt-pg-summary tool supports PostgreSQL data collection in a way similar to other PT summary tools. The following is a fragment of the report that the tool produces:
    • ##### --- Database Port and Data_Directory --- ####
      +----------------------+----------------------------------------------------+
      |         Name         |                      Setting                       |
      +----------------------+----------------------------------------------------+
      | data_directory       | /var/lib/postgresql/9.5/main                       |
      +----------------------+----------------------------------------------------+
      
      ##### --- List of Tablespaces ---- ######
      +----------------------+----------------------+----------------------------------------------------+
      |         Name         |         Owner        |               Location                             |
      +----------------------+----------------------+----------------------------------------------------+
      | pg_default           | postgres             |                                                    |
      | pg_global            | postgres             |                                                    |
      +----------------------+----------------------+----------------------------------------------------+
      
      
      ##### --- Cluster Information --- ####
      +------------------------------------------------------------------------------------------------------+                                                                                 
       Usename        : postgres                                                           
       Time           : 2019-09-13 08:30:42.272582 -0400 EDT                                     
       Client Address : ::1                                             
       Client Hostname:                         
       Version        : PostgreSQL 9.5.18 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1                                      
       Started        : 2019-09-13 08:29:43.175138 -0400 EDT                                  
       Is Slave       : false                                              
      +------------------------------------------------------------------------------------------------------+
      
      ##### --- Databases --- ####
      +----------------------+------------+
      |       Dat Name       |    Size    |
      +----------------------+------------+
      | template1            |    6841 kB |
  • PT-1663: pt-stalk has two new options limiting the amount of disk space it can consume: the --retention-size option makes pt-stalk store less than the specified number of megabytes, while the --retention-count option limits the number of runs for which data are kept. The following simple example illustrates how these two parameters can be passed to the tool (here pt-stalk just collects the information and exits):
    pt-stalk --no-stalk --retention-count=3 --retention-size=100M -- --defaults-file=./my.default.cnf
  • PT-1741: Migration to a new MongoDB driver was done.
  • PT-1761: pt-online-schema-change will not run under MySQL 8.0.14 .. 8.0.17 if the table has foreign keys
    Important note: There is an error in MySQL from versions 8.0.14 up to the current 8.0.17 that makes MySQL die under certain conditions when trying to rename a table. Since the last step for pt-online-schema-change is to rename the tables to swap the old and new ones, we have added a check that prevents running pt-online-schema-change if the conditions for this error are met.

Bug fixes:

  • PT-1114: pt-table-checksum failed when the table was empty
  • PT-1344: pt-online-schema-change failed to detect hostnames with a specified port number
  • PT-1575: pt-mysql-summary did not print the PXC section for PXC 5.6 and 5.7
  • PT-1630: pt-table-checksum had a regression which prevented it from working with Galera cluster
  • PT-1633: pt-config-diff incorrectly parsed variables with numbers having K, M, G or T suffix (Thanks to Dieter Adriaenssens)
  • PT-1709: pt-upgrade generated “Use of uninitialized value in concatenation (.) or string” error in case of invalid MySQL packets
  • PT-1720: pt-pmp exited with an error in case of any unknown option in a common PT configuration file
  • PT-1728: pt-table-checksum failed to scan small tables that get wiped out often
  • PT-1734: pt-stalk did non-strict matching for ‘log_error’, resulting in wider filtering
  • PT-1746: pt-diskstats didn’t work for newer Linux kernels starting from 4.18

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

MySQL Update in mysqli


Somebody didn’t like the MySQLi Update Query example on the tutorialspoint.com website because it uses the procedural mysqli_query style. Here’s a simple example using the object-oriented method version. More or less, instead of query it uses the more intuitive execute() method.

The update_member function contains the logic, and below it is a call to test the function. It relies on a MySQLCredentials.inc file that contains the hostname, user name, password, and database name. You can create the member table, like my example in MySQL 8, or use any other table in your MySQL database.

<?php

// MySQLCredentials.inc is assumed to define HOSTNAME, USERNAME,
// PASSWORD, and DATABASE.
include_once("MySQLCredentials.inc");

function update_member($account_number, $member_type, $credit_card_number, $credit_card_type) {

  // Assign credentials to the connection.
  $mysqli = new mysqli(HOSTNAME, USERNAME, PASSWORD, DATABASE);

  // Check for a connection error.
  if ($mysqli->connect_errno) {
    print $mysqli->connect_error."<br />";
    print "Connection not established ...<br />";
  }
  else {
    // Initial statement.
    $stmt = $mysqli->stmt_init();

    /* Disabling auto commit when you want two or more statements
    || executed as a set.
    || ------------------------------------------------------------
    || You would add the following command to disable the default
    || of auto commit.
    || ------------------------------
    || $mysqli->autocommit(FALSE);
    || ------------------------------------------------------------ */

    // Declare a static query.
    $sql = "UPDATE member\n"
         . "SET member_type = ?\n"
         . ",   credit_card_number = ?\n"
         . ",   credit_card_type = ?\n"
         . "WHERE account_number = ?\n";

    /* Prepare statement.
    || ------------------------------------------------------------
    || Please note that the bind_param method uses positional
    || rather than named notation, which means you must provide
    || the variables in the same order as their "?" placeholders
    || appear in the $sql variable.
    || ------------------------------------------------------------ */
    if ($stmt->prepare($sql)) {
      $stmt->bind_param("ssss", $member_type, $credit_card_number, $credit_card_type, $account_number);
    }

    // Attempt query and exit with failure before processing.
    if (!$stmt->execute()) {
      // Print failure to resolve query message.
      print $mysqli->error."<br />";
      print "Failed to resolve query ...<br />";
    }
    else {
      /* Manually committing writes when you have disabled the
      || default auto commit setting, explained above.
      || ------------------------------
      || $mysqli->commit();
      || ------------------------------------------------------------ */
    }
  }
}

// Test case.
update_member('US00011', '1006', '6011-0000-0000-0078', '1007');
?>

I put this logic in a function.php file. If you do the same, you can run the test case like this from the command line:

php function.php

As always, I hope this helps.

mysqli Strict Standards


Six years ago I wrote a common lookup post to illustrate the effectiveness of things used throughout your applications. Now, I’m updating my student image with a more complete solution to show how to avoid update anomalies.

In the prior post, I used a while loop in PHP, like the following:

do {
      ...
} while($stmt->next_result());

Using PHP Version 7.3.8 and MySQL 8.0.16, that now raises the following error message:

Strict Standards: mysqli_stmt::next_result(): There is no next result set. Please, call mysqli_stmt_more_results()/mysqli_stmt::more_results() to check whether to call this function/method in /var/www/html/app/library.inc on line 81

You can see this type of error when you set the following parameters in your file during testing:

ini_set('display_errors',1);
ini_set('display_startup_errors',1);
error_reporting(E_ALL);

You can read more about error handling at this web page. The strict-standards-compliant way for mysqli to iterate result sets is:

do {
      ...
} while ($stmt->more_results() && $stmt->next_result());
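A fuller sketch of that loop, as a hedged example (hypothetical credentials and stored procedure name, shown only to illustrate the pattern; $stmt->get_result() additionally requires the mysqlnd driver):

```php
<?php
// Hypothetical connection values and procedure name; the point is the
// strict-standards-compliant more_results()/next_result() loop.
$mysqli = new mysqli("localhost", "student", "student", "studentdb");
$stmt = $mysqli->stmt_init();

if ($stmt->prepare("CALL get_lookup(?)")) {
  $table_name = 'member';
  $stmt->bind_param("s", $table_name);
  $stmt->execute();

  do {
    // get_result() requires the mysqlnd driver.
    if ($result = $stmt->get_result()) {
      while ($row = $result->fetch_assoc()) {
        print implode(", ", $row)."\n";
      }
      $result->free();
    }
  } while ($stmt->more_results() && $stmt->next_result());
}

$stmt->close();
$mysqli->close();
?>
```

Checking more_results() before calling next_result() is what silences the strict-standards notice.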

As always, I hope this helps those looking for an answer.

Sep 16: Where is the MySQL team this week?!


Please find below the conferences & shows where you can find MySQL Community team and/or MySQL experts during this week:

  • Oracle Open World / MySQL Reception, San Francisco, US, September 16-19, 2019
    • As in previous years, MySQL will be part of OOW in San Francisco, and again there will be a MySQL Reception on Sep 17, 2019 at 7pm. The event will be held at Samovar Tea Lounge, Yerba Buena Gardens, 730 Howard Street, San Francisco, CA.
    • Please see more details & registration for MySQL OOW Reception here.
  • WebExpo, Prague, CZ, September 20-22, 2019
    • This is the fifth time MySQL is a partner of WebExpo, the biggest technology conference in the Czech Republic! This year, same as last, you will find two teams at the Oracle booth in the expo area: NetSuite (Bronto) & MySQL.
    • Please also don't miss the MySQL talk given by Vittorio Cioe, Senior MySQL Sales Consultant:
      • "Modern Data Security with MySQL 8.0", scheduled for Sat, Sep 21 @14:00-14:40pm.

 

SQL ASCII FUNCTION Example | ASCII Function In SQL


SQL ASCII FUNCTION Example | ASCII Function In SQL is today’s topic. ASCII stands for American Standard Code for Information Interchange. The SQL ASCII function returns the numeric code of the character given as input. It acts as the opposite of the CHAR function.

SQL ASCII FUNCTION Example

The ASCII function accepts a character expression and returns the ASCII code value of the leftmost character of the character expression. See the following syntax.

Syntax

SELECT ASCII (single_character or string);

PARAMETERS

  1. single_character: the character whose numeric value will be returned.
  2. string: if a string is passed, the numeric value of its first character is returned and the remaining characters are ignored.

Example

See the following query.

SELECT ASCII('A');

See the following output.

65

EXPLANATION

The ASCII value of 'A' is 65, so 65 is returned as the output.

See the following query.

SELECT ASCII('a');

See the output.

97

EXPLANATION

The ASCII value of 'a' is 97, so 97 is returned as the output.

See the following third query.

SELECT ASCII('Appdividend.com');

See the output.

65

EXPLANATION

Since the input was a string, only the first character's code was returned; the remaining characters were ignored.

Range of ASCII values for characters

A-Z: 65-90

a-z: 97-122

Let’s apply the ASCII function to a table.

Table: Employee

Emp_id | Emp_name        | City      | State          | Salary
-------+-----------------+-----------+----------------+-------
101    | Rohit Raj       | Patna     | Bihar          | 30000
201    | Shiva Rana      | Jalandhar | Punjab         | 20000
301    | Karan Kumar     | Allahabad | Uttar Pradesh  | 40000
401    | Suraj Bhakat    | Kolkata   | West Bengal    | 60000
501    | Akash Cherukuri | Vizag     | Andhra Pradesh | 70000

 

Suppose we want to print the Numeric Code for the first character of Emp_Name, then the following query has to be considered.

QUERY

See the following query.

SELECT Emp_name, ASCII(Emp_name) AS NumCode FROM Employee;

See the output.

Emp_name        | NumCode
----------------+--------
Rohit Raj       | 82
Shiva Rana      | 83
Karan Kumar     | 75
Suraj Bhakat    | 83
Akash Cherukuri | 65

So, you can see from the output that the Numeric value of the first character is returned under the column name NumCode.

Finally, SQL ASCII FUNCTION Example | ASCII Function In SQL is over.

Recommended Posts

SQL Substring Function Example

SQL CONCAT Function Example

SQL Replace Function Example

SQL String Functions Example

Sql Try Catch Example

The post SQL ASCII FUNCTION Example | ASCII Function In SQL appeared first on AppDividend.

MySQL Cloning: more thoughts

I posted a few days ago some initial thoughts on the the MySQL native cloning functionality. Overall this looks good and I need to spend time to test further. I’m here in San Francisco ahead of Oracle Open World which starts today. As is usual with trips like this jet lag wakes you up rather … Continue reading MySQL Cloning: more thoughts

MySQL 8.0.17 – New Features Summary

This presentation is a summary of the MySQL 8.0.17 new features.

Database Switchover and Failover for Drupal Websites Using MySQL or PostgreSQL


Drupal is a Content Management System (CMS) designed to create everything from tiny to large corporate websites. Over 1,000,000 websites run on Drupal and it is used to make many of the websites and applications you use every day (including this one). Drupal has a great set of standard features such as easy content authoring, reliable performance, and excellent security. What sets Drupal apart is its flexibility as modularity is one of its core principles. 

Drupal is also a great choice for creating integrated digital frameworks. You can extend it with the thousands of add-ons available. These modules expand Drupal's functionality, themes let you customize your content's presentation, and distributions (Drupal bundles) are starter kits you can build on. You can mix and match all of these to enhance Drupal's core abilities or to integrate Drupal with external services. It is content management software that is powerful and scalable.

Drupal uses databases to store its web content. When your Drupal-based website or application is experiencing a large amount of traffic it can have an impact on your database server. When you are in this situation you'll require load balancing, high availability, and a redundant architecture to keep your database online. 

When I started researching this blog, I realized there are many answers to this issue online, but the solutions recommended were very dated. This could be a result of WordPress's increased market share, resulting in a smaller open source community. What I did find were some examples of implementing high availability using Master/Master (High Availability) or Master/Master/Slave (High Availability/High Performance) topologies.

Drupal offers support for a wide array of databases, but it was initially designed around MySQL variants. Though using MySQL is fully supported, there are better approaches you can implement. If not done properly, however, these approaches can cause your website to experience large amounts of downtime, cause your application to suffer performance issues, and may result in write issues on your slaves. Performing maintenance is also difficult, as you need failover to apply server upgrades or patches (hardware or software) without downtime. This is especially true if you have a large amount of data, where the potential business impact is major.

These are situations you don't want to happen which is why in this blog we’ll discuss how you can implement database failover for your MySQL or PostgreSQL databases.

Why Does Your Drupal Website Need Database Failover?

From Wikipedia “failover is switching to a redundant or standby computer server, system, hardware component or network upon the failure or abnormal termination of the previously active application, server, system, hardware component, or network. Failover and switchover are essentially the same operation, except that failover is automatic and usually operates without warning, while switchover requires human intervention.” 

In database operations, switchover is also a term used for manual failover, meaning that it requires a person to operate the failover. Failover comes in handy for any admin as it isolates unwanted problems such as accidental deletes/dropping of tables, long hours of downtime causing business impact, database corruption, or system-level corruption. 

Database failover requires more than a single database node, physical or virtual. If a host runs multiple database instances, ideally you should fail over to a different database server rather than to another instance on the same host. That can still be either a switchover or a failover, but it is primarily about redundancy and high availability in case a catastrophe hits the current host.

MySQL Failover for Drupal

Performing a failover for your Drupal-based application requires that the data handled by the database stays consistent and does not diverge. There are several solutions available, and we have already discussed some of them in previous Severalnines blogs. You may want to read our Introduction to Failover for MySQL Replication - the 101 Blog.

The Master-Slave Switchover

The most common approaches for MySQL Failover is using the master-slave switch over or the manual failover. There are two approaches you can do here:

  • You can implement your database with a typical asynchronous master-slave replication.
  • or you can implement asynchronous master-slave replication using GTID-based replication.

Switching to another master could be quicker and easier. This can be done with the following MySQL syntax:

mysql> SET GLOBAL read_only = 1; /* enable read-only */

mysql> CHANGE MASTER TO MASTER_HOST = '<hostname-or-ip>', MASTER_USER = '<user>', MASTER_PASSWORD = '<password>', MASTER_LOG_FILE = '<master-log-file>', MASTER_LOG_POS=<master_log_position>; /* master information to connect */

mysql> START SLAVE; /* start replication */

mysql> SHOW SLAVE STATUS\G /* check replication status */

or with GTID, you can simply do,

...

mysql> CHANGE MASTER TO MASTER_HOST = '<hostname-or-ip>', MASTER_USER = '<user>', MASTER_PASSWORD = '<password>', MASTER_AUTO_POSITION = 1; /* master information to connect */

...


Using the non-GTID approach requires you to first determine the master's log file and log position. You can do this by checking the status on the master node before switching over.

mysql> SHOW MASTER STATUS;

You may also consider hardening your server by setting sync_binlog = 1 and innodb_flush_log_at_trx_commit = 1. In the event your master crashes, transactions on the master are then far more likely to be in sync with your slave(s), and the promoted master has a higher chance of being a consistent data source.
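As a sketch, those two durability settings live in the [mysqld] section of my.cnf (a minimal fragment; the file location and surrounding options vary by distribution and setup):

```ini
# my.cnf (fragment)
[mysqld]
# Flush and sync the InnoDB redo log at every transaction commit.
innodb_flush_log_at_trx_commit = 1
# Sync the binary log to disk after every transaction.
sync_binlog = 1
```

Both settings trade some write throughput for durability, which is the point of the hardening.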

This, however, may not be the best approach for your Drupal database, as it can impose long downtime if not performed correctly, for example if the master is taken down abruptly. If your master database node experiences a bug resulting in a crash, you'll need your application to point to another database waiting on standby as your new master, or to have a slave promoted to master. You will need to specify exactly which node should take over and then determine the lag and consistency of that node. Achieving this is not as easy as just running SET GLOBAL read_only=1; CHANGE MASTER TO… (etc.); certain situations require deeper analysis of the transactions that must be present on the standby server or promoted master to get it done.

Drupal Failover Using MHA

One of the most common tools for automatic and manual failover is MHA. It has been around for a long while now and is still used by many organizations. You can checkout these previous blogs we have on the subject, Top Common Issues with MHA and How to Fix Them or MySQL High Availability Tools - Comparing MHA, MRM and ClusterControl.

Drupal Failover Using Orchestrator

Orchestrator has been widely adopted and is used by large organizations such as GitHub and Booking.com. It not only allows you to manage a failover, but also topology management, host discovery, refactoring, and recovery. There's a nice external blog here which I found very useful for learning about its failover mechanism. It's a two-part blog series; part one and part two.

Drupal Failover Using MaxScale

MaxScale is not just a load balancer designed for MariaDB server, it also extends high availability, scalability, and security for MariaDB while, at the same time, simplifying application development by decoupling it from underlying database infrastructure. If you are using MariaDB, then MaxScale could be a relevant technology for you. Check out our previous blogs on how you can use the MaxScale failover mechanism.

Drupal Failover Using ClusterControl

Severalnines' ClusterControl offers a wide array of database management and monitoring solutions. Part of the solution we offer is automatic failover, manual failover, and cluster/node recovery. This is very helpful, as it acts as your virtual database administrator, notifying you in real time if your cluster is in “panic mode,” while the recovery is managed by the system. You can check out this blog How to Automate Database Failover with ClusterControl to learn more about ClusterControl failover.

Other MySQL Solutions

Some of the older approaches are still applicable when you want to fail over. There's MMM, MRM, or you can check out Group Replication or Galera (note: Galera uses synchronous rather than asynchronous replication). Failover in a Galera Cluster does not work the same way as it does with asynchronous replication. With Galera you can write to any node or, if you implement a master-slave approach, you can direct your application to another node that will be the active writer for the cluster.

Drupal PostgreSQL Failover

Since Drupal supports PostgreSQL, we will also check out the tools for implementing a failover or switchover process for PostgreSQL. PostgreSQL uses built-in streaming replication; however, you can also set it up to use logical replication (added to PostgreSQL core in version 10).

Drupal Failover Using pg_ctlcluster

If your environment is Ubuntu, using pg_ctlcluster is a simple and easy way to achieve failover. For example, you can just run the following command:

$ pg_ctlcluster 9.6 pg_7653 promote

or with RHEL/Centos, you can use the pg_ctl command just like,

$ sudo -iu postgres /usr/pgsql-9.6/bin/pg_ctl promote -D /data/pgsql/slave/data

server promoting

You can also trigger failover of a log-shipping standby server by creating a trigger file with the filename and path specified by the trigger_file in the recovery.conf. 
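A minimal recovery.conf sketch on the standby (the trigger path and conninfo values are hypothetical; this applies to PostgreSQL 11 and earlier, where recovery.conf still exists as a separate file):

```ini
# recovery.conf (fragment, on the standby server)
standby_mode = 'on'
# Hypothetical primary; adjust host/user to your environment.
primary_conninfo = 'host=primary-host user=replicator'
# Creating this file promotes the standby to accept writes:
trigger_file = '/tmp/postgresql.trigger.5432'
```

With this in place, `touch /tmp/postgresql.trigger.5432` triggers the promotion.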

You have to be careful with standby (slave) promotion here, as you must ensure that only one master accepts read-write requests. This means that, while doing the switchover, you may have to ensure the previous master has been demoted or stopped.

Taking care of switchover or manual failover from primary to standby server can be fast, but it requires some time to re-prepare the failover cluster. Regularly switching from primary to standby is a useful practice as it allows for regular downtime on each system for maintenance. This also serves as a test of the failover mechanism, to ensure that it will really work when you need it. Written administration procedures are always advised. 

Drupal PostgreSQL Automatic Failover

Instead of a manual approach, you might require automatic failover. This is especially needed when a server goes down due to hardware failure or virtual machine corruption. You may also require an application to automatically perform the failover to lessen the downtime of your Drupal application. We'll now go over some of these tools which can be utilized for automatic failover.

Drupal Failover Using Patroni

Patroni is a template for you to create your own customized, high-availability solution using Python and, for maximum accessibility, a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes. Database engineers, DBAs, DevOps engineers, and SREs who are looking to quickly deploy HA PostgreSQL in the datacenter (or anywhere else) will hopefully find it useful.

Drupal Failover Using Pgpool

Pgpool-II is proxy software that sits between PostgreSQL servers and a PostgreSQL database client. Aside from automatic failover, it has multiple features that include connection pooling, load balancing, replication, and limiting excess connections. You can read more about this tool in our three-part blog: part one, part two, part three.

Drupal Failover Using pglookout

pglookout is a PostgreSQL replication monitoring and failover daemon. It monitors the database nodes and their replication status, and acts according to that status, for example by calling a predefined failover command to promote a new master if the previous one goes missing.

pglookout supports two different node types, ones that are installed on the db nodes themselves and observer nodes that can be installed anywhere. The purpose of having pglookout on the PostgreSQL DB nodes is to monitor the replication status of the cluster and act accordingly, the observers have a more limited remit: they just observe the cluster status to give another viewpoint to the cluster state.

Drupal Failover Using repmgr

repmgr is an open-source tool suite for managing replication and failover in a cluster of PostgreSQL servers. It enhances PostgreSQL's built-in hot-standby capabilities with tools to set up standby servers, monitor replication, and perform administrative tasks such as failover or manual switchover operations.

repmgr has provided advanced support for PostgreSQL's built-in replication mechanisms since they were introduced in 9.0. The current repmgr series, repmgr 4, supports the latest developments in replication functionality introduced from PostgreSQL 9.3 such as cascading replication, timeline switching and base backups via the replication protocol.

Drupal Failover Using ClusterControl

ClusterControl supports automatic failover for PostgreSQL. If you have an incident, your slave can be promoted to master status automatically. With ClusterControl you can also deploy standalone, replicated, or clustered PostgreSQL database. You can also easily add or remove a node with a single action.

Other PostgreSQL Drupal Failover Solutions

There are certainly automatic failover solutions that I might have missed on this blog. If I did, please add your comments below so we can know your thoughts and experiences with your implementation and setup for failover especially for Drupal websites or applications.

Additional Solutions For Drupal Failover

While the tools mentioned earlier definitely handle failover, adding tools that make failover easier and safer, and that provide total isolation between your application and your database layer, can be worthwhile.

Drupal Failover Using ProxySQL

With ProxySQL, you simply point your Drupal websites or applications at the ProxySQL host, and it decides which node receives writes and which nodes receive reads. The magic happens transparently at the TCP layer, with no changes needed to your application or website configuration. In addition, ProxySQL acts as a load balancer for your write and read database traffic. This is only applicable if you are using MySQL database variants.
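For illustration, read/write splitting in ProxySQL is configured through its SQL-like admin interface with query rules similar to the following (the hostgroup numbers, port, and credentials are assumptions, not values your installation will necessarily use):

```shell
# Connect to the ProxySQL admin interface (default admin port 6032)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
  INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),  -- locking reads go to the writer hostgroup
         (2, 1, '^SELECT',             20, 1);  -- plain reads go to the reader hostgroup
  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;
"
```

Queries that match no rule fall through to the default hostgroup of the connecting user, which is normally the writer.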

Drupal Failover Using HAProxy with Keepalived

Using HAProxy and Keepalived adds further high availability and redundancy to your Drupal database. Failover can happen without your application ever knowing what is going on in the database layer: just point the application at the VRRP virtual IP you set up in Keepalived, and everything is handled in total isolation from the application. Automatic failover happens transparently, so no changes are needed when, for example, a disaster strikes and a recovery or failover is applied. The good thing about this setup is that it works for both MySQL and PostgreSQL databases. I suggest you check out our blog PostgreSQL Load Balancing Using HAProxy & Keepalived to learn more about how to do this.
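A minimal Keepalived sketch of that VRRP setup (the interface name, router ID, priorities, and virtual IP are all hypothetical) could look like this on the primary HAProxy node:

```
vrrp_script chk_haproxy {
    script "pidof haproxy"     # node stays MASTER only while HAProxy is alive
    interval 2
}

vrrp_instance VI_1 {
    state MASTER               # use BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 101               # use a lower value, e.g. 100, on the backup
    virtual_ipaddress {
        192.168.10.100         # the VIP your Drupal application connects to
    }
    track_script {
        chk_haproxy
    }
}
```

If HAProxy dies or the whole node goes down, the backup node wins the VRRP election and takes over the virtual IP, so the application keeps pointing at the same address.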

All of the options above are supported by ClusterControl. You can deploy or import the database and then deploy ProxySQL, MaxScale, or HAProxy & Keepalived. Everything is managed, monitored, and set up automatically, with no further configuration needed on your end. It all happens in the background and results in a production-ready setup.

Conclusion

Having an always-on Drupal website or application, especially if you are expecting a large amount of traffic, can be complicated to achieve. With the right tools, the right setup, and the right technology stack, however, high availability and redundancy are possible.

And if you don’t? ClusterControl will set it up and maintain it for you. Alternatively, you can create a setup using the technologies mentioned in this blog, most of which are free, open-source tools that will cater to your needs.

What Are MySQL and SQL Server? The Differences Between the Two Databases


MySQL and Microsoft SQL Server can currently be considered the two most popular RDBMS solutions. The two look much alike, since both are SQL-based and offer similar functionality; nevertheless, there are many differences between MySQL and SQL Server, which can leave users hesitating if they do not understand each solution. Let's take a closer look at the differences between these two database systems in the sections below!

1. What are MySQL and SQL Server?

First of all, to tell the two solutions apart, let's go over what each of them actually is.

MySQL and SQL Server are both very popular database management systems with a reputation for high reliability.

MySQL:

MySQL is a very popular open-source database management system with a reputation for high reliability. It is easy to use, performs well, and is used by many modern applications built on Linux, Apache, and PHP. Well-known organizations such as Alcatel-Lucent, Zappos, Google, Adobe, and Facebook rely on this database management system.
It can run on more than 20 platforms, including Windows, IBM AIX, Mac OS, Linux, and HP-UX, and offers a great deal of flexibility. A wide range of tools, database services, support, and training is available around MySQL.

Microsoft SQL Server:

Microsoft SQL Server is a relational database management system (RDBMS) developed by Microsoft. The system is built on Transact-SQL (T-SQL), a set of programming extensions from Sybase and Microsoft that adds features such as transaction control, error and exception handling, declared variables, and row processing. SQL Server was originally developed by Sybase back in the 1980s; the last version developed in collaboration between Sybase, Ashton-Tate, and Microsoft was released for OS/2 under the name SQL Server.
SQL Server 2005 was the version that delivered flexibility, high reliability, security, and scalability for database applications.

2. Why is MySQL important?

MySQL is an easy-to-use, fast database that runs on many operating systems and provides a large set of very powerful utility functions.

MySQL is easy to use, fast, stable, and portable; it runs on many operating systems and provides a large set of very powerful utility functions. Thanks to its speed and strong security, it is well suited to applications that access a database over the internet. You can download MySQL from its home page, as it is completely free.

MySQL is one of the most basic examples of a relational database management system that uses the Structured Query Language (SQL).

It is commonly used alongside Perl, PHP, and many other languages, and it is used to store the data of websites written in PHP or Perl.

MySQL offers builds for many different operating systems: a Win32 build for Windows, as well as builds for Linux, Unix, NetBSD, Mac OS X, SGI IRIX, FreeBSD, and more.

3. What are the differences between MySQL and SQL Server?

Still hesitating between MySQL and SQL Server? Follow along with this comparison of the features, performance, and security of the two solutions!

The differences between MySQL and SQL Server

Environment

MySQL can be combined with every programming language, most commonly PHP, while SQL Server works best in the .NET environment. SQL Server used to run exclusively on Windows; however, that changed when Microsoft announced Linux support for SQL Server. Admittedly, the Linux version is still not quite mature, so the usual advice is: use SQL Server if you are on Windows, and if you go with MySQL, run it on Linux.

Performance

In terms of performance, MySQL is less demanding than SQL Server and, in many cases, performs better on high-end UNIX servers than SQL Server does on high-end Windows servers. SQL Server, for its part, performs worse than MySQL in many respects and requires substantial resources (plenty of RAM and a powerful CPU).

Syntax

For most people this is the biggest difference between the two platforms. You should base your choice of system on which syntax you are more familiar with.

Microsoft has built a larger set of supporting tools around the SQL Server RDBMS, including data analysis tools. It also ships SQL Server Reporting Services, a reporting server, as well as ETL tooling.

MySQL can offer similar capabilities, but only through third-party solutions.

Replication

MySQL replication is faster and less failure-prone than SQL Server's, because all the SQL statements that modify data are recorded in the binary log.

SQL Server replication, on the other hand, is more complex and slower, because it provides more advanced and more fine-grained replication methods.
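As a rough illustration of that binary-log mechanism (the hostname, credentials, and log coordinates below are hypothetical), classic MySQL replication is attached to the primary's binary log on the replica roughly like this:

```shell
# On the replica: point at the primary's binary log and start replicating
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='primary.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000042',
    MASTER_LOG_POS=4;
  START SLAVE;
"

# Verify that both replication threads are running
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'
```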

Recovery

When running on InnoDB, MySQL's recovery capabilities are in no way inferior to SQL Server's. When running purely on the MyISAM storage engine, however, MySQL's recovery cannot compare with SQL Server's.

SQL Server còn hơn là một RDBMS

How a product is supported is the biggest difference between proprietary and open-source software. SQL Server's advantage is very clear in this respect: it is backed by one of the largest technology corporations in the world.

Microsoft has built many powerful tools on top of the platform, providing broad support for the RDBMS, including data analysis tools. It also ships SQL Server Reporting Services, a reporting server, as well as ETL tooling.

That turns SQL Server into something of a Swiss Army knife among RDBMSs. You can build similar capabilities around MySQL, but only with third-party solutions.

Security

In terms of security, there is hardly any difference between MySQL and SQL Server. Both comply with EC2 and are safe choices. Microsoft's shadow cannot be ignored here either, though, as it has given SQL Server strong and valuable security features. The Microsoft Baseline Security Analyzer is a dedicated security tool that helps you harden SQL Server even further.

While MySQL access control only goes down to the row level, SQL Server offers higher security at the column level, along with an authentication system that is stricter than MySQL's. On the other hand, SQL Server is more prone to exploits than MySQL.

Storage engines

The different ways MySQL and SQL Server store data is another big difference, though an underrated one. SQL Server uses a single storage engine developed by Microsoft, which is completely different from the many engines created for MySQL. This gives MySQL developers a certain flexibility, because they can use different storage engines for different tables based on reliability, speed, and other criteria. InnoDB is a popular MySQL storage engine that can be slower than MyISAM but is more stable.
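To illustrate the per-table engine choice (the database, table, and column names are made up for the example), MySQL lets you pick an engine in the CREATE TABLE statement itself:

```shell
mysql -u root -p mydb -e "
  CREATE TABLE orders   (id INT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=InnoDB;   -- transactional
  CREATE TABLE sessions (id INT PRIMARY KEY, payload TEXT)        ENGINE=MyISAM;   -- fast, non-transactional
  SHOW TABLE STATUS WHERE Name IN ('orders', 'sessions');
"
```

SQL Server has no equivalent per-table switch; every table goes through the same Microsoft storage engine.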

Cancelling a query

Many people may not know about this, but it is a fairly big difference between MySQL and SQL Server that you should weigh. MySQL does not let you cancel a query once it is executing, whereas SQL Server does. That may not bother web developers, who rarely need to cancel a query mid-execution, but it can be costly for database administrators.

Cost

The MySQL Community edition is free, although you have to manage everything yourself. Installing, using, and tuning it is not too hard, however, because documentation about it is abundant and easy to find on the internet.

SQL Server, by contrast, requires a paid license for SQL Server Standard, and you pay extra when you need support. For the Enterprise edition you pay more and receive full support in return. Microsoft still provides a free SQL Server edition for development purposes.

Community support

Although you would have to pay a support fee for official support from MySQL, you will rarely need it, because the user base of this platform and its community are enormous. The support you get from that community is correspondingly vast.

As a member of that user community, you can get help from people all over the world, and there are plenty of solutions available for any problem you run into.

IDEs

IDEs matter, and both RDBMSs ship with an Integrated Development Environment, a tool that provides a working environment for developers. SQL Server uses SQL Server Management Studio (SSMS), while MySQL uses Oracle's MySQL Workbench.

Choosing an RDBMS is a very important step when you want to start building an application. We have laid out the differences between MySQL and SQL Server; the choice is yours. Put simply, if you want to build a small or mid-sized application, particularly with PHP, go with MySQL. If you want to build a large application with high security requirements, SQL Server should be your companion. Good luck, and don't forget to visit us regularly!

The post Mysql và Mysql server là gì? Sự khác biệt giữa hai cơ sở dữ liệu appeared first on DBAhire.

Another time, another place… about next conferences


It is “that” time of the year… when autumn is just around the corner and temperatures start to drop.
But it is also the time for many exciting conferences in the MySQL world.
We have Oracle OpenWorld this week, with many interesting talks around MySQL 8.
Then in two weeks we will be at Percona Live Europe in Amsterdam, which is not a MySQL(-only) conference anymore (https://www.cvent.com/events/percona-live-open-source-database-conference-europe-2019/agenda-6321c2468b1b43328f97212f3e53f4de.aspx).
Percona has moved to a more “polyglot” approach, not only in its services but also at its events.
This is an interesting experiment that allows people from different technologies to meet and discuss. At the end of the day it is a quite unique situation and opportunity; the only negative effect is that it takes space away from the MySQL community, which is suffering a bit in terms of space, attendees, and brainstorming focused on MySQL deep dives.
That said, there are a few interesting talks I am looking forward to attending:
• Security and GDPR (several sessions)
• MySQL 8.0 Performance: Scalability & Benchmarks
• Percona will also present Percona XtraDB Cluster 8, a must-attend session
Plus the other technologies in which I am only marginally interested.

After Percona Live in Amsterdam there will be a ProxySQL Technology Day in Ghent (https://proxysql.com/blog/proxysql-technology-day-ghent-oct3rd2019). Ghent is a very nice city, well worth a visit, and only a 2-hour train ride from Amsterdam. Given that this event is on the 3rd of October, I will move there immediately after PLEU.
The ProxySQL event is a half-day event starting at 5 PM, with 30-minute sessions focused on best practices for integrating the community-award-winning “ProxySQL” with the most common scenarios and solutions.
I like that, because I expect to see and discuss real cases and hands-on issues with the participants.

So, a lot of things, right?
But once more, I want to raise the red flag about the lack of a MySQL community event.
We have many events, but most of them follow individual companies' focus, and they are sparse and not well synchronized. More than anything else, we miss A MySQL event: a place where we can gather, one that attracts not only DBAs from companies who use (and sometimes abuse) MySQL, but also gives all of us a place to discuss and coordinate our efforts.

In the meantime, see you in Amsterdam, then Ghent, then Fosdem then …
Good MySQL to all
