Channel: Planet MySQL

Laravel 5.7 CRUD Example Tutorial For Beginners From Scratch

Laravel 5.7 CRUD Operation Tutorial Example

Laravel 5.7 CRUD Example Tutorial For Beginners From Scratch is today’s leading topic. Laravel 5.7 ships some cool new features as well as several other enhancements and bug fixes. At the previous Laracon event, Taylor Otwell announced some of the notable changes, which are the following.

  1. Resources Directory Changes.
  2. Callable Action URLs.
  3. Laravel Dump Server.
  4. Improved Error Messages For Dynamic Calls.

Now, in this tutorial, we will first install Laravel 5.7 and then build a CRUD application.

Laravel 5.7 CRUD Example Tutorial

First, let us install Laravel 5.7 using the following command. We will use Composer's create-project to generate the Laravel 5.7 project.

#1: Install Laravel 5.7

Type the following command. Make sure you have Composer installed on your machine.

composer create-project --prefer-dist laravel/laravel stocks

 

(Screenshot: Laravel 5.7 CRUD Example Tutorial)

Okay, now go inside the project folder and install the npm packages using the following command. The command below requires Node.js to be installed on your machine, so if you have not installed it yet, please do so from its official site.

npm install

#2: Configure MySQL Database

First, create the database in MySQL, and then we need to connect that database to the Laravel application. You can also use phpMyAdmin to create the database.
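If you prefer the command line over phpMyAdmin, the database referenced in the .env file below can be created with a statement like this (a sketch; the database name laravel57 and the utf8mb4 character set are assumptions matching the .env example and Laravel's defaults):

CREATE DATABASE laravel57 CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;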

After creating the database, open the .env file inside the Laravel stocks project and add the database credentials. I have typed my credentials; please enter yours, otherwise it won’t connect.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel57
DB_USERNAME=root
DB_PASSWORD=root

Now you will be able to connect to the MySQL database.

Laravel always ships with migration files, so you can generate the default tables in the database using the following command.

php artisan migrate

 

(Screenshot: Laravel 5.7 Tutorial)

#3: Create a model and migration file.

Go to the terminal and type the following command to generate the model and migration file.

php artisan make:model Share -m

It will create the model and the migration file. Now, we will write the schema inside the <timestamp>_create_shares_table.php file.

   /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('shares', function (Blueprint $table) {
            $table->increments('id');
            $table->string('share_name');
            $table->integer('share_price');
            $table->integer('share_qty');
            $table->timestamps();
        });
    }

Okay now migrate the table using the following command.

php artisan migrate
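For reference, the migration above creates a shares table roughly equivalent to the following SQL (a sketch; the exact column types depend on your Laravel and MySQL versions):

CREATE TABLE shares (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  share_name VARCHAR(255) NOT NULL,
  share_price INT NOT NULL,
  share_qty INT NOT NULL,
  created_at TIMESTAMP NULL,
  updated_at TIMESTAMP NULL
);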

Now, add the fillable property inside the Share.php file.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Share extends Model
{
  protected $fillable = [
    'share_name',
    'share_price',
    'share_qty'
  ];
}

#4: Create routes and controller

First, create the ShareController using the following command.

php artisan make:controller ShareController --resource

Now, inside the routes >> web.php file, add the following line of code.

<?php

Route::get('/', function () {
    return view('welcome');
});

Route::resource('shares', 'ShareController');

Actually, by adding the Route::resource line above, we have registered multiple routes for our application. We can check them using the following command.

php artisan route:list

 

(Screenshot: Laravel 5.7 Example)

Okay, now open the ShareController.php file, and you can see that all the function declarations are there.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class ShareController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

#5: Create the views

Inside resources >> views folder, create one folder called shares.

Inside that folder, create the following three files.

  1. create.blade.php
  2. edit.blade.php
  3. index.blade.php

We also need a layout file inside the views folder, so create one file there called layout.blade.php and add the following code to it.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Laravel 5.7 CRUD Example Tutorial</title>
  <link href="{{ asset('css/app.css') }}" rel="stylesheet" type="text/css" />
</head>
<body>
  <div class="container">
    @yield('content')
  </div>
  <script src="{{ asset('js/app.js') }}" type="text/javascript"></script>
</body>
</html>

So basically, this is our main template file, and all the other view files will extend it. Here, we have already included Bootstrap 4 by adding app.css.

The next step is to code the create.blade.php file, so write the following code inside it.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="card uper">
  <div class="card-header">
    Add Share
  </div>
  <div class="card-body">
    @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
              <li>{{ $error }}</li>
            @endforeach
        </ul>
      </div><br />
    @endif
      <form method="post" action="{{ route('shares.store') }}">
          <div class="form-group">
              @csrf
              <label for="name">Share Name:</label>
              <input type="text" class="form-control" name="share_name"/>
          </div>
          <div class="form-group">
              <label for="price">Share Price :</label>
              <input type="text" class="form-control" name="share_price"/>
          </div>
          <div class="form-group">
              <label for="quantity">Share Quantity:</label>
              <input type="text" class="form-control" name="share_qty"/>
          </div>
          <button type="submit" class="btn btn-primary">Add</button>
      </form>
  </div>
</div>
@endsection

Okay, now we need to open the ShareController.php file, and in the create function, return a view: the create.blade.php file.

// ShareController.php

public function create()
{
   return view('shares.create');
}

Save the file and start the Laravel development server using the following command.

php artisan serve

Go to http://localhost:8000/shares/create.

You can see something like this.

 

(Screenshot: Laravel 5.7 Demo For Beginners)

#6: Save the data

Now, we need to code the store function to save the data in the database. First, import the Share model at the top of the ShareController.php file.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Share;

class ShareController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        return view('shares.create');
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
      $request->validate([
        'share_name'=>'required',
        'share_price'=> 'required|integer',
        'share_qty' => 'required|integer'
      ]);
      $share = new Share([
        'share_name' => $request->get('share_name'),
        'share_price'=> $request->get('share_price'),
        'share_qty'=> $request->get('share_qty')
      ]);
      $share->save();
      return redirect('/shares')->with('success', 'Stock has been added');
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

If the validation fails, it will throw an error, and we will display it inside the create.blade.php file.

If all the values are good and pass the validation, then it will save the values in the database.

 

(Screenshot: Laravel 5.7 CRUD)

#7: Display the data.

Okay, now open the file called index.blade.php and add the following code.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="uper">
  @if(session()->get('success'))
    <div class="alert alert-success">
      {{ session()->get('success') }}  
    </div><br />
  @endif
  <table class="table table-striped">
    <thead>
        <tr>
          <td>ID</td>
          <td>Stock Name</td>
          <td>Stock Price</td>
          <td>Stock Quantity</td>
          <td colspan="2">Action</td>
        </tr>
    </thead>
    <tbody>
        @foreach($shares as $share)
        <tr>
            <td>{{$share->id}}</td>
            <td>{{$share->share_name}}</td>
            <td>{{$share->share_price}}</td>
            <td>{{$share->share_qty}}</td>
            <td><a href="{{ route('shares.edit',$share->id)}}" class="btn btn-primary">Edit</a></td>
            <td>
                <form action="{{ route('shares.destroy', $share->id)}}" method="post">
                  @csrf
                  @method('DELETE')
                  <button class="btn btn-danger" type="submit">Delete</button>
                </form>
            </td>
        </tr>
        @endforeach
    </tbody>
  </table>
</div>
@endsection

Next, we need to code the index() function inside the ShareController.php file.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Share;

class ShareController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        $shares = Share::all();

        return view('shares.index', compact('shares'));
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        return view('shares.create');
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
      $request->validate([
        'share_name'=>'required',
        'share_price'=> 'required|integer',
        'share_qty' => 'required|integer'
      ]);
      $share = new Share([
        'share_name' => $request->get('share_name'),
        'share_price'=> $request->get('share_price'),
        'share_qty'=> $request->get('share_qty')
      ]);
      $share->save();
      return redirect('/shares')->with('success', 'Stock has been added');
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

#8: Edit and Update Data

First, we need to code the edit() function inside the ShareController.php file.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Share;

class ShareController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        $shares = Share::all();

        return view('shares.index', compact('shares'));
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        return view('shares.create');
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
      $request->validate([
        'share_name'=>'required',
        'share_price'=> 'required|integer',
        'share_qty' => 'required|integer'
      ]);
      $share = new Share([
        'share_name' => $request->get('share_name'),
        'share_price'=> $request->get('share_price'),
        'share_qty'=> $request->get('share_qty')
      ]);
      $share->save();
      return redirect('/shares')->with('success', 'Stock has been added');
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        $share = Share::find($id);

        return view('shares.edit', compact('share'));
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

Now, add the following lines of code inside the edit.blade.php file.

@extends('layout')

@section('content')
<style>
  .uper {
    margin-top: 40px;
  }
</style>
<div class="card uper">
  <div class="card-header">
    Edit Share
  </div>
  <div class="card-body">
    @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
              <li>{{ $error }}</li>
            @endforeach
        </ul>
      </div><br />
    @endif
      <form method="post" action="{{ route('shares.update', $share->id) }}">
        @method('PATCH')
        @csrf
        <div class="form-group">
          <label for="name">Share Name:</label>
          <input type="text" class="form-control" name="share_name" value={{ $share->share_name }} />
        </div>
        <div class="form-group">
          <label for="price">Share Price :</label>
          <input type="text" class="form-control" name="share_price" value={{ $share->share_price }} />
        </div>
        <div class="form-group">
          <label for="quantity">Share Quantity:</label>
          <input type="text" class="form-control" name="share_qty" value={{ $share->share_qty }} />
        </div>
        <button type="submit" class="btn btn-primary">Update</button>
      </form>
  </div>
</div>
@endsection

Finally, code the update function inside the ShareController.php file.

public function update(Request $request, $id)
{
      $request->validate([
        'share_name'=>'required',
        'share_price'=> 'required|integer',
        'share_qty' => 'required|integer'
      ]);

      $share = Share::find($id);
      $share->share_name = $request->get('share_name');
      $share->share_price = $request->get('share_price');
      $share->share_qty = $request->get('share_qty');
      $share->save();

      return redirect('/shares')->with('success', 'Stock has been updated');
}

So, now you can update the existing values.

#9: Delete the data

Just code the destroy function inside the ShareController.php file.

public function destroy($id)
{
     $share = Share::find($id);
     $share->delete();

     return redirect('/shares')->with('success', 'Stock has been deleted Successfully');
}

Finally, Laravel 5.7 CRUD Example Tutorial For Beginners From Scratch is over. I have put the code in a GitHub repo, so check that out as well.

Github Code

The post Laravel 5.7 CRUD Example Tutorial For Beginners From Scratch appeared first on AppDividend.


MySQL Plugin For Oracle Enterprise Manager 13c Cloud Control


This is the same plugin that Alex Gorbachev created back in the day. I’ve simply modified it to be compatible with both the 12c and 13c versions.

I created this in response to a comment on the blog about issues deploying the plugin in OEM 13c. There is also a note in MOS “EM 13c: Adding a MySQL Instance in Enterprise Manager 13c Fails with Error: oracle.sysman.emSDK.agent.client.exception.NoSuchMetricException: the Load metric does not exist (Doc ID 2160785.1)“.

This version has been tested with OEM 13cR2 and OEM 12cR3, so it should work in all the versions in between, barring bugs.

Caveats

During the development of this new version, I found that MySQL 8 has changed the default authentication plugin, which is now caching_sha2_password, as explained in this blog post.

What does this mean for the plugin? Well, the Perl DBD module used to connect to the MySQL instance does not support this new authentication mode, and the console returns the following error message:

“Client does not support authentication protocol requested by server; consider upgrading MySQL client …”

In order to work around this issue without compromising the enhanced security of the latest MySQL version, I suggest the monitoring user be created as follows:

create user oem@localhost identified with mysql_native_password by '*********************';
grant process, replication client on *.* to oem@localhost;
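To verify that the account was created with the legacy authentication plugin, you can run a quick check against the mysql schema (an optional sanity check, assuming you have privileges to read it):

SELECT user, host, plugin FROM mysql.user WHERE user = 'oem';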

Also, this version of the plugin requires Perl 5.10+, which is the base version distributed with OEM 12cR3.

Finally, due to the nature of the changes required for the plugin to be compatible with OEM 13c, if you have a previous version of the plugin deployed in your system, it has to be removed along with all the monitored MySQL targets. This should not be necessary unless you are upgrading to 13c, which will most probably render the old plugin unusable anyway.

 

Download

Below is the new zipped OPAR file to be uploaded and deployed to the management server and the management agents as required.

Download Plugin

 

Installation guide

First of all, if you have an older version of the plugin deployed, just keep it unless you have a very good reason to deploy this new version, such as upgrading to 13c. There are no changes in functionality, just compatibility fixes.

The steps below have been obtained from the Oracle OEM 12cR5 CC documentation.

Upload the plugin file to OMS

The first step is to upload the OPAR file to the OMS Software Library repository. You will need a working EMCLI setup, and we recommend setting it up on the OMS server itself. When EMCLI is working, create a session to the OMS, sync the repository and upload the OPAR file.

$ emcli login -username=sysman
Enter password :

Login successful

$ emcli sync
Synchronized successfully

$ emcli import_update -file=/home/oracle/12.1.0.3.0_pythian.mysql_.prod_2000_0.opar -omslocal
Processing update: Plug-in - MySQL plug-in by Alex Gorbachev, The Pythian Group                  
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.

Deploy the plugin

Once the OPAR file has been uploaded to the OMS repository, you can start deploying it: first to the OMS Management Servers and then to the Management Agents.
There are two ways to accomplish this task: using the OEM Console or using EMCLI.

Deploying the plugin using the OEM Console

To use the OEM Console, go to the Setup -> Extensibility -> Plug-ins page. Once there, expand the Databases section and select the Pythian MySQL Plugin entry. Now either right-click and select the Deploy On option, or use the Deploy On option in the top menu, to initiate the deployment wizard.

See the screenshots below showing the process to deploy the plugin (version 12.1.0.1) on a management server.

Main plugin page

Deploy wizard step 1

Deploy wizard step 2

Deploy wizard step 3

Deploy wizard step 4

Deploy wizard step 5

Deployment final result

Deploying the plugin using EMCLI

If you, like me, are not a fan of GUIs and graphical consoles, or simply don’t have access to them, you can use EMCLI to deploy the plugin both to the management servers and the agents.

Initiate the process for the management server.

$ time emcli deploy_plugin_on_server -plugin=pythian.mysql.prod
Enter repository DB sys password:                              

Performing pre-requisites check... This will take a while.
Prerequisites check succeeded                             
Deployment of plug-in on the management servers is in progress
Use "emcli get_plugin_deployment_status -plugin=pythian.mysql.prod" to track the plug-in deployment status.

real    0m56.396s
user    0m1.822s 
sys     0m0.212s 

Monitor until completion:

$ emcli get_plugin_deployment_status -plugin=pythian.mysql.prod
Plug-in Deployment/Undeployment Status                         

Destination          : Management Server - emcc.example.com:4889_Management_Service
Plug-in Name         : Pythian MySQL Plugin                                        
Version              : 12.1.0.3.0                                                  
ID                   : pythian.mysql.prod           
Content              : Plug-in                                                     
Action               : Deployment                                                  
Status               : Deploying                                                   
Steps Info:                                                                        
---------------------------------------- ------------------------- ------------------------- ---------- 
Step                                     Start Time                End Time                  Status     
---------------------------------------- ------------------------- ------------------------- ---------- 
Submit job for deployment                8/6/18 7:02:41 AM EDT     8/6/18 7:02:41 AM EDT     Success    

Initialize                               8/6/18 7:02:44 AM EDT     N/A                       Running    

---------------------------------------- ------------------------- ------------------------- ---------- 

$ emcli get_plugin_deployment_status -plugin=pythian.mysql.prod                                         
Plug-in Deployment/Undeployment Status                                                                  

Destination          : Management Server - emcc.example.com:4889_Management_Service
Plug-in Name         : Pythian MySQL Plugin                                        
Version              : 12.1.0.3.0                                                  
ID                   : pythian.mysql.prod           
Content              : Plug-in                                                     
Action               : Deployment                                                  
Status               : Deploying                                                   
Steps Info:                                                                        
---------------------------------------- ------------------------- ------------------------- ---------- 
Step                                     Start Time                End Time                  Status     
---------------------------------------- ------------------------- ------------------------- ---------- 
Submit job for deployment                8/6/18 7:02:41 AM EDT     8/6/18 7:02:41 AM EDT     Success    

Initialize                               8/6/18 7:02:44 AM EDT     8/6/18 7:03:00 AM EDT     Success    

Install software                         8/6/18 7:03:00 AM EDT     8/6/18 7:03:02 AM EDT     Success    

Validate plug-in home                    8/6/18 7:03:04 AM EDT     8/6/18 7:03:04 AM EDT     Success    

Perform custom preconfiguration          8/6/18 7:03:04 AM EDT     8/6/18 7:03:05 AM EDT     Success    

Check mandatory patches                  8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success    

Generate metadata SQL                    8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success    

Preconfigure Management Repository       8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success    

Preregister DLF                          8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success    

OPSS jazn policy migration               8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

Configure Management Repository          8/6/18 7:03:05 AM EDT     N/A                       Running

Register DLF                             8/6/18 7:03:05 AM EDT     N/A                       Running

---------------------------------------- ------------------------- ------------------------- ----------

$ emcli get_plugin_deployment_status -plugin=pythian.mysql.prod
Plug-in Deployment/Undeployment Status

Destination          : Management Server - emcc.example.com:4889_Management_Service
Plug-in Name         : Pythian MySQL Plugin
Version              : 12.1.0.3.0                                                  
ID                   : pythian.mysql.prod           
Content              : Plug-in
Action               : Deployment
Status               : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment                8/6/18 7:02:41 AM EDT     8/6/18 7:02:41 AM EDT     Success

Initialize                               8/6/18 7:02:44 AM EDT     8/6/18 7:03:00 AM EDT     Success

Install software                         8/6/18 7:03:00 AM EDT     8/6/18 7:03:02 AM EDT     Success

Validate plug-in home                    8/6/18 7:03:04 AM EDT     8/6/18 7:03:04 AM EDT     Success

Perform custom preconfiguration          8/6/18 7:03:04 AM EDT     8/6/18 7:03:05 AM EDT     Success

Check mandatory patches                  8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

Generate metadata SQL                    8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

Preconfigure Management Repository       8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

Preregister DLF                          8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

OPSS jazn policy migration               8/6/18 7:03:05 AM EDT     8/6/18 7:03:05 AM EDT     Success

Configure Management Repository          8/6/18 7:03:05 AM EDT     8/6/18 7:04:58 AM EDT     Success

Register DLF                             8/6/18 7:03:05 AM EDT     8/6/18 7:05:01 AM EDT     Success

Register metadata                        8/6/18 7:05:01 AM EDT     8/6/18 7:05:09 AM EDT     Success

Perform custom postconfiguration         8/6/18 7:05:09 AM EDT     8/6/18 7:05:09 AM EDT     Success

Update inventory                         8/6/18 7:05:09 AM EDT     8/6/18 7:05:11 AM EDT     Success

---------------------------------------- ------------------------- ------------------------- ----------

Once the plugin has been deployed to the server, it is time to deploy it to the agents. We start by listing the existing management agents.

$ emcli get_targets -target="oracle_emd"
Status ID  Status           Target Type           Target Name
1          Up               oracle_emd            emcc.example.com:3872

Then we deploy the plugin to the agents we want, only one in this case.

$ time emcli deploy_plugin_on_agent -agent_names=emcc.example.com:3872 -plugin=pythian.mysql.prod
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=pythian.mysql.prod" to track the plug-in deployment status.

real    0m1.637s
user    0m1.271s
sys     0m0.089s

Now we monitor the deployment status until it completes. Note: I am using the watch command to automatically execute the query every 15 seconds, but it may not be available in your OS distribution, so simply execute the emcli command manually after a few minutes to review the status.

$ watch -n 15 emcli get_plugin_deployment_status -plugin=pythian.mysql.prod

Plug-in Deployment/Undeployment Status

Destination          : Management Agent - emcc.example.com:3872
Plug-in Name         : Pythian MySQL Plugin
Version              : 12.1.0.3.0                                                  
ID                   : pythian.mysql.prod           
Content              : Plug-in
Action               : Deployment
Status               : Deploying
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment                8/8/18 10:09:34 AM EDT    8/8/18 10:09:34 AM EDT    Success

Initialize                               8/8/18 10:09:38 AM EDT    8/8/18 10:09:46 AM EDT    Success

Validate Environment                     8/8/18 10:09:47 AM EDT    8/8/18 10:09:47 AM EDT    Success

Install software                         8/8/18 10:09:47 AM EDT    8/8/18 10:09:48 AM EDT    Success

Attach Oracle Home to Inventory          8/8/18 10:09:49 AM EDT    8/8/18 10:10:03 AM EDT    Success

Configure plug-in on Management Agent    8/8/18 10:10:03 AM EDT    N/A                       Running

Update inventory                         8/8/18 10:10:23 AM EDT    N/A                       Running

---------------------------------------- ------------------------- ------------------------- ----------


Plug-in Deployment/Undeployment Status

Destination          : Management Agent - emcc.example.com:3872
Plug-in Name         : Pythian MySQL Plugin
Version              : 12.1.0.3.0                                                  
ID                   : pythian.mysql.prod           
Content              : Plug-in
Action               : Deployment
Status               : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment                8/8/18 10:09:34 AM EDT    8/8/18 10:09:34 AM EDT    Success

Initialize                               8/8/18 10:09:38 AM EDT    8/8/18 10:09:46 AM EDT    Success

Validate Environment                     8/8/18 10:09:47 AM EDT    8/8/18 10:09:47 AM EDT    Success

Install software                         8/8/18 10:09:47 AM EDT    8/8/18 10:09:48 AM EDT    Success

Attach Oracle Home to Inventory          8/8/18 10:09:49 AM EDT    8/8/18 10:10:03 AM EDT    Success

Configure plug-in on Management Agent    8/8/18 10:10:03 AM EDT    8/8/18 10:10:29 AM EDT    Success

Update inventory                         8/8/18 10:10:23 AM EDT    8/8/18 10:10:29 AM EDT    Success

---------------------------------------- ------------------------- ------------------------- ----------

Adding MySQL targets

Once you are done with the deployment on both the management servers and the management agents you can add the MySQL targets. The current version of the plugin does not include the discovery scripts, so you have to add the targets manually using the OEM console.

In the console, go to Setup -> Add Target -> Add Targets Manually. Click on Add Target Declaratively and fill in the details. Pay special attention to the username, server, port and socket if these are not the defaults.

See the screenshots below for a basic setup:

Add Targets Manually page

Select the monitored host where the MySQL instance exists and the target type "MySQL Instance"

Enter the MySQL instance details.

Confirmation notice

Now you should be able to list the MySQL targets in the All Targets page.

MySQL targets in the All Targets page

If the target appears down, and you know for sure that it is up, review the Monitoring Configuration of the target for correct user, host, password, socket, etc.
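One quick check from the database side is to confirm the grants of the monitoring account created earlier (a sketch; adjust the account name if you used something other than oem@localhost):

SHOW GRANTS FOR 'oem'@'localhost';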

After a few minutes, the agent will have collected data and OEM will be able to populate the MySQL monitoring target charts as below:

MySQL target charts

JSON Paths and the MySQL JSON Functions

I wrote MySQL and JSON: A Practical Programming Guide to help developers find their way around the MySQL JSON data type and the supporting functions. The MySQL Documentation on the subject is very good but I had to puzzle through the examples to see how things worked.  I might be a bit 'thick' but good examples always make things easier.  Others seem to have similar difficulties.
MySQL and JSON a Practical Programming Guide should be on your desk as a handy reference to MySQL's JSON data type.

 There was a recent post on Stackoverflow.com where someone had this JSON document:

{  
   "textures":[  
      {  
         "label":"test",
         "types":{  
            "t_1":0,
            "t_2":0
         }
      },
      {  
         "label":"KEK",
         "types":{  
            "t_1":0,
            "t_2":0
         }
      }
   ],
   "weapons":[  
      {  
         "name":"WW_SHT",
         "ammo":0
      },
      {  
         "name":"WW_DSS",
         "ammo":0
      }
   ]
}

And they wanted to update t_1 to change its value from 0 to 1.  I will not repost their code but, to my eyes, it looked convoluted. 

So How Do You Get There From Here?

Trying to figure out how to get down to a key or value is easy. To see the top-level keys, simply use JSON_KEYS():

SELECT JSON_KEYS(doc) FROM zz1 LIMIT 1;

'[\"weapons\", \"textures\"]'

But how to get deeper??

By using select doc->>"$.textures[*]" from zz1 limit 1; we get all the info under the textures key.

[{"label": "test", "types": {"t_1": 0, "t_2": 0}}, {"label": "KEK", "types": {"t_1": 0, "t_2": 0}}]

Okay, so we are getting closer to the target! Now take one more step with select doc->"$.textures[*].types" from zz1 limit 1;

[{"t_1": 0, "t_2": 0}, {"t_1": 0, "t_2": 0}]

I like to use JSON_PRETTY to get an enhanced view:

select json_pretty(doc->"$.textures[*].types")  
from zz1 limit 1;
 [
  {
    "t_1": 0,
    "t_2": 0
  },
  {
    "t_1": 0,
    "t_2": 0
  }
]


But there are two t_1s!

The next step is to get just those t_1 values and that is done with select 
doc->"$.textures[*].types.t_1" from zz1;

Which gives us:

 [0, 0]

Not really confidence inspiring, eh? So let's change one of those zeros to a nine.

update zz1 set doc = json_set(doc,"$.textures[0].types.t_1",9);
 
So did we change the first or the second t_1??

select json_pretty(doc->"$.textures[*].types")  
from zz1 ;
 [
  {
    "t_1": 9,
    "t_2": 0
  },
  {
    "t_1": 0,
    "t_2": 0
  }
]

But let's double-check and change the second t_1 also. 

update zz1 set doc = json_set(doc,"$.textures[1].types.t_1",7) ;

Hopefully that second one will end up with a value of seven.

select 
json_pretty(doc->"$.textures[*].types") from zz1;
 [
  {
    "t_1": 9,
    "t_2": 0
  },
  {
    "t_1": 7,
    "t_2": 0
  }
]

So now we can get to the exact values we want.
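Coming back to the original Stack Overflow question, changing the first t_1 from 0 to 1 is just one more JSON_SET using the path worked out above (a sketch against the zz1 table and doc column used in this post):

update zz1 set doc = json_set(doc,"$.textures[0].types.t_1",1);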

Annotated JSON Document

So let's look at the textures section of the JSON document and annotate the paths (after the -- markers) in the document.

"textures":[  
      {  -- textures[0]
         "label":"test",  
         "types":{  -- textures[0].types
            "t_1":0,--textures[0].types.t_1
            "t_2":0
         }
      },
      {  -- textures[1]
         "label":"KEK",
         "types":{  
            "t_1":0,--textures[1].types.t_1
            "t_2":0
         }
      }
   ]
Hopefully this will ease someone's confusion down the line.  And please do buy my book.

Manipulating queries with non-conforming data via MySQL Query Rewrite Plugin, triggers and stored procedures


The MySQL database is used in thousands of third-party applications, but what can you do when you want to use MySQL with an application whose queries or data don’t match MySQL’s data types or SQL format?

This post will show you three ways to alter a query or mismatched data when you don’t have control of the application’s source code. Of course, there are hundreds of different ways to do what I am about to show you. In this example, I will show you how to use the MySQL Query Rewrite Plugin along with a trigger to alter the non-conforming data. I will also show you an example of manipulating data with a stored procedure.

A customer emailed me with a problem. They wanted to use MySQL for a third-party application, but they didn’t have access to the source code. Their main problem was the application’s TIMESTAMP format didn’t conform to MySQL’s TIMESTAMP format. To be specific, this application produced a TIMESTAMP value that included a trailing time zone, such as “2018-09-05 17:00:00 EDT”. MySQL has two column data types where you can store both the date and time in one column: TIMESTAMP and DATETIME – but MySQL cannot handle TIMESTAMP or DATETIME data with a trailing time zone.

When a TIMESTAMP value is being inserted into a row, MySQL converts the TIMESTAMP value from the current time zone set by the MySQL server (see Time Zone Support) to UTC (Coordinated Universal Time) for storage, and converts the data back from UTC to the current time zone (of the server) when retrieved. (This conversion does not occur for other types such as DATETIME.) By default, the current time zone for each connection is the server’s local time. The time zone can be set on a per-connection basis, and as long as the time zone setting remains constant, you will get back the same value you stored. If you store a TIMESTAMP value, and then change the time zone and retrieve the value, the retrieved value is different from the value you stored. This occurs because the same time zone was not used for conversion in both directions. The current time zone is available as the value of the time_zone system variable. For more information, see Section 5.1.12, “MySQL Server Time Zone Support”.

(From: https://dev.mysql.com/doc/refman/8.0/en/datetime.html)
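A quick way to see this conversion behaviour in action is to store one value and read it back under two different session time zones (a sketch; the tz_demo table exists only for this illustration):

CREATE TABLE tz_demo (ts TIMESTAMP);
SET time_zone = '-05:00';
INSERT INTO tz_demo VALUES ('2018-09-05 17:00:00');  -- converted to UTC for storage
SELECT ts FROM tz_demo;  -- returns 2018-09-05 17:00:00
SET time_zone = '+00:00';
SELECT ts FROM tz_demo;  -- same stored value, now returned as 2018-09-05 22:00:00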

The customer told me that this application would only be sending data with two different trailing time zones – Central and Eastern. With daylight-savings in use in both of these time zones, this would give us four possible trailing time zone values – CDT, CST, EDT and EST. What we want to do is to intercept the query, and write this TIMESTAMP data to a different column, and then convert the value to UTC time to be stored in the correct column in the database. Because we don’t have access to the source code, I am assuming we have full access to the MySQL database.
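The conversion itself can be done with CONVERT_TZ(), which is what the trigger and stored procedure later in this post rely on (a sketch; the named time zones require the time zone tables mentioned in the note below):

SELECT CONVERT_TZ('2018-09-05 17:00:00', 'EST5EDT', 'GMT');
-- returns 2018-09-05 21:00:00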


NOTE: Since we are using time zone information, if you want to duplicate this post, be sure to load the MySQL time zone information. See: https://dev.mysql.com/doc/refman/8.0/en/time-zone-support.html

 

The MySQL Rewrite Plugin

In MySQL version 5.7, a plugin named the “Query Rewrite Plugin” was introduced. This plugin can examine SQL statements received by the server and modify those statements before the server executes them. In other words, this gives you the ability to intercept “bad” queries and re-format them to be “good” queries for use with MySQL – or to rewrite the queries to do whatever you need. Think of it as a way to change the source code without actually having the source code.

Installing the plugin is fairly easy. In MySQL version 8.0, you install (or uninstall) the plugin via an SQL script provided with your MySQL installation. The script is named install_rewriter.sql and is located in the “share” directory under your MySQL home directory.

# cd /usr/local/mysql/share  (your directory may be different)
# mysql -u root -p < install_rewriter.sql
Enter password: (enter root password here)

The script only takes a few seconds to load (The uninstall script is named uninstall_rewriter.sql). To check and make sure the plugin was installed, run this command from within MySQL:

mysql> SHOW GLOBAL VARIABLES LIKE 'rewriter_enabled';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| rewriter_enabled | ON    |
+------------------+-------+
1 row in set (0.00 sec)

The plugin was installed correctly if the column named “Value” is set to “ON“.

For this example, I am going to create a small table with three columns, and assume that this is a table from a third-party application. The date_time_value column is where the application would normally store the timestamp information.

mysql> create database test;
 Query OK, 1 row affected (0.01 sec)
mysql> use test;
 Database changed
mysql> CREATE TABLE `time_example` (
  `idtime` int(11) NOT NULL AUTO_INCREMENT,
  `action_record` varchar(30) NOT NULL,
  `date_time_value` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`idtime`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=UTF8MB4;
Query OK, 0 rows affected (0.03 sec)

The date_time_value column will obviously not be able to store timestamp data with a trailing time zone, but let’s see what happens when we try and insert a row of data – and let’s pretend that this is the query the application uses.

mysql> insert into test.time_example (action_record, date_time_value) 
 values ('Arrived at work', '2018-09-05 17:00:00 EDT');
Error Code: 1292. Incorrect datetime value: '2018-09-05 17:00:00 EDT' 
 for column 'date_time_value' at row 1

Of course, we get an error because the format for the timestamp is incorrect.

What we want to do is to alter the table and add a column to store this improperly-formatted timestamp data.

mysql> ALTER TABLE `test`.`time_example` 
    -> ADD COLUMN `date_time_storage` VARCHAR(23) NULL AFTER `date_time_value`;
Query OK, 0 rows affected (0.05 sec)
Records: 0  Duplicates: 0  Warnings: 0

Now that we have a column (date_time_storage) to store the “bad” timestamp data, we need to modify the incoming query so that it writes the timestamp data into the new column.


Note: In MySQL 8.0+, with the Query Rewrite Plugin, you can modify SELECT, INSERT, REPLACE, UPDATE, and DELETE statements. (Prior to MySQL 8.0.12 you could only modify SELECT statements)

 

This is the query the application is sending to the database:

insert into test.time_example (action_record, date_time_value) values (?, ?);

We want to modify the query to use the new date_time_storage column, instead of the date_time_value column. The new query would look like this:

insert into test.time_example (action_record, date_time_storage) values (?, ?);

Now that we have our old (bad) and new (good) queries, we can insert this into the rewrite_rules table of the query_rewrite database.

INSERT INTO query_rewrite.rewrite_rules
    (pattern, replacement, pattern_database) VALUES(
    'insert into test.time_example (action_record, date_time_value) values (?, ?)',
    'insert into test.time_example (action_record, date_time_storage) values (?, ?)',
    'time_example'
    );
1 row(s) affected, 1 warning(s): 1105 Query 'insert into test.time_example 
 (action_record, date_time_value) values ('Left building', '2018-09-05 17:00:00 EDT')' 
 rewritten to 'insert into test.time_example (action_record, date_time_storage) 
 values ('Left building', '2018-09-05 17:00:00 EDT')' by a query rewrite plugin

(More examples may be found on this page: Query Rewrite Plugin Usage)

We need to execute a stored procedure named flush_rewrite_rules to make this query-rewrite change permanent: (See: https://dev.mysql.com/doc/refman/8.0/en/rewriter-query-rewrite-plugin-usage.html)

mysql> CALL query_rewrite.flush_rewrite_rules();
Query OK, 1 row affected (0.00 sec)

We can confirm the INSERT INTO query_rewrite.rewrite_rules by looking at the rewrite_rules table:

mysql> SELECT * FROM query_rewrite.rewrite_rules\G
*************************** 1. row ***************************
                id: 1
           pattern: insert into test.time_example (action_record, date_time_value) values (?, ?)
  pattern_database: time_example
       replacement: insert into test.time_example (action_record, date_time_storage) values (?, ?)
           enabled: YES
           message: NULL
    pattern_digest: e823e987338aeae6d47f7a729e78f532d3ff3721237c15981bcd11fc2607efda
normalized_pattern: insert into `test`.`time_example` (`action_record`,`date_time_value`) values (?,?)
1 row in set (0.00 sec)

Next, let’s run the same query as before, and see if it puts the timestamp data that is supposed to go into the date_time_value column into the new date_time_storage column:

mysql> insert into test.time_example (action_record, date_time_value) 
 values ('Arrived at work', '2018-09-05 17:00:00 EDT');
Query OK, 1 row affected, 1 warning (0.01 sec)

And now the table contains this data:

mysql> select * from time_example;
+--------+-----------------+-----------------+-------------------------+
| idtime | action_record   | date_time_value | date_time_storage       |
+--------+-----------------+-----------------+-------------------------+
|      1 | Arrived at work | NULL            | 2018-09-05 17:00:00 EDT |
+--------+-----------------+-----------------+-------------------------+
1 row in set (0.00 sec)

We now have the timestamp with the time zone data stored in the MySQL database, but we need to convert this to a proper format, and put the result into the date_time_value column.

To do this, we can use a trigger.

Normally, you would want your application to produce data in the correct format, but in this example, we don’t have access to the source code. So, we can create a trigger to convert the “incorrectly-formatted” data in date_time_storage to the correct data and store it in date_time_value.


NOTE: These examples won’t work if your TIMESTAMP uses microseconds (6-digits) precision (example: ‘1970-01-01 00:00:01.000000’) – but you can modify the code to accommodate microseconds.

 

Here is the SQL to create the trigger:

DELIMITER $$

-- Note: this must be a BEFORE INSERT trigger, since NEW.* values
-- cannot be modified in an AFTER INSERT trigger.
CREATE TRIGGER _time_zone_convert_insert2
BEFORE INSERT ON time_example
FOR EACH ROW
BEGIN

DECLARE _date_time_no_tz varchar(20);

-- Strip the trailing time zone abbreviation, keeping 'YYYY-MM-DD HH:MM:SS'
SET _date_time_no_tz = SUBSTRING(NEW.date_time_storage, 1, 20);

IF NEW.date_time_storage like '%EDT' THEN
    SET NEW.date_time_value = CONVERT_TZ(_date_time_no_tz,'EST5EDT','GMT');
END IF;

IF NEW.date_time_storage like '%EST' THEN
    SET NEW.date_time_value = CONVERT_TZ(_date_time_no_tz,'EST5EDT','GMT');
END IF;

-- Central time values use the CST6CDT zone for the conversion
IF NEW.date_time_storage like '%CDT' THEN
    SET NEW.date_time_value = CONVERT_TZ(_date_time_no_tz,'CST6CDT','GMT');
END IF;

IF NEW.date_time_storage like '%CST' THEN
    SET NEW.date_time_value = CONVERT_TZ(_date_time_no_tz,'CST6CDT','GMT');
END IF;

END$$

DELIMITER ;
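As the note above says, the trigger assumes whole-second precision. If your application sends fractional seconds, one way to strip the trailing time zone regardless of precision is to cut at the second space instead of at a fixed length (a sketch; you would also need to widen the date_time_storage column and the _date_time_no_tz variable accordingly):

-- keep everything before the second space, i.e. drop the trailing time zone abbreviation
SET _date_time_no_tz = SUBSTRING_INDEX(NEW.date_time_storage, ' ', 2);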

Now that we have a trigger in place, let’s insert another line into the database – BUT, we still want to use the SQL from the application. The query will try and write to the date_time_value column, but the Query Rewrite Plugin will intercept the original query and substitute our new query instead – which will insert the timestamp data into the date_time_storage column, and then the trigger will convert the timestamp and place the correct value into the date_time_value column.

mysql> INSERT INTO time_example (action_record, date_time_value) 
 VALUES ('Lunch Break', '2018-09-05 18:00:00 EDT');
Query OK, 1 row affected (0.00 sec)

The table now contains a true timestamp column with the correct timestamp value in UTC. (The old row didn’t change)

mysql> SELECT * FROM test.time_example;
+--------+-----------------+---------------------+-------------------------+
| idtime | action_record   | date_time_value     | date_time_storage       |
+--------+-----------------+---------------------+-------------------------+
|      1 | Arrived at work | NULL                | 2018-09-05 17:00:00 EDT |
|      2 | Lunch Break     | 2018-09-05 22:00:00 | 2018-09-05 18:00:00 EDT |
+--------+-----------------+---------------------+-------------------------+
2 rows in set (0.00 sec)

But what about stored procedures?

The easiest way to handle the time zone conversion is with a trigger. But, to show you how stored procedures can do the same thing, I have an example of a stored procedure. In this example, I will be passing the values of the idtime and date_time_storage columns.

This example will be similar to the one above – I created a table named time_example, but this time, I am including the extra column:

CREATE TABLE `time_example` (
  `idtime` int(11) NOT NULL AUTO_INCREMENT,
  `action_record` varchar(30) NOT NULL,
  `date_time_value` timestamp NULL DEFAULT NULL,
  `date_time_storage` varchar(23) DEFAULT NULL,
  PRIMARY KEY (`idtime`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

I then inserted a row, where I am storing the time stamp with the time zone information:

mysql> insert into test.time_example (action_record, date_time_storage) 
 values ('Left work', '2018-09-05 17:00:00 EDT');
Query OK, 1 row affected (0.00 sec)

Here is what the row looks like:

mysql> SELECT * FROM test.time_example;
+--------+---------------+-----------------+-------------------------+
| idtime | action_record | date_time_value | date_time_storage       |
+--------+---------------+-----------------+-------------------------+
|      1 | Left work     | NULL            | 2018-09-05 17:00:00 EDT |
+--------+---------------+-----------------+-------------------------+
1 row in set (0.00 sec)

Again, the date_time_storage column is a temporary storage column. I will call the stored procedure and provide the idtime and date_time_storage values. The stored procedure will look at the last three characters of the date_time_storage column and convert the time to UTC, which is then stored in the date_time_value column.

call _check_time_zone('1','2018-09-05 17:00:00 EDT');

Now the row looks like this, where the date_time_value column is now stored as UTC:

mysql> SELECT * FROM test.time_example;
+--------+---------------+---------------------+-------------------------+
| idtime | action_record | date_time_value     | date_time_storage       |
+--------+---------------+---------------------+-------------------------+
|      1 | Left work     | 2018-09-05 21:00:00 | 2018-09-05 17:00:00 EDT |
+--------+---------------+---------------------+-------------------------+
1 row in set (0.00 sec)

And here is the code to create the stored procedure:

DELIMITER $$
CREATE DEFINER=`root`@`localhost` 
PROCEDURE `_check_time_zone`(IN _id_time INT, IN _date_time_storage VARCHAR(23))
BEGIN

DECLARE _date_time_no_tz varchar(20);

SET _date_time_no_tz = SUBSTRING(_date_time_storage, 1, 20);

IF _date_time_storage like '%EDT' THEN 
UPDATE time_example SET date_time_value = CONVERT_TZ(_date_time_no_tz,'EST5EDT','GMT')
WHERE idtime = _id_time;
END IF;

IF _date_time_storage like '%EST' THEN 
UPDATE time_example SET date_time_value = CONVERT_TZ(_date_time_no_tz,'EST5EDT','GMT')
WHERE idtime = _id_time;
END IF;

IF _date_time_storage like '%CDT' THEN 
UPDATE time_example SET date_time_value = CONVERT_TZ(_date_time_no_tz,'CST6CDT','GMT')
WHERE idtime = _id_time;
END IF;

IF _date_time_storage like '%CST' THEN 
UPDATE time_example SET date_time_value = CONVERT_TZ(_date_time_no_tz,'CST6CDT','GMT')
WHERE idtime = _id_time;
END IF;

IF _date_time_storage like '%UTC' THEN 
UPDATE time_example SET date_time_value = _date_time_no_tz
WHERE idtime = _id_time;
END IF;

END $$
DELIMITER ;

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit http://2044thebook.com for more information.
Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit https://amzn.to/2oPFLI0 for more information.

Upcoming Webinar Tues 9/11: Migrating to AWS Aurora: A Checklist for Success


Please join Percona’s Senior Consultant, Jervin Real, as he presents Migrating to AWS Aurora: A Checklist for Success. The event will take place on Tuesday, September 11th, 2018, at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).

 

In the last few weeks, we have shown you how to successfully migrate from on-premise MySQL installations to AWS Aurora. What comes next is how to successfully ensure that your Aurora cluster performs and operates as you expect it to.

While Aurora’s hands-off operational approach ensures agile practices remain agile, there are also trade-offs and subsequent growing pains.

This webinar will discuss how to remain flexible and in full control of your data while using AWS Aurora.

Register for this webinar on how to make your Aurora migration a success.

The post Upcoming Webinar Tues 9/11: Migrating to AWS Aurora: A Checklist for Success appeared first on Percona Database Performance Blog.

MariaDB 10.1.36 and MariaDB Connector/C 2.3.7, Connector/J 2.3.0 and Connector/ODBC 2.0.18 now available


The MariaDB Foundation is pleased to announce the availability of MariaDB 10.1.36, the latest stable release in the MariaDB 10.1 series, as well as MariaDB Connector/C 2.3.7, MariaDB Connector/J 2.3.0 and MariaDB Connector/ODBC 2.0.18, the latest stable MariaDB Connector releases. See the release notes and changelogs for details. Download MariaDB 10.1.36 Release Notes Changelog What […]

The post MariaDB 10.1.36 and MariaDB Connector/C 2.3.7, Connector/J 2.3.0 and Connector/ODBC 2.0.18 now available appeared first on MariaDB.org.

NoSQL/X DevAPI Tutorial with MySQL Connector/Python 8.0


The MySQL Document Store became generally available (GA) with MySQL 8. One of the nice features of the MySQL Document Store is the X DevAPI, which allows you to query the data from a multitude of programming languages using the same API (while retaining the conventions of each language). The programming languages with support for the X DevAPI include JavaScript (Node.js), PHP, Java, DotNet, and C++.

I will be using MySQL Connector/Python 8.0.12 for the example in this blog. The example is executed on Microsoft Windows with Python 3.6 installed, but it has also been tested on Oracle Linux 7 with Python 2.7. I do assume that MySQL Connector/Python has been installed. If that is not the case, you can read how to do it in the Installing Connector/Python from a Binary Distribution section in the manual or Chapter 1 of MySQL Connector/Python Revealed from Apress.

The output of the example program

The example will go through the following steps:

  • Getting Ready:
    1. Load the mysqlx module.
    2. Create a database connection.
  • Setup:
    1. Create a schema.
    2. Create a collection.
  • CRUD – Create:
    1. Insert some documents into the collection.
  • CRUD – Read:
    1. Query the documents.
  • Cleanup:
    1. Drop the schema.
You can download the complete example program here: Example Code for Tutorial

The program uses the pyuser@localhost user. The connection parameters can be changed as described in the “Getting Ready” section. A user that fulfills the requirements of the example program can be created using the following SQL statements:

mysql> CREATE USER pyuser@localhost IDENTIFIED BY 'Py@pp4Demo';
mysql> GRANT CREATE, INSERT, SELECT, DROP
             ON my_collections.* TO pyuser@localhost;

Warning: This program is not an example of using best practices. Do not store the password and preferably also the other connection options in the source code. There is also very limited handling of errors and warnings in order to keep the example simple. You should not skip those steps in a production program.

Getting Ready

The first thing is to get ready by importing MySQL Connector/Python’s mysqlx module and connect to MySQL. This is simple to do as shown in the below code snippet (the line numbers refer to the full example):

import mysqlx

connect_args = {
    'host': '127.0.0.1',
    'port': 33060,
    'user': 'pyuser',
    'password': 'Py@pp4Demo',
};

# OK is not used but would correspond to a value of 0.
# 1 is Info and 2 is warning. Errors cause an exception.
warning_levels = ("OK", "Info", "Warning")

# Create the database connection
db = mysqlx.get_session(**connect_args)

The mysqlx module is imported in line 38. This is where the MySQL Connector/Python implementation of the X DevAPI resides. The module includes support for CRUD statements both for documents and SQL tables, schema and collection manipulations, as well as executing SQL statements. In this example, only the CRUD implementation for documents and the schema and collection manipulation will be used.

The warning_levels variable is used to convert the numeric warning levels returned by the X DevAPI to names. There is an example of how to handle warnings after the first document has been added.

Finally, the connection is created in line 52 using the get_session() method in the mysqlx module.  With a connection object in place, let’s move on to set up the schema and collection.

Setup

The X DevAPI has support for creating and dropping schemas and collections (but currently not SQL tables). This is used in the example to set up the my_collections schema with a single collection called my_docs:

# Create the my_collections schema
schema = db.create_schema("my_collections")

# Create the my_docs collection in the my_collections schema
my_docs = schema.create_collection("my_docs")

The create_schema() method on the database (session) object is used to create the schema. It will succeed even if the schema already exists. In that case the existing schema will be returned.

The collection is similarly created with the create_collection() method from the schema object. This will by default fail if the collection already exists; that behaviour can be overridden with the reuse argument (the second argument).
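For example, to get the existing collection back instead of an error when it already exists, a small sketch based on the reuse argument described above:

my_docs = schema.create_collection("my_docs", True)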

That is it. A collection is always defined internally in the same way when it is created. You can add indexes – I hope to get back to that in a future blog – but there is no need to think of columns and data types at this point. That is the advantage of using a schemaless database (but also one of the dangers – now the whole responsibility of staying consistent is up to you as a developer).

Let’s continue to add some documents.

CRUD – Create

For this example, three documents will be added to the my_docs collection. The documents contain information about three persons including their name, birthday, and hobbies. The documents can be defined as Python dictionaries with JSON arrays represented as Python lists:

# Define three documents to insert
adam = {
    "First_name": "Adam",
    "Surname": "Smith",
    "Birthday": "1970-10-31",
    "Hobbies": [
        "Programming",
        "Databases",
        "Hiking"
    ]
}

kate = {
    "First_name": "Kate",
    "Surname": "Lee",
    "Birthday": "1982-08-09",
    "Hobbies": [
        "Programming",
        "Photography",
        "Running"
    ]
}

jane = {
    "First_name": "Jane",
    "Surname": "Walker",
    "Birthday": "1977-02-23",
    "Hobbies": [
        "Databases",
        "Hiking",
        "Photography"
    ]
}

This is the beauty of working with JSON documents in Python. They just work.

The birthdays are written in the ISO 8601 format (the same as MySQL’s date data type uses – but not datetime!). As the MySQL Document Store is schemaless, you are free to choose whatever format you feel like, however, it is strongly recommended to use a standard format. The YYYY-mm-dd format has the advantage that it will sort correctly, so for that reason alone it is a strong candidate.

The documents will be inserted in two rounds. First Adam will be added, then Kate and Jane.

Adding a Single Document

There are a few ways to add documents (all working in the same basic way). This example will show two of them. First let’s look at how Adam is added:

# Insert the document for Adam
# This is an example of chaining the actions
db.start_transaction()
result = my_docs.add(adam).execute()

if (result.get_warnings_count() > 0):
    print("{0} warnings occurred!".format(result.get_warnings_count()))
    print("The warnings are:\n")
    for warning in result.get_warnings():
        level = warning_levels[warning["level"]]
        print("   * Level: {0} - Errno: {1} - Message: {2}".format(
            level, warning["code"], warning["msg"]))
    print("")
    print("Rolling the transaction back and existing.")
    db.rollback()
    exit()

# No errors or warnings, so the transaction can be committed
db.commit()

print("Adam: Number of documents added: {0}".format(
    result.get_affected_items_count()))
print("Document ID for Adam: {0}".format(result.get_generated_ids()))

The document is added inside a transaction. The X DevAPI connection inherits the value of autocommit from the server-side (defaults to ON), so to be sure the create action can be tested for warnings before committing, an explicit transaction is used. (Errors cause an exception, so since that is not handled here, it would cause an automatic rollback.)

The document is added using a chained statement. When you build an X DevAPI statement, you can choose between calling the method one by one or chaining them together as it is done in this case. Or you can choose a combination with some parts chained and some not. When the documents for Kate and Jane are added, it will be done without chaining.

The statement is submitted to the database using the execute() method. If you are used to executing Python statements in MySQL Shell, you may not be familiar with execute() as MySQL Shell allows you to skip it for interactive statements where the result is not assigned to a variable. The result is stored in the result variable which will be used to examine whether any warnings were triggered by the statement.

Tip: In MySQL Connector/Python, you must always call execute() to execute an X DevAPI statement.

It is best practice to verify whether queries cause any warnings. A warning will still allow the statement to execute, but it is in general a sign that not everything is as it should be. So, take warnings seriously. The earlier you include tests for warnings, the easier it is to handle them.

In line 99, the get_warnings_count() method of the result object is used to check if any warnings occurred. If so, the number of warnings is printed and each warning is retrieved using the get_warnings() method. A warning is a dictionary with three elements:

  • level: 1 for note and 2 for warning. This is what the warning_levels variable was created for at the start of the example.
  • code: The MySQL error number. The mysqlx.errorcode module contains string symbols for all the error numbers. This can be useful in order to check whether it is an expected error number that can be ignored.
  • msg: A string message describing the problem.

In this case, if any warnings occur, the transaction is rolled back, and the script exits.
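If you do expect a particular warning, you can compare the warning code against the symbols in the mysqlx.errorcode module instead of aborting. The following is only a sketch; ER_DUP_ENTRY is just an example symbol, so substitute the code you actually expect:

from mysqlx import errorcode

# Hypothetical example: ignore one expected warning, treat the rest as fatal.
expected_codes = {errorcode.ER_DUP_ENTRY}
for warning in result.get_warnings():
    if warning["code"] not in expected_codes:
        print("Unexpected warning: {0}".format(warning["msg"]))
        db.rollback()
        exit()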

Tip: Include handling of warnings from the beginning of coding your program. Handling warnings from the get go makes it much easier to handle them. They are usually a sign of something not working as expected and it is important that you know exactly why the warnings occur. All warnings include an error code that you can check against to verify whether it is an expected warning. If some warning is expected and you are confident, it is acceptable to ignore it.

If no error occurs, some information from the result is printed. An example output looks like (the ID will be different):

Adam: Number of documents added: 1
Document ID for Adam: ['00005b9634e3000000000000001c']

As expected one document has been added. The number of documents is printed using the get_affected_items_count() method. More interesting is the document ID. As the document did not include an element named _id, MySQL added one automatically and assigned a value to it. I will not go into how the ID is generated here, but just note that it includes three parts that together ensure global uniqueness even if you use multiple clients against multiple MySQL Server instances. At the same time, the IDs are still being generated in a way that is optimal for the InnoDB storage engine that is used for the underlying storage of the documents. The IDs are returned as a list; in this case there is only one element in the list, but if more than one document is inserted without an _id value, then there will be one generated ID per document.

The final step is to commit the transaction, so the document is persisted in the collection.

Adding Multiple Documents

When you want to add multiple documents using a single CRUD statement, you can essentially do it in two ways. You can add all of the documents in one go in the initial add() call similar to what was done for a single document with Adam. This can for example be done by having the documents in a tuple or list.

The other way, which will be used here, is to repeatably call add() to add the documents. Let’s see how that works:

db.start_transaction()
stmt_add = my_docs.add()
stmt_add.add(kate)
stmt_add.add(jane)
result = stmt_add.execute()
db.commit()
print("Kate and Jane: Number of documents added: {0}".format(
    result.get_affected_items_count()))
print("Document IDs: {0}".format(result.get_generated_ids()))

To keep the example from getting too long, the check for warnings has been removed, and the example will just focus on adding the documents.

After the transaction has been started, the statement object is created by calling add() on the collection object. In this case, no arguments are given, so at that point in time, the statement will not insert any documents.

Then the two documents are added one by one by calling add() on the statement object, first with the kate document, then with the jane document. An advantage of this approach is that if you for example generate the documents inside a loop, then you can add them as they are ready.

When both documents have been added, the execute() method is called to submit the documents to the database and the transaction is committed. Again, some information from the result is printed (the IDs will be different):

Kate and Jane: Number of documents added: 2
Document IDs: ['00005b9634e3000000000000001d', '00005b9634e3000000000000001e']

So, two documents are inserted (again as expected) and two IDs are generated.

The way that the add statement was used to insert the two documents is an example of the opposite of chaining. Here, one action at a time is performed and the result is stored in the stmt_add variable.

Now that there are some documents to work with, it is time to query them.

CRUD – Read

When you want to query documents in a collection, you use the find() method of the collection object. The resulting find statement supports all of the usual refinements such as filtering, sorting, grouping, etc. In this example, three queries will be executed. The first will find the total number of documents in the collection. The second will find the persons born on 9 August 1982. The third will find the persons who have hiking as a hobby.

Total Number of Documents

The X DevAPI makes it easy to determine the number of documents in the collection – the count() method of the collection will return the value as an integer. In practice the count() method goes through the same steps as you will see in the two subsequent queries, but they are hidden inside the implementation. The code snippet is:

print("The total number of documents in the collection: {0}".format(
    my_docs.count()))

It cannot get much easier than that. The output is:

The total number of documents in the collection: 3

Let’s move on and see some of the steps that were hidden in the first query.

Finding Documents Based on Simple Comparison

The persons (in this case just one person) born on 9 August 1982 can be found by creating a find statement and adding a simple filter. The example code is:

# Find the person born on 9 August 1982
print("")
stmt_find = my_docs.find("Birthday = :birthday")
stmt_find.fields("First_name", "Surname")
stmt_find.bind("birthday","1982-08-09")
result = stmt_find.execute()
person = result.fetch_one()
print("Person born on 9 August 1982: {First_name} {Surname}".format(**person))

The filter clause is added in the call to find(). The syntax :birthday means that a parameter is used and the value will be added later. That has two advantages: it makes it easier to reuse the statement, and importantly it makes the statement safer as MySQL will ensure the value is escaped correctly – this is similar to the mysql_real_escape_string() function in the MySQL C API. The value of the parameter is given using the bind() method that has two arguments: the parameter name and value. If you use multiple parameters, call bind() once for each of them.

Otherwise the statement is simple to use. The filtering condition may seem too simple given that it applies to a JSON document. However, Birthday in the condition is interpreted as $.Birthday (the $. part is optional) – that is, the object named Birthday that is a child of the root of the document, which is just what is needed in this case. The next example includes a more complicated filter condition.

The fields to include are specified in a similar manner to the filter condition. You specify the path to the element you want to include. You can optionally rename the element using the AS keyword, for example: Surname AS Last_name. As for the condition, the $. part is optional.

The resulting row is retrieved using the fetch_one() method on the result object. This is fine here as we know there is only one resulting row. However, in a more general case you should use fetch_one() in a loop and continue until it returns None, at which point all rows have been fetched.

The output is:

Person born on 9 August 1982: Kate Lee
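In the more general case where the number of matching documents is not known up front, a sketch of the fetch_one() loop mentioned above could look like this:

result = stmt_find.execute()
person = result.fetch_one()
while person is not None:
    print("{First_name} {Surname}".format(**person))
    person = result.fetch_one()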

Querying with Condition on Element in Array

A more complicated find statement is to look into the Hobbies array and see if any of the elements is Hiking. This query also matches two of the persons in the collection, so a loop is required to handle them. The code is:

stmt_find = my_docs.find("JSON_CONTAINS($.Hobbies, :hobby)")
stmt_find.fields("First_name", "Surname")
stmt_find.sort("Surname", "First_name")
stmt_find.bind("hobby", '"Hiking"')
result = stmt_find.execute()
people = result.fetch_all()
print("Number of people in the result: {0}".format(result.count))
print("The people with a hobby of hiking:")
for person in people:
    print("   {First_name} {Surname}".format(**person))

There are two main differences between this example and the previous: the filter condition and how the result documents are handled.

The filter uses the JSON_CONTAINS() function to check whether the $.Hobbies elements contains the value specified by the :hobby parameter. In the call to bind(), the parameter value is set to "Hiking". Note that Hiking must be quoted with double quotes as it is a JSON string. In this case, $. is included in the document path. However, it is still optional.

After executing the query, the resulting documents are fetched using the fetch_all() method. This will return all of the documents as a list. This makes it simpler to loop over the resulting rows; however, be aware that for large result sets it can cause high memory usage on the application server.

Warning: Be careful with the fetch_all() method if the query can return a large result set. It will require the remaining part of the result to be stored in-memory on the application-side.

One advantage of the fetch_all() method is that it will allow you to get the total number of documents in the result using the count property of the result. The count property will show 0 until fetch_all() has completed. Once the documents have been fetched, it is possible to print the names of the persons who like to hike. The output is:

Number of people in the result: 2
The people with a hobby of hiking:
   Adam Smith
   Jane Walker

Other than a bit of cleanup, there is nothing more to do.

Cleanup

The final part of the example is to clean up. The my_collections schema is dropped so the database is left in the same state as at the start, and the connection is closed:

# Remove the schema again, so the database is left in the same
# state as at the start. Comment out if you want to play with the
# data.
db.drop_schema("my_collections")

# Close the database connection.
db.close()

Dropping a schema is done in the same way as creating it, just that the drop_schema() method is used instead. The drop_schema() method will also work if the schema does not exist. In that case it is a null-operation.

It is important always to close the database connection. Have you ever seen the MySQL Server error log full of notes about aborted connections? If you do not explicitly close the database connection when you are done with it, one of those notes will be generated (provided the server is configured with error_log_verbosity = 3).

Additionally, not closing the connection will keep the connection alive until the program terminates. That is not a problem here, but in other cases, it may take a long time before the application shuts down. In the meantime, the connection count is higher than it needs to be, and if you happen to have an ongoing transaction (can very easily happen with autocommit = OFF), the connection may cause lock issues or slowness for the other connections.

Tip: Always close the database connection when you are done with it.

Want to Learn More?

I hope this has triggered your curiosity and you are ready to dive deeper into the world of MySQL Connector/Python, the X DevAPI, and the MySQL Document Store. If so, there are two recently released books that you may find useful.

Disclaimer: I am the author of one of these books.

One book is MySQL Connector/Python Revealed (Apress) written by me. It goes through MySQL Connector/Python both for the legacy PEP249 API (mainly the mysql.connector module) and the new X DevAPI (the mysqlx module). There are three chapters dedicated to the X DevAPI.

The other book is Introducing the MySQL 8 Document Store (Apress) written by Dr. Charles Bell (MySQL developer). This book goes through how JSON works in MySQL including information about the X DevAPI and its siblings the X Protocol and the X Plugin.

Both books are more than 500 pages and come with code examples that will help bring you up to speed with MySQL Connector/Python and the MySQL Document Store.

Shinguz: Advanced MariaDB/MySQL training, 12-16 November in Cologne


Our FromDual training course MariaDB/MySQL for Advanced Users, running from 12 to 16 November in Cologne, currently has 3 places left.

You can register directly with our training partner GfU in Cologne.

The course contents can be found on the FromDual website.

If you need specific in-house training, or MariaDB or MySQL consulting tailored to your needs, please get in touch with us.


Using ProxySQL to connect to IPv6-only databases over IPv4

It’s 2018. Maybe now is the time to start migrating your network to IPv6, and your database infrastructure is a great place to start. Unfortunately, many legacy applications don’t offer the option to connect to MySQL directly over IPv6 (sometimes even if passing a hostname). We can work around this by using ProxySQL’s IPv6 support, which was added in version 1.3. This will allow us to proxy incoming IPv4 connections to IPv6-only database servers.

Note that by default ProxySQL only listens on IPv4. We don’t recommend changing that until this bug is resolved. The bug causes ProxySQL to segfault frequently if listening on IPv6.

In this example I’ll use centos7-pxc57-1 as my database server. It’s running Percona XtraDB Cluster (PXC) 5.7 on CentOS 7, which is only accessible over IPv6. This is one node of a three-node cluster, but I’ll treat this one node as a standalone server for this example. One node of a synchronous cluster can be thought of as equivalent to the entire cluster, and vice-versa. Using the PXC plugin for ProxySQL to split reads from writes is the subject of a future blog post.

The application server, centos7-app01, would be running the hypothetical legacy application.

Note: We use default passwords throughout this example. You should always change the default passwords.

We have changed the IPv6 address in these examples. Any resemblance to real IPv6 addresses, living or dead, is purely coincidental.

  • 2a01:5f8:261:748c::74 is the IPv6 address of the ProxySQL server
  • 2a01:5f8:261:748c::71 is the Percona XtraDB node

Step 1: Install ProxySQL for your distribution

Packages are available here but in this case I’m going to use the version provided by the Percona yum repository:
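Assuming the Percona yum repository has already been set up on the host, the install command itself is simply:

[root@centos7-app1 ~]# yum install proxysql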

[...]
Installed:
proxysql.x86_64 0:1.4.9-1.1.el7
Complete!

Step 2: Configure ProxySQL to listen on IPv4 TCP port 3306 by editing /etc/proxysql.cnf and starting it

[root@centos7-app1 ~]# vim /etc/proxysql.cnf
[root@centos7-app1 ~]# grep interfaces /etc/proxysql.cnf
interfaces="127.0.0.1:3306"
[root@centos7-app1 ~]# systemctl start proxysql

Step 3: Configure ACLs on the destination database server to allow ProxySQL to connect over IPv6

mysql> GRANT SELECT on sys.* to 'monitor'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'monitor';
Query OK, 0 rows affected, 1 warning (0.25 sec)
mysql> GRANT ALL ON legacyapp.* TO 'legacyappuser'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'super_secure_password';
Query OK, 0 rows affected, 1 warning (0.25 sec)

Step 4: Add the IPv6 address of the destination server to ProxySQL and add users

We need to configure the IPv6 server as a mysql_server inside ProxySQL. We also need to add a user to ProxySQL as it will reuse these credentials when connecting to the backend server. We’ll do this by connecting to the admin interface of ProxySQL on port 6032:

[root@centos7-app1 ~]# mysql -h127.0.0.1 -P6032 -uadmin -padmin
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.
Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'2a01:5f8:261:748c::71',3306);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('legacyappuser', 'super_secure_password', 1);
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.27 sec)
mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.30 sec)
mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 95 rows affected (0.12 sec)

Step 5: Configure your application to connect to ProxySQL over IPv4 on localhost4 (IPv4 localhost)

This is application specific and so not shown here, but I’d configure my application to use localhost4 as this is in /etc/hosts by default and points to 127.0.0.1 and not ::1.
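To double-check what localhost4 resolves to on the application server, you can query the name service switch directly; the important part is that it maps to 127.0.0.1 rather than ::1:

[root@centos7-app1 ~]# getent hosts localhost4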

Step 6: Verify

As I don’t have the application here, I’ll verify with mysql-client. Remember that ProxySQL is listening on 127.0.0.1 port 3306, so we connect via ProxySQL on IPv4 (the usage of 127.0.0.1 rather than a hostname is just to show this explicitly):

[root@centos7-app1 ~]# mysql -h127.0.0.1 -ulegacyappuser -psuper_secure_password
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql> SELECT host FROM information_schema.processlist WHERE ID=connection_id();
+-----------------------------+
| host                        |
+-----------------------------+
| 2a01:5f8:261:748c::74:57546 |
+-----------------------------+
1 row in set (0.00 sec)
mysql> CREATE TABLE legacyapp.legacy_test_table(id int);
Query OK, 0 rows affected (0.83 sec)

The query above shows the remote host (from MySQL’s point of view) for the current connection. As you can see, MySQL sees this connection established over IPv6. So to recap, we connected to MySQL on an IPv4 IP address (127.0.0.1) and were successfully proxied to a backend IPv6 server.

The post Using ProxySQL to connect to IPv6-only databases over IPv4 appeared first on Percona Database Performance Blog.

Multi-master with MariaDB 10 – a tutorial


The goal of this tutorial is to show you how to use multi-master to aggregate databases with the same name, but different data from different masters, on the same slave.

Example :

  • master1 => a French subsidiary
  • master2 => a British subsidiary

Both have the same database PRODUCTION but the data are totally different.

 

PmaControl schema topology

This screenshot is taken from my own monitoring tool, PmaControl. You should read 10.10.16.232 for master2, not 10.10.16.235 (the fault of my admin system! :p).

We will start with three servers—2 masters and 1 slave—you can add more masters if needed. For this tutorial, I used Ubuntu 12.04. I’ll let you choose the right procedure for your distribution from Downloads.

Scenario

  • 10.10.16.231 : first master (referred to subsequently as master1) => a French subsidiary
  • 10.10.16.232 : second master (referred to subsequently as master2) => a British subsidiary
  • 10.10.16.233 : slave (multi-master) (referred to subsequently as slave)

If you already have your three servers correctly installed, you can scroll down directly to “Dump your master1 and master2 databases from slave“.

Default installation on 3 servers

apt-get -y install python-software-properties
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

The main reason I put it in a different file is that we use Chef as the configuration manager, and it overwrites /etc/apt/sources.list. The other reason is that if any trouble occurs, you can just remove this file and restart with the default configuration.

echo "deb http://mirror.stshosting.co.uk/mariadb/repo/10.0/ubuntu precise main" > /etc/apt/sources.list.d/mariadb.list

apt-get update
apt-get install mariadb-server

The goal of this small script is to get the IP of the server and compute a CRC32 of that IP to generate a unique server-id. Generally the crc32 command isn’t installed, so we will use the one from MySQL. For the account/password we use the Debian/Ubuntu maintenance account (read from /etc/mysql/debian.cnf).

Even if your server has more interfaces, you should have no trouble because the IP address should be unique.

user=`egrep user /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
passwd=`egrep password /etc/mysql/debian.cnf | tr -d ' ' | cut -d '=' -f 2 | head -n1 | tr -d '\n'`
ip=`ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}' | head -n1 | tr -d '\n'`
crc32=`mysql -u $user -p$passwd -e "SELECT CRC32('$ip')"`
id_server=`echo -n $crc32 | cut -d ' ' -f 2 | tr -d '\n'`

This configuration file is not one I use in production, but a minimal version that’s shown just as an example. The config may work fine for me, but perhaps it won’t be the same for you, and it might just crash your MySQL server.

If you’re interested in my default install of MariaDB 10, you can see it here: https://raw.githubusercontent.com/Esysteme/Debian/master/mariadb.sh (this script has been updated over the last 4 years)

example :

./mariadb.sh -p 'secret_password' -v 10.3 -d /src/mysql

 

cat >> /etc/mysql/conf.d/mariadb10.cnf << EOF
 
[client]
 
# default-character-set = utf8
 
[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8
collation-server = utf8_general_ci
 
bind-address        = 0.0.0.0
external-locking    = off
skip-name-resolve
 
#make a crc32 of ip server
server-id=$id_server
 
#to prevent auto start of thread slave
skip-slave-start
 
[mysql]
default-character-set   = utf8
 
EOF

We restart the server

/etc/init.d/mysql restart

* Stopping MariaDB database server mysqld                                        [ OK ]
 * Starting MariaDB database server mysqld                                        [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

Repeat these actions on all three servers.

Create users on both masters

Create the replication user on both masters

on master1 (10.10.16.231)

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"

on master2 (10.10.16.232)

mysql -u root -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replication'@'%' IDENTIFIED BY 'passwd';"

Create a user for external backup

On master1 and on master2

mysql -u root -p -e "GRANT SELECT, LOCK TABLES, RELOAD, REPLICATION CLIENT, SUPER ON *.* TO 'backup'@'10.10.16.%' IDENTIFIED BY 'passwd' WITH GRANT OPTION;"

If you are just testing…

If you don’t have a such a configuration and you want to set up tests:

Create a database on master1 (10.10.16.231)

master1 [(NONE)]> CREATE DATABASE PRODUCTION;

Create a database on master2 (10.10.16.232)

master2 [(NONE)]> CREATE DATABASE PRODUCTION;

Dump your master1 and master2 databases from slave (10.10.16.233)

All the commands from now until the end have to be carried out on the slave server

  • --master-data=2
      get the file (binary log) and its position, and add it to the beginning of the dump as a comment
  • --single-transaction
      This option issues a BEGIN SQL statement before dumping data from the server (this works only on tables with the InnoDB storage engine)

mysqldump -h 10.10.16.231 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.231.sql
mysqldump -h 10.10.16.232 -u root -p --master-data=2 --single-transaction PRODUCTION > PRODUCTION_10.10.16.232.sql

Create both new databases:

slave[(NONE)]> CREATE DATABASE PRODUCTION_FR;
slave[(NONE)]> CREATE DATABASE PRODUCTION_UK;

Load the data :

mysql -h 10.10.16.233 -u root -p PRODUCTION_FR < PRODUCTION_10.10.16.231.sql
mysql -h 10.10.16.233 -u root -p PRODUCTION_UK < PRODUCTION_10.10.16.232.sql

Set up both replications on the slave

Open both dumps to find the binlog file name and position, and use them in the commands below (use the command “less” rather than an editor for huge files)

French subsidiary – master1

less PRODUCTION_10.10.16.231.sql

get the line : (the MASTER_LOG_FILE and MASTER_LOG_POS values will be different to this example)

-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

replace the file and position in this command:

CHANGE MASTER 'PRODUCTION_FR' TO MASTER_HOST = "10.10.16.231", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

English subsidiary – master2

less PRODUCTION_10.10.16.232.sql

get the line: (the MASTER_LOG_FILE and MASTER_LOG_POS values will be different to this example, and would normally be different between master1 and master2. It’s just in my test example they were the same)

-- CHANGE MASTER TO MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

replace the file and position in this command:

CHANGE MASTER 'PRODUCTION_UK' TO MASTER_HOST = "10.10.16.232", MASTER_USER = "replication", MASTER_PASSWORD ="passwd", MASTER_LOG_FILE='mariadb-bin.000010', MASTER_LOG_POS=771;

Rules of replication on config file

Unfortunately, the option replicate-rewrite-db is not available as a dynamic variable, so we cannot set up this kind of configuration without restarting the slave server. On the slave, add the following lines to

/etc/mysql/my.cnf

add these lines :

PRODUCTION_FR.replicate-rewrite-db="PRODUCTION->PRODUCTION_FR"
PRODUCTION_UK.replicate-rewrite-db="PRODUCTION->PRODUCTION_UK"
PRODUCTION_FR.replicate-do-db="PRODUCTION_FR"
PRODUCTION_UK.replicate-do-db="PRODUCTION_UK"

After that, you can restart the daemon without a problem – but don’t forget to start the slaves, because we skipped that with skip-slave-start earlier ;).

/etc/init.d/mysql restart

Start the replication:

  • one by one

START SLAVE 'PRODUCTION_FR';
START SLAVE 'PRODUCTION_UK';

  • all at the same time:

START ALL SLAVES;

Now to check the replication :

slave[(NONE)]>SHOW SLAVE 'PRODUCTION_UK' STATUS;
slave[(NONE)]>SHOW SLAVE 'PRODUCTION_FR' STATUS;
slave[(NONE)]>SHOW ALL SLAVES STATUS;

Tests

on slave:

slave [(NONE)]> USE PRODUCTION_FR;
DATABASE changed
slave [PRODUCTION_FR]> SHOW TABLES;
Empty SET (0.00 sec)
 
slave [(NONE)]> USE PRODUCTION_UK;
DATABASE changed
slave [PRODUCTION_UK]> SHOW TABLES;
Empty SET (0.00 sec)

on master1:

master1 [(NONE)]> USE PRODUCTION;
DATABASE changed
master1 [PRODUCTION]>CREATE TABLE `france` (id INT);
Query OK, 0 ROWS affected (0.13 sec)
 
master1 [PRODUCTION]> INSERT INTO `france` SET id=1;
Query OK, 1 ROW affected (0.00 sec)

on master2:

master2 [(NONE)]> USE PRODUCTION;
DATABASE changed
master2 [PRODUCTION]>CREATE TABLE `british` (id INT);
Query OK, 0 ROWS affected (0.13 sec)
 
master2 [PRODUCTION]> INSERT INTO `british` SET id=2;
Query OK, 1 ROW affected (0.00 sec)

on slave:

-- for FRANCE
slave [(NONE)]> USE PRODUCTION_FR;
DATABASE changed
slave [PRODUCTION_FR]> SHOW TABLES;
+-------------------------+
| Tables_in_PRODUCTION_FR |
+-------------------------+
| france                  |
+-------------------------+
1 ROW IN SET (0.00 sec)
 
slave [PRODUCTION_FR]> SELECT * FROM france;
+------+
| id   |
+------+
|    1 |
+------+
1 ROW IN SET (0.00 sec)
 
 
-- for British
slave [(NONE)]> USE PRODUCTION_UK;
DATABASE changed
 
 
slave [PRODUCTION_UK]> SHOW TABLES;
+-------------------------+
| Tables_in_PRODUCTION_UK |
+-------------------------+
| british                 |
+-------------------------+
1 ROW IN SET (0.00 sec)
 
slave [PRODUCTION_UK]> SELECT * FROM british;
+------+
| id   |
+------+
|    2 |
+------+
1 ROW IN SET (0.00 sec)

It works!

 

If you want to do this online, please add +1 to : https://jira.mariadb.org/browse/MDEV-17165

 

Limitations

WARNING: this does not work when the database name is explicitly specified in the query (with binlog_format = STATEMENT or MIXED).

This works fine:

USE PRODUCTION;
UPDATE `ma_table` SET id=1 WHERE id =2;

This query will break the replication :

USE PRODUCTION;
UPDATE `PRODUCTION`.`ma_table` SET id=1 WHERE id =2;

=> the database PRODUCTION does not exist on the slave.

A real example

Missing update

on master1:

master1 [(NONE)]>UPDATE `PRODUCTION`.`france` SET id=3 WHERE id =1;
Query OK, 1 ROW affected (0.02 sec)
ROWS matched: 1  Changed: 1  Warnings: 0
 
master1 [(NONE)]> SELECT * FROM `PRODUCTION`.`france`;
+------+
| id   |
+------+
|    3 |
+------+
1 ROW IN SET (0.00 sec)

on slave:

slave [PRODUCTION_FR]> SELECT * FROM france;
+------+
| id   |
+------+
|    1 |
+------+
1 ROW IN SET (0.00 sec)

In this case we missed the update. That is a real problem: replication did not break, so our slave is silently desynchronized from master1 and we don’t realize it.

Crash replication

on master1:

master1[(NONE)]> USE PRODUCTION;
DATABASE changed
 
 
master1 [PRODUCTION]> SELECT * FROM`PRODUCTION`.`france`;
+------+
| id   |
+------+
|    3 |
+------+
1 ROW IN SET (0.00 sec)
 
master1 [PRODUCTION]>UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3;
Query OK, 1 ROW affected (0.01 sec)
ROWS matched: 1  Changed: 1  Warnings: 0
 
master1 [PRODUCTION]> SELECT * FROM `PRODUCTION`.`france`;
+------+
| id   |
+------+
|    4 |
+------+
1 ROW IN SET (0.01 sec)

on PmaControl:

pmacli schema diagram showing error

on slave:

slave [PRODUCTION_FR]> SHOW slave 'PRODUCTION_FR' STATUS\G;
*************************** 1. ROW ***************************
               Slave_IO_State: Waiting FOR master TO send event
                  Master_Host: 10.10.16.231
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mariadb-bin.000010
          Read_Master_Log_Pos: 2737
               Relay_Log_File: mysqld-relay-bin-production_fr.000003
                Relay_Log_Pos: 2320
        Relay_Master_Log_File: mariadb-bin.000010
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB: PRODUCTION_FR
          Replicate_Ignore_DB:
 Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1146
                   Last_Error: Error 'Table 'PRODUCTION.france' doesn't exist' on query. Default database: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3'
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 2554
              Relay_Log_Space: 2815
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1146
               Last_SQL_Error: Error 'TABLE 'PRODUCTION.france' doesn't exist' ON query. DEFAULT DATABASE: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3'
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 2370966657
               Master_SSL_Crl:
           Master_SSL_Crlpath:
                   Using_Gtid: No
                  Gtid_IO_Pos:
1 ROW IN SET (0.00 sec)
 
ERROR: No query specified

And we got the error which crashed replication:

Error TABLE 'PRODUCTION.france' doesn't exist' ON query. DEFAULT DATABASE: 'PRODUCTION_FR'. Query: 'UPDATE `PRODUCTION`.`france` SET id=4 WHERE id =3

NB : Everything works fine with binlog_format=ROW.
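If your workload allows it, switching the masters to row-based replication avoids this class of breakage entirely. A minimal my.cnf snippet (this needs a restart, or you can use SET GLOBAL binlog_format = ROW for new sessions):

[mysqld]
binlog_format = ROW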

Author: Aurélien LEQUOY <aurelien.lequoy@esysteme.com>. Don’t copy/paste the email, it won’t work. You didn’t think I would post it like that in the open for all the bots, right? ;).

License

This article is published under: The GNU General Public License v3.0 http://opensource.org/licenses/GPL-3.0

Others

The point of interest is to describe a real use case with full technical information to allow you to reproduce it by yourself.

This article was originally published just after the release of MariaDB 10.0 on the now defunct website www.mysqlplus.net; unfortunately, someone copied/pasted it less than 15 days later and forgot to credit me, as here: https://mariadb.com/resources/blog/multisource-replication-how-resolve-schema-name-conflicts (my comment disappeared too!).

The post Multi-master with MariaDB 10 – a tutorial appeared first on Percona Community Blog.

Percona Monitoring and Management (PMM) 1.14.1 Is Now Available


Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We’re releasing hotfix 1.14.1 to address two issues found post-release of 1.14.0:

  • PMM-2963: Upgrading to PMM 1.14.0 fails due to attempting to create already existing Dashboard
    • Our upgrade script incorrectly tried to create dashboards that already existed, and generating failure message:
      A folder or dashboard in the general folder with the same name already exists
  • PMM-2958: Grafana did not update to 5.1 when upgrading from versions older than 1.11
    • We identified a niche case where PMM installations that were upgraded from < 1.11 would fail to upgrade Grafana to the correct release, 5.1 (users were left on Grafana 5.0)

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

The post Percona Monitoring and Management (PMM) 1.14.1 Is Now Available appeared first on Percona Database Performance Blog.

Vitess Weekly Digest Sep 10, 2018

This week, we continue the digest from the Slack discussions for Aug 3 2018 to Aug 31 2018.

Secondary Vindexes

raj.veerappan [Aug 3 9:27 AM]
how do secondary vindexes work? would they result in further sharding?

weitzman [9:32 AM]
If you have an authors table and a books table and you shard by author ID, a secondary vindex is a performance tool to help respond to the query, `select * from books where book_id = :id`
Being sharded by authors, there’s not any obvious information in that query to help identify what the associated author ID / keyspace ID would be for the book entry (edited)
A secondary vindex is a function that helps answer that question in a more effective way than just doing a scatter query across all shards

raj.veerappan [9:35 AM]
ahh got it
so sharding only happens by primary vindex and secondary vindexes help with routing to primary vindexes.

Update streams like Maxwell

raj.veerappan [10:56 AM]
I think I heard that Vitess lets you listen to a stream of db updates, similar to Maxwell, are there docs available on this?

sougou [10:57 AM]
yup :slightly_smiling_face:
it needs to be beefed up a bit. It could be easily upgraded to send full row values for RBR

Migrating from existing MySQL

sumitj [Aug 13th at 3:25 AM]
hi , How easy it is to migrate from existing mysql stack to vitess ? it would be really helpful if we can document migration strategy and challenges in that .I think there are lot of mid size companies who use mysql and facing scaling issues in some way , it would be great if we can make this transition smooth . (edited)

faut
I think it depends on how much downtime and risk you can take. 
Doing a mysqldump and restoring to a new vitess cluster seems relatively easy but it requires a lot of downtime.

faut
And also if your DB is sharded etc increases the migration complexity

sumitj
@faut, I know there will be certain challenges , but if the steps would be nicely documented then it would be easier to adopt best practice and less chance for any unexpected issues .

sougou
This is on my list of things todo

sumitj
thanks a lot @sougou

sjmudd
another thought: it depends heavily on what you have running already and how you have it set up. In theory putting vitess on top of an existing replication setup shouldn’t be hard but in practice you’ll probably have to modify existing infrastructure to be at least partially vitess aware.  I tried going along that route and found it troublesome. It’s possibly easier to migrate tables to a new clean system which you can test and use without affecting existing production. That then requires you to think about how you want to move data over or if you’re comfortable putting vitess on top or whatever, but managing Vitess is different to managing a normal MySQL replication setup however you have that running at the moment.

Ordering by text strings

xuhaihua [Aug 22nd at 3:06 AM]
Case sensitive should be consistent in gate and MySQL.

sougou
Is this for table names?

xuhaihua
not table names, for row  values

xuhaihua
for example  there two shards and a table t, select a from t order by a asc; MySQL returned result is case insensitive.
shard1 return values:  [o P]
shard2 return values:   [o r ]
vtgate merge the two result in a heap:
merge shard1 o,  current result : [o]
next we should merge shard2 o, but the compare in vtgate is case sensitive, the next pop is P
merge shard1 P ,current result : [o, P]
….
the final result is [o P o r]
the correct result should be  [o o P r]
if the gourp by ,  shard1 o and shard2 o will not merged, it will be two group. (edited)

sougou
In your case, vtgate should have given an error saying that it cannot compare `text` columns.
what is the column type in your case?
However, if you specify a column's type in vschema, like this: https://github.com/vitessio/vitess/blob/master/data/test/vtgate/schema_test.json#L78
which will yield the correct results.

xuhaihua
wow, understand, I didn’t know weight_string before ,awesome:+1:
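For reference, declaring the column type in the vschema (so that vtgate can use weight_string() and merge-sort the shard results correctly) looks roughly like the sketch below; the table and vindex names are just placeholders:

{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "t": {
      "column_vindexes": [
        { "column": "id", "name": "hash" }
      ],
      "columns": [
        { "name": "a", "type": "VARCHAR" }
      ]
    }
  }
}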

Packing multiple instances in a VM

Srinath [Aug 22nd at 9:07 PM]
Can we configure one vttablet to be a master of one shard and to be a replica of another shard?

Srinath
If yes, would like to know if this is a recommended pattern, if not, would like to know the reasoning behind

sougou
i remember someone asking this question, but don't remember the answer anymore. What's the use case?

Srinath
we have an application that is distributed across many VMs. for other applications that are part of these VMs, data is stored in cassandra. the application that we write uses postgres, but none of the existing solutions help us automatically re-shard and abstract the sharding logic. hence we have chosen to keep postgres stateless and only one node is a leader, when leader goes down, all state is lost. but this poses limitations to scale of our application.

we are evaluating vitess to see if we can have a database solution that gives easy re-sharding abilities, with some guarantees w.r.t consistency & performance. we also like the fact that vitess abstracts out the sharding routing logic so that our application need not know about the topology of the database clusters at all.

that said, we are trying to see if we can come up with an architecture using vitess that will help us add more data storage capacity as more nodes are added to the cluster. initially we start from one VM, then scale out to 3, 5 etc as the need arises.

one possible architecture for 3 nodes is that have one shard and make all three nodes part of the same shard and have one master. but when we scale-out to 5 nodes, it gets complicated as to we have 2 shards, that can have 2 masters 3 replicas, but when we get to this architecture, there will be an imbalance in the replica count for shards as one shard will have two replicas and the other one will have only one replica.

another possible way to look at this is to not have database/vitess running on all nodes. basically for 5 node cluster, use only 3 node database configuration and for probably more than 5 node cluster, come up with a 6 node vitess configuration  which will have 2 master shards and 4 slaves with 2 slaves per shard.

other way to look at this is to form a ring (like cassandra) where one have many shards that can be mapped on to available servers and make every available server a leader for some of the shards and follower for other shards. very much the way cockroach db is also architected. but that would mean that one vttablet instance should become both a master and a replica at the same time.

would like to know your thoughts.

sougou
i think you're assuming that you can run only one mysql per node.
vitess allows you to pack multiple mysqls per host.
so, there's no need to overload a replica onto a master
if you have host1, host2 and host3 (and say, 3 shards)

Srinath
yes, we have come up with the same configuration where one node has more than one mysql, vttablet instance running, with one of it is master and other one is replica for some other master. is that the recommended configuration?

sougou
yes. that's the recommended config

Srinath
alright, lets see how our benchmarking goes :slightly_smiling_face:

sougou
if you run like this, you just have to make sure that a replica is never on the same host as the master.
because that can cause loss of durability

Srinath
yes, we are aware of it

Srinath
thank you for your thoughts :slightly_smiling_face: (edited)

Avoiding stray tablet records

Mark Solters [Aug 27th at 10:45 AM]
I have noticed that when `vttablets` go down, `vtctld` has no idea?  e.g. I follow the example tutorial, and make a single unsharded `test` cell (1 master, 2 read replica, 2 read only) and these show up fine when I `ListAllTablets`.  But, when those pods go down (for example, if I kill them/they OOM/get evicted for any reason) `ListAllTablets` continues to list those now non-existant replicas as having an IP.  Restarting `vtctld` does not resolve this.  Am I missing something necessary to keep the state of the tablets synced with the vitess control plane here?

sougou
Yeah. This is a limitation of how things work. VTTablet is responsible for registering and unregistering the record. But it can't unregister if it's killed.
This also causes other issues: if you reparent later, the vtctld ends up waiting for a long time to change the master on the dead vttablets
We could have an agent that performs a sweep and remove orphaned records, but it's dangerous

Mark Solters
hmm, so what is the recommended approach to keep the pods alive/synced?

sougou
the recommended approach is to have another vttablet restarted with the same id if the pod dies

Mark Solters
for example, in the beginning tutorial, the pods are created directly.  should they die for any reason (they OOM frequently using the specs out-of-the-box) they do not come back
so I was thinking I’d try to spin up vttablets as statefulsets to preserve those unique ids/disk relationships
the same pod id?

sougou
the same tablet id

Do we need multiple vtctlds?

Mark Solters [Aug 28]
a bit of confusion here with `vtctld`: is there one instance per cell? it seems there’s only one per _cluster_
but it does accept an argument like `-cell {{cell}}`, but this is only `global` in the tutorial.  is this a shortcut for ease of explanation?  does each cell in fact need its own `vtctld`? why wouldn’t there be a `vtctld` with `-cell=test`?

sougou
a single vtctld per cluster is likely sufficient. Some people launch 3, just in case.
the `cell` parameter in vtctld is just an old legacy thing. We need to remove that requirement.

Designing ahead for sharding

ruant [Aug 29th at 12:51 AM]
So totally dumb question...
What should i think about when designing a DB that i want to shard in the future (since this startup is of course taking of like a rocket soon :sweat_smile: )

sougou
The easiest approach is to think that you need to shard this right now, and if so, how would you do it.

ruant
I guess splitting it up by each tenant, since it's multi tenant db.
Not all tables have a tenant id on it, but if you follow the relationship all rows in the db eventually will hit a table that has a tenant id...
How advanced can I define these sharding keys? (vindex if i'm not mistaken?)

sougou
Yeah. You can shard things such that all rows related to a tenant live together.

ruant
Nice :slightly_smiling_face:
But i'm still able to query across all the data even if it's sharded.
I guess it will go faster? Since it's being processed by two "db's"
I should just take a few hours to read up on this sharding topic.
Sorry for all the questions.

sougou
There are a few approaches with different trade-offs. The TL;DR: for secondary tables: if you don't have a tenant id, you may need to incur the overhead of going through a slower vindex (backed by lookup tables)

ruant
Thanks for your replies @sougou
Really appreciate it :slightly_smiling_face:

Vitess sequences are globally unique

skyler [Aug 31st at 9:25 AM]
Do Vitess sequences support `auto_increment_increment` and `auto_increment_offset`?

sougou
The offset can be set by initializing a starting value in the sequence table (I think it's `next_id`). But there's currently no support for something like `auto_increment_increment`). Reason: typically, people set this value when they do custom sharding and want different masters to generate non-overlapping ids. But this is not required for vitess sequences because they generate globally unique ids.

skyler
Custom sharding, right, that’s exactly why we use it. If possible I wanted to keep IDs unique across different Vitess installations, using the same scheme.
but Vitess sequences are globally unique, aha, I did not know that
Thank you!
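For background, a Vitess sequence is backed by a single-row table in an unsharded keyspace, and the starting value sougou mentions is that row's next_id. A rough sketch (the table name is just an example):

CREATE TABLE user_seq (
  id INT,
  next_id BIGINT,
  cache BIGINT,
  PRIMARY KEY (id)
) COMMENT 'vitess_sequence';

-- Start the sequence at 1000 and reserve ids in blocks of 100.
INSERT INTO user_seq (id, next_id, cache) VALUES (0, 1000, 100);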

Feature like Debezium (CQRS)

Lucas Piske [Aug 31st at 11:58 AM]
I'm developing a project where I'm using vitess as the sharding engine for mysql and I would like to use Debezium to implement CQRS. Do you think its possible to integrate this two technologies? Do you forsee any challenges that could cause problems?

koz
There will most likely be problems
It looks like Debezium uses the binlog to produce the changelog
Since Vitess is distributed you would need to connect to the binlog of each shard
It looks like Debezium supports that, but it would require additional configuration
@sougou Just implemented a feature called vreplication which might support the same feature set you are looking at with debezium

sougou
It's a POC, but we'll make into a real product soon

Lucas Piske
That would be great
I think it would be a really cool feature
It would help to implement some eventual consistency patterns
Thanks for the help

sougou
I'll definitely announce when the feature is ready

Unforeseen use case of my GTID work: replicating from AWS Aurora to Google CloudSQL

A colleague brought an article to my attention. I did not see it on Planet MySQL where I get most of the MySQL news (or it did not catch my eye there). As it is interesting replication stuff, I think it is important to bring it to the attention of the MySQL Community, so I am writing this short post. The surprising part for me is that it uses my 4-year-old work for online migration to GTID.

Press Release 2018-09-11: Open Query acquired by Catalyst IT Australia Pty Limited


We are pleased to announce that Open Query, a Queensland-based provider of MySQL, MariaDB and related services which just celebrated its 11th anniversary, has been acquired by Catalyst IT Australia.

Founded in New Zealand in 1997, Catalyst is an experienced and respected Open Source integrator.  Catalyst is looking forward to the opportunity to work with the current Open Query clients as well as with new prospects. Catalyst offers a broad suite of Enterprise services, including support and custom development for Drupal, SilverStripe CMS, Moodle, Samba and other software, as well as fully managed hosting on AWS and other platforms.

“Catalyst’s core values are very much aligned with those of Open Query, which is why we are particularly pleased with this outcome”, notes Arjen Lentz, Founder and Exec.Director of Open Query.

Catalyst IT Australia has offices in Sydney, Melbourne and Brisbane.

Contacts

For Open Query Pty Ltd

Arjen Lentz, Exec.Director
https://openquery.com.au

For Catalyst IT Australia Pty Limited

Andrew Boag, Managing Director
https://www.catalyst-au.net/
Phone (02) 8203 9777

Non-blocking Two-phase commit in NDB Cluster

Non-blocking 2PC protocol

Many of the new DBMSs developed in the last 10 years have abandoned the
two-phase commit protocol and instead relied on replication protocols.

One of the main reasons for this has been the notion that two-phase commit
protocol is a blocking protocol. This is true for the classic version of the
two-phase commit protocol.

When NDB Cluster was developed in the 1990s we had requirements that
the replication protocol could not be blocking. A competitor at the time,
ClustRa, solved this by using a backup transaction coordinator. Given that
NDB Cluster had requirements to survive multiple simultaneous node failures,
this wasn't sufficient.

Thus a new two-phase commit protocol was developed that is completely
non-blocking. The main idea is that one uses a take-over protocol: any number
of nodes can crash and we can still handle it, as long as there are enough
nodes to keep all data available.

In addition, NDB Cluster is designed for both Disk Durable transactions
and Network Durable transactions. Disk Durable transactions require
data to be durable on disk when the transaction has committed, and
Network Durable requires that the transaction is on at least two computers
when the transaction is committed.

Due to the response time requirements of the applications that NDB Cluster
was designed for, we implemented it such that when the application receives
the response, the transaction is Network Durable.

The Disk Durability is handled in a background phase where data is
consistently flushed to disk such that we can always recover a consistent
version of the data even in the presence of a complete failure of the
cluster.

This part is handled by the Global Checkpoint protocol. The PDF above
describes the transaction protocol and the global checkpoint protocol that
together implement the Network Durability and Disk Durability of NDB
Cluster.

How to know if a user never connected to the MySQL server since last boot?


Performance_Schema is used most of the time to get metrics about queries and connections. But it can also provide other very useful information.

So today, I will show you how you can see a list of users that haven't connected to MySQL since it was last restarted (since last reboot).

SELECT DISTINCT mu.user FROM mysql.user mu
LEFT JOIN performance_schema.users psu
ON mu.user = psu.user
WHERE psu.user IS NULL
AND mu.user NOT IN ('mysql.infoschema', 'mysql.session', 'mysql.sys')
ORDER BY mu.user;

Example:

mysql> SELECT DISTINCT mu.user FROM mysql.user mu
    ->       LEFT JOIN performance_schema.users psu 
    ->       ON mu.user = psu.user  
    ->       WHERE psu.user IS NULL
    ->       AND mu.user NOT IN ('mysql.infoschema', 'mysql.session', 'mysql.sys')
    ->       ORDER BY mu.user;
+------------------+
| user             |
+------------------+
| fred             |
| myuser           |
+------------------+
2 rows in set (0.00 sec)
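
As a complementary sketch (not part of the original query), the same performance_schema.users table can also show which accounts did connect since the last restart, and how often:

SELECT user, current_connections, total_connections
FROM performance_schema.users
WHERE user IS NOT NULL
ORDER BY total_connections DESC;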

How To Deploy PMM on Linode With StackScripts

$
0
0
Rebuild from a StackScript

In my previous blog post, I showed how to deploy Percona Monitoring and Management (PMM) on Linode manually. It is pretty simple, but with a little coding it can be done even more easily using StackScripts.

Here’s how:

1. Click on the “Add a Linode” and pick a Linode type you want to deploy.

2. Click on the deployed Linode and then click on the “Rebuild” Link

Rebuild the Linode

3. Click on Deploy Using StackScripts

Deploy using StackScripts

4. On the resulting page search for “PMM” and pick PMMServer from PerconaLab.

Choose PMMServer from PerconaLab

5. Provide the host name for the new Linode, pick the root password, and click on "Rebuild".

6. Boot the server.

boot the server

7.  You’re done. Wait for about 5 minutes for the installation to complete, then you can see PMM interface by going to this Linode IP

view PMM on Linode IP

If you think that deploying with StackScripts through the web interface is not much less hassle than doing it manually, you're right. The real benefit comes with using the Linode API for deployment.

There are multiple ways to access this API, though for basic scripting I prefer the linode-cli tool for using the Linode API from the command line.

With linode-cli you can deploy your PMM Server on Linode using this one-liner:

linode-cli linodes create --label pmm-test  --root_pass MyRootPassword123 --stackscript_id 338458  --stackscript_data '{"hostname": "pmm-test"}'

Summary

As you can see, with Linode StackScripts you can get going with Percona Monitoring and Management on Linode in no time, especially if you chose to use the Linode API.

You might also like:

Here’s an overview from the Percona Monitoring and Management manual on deploying PMM. If you are new to PMM and would like to know more, you will find lots of resources on this site including my webinar MySQL Troubleshooting and Performance Optimization with PMM.

The post How To Deploy PMM on Linode With StackScripts appeared first on Percona Database Performance Blog.

Releasing puppet-proxysql version 2.0.0


Everyone knows those situations where there is a task that you need to do and want to do, but you just don't get around to actually doing it. Well, for me, this new release was such a task.

Early in 2017, I released the first version of puppet-proxysql on GitHub. It was my first puppet-module release and I was quite proud of it. I had implemented types and providers for managing the ProxySQL resources such as mysql_user, mysql_servers, etc…

At Config Management Camp Gent (February 2017) I met Vox Pupuli, the group that forms the Puppet user community. They picked up the responsibility of taking ownership of well-known modules that are left unmaintained or abandoned and/or modules that only had a single maintainer. The puppet-proxysql module was kind of the latter. Additionally, they own the "puppet/" prefix on the Puppet Forge, which is great for module visibility. So I joined forces with them and very soon we got set up using Travis for testing, releasing and pushing the very first version of the module to the forge.

Then it quieted down… I was busy doing other things (like changing jobs to work for Pythian) and I did not have much time to maintain the module anymore. I managed to review and merge some pull requests but releasing them was becoming more of a problem.

Until recently! I got a huge pull request implementing a lot of overdue new features like ProxySQL cluster support, new types for managing resources via class parameters, etc. I jumped on it, and with the help of Tim Meusel (aka bastelfreak), a member of the Vox Pupuli project maintainer committee (PMC), we managed to get this PR chunked up into smaller pieces, reviewed, and eventually merged.

So without further ado, we can present you with puppet-proxysql version 2.0.0. This release contains a lot of new features that had never been released, like support for Ubuntu and CentOS/RHEL based systems, repo management for installing ProxySQL, support for newer Puppet versions, and many bug fixes. You can view the full release notes here.

Thank you to Pythian for allowing me to spend time on this, and to all the contributors and reviewers for their help in getting this released. I'm very sorry it took so long.

Laravel 5.7 Email Verification Tutorial Example

Laravel 5.7 Email Verification Example

Laravel 5.7 Email Verification Tutorial Example From Scratch is today's leading topic. In this version, you only need to configure the settings and write some minimal code to set up everything. Email verification is a must-have functionality in web apps, and Laravel makes it very easy. So let us do it then.

Laravel 5.7 Email Verification Tutorial Example

First, install Laravel 5.7 using the following command.

#1: Install Laravel 5.7 and configure the database.

composer create-project laravel/laravel emailVerify --prefer-dist

# or

laravel new emailVerify

 

Laravel 5.7 Email Verification Tutorial Example

Go inside the folder.

cd emailVerify

Fire up your favorite IDE or Editor.

code .

Create the MySQL database and write the credentials inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=emailVerify
DB_USERNAME=root
DB_PASSWORD=root

Okay, now migrate the tables using the following command.

php artisan migrate

Now, look at the users table and you can see there is one more field called email_verified_at.

 

Laravel 5.7 Email Verification

The email_verified_at column is new in Laravel 5.7. When the user registers and verifies the email, the timestamp is recorded here. So based on that, we can tell whether the user has confirmed the email or not. For this kind of functionality we have generally used a boolean datatype, but nowadays people are using a timestamp to accomplish this kind of goal.
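
If you want to inspect this directly in MySQL, here is a quick sketch against the default users table (column names as generated by Laravel's migrations):

SELECT id, name, email, email_verified_at
FROM users
WHERE email_verified_at IS NOT NULL;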

#2: Laravel 5.7 Auth Scaffolding

Okay, now go to the terminal and type the following command.

php artisan make:auth

This command also generates one more view called verify.blade.php. It is new in Laravel 5.7, as the verification functionality was introduced in this version.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-header">{{ __('Verify Your Email Address') }}</div>

                <div class="card-body">
                    @if (session('resent'))
                        <div class="alert alert-success" role="alert">
                            {{ __('A fresh verification link has been sent to your email address.') }}
                        </div>
                    @endif

                    {{ __('Before proceeding, please check your email for a verification link.') }}
                    {{ __('If you did not receive the email') }}, <a href="{{ route('verification.resend') }}">{{ __('click here to request another') }}</a>.
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

#3: Implement mustVerify interface in the User model.

In the User.php model, you can see one more contract imported, called MustVerifyEmail. To use the email verification process, we need to implement this contract.

<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable implements MustVerifyEmail
{
    use Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];
}

#4: Add Email Route Verification

Go to the routes >> web.php file and add the extra parameter inside Auth::routes().

Auth::routes(['verify' => true]);

This enables the new verification routes. You can see that the controller behind them, VerificationController.php, already ships with Laravel 5.7.

Also, we need to protect the HomeController route, so let us do that via adding middleware.

   /** HomeController.php
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware(['auth', 'verified']);
    }

#5: Setup email configuration

I am using mailtrap for this example. So log in to the https://mailtrap.io/signin.

Go to the demo inbox, copy the credentials, and paste them into your .env file (replace the null MAIL_USERNAME and MAIL_PASSWORD values below with your own).

MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null

#6: Test the email verification functionality.

First, go to the browser and open either http://localhost:8000/register or, like me, http://emailverify.test/register.

You will see the page like this.

 

Verify Email in Laravel

Now go to Mailtrap, and you can see that the verification mail has arrived.

 

Email Verification in Laravel 5.7

Also, see the database and analyze the users table.

 

Laravel 5.7 Auth Functionality

Here, email_verified_at is null. Now, click the link that arrived in your email; your email will be verified, and you can see that the timestamp is recorded here.

 

Email verified in Laravel

So, finally, the Laravel 5.7 Email Verification Tutorial Example is over. Thanks for reading.

The post Laravel 5.7 Email Verification Tutorial Example appeared first on AppDividend.

SQL Performance Tuning Tutorial – MySQL Query Optimization Tips


This is the first part of our SQL Performance Tuning series. In this article, we'll focus on MySQL-related examples, but the same concepts can be applied to many other relational databases.

Now more than ever, software engineers need to have vast knowledge in SQL performance tuning.
The shift is happening in both small startups and large enterprises. Nowadays, developers are the ones writing the SQL queries and database access layer.

As technology advances, even the most novice end-users are becoming impatient and will expect your application to work quickly, even quicker than you’d expect. Therefore, we, as software developers, are bound to meet that endless need for fast and immediate response time, anywhere and anytime.

It doesn’t really matter if you’re using a database abstraction layer (Hibernate, JOOQ, Entity Framework, Sqlalchemy, Django, or others) or writing native SQL queries, you’ll eventually be challenged with tuning the queries you’re sending to your database.

Create indexes, but do it wisely

Some will say that indexing is the most important part of SQL query tuning. In many cases, it can definitely be true. First, get familiar with the aspects you should consider when choosing the optimal indexes.

Remember, when indexing, you should pay close attention to the query's WHERE clause and table JOINs, as those statements include the critical filtering parts of the query.

Also, major bottlenecks in data search can be the GROUP BY and ORDER BY parts. That said, a potential hiccup is that you may not be able to index them in some cases, as we explained here. Therefore, you might need to re-think the design of your query before creating the indexes, to make sure you write great queries, but also write index-able queries.

Once you've got indexing figured out for one query, don't stop there. Widen your view and look into other important queries in your application. The more queries you look at, the more you'll think about the best indexes to create. Make sure you combine indexes whenever possible, and remove indexes which aren't needed anymore. Looking at the entire application's scope will always be better than looking at a single query's scope.

That said, having more indexes than you need can also backfire on you, as they can slow down write operations (such as INSERT / UPDATE statements). So create indexes to optimize your SQL queries, but do it wisely.

Do not stand in the way of indexes

We’re being approached a lot by customers who’re asking us “why the database doesn’t use my index?”. Well, that’s a great question, with endless possible answers. But, in this article, we’ll try to provide several common options we see a lot, so hopefully, you’ll find them useful for your own use case.

Example #1 – Avoid wrapping indexed columns with functions

Consider this query, which counts the number of hot dogs purchased in the US in 2018. Just in case you're curious, 18,000,000,000 hot dogs were sold in the US in 2018.

SELECT count(*) FROM us_hotdog_purchases WHERE YEAR(purchase_time) = '2018'

As you can see, we are using the YEAR function to grab the year part from the purchase_time column. This function call will prevent the database from being able to use an index for the purchase_time column search, because we indexed the value of purchase_time, but not the return value of YEAR(purchase_time).

To overcome this challenge and tune this SQL query, you can index the function's result by using Generated Columns, which are available starting with MySQL 5.7.5.
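
As a sketch of that approach (the purchase_year column and index names are made up for illustration):

ALTER TABLE us_hotdog_purchases
  ADD COLUMN purchase_year SMALLINT AS (YEAR(purchase_time)) STORED,
  ADD INDEX idx_purchase_year (purchase_year);

-- The filter can now use the index on the generated column
SELECT count(*) FROM us_hotdog_purchases WHERE purchase_year = 2018;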

Another solution can be to find an alternative way to write the same query, without using the function call. In this example, we can transform that condition to a 2-way range condition, which will return the same results:

SELECT count(*) FROM us_hotdog_purchases WHERE purchase_time >= '2018-01-01' AND purchase_time < '2019-01-01'

Example #2 – Avoid OR conditions

Consider this query, which counts the number of Facebook posts posted after New Year's Eve, or posted by a user named Mark.

SELECT count(*) FROM fb_posts WHERE username = 'Mark' OR post_time = '2018-01-01'

Having an index on both the username and post_time columns might sound helpful, but in most cases, the database won’t use it, at least not in full. The reason will be the connection between the two conditions – the OR operator, which makes the database fetch the results of each part of the condition separately.

An alternative way to look at this query can be to ‘split’ the OR condition and ‘combine’ it using a UNION clause. This alternative will allow you to index each of the conditions separately, so the database will use the indexes to search for the results and then combine the results with the UNION clause.

SELECT …
FROM …
WHERE username = 'Mark'
    UNION
SELECT …
FROM …
WHERE post_time = '2018-01-01'

Please note that if you don’t mind duplicate records in your result set, you can also use UNION ALL (which will perform better than the default UNION DISTINCT).
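
As a concrete sketch of this rewrite (assuming the fb_posts table has an id primary key, which the article does not state):

SELECT count(*) FROM (
  SELECT id FROM fb_posts WHERE username = 'Mark'
  UNION
  SELECT id FROM fb_posts WHERE post_time = '2018-01-01'
) AS matching_posts;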

Example #3 – Avoid sorting with a mixed order

Consider this query, which selects all posts from Facebook and sorts them by the username in ascending order, and then by the post type in descending order.

SELECT username, post_type FROM fb_posts ORDER BY username ASC, post_type DESC

MySQL (and many other relational databases) cannot use indexes when sorting with a mixed order (both ASC and DESC in the same ORDER BY clause). This changed with the release of the descending indexes functionality in MySQL 8.x.

So what can you do if you haven't upgraded to the latest MySQL version just yet? First, we'd recommend reconsidering the mixed order sort. Do you really need it? If not, avoid it.

So you decided you need it, or your product manager said: "No way we can manage without it"? Another option is to use generated columns (available in MySQL 5.7.5+) to create a reversed column and sort on that column instead of the original. As an example, assume you're sorting on a numeric column: you can create a generated column holding the negative of the original number and sort on that new column in the opposite order. That way, all columns will have the same sort direction in the ORDER BY clause, but the sort will happen as originally defined by your product's requirement.
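
A minimal sketch of that workaround, assuming post_type is numeric (the reversed column and index names are made up):

ALTER TABLE fb_posts
  ADD COLUMN post_type_rev INT AS (-post_type) STORED,
  ADD INDEX idx_username_type_rev (username, post_type_rev);

-- Both columns now sort ascending, which an index can satisfy
SELECT username, post_type
FROM fb_posts
ORDER BY username ASC, post_type_rev ASC;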

The last potential solution won’t always be an option, so your last resort will be upgrading to the latest MySQL version which supports mixed order sorting using indexes.

Example #4 – Avoid conditions with different column types

Consider this query, which selects the number of red fruits in a forest.

SELECT count(*) FROM forest WHERE fruit_color = 5;      /* 5 = red */

Assuming the column fruit_color‘s type is VARCHAR, or just anything non-numeric, indexing that column won’t be very helpful, as the required implicit cast will prevent the database from using the index for the filtering process.

So how can you tune this SQL query? You have two options. The first one is to compare the column to a constant value that matches the column's type, so if it's a VARCHAR column, compare it to '5' (with single quotes) and not to 5 (a numeric comparison that will result in an implicit cast).

A better option is to adjust the column's type to match the most suitable type for the values the column holds. In this example, the column should be altered to an INT type. Please note that altering a column's type can be a complicated task, so read about the challenges of that task before heading towards it.
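
Both options, sketched (the ALTER assumes the column only holds numeric strings, which the article implies but does not state):

-- Option 1: compare against a literal of the same type, so no implicit cast is needed
SELECT count(*) FROM forest WHERE fruit_color = '5';

-- Option 2: change the column's type to match its values
ALTER TABLE forest MODIFY fruit_color INT;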

Avoid LIKE searches with prefix wildcards

Consider this query, which searches all Facebook posts from a username which includes the string ‘Mar’, so we are searching for all posts written by users named Mark, Marcus, Almar, etc.

SELECT * FROM fb_posts WHERE username LIKE '%Mar%'

Having a wildcard '%' at the beginning of the pattern will prevent the database from using an index for this column's search. Such searches can take a while…

In this case, there are two options to improve this query’s performance. The first one is trivial – consider whether the prefix wildcard is important enough. If you can manage without it, get rid of it.

Another option is to use full-text indexes. Please note, though, that these indexes and the MATCH … AGAINST syntax aren't free from challenges and have some differences when compared to the familiar LIKE expressions in MySQL.
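
A hedged sketch of the full-text alternative (note that MATCH … AGAINST matches words and word prefixes, so it is not a drop-in replacement for an infix LIKE search):

ALTER TABLE fb_posts ADD FULLTEXT INDEX ft_username (username);

-- Boolean mode with a trailing * matches usernames starting with 'Mar'
SELECT * FROM fb_posts
WHERE MATCH(username) AGAINST ('Mar*' IN BOOLEAN MODE);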

 

Conclusion

In this first part of our SQL Query Performance Tuning tutorial series, we covered the importance of wise indexing, we went through several examples of possible obstacles while using indexed columns in queries, and we also detailed several other tips and tricks which can be helpful for better query performance. See you in the next post.
