
Node.js MySQL Tutorial


Connecting Node.js to MySQL is an essential task for almost any web application. MySQL is one of the most popular open-source databases in the world, and an efficient one as well. Almost every popular programming language, like Java or PHP, provides a driver to access and perform operations on MySQL.

In this tutorial I try to cover both code for learning and code for production. If you already know the basics and are looking for ready-made production code, jump straight to the "Code for production" section below.

Introduction:

Node.js is rich with a large number of popular packages registered in its package registry, npm. Many of them are not reliable enough for production use, but there are some we can rely upon. For MySQL, the most popular driver is node-mysql.

In this tutorial I am going to cover the following points related to Node.js and MySQL.

  • Sample code to get started.
  • Code for Production.
  • Testing concurrent users.

Sample code to get started.

Project directory:

---node_modules
-----+ mysql
-----+ express
---index.js
---package.json
package.json
{
  "name": "node-mysql",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.10.6",
    "mysql": "^2.5.4"
  }
}

Install the dependencies using

npm install
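Alternatively, if you prefer not to write package.json by hand, npm can install and record the same dependencies for you (standard npm usage; the versions installed will differ from the ones shown above):

npm install express mysql --save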

Here is sample code which connects to the database and performs a SQL query.

var mysql      = require('mysql');
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : '< MySQL username >',
  password : '< MySQL password >',
  database : '<your database name>'
});

connection.connect();

connection.query('SELECT * from < table name >', function(err, rows, fields) {
  if (!err)
    console.log('The solution is: ', rows);
  else
    console.log('Error while performing Query:', err);
});

connection.end();

Make sure you have started MySQL on the default port and changed the connection parameters in the code above, then run it using

node file_name.js

Code for production :

The code above is just for learning purposes, not for a production payload. The production scenario is different: there may be thousands of concurrent users, which translates into a huge number of MySQL queries. The code above won't cope with concurrent users, and here is the proof. Let's modify our code a little bit and add an Express route; here it is.

test.js (change the database settings in the code)
var express    = require("express");
var mysql      = require('mysql');
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'root',
  password : '',
  database : 'address_book'
});
var app = express();

connection.connect(function(err){
  if(!err) {
    console.log("Database is connected ...\n\n");
  } else {
    console.log("Error connecting database ...\n\n");
  }
});

app.get("/",function(req,res){
connection.query('SELECT * from user LIMIT 2', function(err, rows, fields) {
connection.end();
  if (!err)
    console.log('The solution is: ', rows);
  else
    console.log('Error while performing Query.');
  });
});

app.listen(3000);

Install siege on your system. I use this command to install it on Ubuntu.

apt-get install siege

Then run our Node server and fire the following commands.

node test.js
siege -c10 -t1M http://localhost:3000

Assuming you are running the Node server on port 3000, here is the output (screenshot: siege results against the single-connection server, with requests failing after the first one).
In the above code we are using a standalone connection, i.e. one connection serving every request, but reality is a bit different. You may get 100 or 1000 connections at one particular time, and if your server is not powerful enough to serve those requests then it should at least put them in a queue.

Connection pooling in MySQL :

Connection pooling is a mechanism for maintaining a cache of database connections, so that a connection can be reused after it is released. In node-mysql, we can use pooling directly to handle multiple connections and reuse them. Let's write the same code with pooling and check whether it can handle multiple connections.

test.js
var express   =    require("express");
var mysql     =    require('mysql');
var app       =    express();

var pool      =    mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'address_book',
    debug    :  false
});

function handle_database(req,res) {

    pool.getConnection(function(err,connection){
        if (err) {
          res.json({"code" : 100, "status" : "Error in connection database"});
          return;
        }

        console.log('connected as id ' + connection.threadId);

        connection.query("select * from user",function(err,rows){
            connection.release(); // return the connection to the pool
            if(!err) {
                res.json(rows);
            } else {
                res.json({"code" : 100, "status" : "Error while performing Query"});
            }
        });

        connection.on('error', function(err) {
            if (!res.headersSent) {
                res.json({"code" : 100, "status" : "Error in connection database"});
            }
        });
  });
}

app.get("/",function(req,res){-
        handle_database(req,res);
});

app.listen(3000);

Run the app using

node test.js

and fire 10 concurrent users for 1 minute using siege with this command.

siege -c10 -t1M http://localhost:3000

Here is the output (screenshot: siege results). The code is stable!

**UPDATE**

You can directly use pool.query(), which internally acquires a connection and releases it once the query has executed. In my personal code-review experience, the majority of developers forget to release the acquired connection, which in turn creates a bottleneck and database load.

Refer to the code snippet below:

test.js
var express   =    require("express");
var mysql     =    require('mysql');
var app       =    express();

var pool      =    mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'address_book',
    debug    :  false
});

function handle_database(req,res) {
    // a connection is acquired from the pool automatically
    pool.query("select * from user",function(err,rows){
        if(err) {
            return res.json({'error': true, 'message': 'Error occurred: ' + err});
        }
        // ...and released automatically once the query has run
        res.json(rows);
    });
}

app.get("/",function(req,res){-
        handle_database(req,res);
});

app.listen(3000);

I have used this function in a production environment with a heavy payload and it works like a charm.

Final comments :

Siege is a really powerful tool for testing a server under pressure. We set a 100-connection limit in the code, so you might be wondering whether the code will break after 100 concurrent connections. Well, let me answer that via code: fire 1000 concurrent users for 1 minute and let's see how our code reacts.

siege -c1000 -t1M http://localhost:3000

If your MySQL server is configured to handle that much traffic on one socket, it will run, and our code will manage the scheduling of concurrent connections. It will serve 100 connections at a time while the remaining 900 wait in a queue, so the code will not break.
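This queueing behaviour is itself configurable on the pool. Here is a minimal sketch using documented node-mysql pool options (the credentials mirror the earlier examples; the limits shown are illustrative, not recommendations):

var mysql = require('mysql');

var pool  = mysql.createPool({
    host               : 'localhost',
    user               : 'root',
    password           : '',
    database           : 'address_book',
    connectionLimit    : 100,  // at most 100 open connections to MySQL
    waitForConnections : true, // queue getConnection() calls when all 100 are busy
    queueLimit         : 0     // 0 = unbounded queue; set a cap to fail fast instead
});

With waitForConnections set to false, the pool returns an error immediately instead of queueing, which can be preferable to unbounded waiting under extreme load.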

Conclusion :

MySQL is one of the most widely used database engines in the world, and it works very well with Node. node-mysql's pooling and event-based error handling are really powerful and easy to code.

How to Install Nginx with PHP and MySQL (LEMP Stack) on Ubuntu 18.04

This tutorial shows how you can install Nginx on an Ubuntu 18.04 LTS server with PHP 7.2 support (through PHP-FPM) and MySQL support (LEMP = Linux + nginx (pronounced "engine x") + MySQL + PHP).

Circular tables in MariaDB

Manual for dbt2-0.37.50.15, fully automated Sysbench and DBT2 benchmarking with NDB

The dbt2-0.37.50 manual provides the details of how to use the dbt2-0.37.50 scripts
to execute benchmarks using MySQL Cluster.

These scripts can be used to execute automated test runs of Sysbench, DBT2 and
FlexAsynch. I also use them to start up NDB Clusters to run DBT3 benchmarks and
YCSB benchmarks.

This set of scripts originates from 2006, when I wanted to automate all my benchmarking
efforts. The most challenging benchmarks involve starting more than 100 programs
that work together, across more than 100 machines. This requires automation to
be successful.

Now running any benchmark is a one-liner, e.g.
./bench_run.sh --default-directory /path/to/dir --init

The preparation for running a benchmark is to place a file called autobench.conf in
/path/to/dir. This file contains the configuration of the NDB data nodes, NDB MGM
servers, MySQL servers and the benchmark programs. Multiple benchmark
programs are supported: Sysbench, DBT2 and flexAsynch.

MySQL Replication for High Availability - New Whitepaper


We’re happy to announce that our newly updated whitepaper MySQL Replication for High Availability is now available to download for free!

MySQL Replication enables data from one MySQL database server to be copied automatically to one or more MySQL database servers.
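For illustration, here is what pointing a slave at its master looks like with classic binary-log-position-based replication (a minimal sketch; the host, credentials and binlog coordinates are placeholders, and GTID-based setups covered in the whitepaper would use MASTER_AUTO_POSITION instead of explicit coordinates):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master.example.com',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='repl_password',
    ->   MASTER_LOG_FILE='binlog.000001',
    ->   MASTER_LOG_POS=4;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G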

Unfortunately, database downtime is often caused by sub-optimal HA setups, manual or prolonged failover times, and manual failover of applications. This technology is common knowledge among DBAs worldwide, but maintaining those high-availability setups can sometimes be a challenge.

In this whitepaper, we discuss the latest features in MySQL 5.6, 5.7 & 8.0 as well as show you how to deploy and manage a replication setup. We also show how ClusterControl gives you all the tools you need to ensure your database infrastructure performs at peak proficiency.

Topics included in this whitepaper are …

  • What is MySQL Replication?
    • Replication Scheme
      • Asynchronous Replication
      • Semi-Synchronous Replication
    • Global Transaction Identifier (GTID)
      • Replication in MySQL 5.5 and Earlier
      • How GTID Solves the Problem
      • MariaDB GTID vs MySQL GTID
    • Multi-Threaded Slave
    • Crash-Safe Slave
    • Group Commit
  • Topology for MySQL Replication
    • Master with Slaves (Single Replication)
    • Master with Relay Slaves (Chain Replication)
    • Master with Active Master (Circular Replication)
    • Master with Backup Master (Multiple Replication)
    • Multiple Masters to Single Slave (Multi-Source Replication)
    • Galera with Replication Slave (Hybrid Replication)
  • Deploying a MySQL Replication Setup
    • General and SSH Settings
    • Define the MySQL Servers
    • Define Topology
    • Scaling Out
  • Connecting Application to the Replication Setup
    • Application Connector
    • Fabric-Aware Connector
    • Reverse Proxy/Load Balancer
      • MariaDB MaxScale
      • ProxySQL
      • HAProxy (Master-Slave Replication)
  • Failover with ClusterControl
    • Automatic Failover of Master
      • Whitelists and Blacklists
    • Manual Failover of Master
    • Failure of a Slave
    • Pre and Post-Failover Scripts
      • When Hooks Can Be Useful?
        • Service Discovery
        • Proxy Reconfiguration
        • Additional Logging
  • Operations - Managing Your MySQL Replication Setup
    • Show Replication Status
    • Start/Stop Replication
    • Promote Slave
    • Rebuild Replication Slave
    • Backup
    • Restore
    • Software Upgrade
    • Configuration Changes
    • Schema Changes
    • Topology Changes
  • Issues and Troubleshooting
    • Replication Status
    • Replication Lag
    • Data Drifting
    • Errant Transaction
    • Corrupted Slave
    • Recommendations

Download the whitepaper today!


About ClusterControl

ClusterControl is the all-inclusive open source database management system for users with mixed environments that removes the need for multiple management tools. ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your MySQL, MongoDB, and PostgreSQL databases up and running using proven methodologies that you can depend on to work. At the core of ClusterControl is its automation functionality that lets you automate many of the database tasks you have to perform regularly, like deploying new databases, adding and scaling new nodes, running backups and upgrades, and more.

To learn more about ClusterControl click here.

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. Severalnines is often called the "anti-startup" as it is entirely self-funded by its founders. The company has enabled over 32,000 deployments to date via its popular product ClusterControl, and currently counts BT, Orange, Cisco, CNRS, Technicolor, AVG, Ping Identity and Paytrail as customers. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore, Japan and the United States. To see who is using Severalnines today, visit https://www.severalnines.com/company.

Database High Availability for Camunda BPM using MySQL or MariaDB Galera Cluster


Camunda BPM is an open-source workflow and decision automation platform. Camunda BPM ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute workflow tasks assigned to them.

By default, Camunda comes with an embedded database called H2, which works pretty decently within a Java environment and has a relatively small memory footprint. However, when it comes to scaling and high availability, other database backends might be more appropriate.

In this blog post, we are going to deploy Camunda BPM 7.10 Community Edition on Linux, with a focus on achieving database high availability. Camunda supports major databases through JDBC drivers, namely Oracle, DB2, MySQL, MariaDB and PostgreSQL. This blog focuses only on MySQL and MariaDB Galera Cluster, with a different implementation for each: one with ProxySQL as the database load balancer, and the other using the JDBC driver to connect to multiple database instances. Take note that this article does not cover high availability for the Camunda application itself.

Prerequisite

Camunda BPM runs on Java. On our CentOS 7 box, we have to install a JDK; the best option is to use the one from Oracle and skip the OpenJDK packages provided in the repository. On the application server where Camunda should run, download the latest Java SE Development Kit (JDK) from Oracle by sending the acceptance cookie:

$ wget --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/12+33/312335d836a34c7c8bba9d963e26dc23/jdk-12_linux-x64_bin.rpm

Install it on the host:

$ yum localinstall jdk-12_linux-x64_bin.rpm

Verify with:

$ java --version
java 12 2019-03-19
Java(TM) SE Runtime Environment (build 12+33)
Java HotSpot(TM) 64-Bit Server VM (build 12+33, mixed mode, sharing)

Create a new directory and download Camunda Community for Apache Tomcat from the official download page:

$ mkdir ~/camunda
$ cd ~/camunda
$ wget --content-disposition 'https://camunda.org/release/camunda-bpm/tomcat/7.10/camunda-bpm-tomcat-7.10.0.tar.gz'

Extract it:

$ tar -xzf camunda-bpm-tomcat-7.10.0.tar.gz

There are a number of dependencies we have to configure before starting up the Camunda web application, depending on the chosen database platform: the datastore configuration, the database connector and the CLASSPATH environment variable. The next sections explain the required steps for MySQL Galera (using Percona XtraDB Cluster) and for MariaDB Galera Cluster.

Note that the configurations shown in this blog are based on the Apache Tomcat environment. If you are using JBoss or WildFly, the datastore configuration will be a bit different. Refer to the Camunda documentation for details.

MySQL Galera Cluster (with ProxySQL and Keepalived)

We will use ClusterControl to deploy a MySQL-based Galera cluster with Percona XtraDB Cluster. There are some Galera-related limitations mentioned in the Camunda docs surrounding Galera's multi-writer conflict handling and InnoDB isolation level. In case you are affected by these, the safest way is to use the single-writer approach, which is achievable with ProxySQL hostgroup configuration. To avoid a single point of failure, we will deploy two ProxySQL instances and tie them to a virtual IP address with Keepalived.

The following diagram illustrates our final architecture:

First, deploy a three-node Percona XtraDB Cluster 5.7. Install ClusterControl, generate an SSH key and set up passwordless SSH from the ClusterControl host to all nodes (including the ProxySQL hosts). On the ClusterControl node, do:

$ whoami
root
$ ssh-keygen -t rsa
$ for i in 192.168.0.21 192.168.0.22 192.168.0.23 192.168.0.11 192.168.0.12; do ssh-copy-id $i; done

Before we deploy our cluster, we have to modify the MySQL configuration template file that ClusterControl will use when installing the MySQL servers. The template file is named my57.cnf.galera and is located under /usr/share/cmon/templates/ on the ClusterControl host. Make sure the following lines exist under the [mysqld] section:

[mysqld]
...
transaction-isolation=READ-COMMITTED
wsrep_sync_wait=7
...

Save the file and we are good to go. The above are the requirements stated in the Camunda docs, especially regarding the supported transaction isolation level for Galera. The wsrep_sync_wait variable is set to 7 to perform cluster-wide causality checks for READ (including SELECT, SHOW, and BEGIN or START TRANSACTION), UPDATE, DELETE, INSERT, and REPLACE statements, ensuring that each statement is executed on a fully synced node. Keep in mind that a value other than 0 can result in increased latency.
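Once the cluster is up, you can double-check that the template settings took effect on any node (plain MySQL statements, nothing ClusterControl-specific):

mysql> SHOW VARIABLES LIKE 'wsrep_sync_wait';
mysql> SHOW VARIABLES LIKE 'tx_isolation';

Both should report the values configured above, 7 and READ-COMMITTED respectively.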

Go to ClusterControl -> Deploy -> MySQL Galera and specify the following details (if not mentioned, use the default value):

  • SSH User: root
  • SSH Key Path: /root/.ssh/id_rsa
  • Cluster Name: Percona XtraDB Cluster 5.7
  • Vendor: Percona
  • Version: 5.7
  • Admin/Root Password: {specify a password}
  • Add Node: 192.168.0.21 (press Enter), 192.168.0.22 (press Enter), 192.168.0.23 (press Enter)

Make sure you got all the green ticks, indicating ClusterControl is able to connect to the node passwordlessly. Click "Deploy" to start the deployment.

Create the database, MySQL user and password on one of the database nodes:

mysql> CREATE DATABASE camunda;
mysql> CREATE USER camunda@'%' IDENTIFIED BY 'passw0rd';
mysql> GRANT ALL PRIVILEGES ON camunda.* TO camunda@'%';

Or from the ClusterControl interface, you can use Manage -> Schema and Users instead:

Once the cluster is deployed, install ProxySQL by going to ClusterControl -> Manage -> Load Balancer -> ProxySQL -> Deploy ProxySQL and enter the following details:

  • Server Address: 192.168.0.11
  • Administration Password:
  • Monitor Password:
  • DB User: camunda
  • DB Password: passw0rd
  • Are you using implicit transactions?: Yes

Repeat the ProxySQL deployment step for the second ProxySQL instance, changing the Server Address value to 192.168.0.12. The virtual IP address provided by Keepalived requires at least two ProxySQL instances deployed and running. Finally, deploy the virtual IP address by going to ClusterControl -> Manage -> Load Balancer -> Keepalived, picking both ProxySQL nodes and specifying the virtual IP address and the network interface for the VIP to listen on:
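After Keepalived is deployed, you can verify which ProxySQL node currently holds the virtual IP (a standard iproute2 check; the interface name eth0 and the VIP 192.168.0.10 used later in the datastore configuration are assumptions for this particular setup):

$ ip addr show eth0 | grep 192.168.0.10

The address should appear on exactly one of the two ProxySQL hosts at any given time.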

Our database backend is now complete. Next, import the SQL files into the Galera Cluster as the created MySQL user. On the application server, go to the "sql" directory and import them into one of the Galera nodes (we pick 192.168.0.21):

$ cd ~/camunda/sql/create
$ yum install mysql #install mysql client
$ mysql -ucamunda -p -h192.168.0.21 camunda < mysql_engine_7.10.0.sql
$ mysql -ucamunda -p -h192.168.0.21 camunda < mysql_identity_7.10.0.sql

Camunda does not provide a MySQL connector for Java, since its default database is H2. On the application server, download MySQL Connector/J from the MySQL download page and copy the JAR file into the Apache Tomcat bin directory:

$ wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.15.tar.gz
$ tar -xzf mysql-connector-java-8.0.15.tar.gz
$ cd mysql-connector-java-8.0.15
$ cp mysql-connector-java-8.0.15.jar ~/camunda/server/apache-tomcat-9.0.12/bin/

Then, set the CLASSPATH environment variable to include the database connector. Open setenv.sh in a text editor:

$ vim ~/camunda/server/apache-tomcat-9.0.12/bin/setenv.sh

And add the following line:

export CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/mysql-connector-java-8.0.15.jar

Open ~/camunda/server/apache-tomcat-9.0.12/conf/server.xml and change the lines related to the datastore. Specify the virtual IP address as the MySQL host in the connection string, with ProxySQL port 6033:

<Resource name="jdbc/ProcessEngine"
              ...
              driverClassName="com.mysql.jdbc.Driver" 
              defaultTransactionIsolation="READ_COMMITTED"
              url="jdbc:mysql://192.168.0.10:6033/camunda"
              username="camunda"  
              password="passw0rd"
              ...
/>

Finally, we can start the Camunda service by executing the start-camunda.sh script:

$ cd ~/camunda
$ ./start-camunda.sh
starting camunda BPM platform on Tomcat Application Server
Using CATALINA_BASE:   ./server/apache-tomcat-9.0.12
Using CATALINA_HOME:   ./server/apache-tomcat-9.0.12
Using CATALINA_TMPDIR: ./server/apache-tomcat-9.0.12/temp
Using JRE_HOME:        /
Using CLASSPATH:       :./server/apache-tomcat-9.0.12/bin/mysql-connector-java-8.0.15.jar:./server/apache-tomcat-9.0.12/bin/bootstrap.jar:./server/apache-tomcat-9.0.12/bin/tomcat-juli.jar
Tomcat started.

Make sure the CLASSPATH shown in the output includes the path to the MySQL Connector/J JAR file. After the initialization completes, you can then access Camunda webapps on port 8080 at http://192.168.0.8:8080/camunda/. The default username is demo with password 'demo':

You can then see the captured query digests from Nodes -> ProxySQL -> Top Queries, indicating the application is interacting correctly with the Galera Cluster:

There is no read-write splitting configured for ProxySQL. Camunda uses "SET autocommit=0" on every SQL statement to initialize a transaction, and the best way for ProxySQL to handle this is to send all queries to the same backend servers of the target hostgroup. This is the safest method and gives better availability; however, all connections might end up reaching a single server, so there is no load balancing.
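If you prefer the command line to the UI, the same digest data is exposed through the ProxySQL admin interface (a standard ProxySQL stats table; the admin credentials and port 6032 are ProxySQL defaults and may differ in your installation):

$ mysql -uadmin -padmin -h127.0.0.1 -P6032
mysql> SELECT hostgroup, digest_text, count_star FROM stats_mysql_query_digest ORDER BY count_star DESC LIMIT 5;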

MariaDB Galera

MariaDB Connector/J is able to handle a variety of connection modes - failover, sequential, replication and aurora - but Camunda only supports failover and sequential. Taken from the MariaDB Connector/J documentation:

Mode: sequential (available since 1.3.0)
This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster. This mode does not support load-balancing reads on slaves. The connector will try to connect to hosts in the order in which they were declared in the connection URL, so the first available host is used for all queries. For example, let's say that the connection URL is the following:

jdbc:mariadb:sequential:host1,host2,host3/testdb

When the connector tries to connect, it will always try host1 first. If that host is not available, then it will try host2, etc. When a host fails, the connector will try to reconnect to hosts in the same order.

Mode: failover (available since 1.2.0)
This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster. This mode does not support load-balancing reads on slaves. The connector performs load-balancing for all queries by randomly picking a host from the connection URL for each connection, so queries will be load-balanced as a result of the connections getting randomly distributed across all hosts.

Using "failover" mode poses a higher potential risk of deadlock, since writes will be distributed to all backend servers almost equally. Single-writer approach is a safe way to run, which means using sequential mode should do the job pretty well. You also can skip the load-balancer tier in the architecture. Hence with MariaDB Java connector, we can deploy our architecture as simple as below:

Before we deploy our cluster, modify the MariaDB configuration template file that ClusterControl will use when installing the MariaDB servers. The template file is named my.cnf.galera and is located under /usr/share/cmon/templates/ on the ClusterControl host. Make sure the following lines exist under the [mysqld] section:

[mysqld]
...
transaction-isolation=READ-COMMITTED
wsrep_sync_wait=7
performance_schema = ON
...

Save the file and we are good to go. A bit of explanation: the above are the requirements stated in the Camunda docs, especially regarding the supported transaction isolation level for Galera. The wsrep_sync_wait variable is set to 7 to perform cluster-wide causality checks for READ (including SELECT, SHOW, and BEGIN or START TRANSACTION), UPDATE, DELETE, INSERT, and REPLACE statements, ensuring that each statement is executed on a fully synced node. Keep in mind that a value other than 0 can result in increased latency. Enabling Performance Schema is optional; it is needed only for ClusterControl's query monitoring feature.

Now we can start the cluster deployment process. Install ClusterControl, generate an SSH key and set up passwordless SSH from the ClusterControl host to all Galera nodes. On the ClusterControl node, do:

$ whoami
root
$ ssh-keygen -t rsa
$ for i in 192.168.0.41 192.168.0.42 192.168.0.43; do ssh-copy-id $i; done

Go to ClusterControl -> Deploy -> MySQL Galera and specify the following details (if not mentioned, use the default value):

  • SSH User: root
  • SSH Key Path: /root/.ssh/id_rsa
  • Cluster Name: MariaDB Galera 10.3
  • Vendor: MariaDB
  • Version: 10.3
  • Admin/Root Password: {specify a password}
  • Add Node: 192.168.0.41 (press Enter), 192.168.0.42 (press Enter), 192.168.0.43 (press Enter)

Make sure you got all the green ticks when adding nodes, indicating ClusterControl is able to connect to the node passwordlessly. Click "Deploy" to start the deployment.

Create the database, MariaDB user and password on one of the Galera nodes:

mysql> CREATE DATABASE camunda;
mysql> CREATE USER camunda@'%' IDENTIFIED BY 'passw0rd';
mysql> GRANT ALL PRIVILEGES ON camunda.* TO camunda@'%';

Alternatively, from the ClusterControl interface, you can use ClusterControl -> Manage -> Schema and Users instead:

Our database cluster deployment is now complete. Next, import the SQL files into the MariaDB cluster. On the application server, go to the "sql" directory and import them into one of the MariaDB nodes (we chose 192.168.0.41):

$ cd ~/camunda/sql/create
$ yum install mysql #install mariadb client
$ mysql -ucamunda -p -h192.168.0.41 camunda < mariadb_engine_7.10.0.sql
$ mysql -ucamunda -p -h192.168.0.41 camunda < mariadb_identity_7.10.0.sql

Camunda does not provide a MariaDB connector for Java, since its default database is H2. On the application server, download MariaDB Connector/J from the MariaDB download page and copy the JAR file into the Apache Tomcat bin directory:

$ wget https://downloads.mariadb.com/Connectors/java/connector-java-2.4.1/mariadb-java-client-2.4.1.jar
$ cp mariadb-java-client-2.4.1.jar ~/camunda/server/apache-tomcat-9.0.12/bin/

Then, set the CLASSPATH environment variable to include the database connector. Open setenv.sh in a text editor:

$ vim ~/camunda/server/apache-tomcat-9.0.12/bin/setenv.sh

And add the following line:

export CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/mariadb-java-client-2.4.1.jar

Open ~/camunda/server/apache-tomcat-9.0.12/conf/server.xml and change the lines related to the datastore. Use the sequential connection mode and list all the Galera nodes, separated by commas, in the connection string:

<Resource name="jdbc/ProcessEngine"
              ...
              driverClassName="org.mariadb.jdbc.Driver" 
              defaultTransactionIsolation="READ_COMMITTED"
              url="jdbc:mariadb:sequential://192.168.0.41:3306,192.168.0.42:3306,192.168.0.43:3306/camunda"
              username="camunda"  
              password="passw0rd"
              ...
/>

Finally, we can start the Camunda service by executing the start-camunda.sh script:

$ cd ~/camunda
$ ./start-camunda.sh
starting camunda BPM platform on Tomcat Application Server
Using CATALINA_BASE:   ./server/apache-tomcat-9.0.12
Using CATALINA_HOME:   ./server/apache-tomcat-9.0.12
Using CATALINA_TMPDIR: ./server/apache-tomcat-9.0.12/temp
Using JRE_HOME:        /
Using CLASSPATH:       :./server/apache-tomcat-9.0.12/bin/mariadb-java-client-2.4.1.jar:./server/apache-tomcat-9.0.12/bin/bootstrap.jar:./server/apache-tomcat-9.0.12/bin/tomcat-juli.jar
Tomcat started.

Make sure the CLASSPATH shown in the output includes the path to the MariaDB Java client JAR file. After the initialization completes, you can then access Camunda webapps on port 8080 at http://192.168.0.8:8080/camunda/. The default username is demo with password 'demo':

You can see the captured query digests from ClusterControl -> Query Monitor -> Top Queries, indicating the application is interacting correctly with the MariaDB cluster:

With MariaDB Connector/J, we do not need a load balancer tier, which simplifies the overall architecture. The sequential connection mode should do the trick to avoid multi-writer deadlocks, which can happen in Galera. This setup provides high availability, with each Camunda instance configured with JDBC access to the cluster of MySQL or MariaDB nodes. Galera takes care of synchronizing the data between the database instances in real time.

Angular 8|7 CRUD Tutorial: Python|Django REST API

Angular 8 is released! Read about its new features and how to update Angular 7 to v8. This tutorial is designed for developers who want to use Angular 8|7 to build front-end apps for their back-end REST APIs.

Note: Check out how to build a developer's portfolio web application with Angular 7.1, Firebase and Firestore from these series: Angular 7|6 Tutorial Course: CLI, Components, Routing & Bootstrap 4; Angular 7|6 Tutorial Course: Angular NgModules (Feature and Root Modules); Angular 7|6 Tutorial Course: Nested Router-Outlet, Child Routes & forChild(); Angular 7|6 Tutorial Course: Authentication with Firebase (Email & Password); Angular 7|6 Tutorial Course: Securing the UI with Router Guards and UrlTree Parsed Routes.

You will see by example how to build a CRUD REST API with Angular and Python. The new features of Angular 7 include better performance, powerful new CLI additions and a new way to inject services. Throughout this tutorial, designed for beginners, you'll learn Angular by example by building a full-stack CRUD (Create, Read, Update and Delete) web application using the latest version of the most popular framework and platform for building mobile and desktop client-side applications (also called SPAs or Single Page Applications), created and used internally by Google. On the back-end we'll use Python with Django, the most popular pythonic web framework designed for perfectionists with deadlines.

In a nutshell, you'll learn to generate Angular 8 apps, generate components and services, and add routing. You'll also learn to use various features such as HttpClient for sending AJAX requests and HTTP calls, and subscribing to RxJS 6 Observables, etc. By the end of this Angular tutorial, you'll have learned, by building a real-world example application:

  • How to install the latest version of Angular CLI
  • How to use Angular CLI to generate a new Angular 8 project
  • How to use Angular to build a simple CRM application
  • What a component and component-based architecture are
  • How to use RxJS 6 Observables and operators (map() and filter() etc.)
  • How to create Angular components
  • How to add component routing and navigation
  • How to use HttpClient to consume a REST API, etc.

Prerequisites

You will need the following prerequisites in order to follow this tutorial:

  • A Python development environment. We use an Ubuntu system with Python 3.7 and pip installed, but you can follow these instructions on a different system as long as you have Python 3 and pip installed. Also, the commands shown here are bash commands, which are available on Linux-based systems and macOS; if you use Windows CMD or PowerShell, make sure to use the equivalent commands or install bash for Windows.
  • Node.js and npm installed on your system. They are required by Angular CLI.
  • The MySQL database management system installed on your system, since we'll be using a MySQL database in our application. If you don't want to deal with MySQL installation, you can also use SQLite (a file database that doesn't require any installation), which is configured by default in your Django project.
  • Familiarity with Python, Django and JavaScript (TypeScript).

If you meet these requirements, you are good to go!

Creating a MySQL Database

We'll be using a MySQL database. In your terminal, invoke the mysql client using the following command:

$ mysql -u root -p

Enter your MySQL password and hit Enter.
Next, run the following SQL statement to create a database:

mysql> create database crmdb;

Creating a Virtual Environment

Let's start our tutorial by creating a virtual environment. Open a new terminal, navigate to a working folder and run the following commands:

$ cd ~/demos
$ python3 -m venv .env

Next, activate the virtual environment using the following command:

$ source .env/bin/activate

Installing Django and Django REST Framework

Now that you have created and activated your virtual environment, you can install your Python packages using pip. In the terminal where you have activated the virtual environment, run the following commands to install the necessary packages:

$ pip install django
$ pip install djangorestframework

You will also need to install the MySQL client for Python using pip:

$ pip install mysqlclient

Creating a Django Project

Now, let's proceed to creating our django project. In your terminal, run the following command:

$ django-admin startproject simplecrm

Next, open the settings.py file and update the database setting to point to our crmdb database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'crmdb',
        'USER': 'root',
        'PASSWORD': 'YOUR_DB_PASSWORD',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

Next, add rest_framework to the INSTALLED_APPS array:

INSTALLED_APPS = [
    # [...]
    'rest_framework'
]

Finally, migrate the database and run the development server using the following commands:

$ cd simplecrm
$ python manage.py migrate
$ python manage.py runserver

You will be able to access your web application from the 127.0.0.1:8000 address.

Create an Admin User

Let's create an admin user using the following command:

$ python manage.py createsuperuser

Creating a Django Application

Next, let's create a Django application for encapsulating our core CRM functionality. In your terminal, run the following command:

$ python manage.py startapp crmapp

Next, you need to add it in the settings.py file:

INSTALLED_APPS = [
    # ...
    'rest_framework',
    'crmapp'
]

Creating the Database Models

Let's now proceed to create the database models for our application. We are going to create the following models:

  • Contact
  • Account
  • Activity
  • ContactStatus
  • ContactSource
  • ActivityStatus

We have three main models, which are Contact, Account and Activity. The last three models are simply lookup tables (they can be replaced by an enum).
Open the crmapp/models.py file and add the following code:

from django.db import models
from django.contrib.auth.models import User

INDCHOICES = (
    ('FINANCE', 'FINANCE'),
    ('HEALTHCARE', 'HEALTHCARE'),
    ('INSURANCE', 'INSURANCE'),
    ('LEGAL', 'LEGAL'),
    ('MANUFACTURING', 'MANUFACTURING'),
    ('PUBLISHING', 'PUBLISHING'),
    ('REAL ESTATE', 'REAL ESTATE'),
    ('SOFTWARE', 'SOFTWARE'),
)

class Account(models.Model):
    name = models.CharField("Name of Account", "Name", max_length=64)
    email = models.EmailField(blank=True, null=True)
    phone = models.CharField(max_length=20, blank=True, null=True)
    industry = models.CharField("Industry Type", max_length=255, choices=INDCHOICES, blank=True, null=True)
    website = models.URLField("Website", blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='account_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.name

class ContactSource(models.Model):
    status = models.CharField("Contact Source", max_length=20)

    def __str__(self):
        return self.status

class ContactStatus(models.Model):
    status = models.CharField("Contact Status", max_length=20)

    def __str__(self):
        return self.status

class Contact(models.Model):
    first_name = models.CharField("First name", max_length=255, blank=True, null=True)
    last_name = models.CharField("Last name", max_length=255, blank=True, null=True)
    account = models.ForeignKey(Account, related_name='lead_account_contacts', on_delete=models.CASCADE, blank=True, null=True)
    email = models.EmailField()
    phone = models.CharField(max_length=20, blank=True, null=True)
    address = models.TextField(blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='contact_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.first_name

class ActivityStatus(models.Model):
    status = models.CharField("Activity Status", max_length=20)

    def __str__(self):
        return self.status

class Activity(models.Model):
    description = models.TextField(blank=True, null=True)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    contact = models.ForeignKey(Contact, on_delete=models.CASCADE, blank=True, null=True)

    def __str__(self):
        return self.description

Creating Model Serializers

After creating the models, we need to create the serializers.
In the crmapp folder, create a serializers.py file:

$ cd crmapp
$ touch serializers.py

Next, open the file and add the following imports:

from rest_framework import serializers
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus

Next, add a serializer class for each model:

class AccountSerializer(serializers.ModelSerializer):
    class Meta:
        model = Account
        fields = "__all__"

class ActivitySerializer(serializers.ModelSerializer):
    class Meta:
        model = Activity
        fields = "__all__"

class ActivityStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ActivityStatus
        fields = "__all__"

class ContactSerializer(serializers.ModelSerializer):
    class Meta:
        model = Contact
        fields = "__all__"

class ContactSourceSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactSource
        fields = "__all__"

class ContactStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactStatus
        fields = "__all__"

Creating API Views

After creating the model serializers, let's now create the API views. Open the crmapp/views.py file and add the following imports:

from rest_framework import generics
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus
from .serializers import AccountSerializer, ActivitySerializer, ActivityStatusSerializer, ContactSerializer, ContactSourceSerializer, ContactStatusSerializer

Next, add the following views (note that each view must use the serializer that matches its model):

class AccountAPIView(generics.ListCreateAPIView):
    queryset = Account.objects.all()
    serializer_class = AccountSerializer

class ActivityAPIView(generics.ListCreateAPIView):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

class ActivityStatusAPIView(generics.ListCreateAPIView):
    queryset = ActivityStatus.objects.all()
    serializer_class = ActivityStatusSerializer

class ContactAPIView(generics.ListCreateAPIView):
    queryset = Contact.objects.all()
    serializer_class = ContactSerializer

class ContactStatusAPIView(generics.ListCreateAPIView):
    queryset = ContactStatus.objects.all()
    serializer_class = ContactStatusSerializer

class ContactSourceAPIView(generics.ListCreateAPIView):
    queryset = ContactSource.objects.all()
    serializer_class = ContactSourceSerializer

Creating API URLs

Let's now create the API URLs to access our API views. Open the urls.py file and add the following imports:

from django.contrib import admin
from django.urls import path
from crmapp import views

Next, add the following content:

urlpatterns = [
    path('admin/', admin.site.urls),
    path(r'accounts', views.AccountAPIView.as_view(), name='account-list'),
    path(r'contacts', views.ContactAPIView.as_view(), name='contact-list'),
    path(r'activities', views.ActivityAPIView.as_view(), name='activity-list'),
    path(r'activitystatuses', views.ActivityStatusAPIView.as_view(), name='activity-status-list'),
    path(r'contactsources', views.ContactSourceAPIView.as_view(), name='contact-source-list'),
    path(r'contactstatuses', views.ContactStatusAPIView.as_view(), name='contact-status-list')
]

Enabling CORS

Since we'll be using two development servers running on two different ports (considered two separate origins), we need to enable CORS (Cross-Origin Resource Sharing) in our Django application.
Start by installing django-cors-headers using pip:

$ pip install django-cors-headers

Next, you need to add it to your project's settings.py file:

INSTALLED_APPS = (
    ## [...]
    'corsheaders'
)

Next, you need to add the corsheaders.middleware.CorsMiddleware middleware to the middleware classes in settings.py:

MIDDLEWARE = (
    'corsheaders.middleware.CorsMiddleware',
    # [...]
)

You can then enable CORS for all domains by adding the following setting:

CORS_ORIGIN_ALLOW_ALL = True

You can find more configuration options in the docs.

The example Angular application we'll be building is the front-end for the CRM RESTful API that will allow you to create accounts, leads, opportunities and contacts. It's a perfect example of a CRUD (Create, Read, Update and Delete) application built as an SPA (Single Page Application). The example application is a work in progress, so we'll be building it through a series of tutorials, and it will be updated to contain advanced features such as RxJS 6 and JWT authentication.

We'll also use Bootstrap 4 and Angular Material for building and styling the UI components. You need either Bootstrap 4 or Angular Material for styling, so depending on your choice you can follow separate tutorials:

  • Building the UI with Angular Material
  • Building the UI with Bootstrap 4

Installing the Angular CLI 8

Make sure you have Node.js installed, then run the following command in your terminal to install Angular CLI 8:

$ npm install @angular/cli@next --global

At the time of this writing, @angular/cli v8.0.0-beta.11 is installed. You can check the installed version by running the following command:

$ ng version

Now you're ready to create a project using Angular CLI 8. Simply run the following command in your terminal:

$ ng new ngsimplecrm

The CLI will automatically generate a bunch of files common to most Angular projects and install the required dependencies for your project. The CLI will prompt you with "Would you like to add Angular routing? (y/N)": type y. For "Which stylesheet format would you like to use?": choose CSS and hit Enter.

Next, you can serve your application locally using the following commands:

$ cd ./ngsimplecrm
$ ng serve

Your application will be running at http://localhost:4200. This is a screenshot of the home page of the application:

Setting up Angular Material

We'll be using Material Design to style our CRM UI, so we need to add Angular Material to our project. Fortunately, this is only one command away. Open a new terminal and run the following commands:

$ cd ./ngsimplecrm
$ ng add @angular/material

The command will ask you to "Choose a prebuilt theme name, or 'custom' for a custom theme" (Indigo/Pink, Deep Purple/Amber, Pink/Blue Grey, Purple/Green): choose Deep Purple/Amber or whatever theme you prefer. For "Set up HammerJS for gesture recognition? (Y/n)": choose the default answer, which is Yes. For "Set up browser animations for Angular Material? (Y/n)": also choose Yes.

That's it: Angular Material (v7.3.7 as of this writing) is configured in your application. After that, you need to import the Angular Material components that you want to use in your project.
Open the src/app/app.module.ts file and add the following changes:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { MatInputModule, MatButtonModule, MatCardModule, MatFormFieldModule, MatTableModule } from '@angular/material';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    BrowserAnimationsModule,
    MatTableModule,
    MatCardModule,
    MatInputModule,
    MatFormFieldModule,
    MatButtonModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Components in Angular

Now, what's a component? A component is a TypeScript class with an HTML template and an optional set of CSS styles that control a part of the screen. Components are the most important concept in Angular. An Angular application is basically a tree of components with a root component (the famous AppComponent). The root component is the one contained in the bootstrap array in the main NgModule module in the app.module.ts file.

One important aspect of components is re-usability. A component can be re-used throughout the application and even in other applications. Common and repeatable code that performs a certain task can be encapsulated into a re-usable component that can be called whenever we need the functionality it provides.

"Each bootstrapped component is the base of its own tree of components. Inserting a bootstrapped component usually triggers a cascade of component creations that fill out that tree." (source)

Component-Based Architecture

An Angular application is made of several components forming a tree structure with parent and child components. A component is an independent block of a big system (web application) that communicates with the other building blocks (components) of the system using inputs and outputs. A component has an associated view, data and behavior, and may have parent and child components.

Components allow maximum re-usability, easy testing, maintenance and separation of concerns.

Let's now see this practically. Head over to your Angular application project folder and open the src/app folder. You will find the following files:

  • app.component.css: the CSS file for the component
  • app.component.html: the HTML view for the component
  • app.component.spec.ts: the unit tests or spec file for the component
  • app.component.ts: the component code (data and behavior)
  • app.module.ts: the application main module

Except for the last file, which contains the declaration of the application's main (root) module, all these files are used to create a component. That component is the AppComponent: the root component of our application. All other components we are going to create will be direct or indirect children of the root component.

Demystifying the AppComponent (The Root Component of Angular Applications)

Go ahead and open the src/app/app.component.ts file and let's understand the code behind the main/root component of the application. First, this is the code:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';
}

We first import the Component decorator from @angular/core, then we use it to decorate the TypeScript class AppComponent.
The Component decorator takes an object with many parameters, such as:

  • selector: specifies the tag that can be used to call this component in HTML templates, just like standard HTML tags
  • templateUrl: indicates the path of the HTML template that will be used to display this component (you can also use the template parameter to include the template inline as a string)
  • styleUrls: specifies an array of URLs for CSS style-sheets for the component

The export keyword is used to export the component so that it can be imported from other components and modules in the application.

The title variable is a member variable that holds the string 'app'. There is nothing special about this variable, and it's not part of the canonical definition of an Angular component.

Now let's see the corresponding template for this component. If you open src/app/app.component.html, this is what you'll find:

<div style="text-align:center">
  <h1>
    Welcome to {{ title }}!
  </h1>
  <img width="300" alt="Angular Logo" src="data:image/svg+xml;....">
</div>
<h2>Here are some links to help you start: </h2>
<ul>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://angular.io/tutorial">Tour of Heroes</a></h2>
  </li>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://github.com/angular/angular-cli/wiki">CLI Documentation</a></h2>
  </li>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://blog.angular.io/">Angular blog</a></h2>
  </li>
</ul>

The template is a normal HTML file (almost all HTML tags are valid inside Angular templates, except for some tags such as <script>, <html> and <body>) with the exception that it can contain template variables (in this case the title variable) or expressions ({{...}}) that can be used to insert values into the DOM dynamically. This is called interpolation or data binding. You can find more information about templates in the docs.

You can also use other components directly inside Angular templates (via the selector property), just like normal HTML.

If you are familiar with the MVC (Model View Controller) pattern, the component class plays the role of the Controller and the HTML template plays the role of the View.

Angular 8 Components by Example

After getting the theory behind Angular components, let's now create the components for our simple CRM application.
Our REST API, built with Django, exposes these endpoints:

  • /accounts: create or read a paginated list of accounts
  • /accounts/<id>: read, update or delete an account
  • /contacts: create or read a paginated list of contacts
  • /contacts/<id>: read, update or delete a contact
  • /api/activities: create or read a paginated list of activities
  • /api/activities/<id>: read, update or delete an activity

Before adding routing to our application, we first need to create the application's components, so based on the exposed REST API architecture we can initially divide our application into these components:

  • AccountListComponent: displays and controls a tabular list of accounts
  • AccountCreateComponent: displays and controls a form for creating or updating accounts
  • ContactListComponent: displays a table of contacts
  • ContactCreateComponent: displays a form to create or update a contact
  • ActivityListComponent: displays a table of activities
  • ActivityCreateComponent: displays a form to create or update an activity

Let's use the Angular CLI to create the components:

$ ng generate component AccountList
$ ng generate component AccountCreate
$ ng generate component ContactList
$ ng generate component ContactCreate
$ ng generate component ActivityList
$ ng generate component ActivityCreate

This is the output of the first command:

CREATE src/app/account-list/account-list.component.css (0 bytes)
CREATE src/app/account-list/account-list.component.html (31 bytes)
CREATE src/app/account-list/account-list.component.spec.ts (664 bytes)
CREATE src/app/account-list/account-list.component.ts (292 bytes)
UPDATE src/app/app.module.ts (418 bytes)

You can see that the command generates all the files needed to define a component and also updates src/app/app.module.ts. If you open src/app/app.module.ts after running all the commands, you can see that all the components are automatically imported and added to the AppModule declarations array:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { AccountListComponent } from './account-list/account-list.component';
import { AccountCreateComponent } from './account-create/account-create.component';
import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactCreateComponent } from './contact-create/contact-create.component';
import { ActivityListComponent } from './activity-list/activity-list.component';
import { ActivityCreateComponent } from './activity-create/activity-create.component';

@NgModule({
  declarations: [
    AppComponent,
    AccountListComponent,
    AccountCreateComponent,
    ContactListComponent,
    ContactCreateComponent,
    ActivityListComponent,
    ActivityCreateComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Note: If you are creating components manually, you need to make sure to include them manually so they can be recognized as part of the module.

Adding Angular Routing

Now, let's add routing and navigation links to our application. This is the initial content of src/app/app-routing.module.ts:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

The routes array will contain all the routes of the application. Now that we have created the components, let's see how to add routes to this array.
First, add the following imports:

import { AccountListComponent } from './account-list/account-list.component';
import { AccountCreateComponent } from './account-create/account-create.component';
import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactCreateComponent } from './contact-create/contact-create.component';
import { ActivityListComponent } from './activity-list/activity-list.component';
import { ActivityCreateComponent } from './activity-create/activity-create.component';

For now, we want to redirect the visitor to the /contacts path when the home URL is visited, so the first path we'll add is:

{ path: '', redirectTo: 'contacts', pathMatch: 'full' },

The pathMatch property specifies the matching strategy. The full value means that we want to fully match the path. Next, let's add the other paths:

{ path: '', redirectTo: 'contacts', pathMatch: 'full' },
{ path: 'accounts', component: AccountListComponent },
{ path: 'create-account', component: AccountCreateComponent },
{ path: 'contacts', component: ContactListComponent },
{ path: 'create-contact', component: ContactCreateComponent },
{ path: 'activities', component: ActivityListComponent },
{ path: 'create-activity', component: ActivityCreateComponent }

Finally, open the src/app/app.component.html file and add the navigation links, then the router outlet:

<a [routerLink]="'/accounts'"> Accounts </a>
<a [routerLink]="'/create-account'"> Create Account </a>
<a [routerLink]="'/contacts'"> Contacts </a>
<a [routerLink]="'/create-contact'"> Create Contact </a>
<a [routerLink]="'/activities'"> Activities </a>
<a [routerLink]="'/create-activity'"> Create Activity </a>
<div>
  <router-outlet></router-outlet>
</div>

An Example for Consuming the REST API Using Angular 8 HttpClient

Now that we've created the different components and added routing and navigation, let's see an example of how to use Angular 8's HttpClient to consume the RESTful API back-end.

First, you need to add the HttpClientModule module to the imports array of the main application module:

// [...]
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
    // [...]
  ],
  imports: [
    // [...]
    HttpClientModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Create Angular 8 Services

A service is a global class that can be injected into any component. It's used to encapsulate code that can be common between multiple components in one place, instead of repeating it throughout various components.

Now, let's create the services that encapsulate all the code needed for interacting with the REST API. Using Angular CLI 8, run the following commands:

$ ng generate service services/contact
$ ng generate service services/activity
$ ng generate service services/account

Note: Since we have multiple services, we put them in a services folder.

Open src/app/services/contact.service.ts, then import and inject the HttpClient class:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {

  constructor(private httpClient: HttpClient) {}
}

Note: You will need to do the same for the other services.

Angular 8 provides a way to register services/providers directly in the @Injectable() decorator by using the new providedIn attribute. This attribute accepts any module of your application, or 'root' for the main app module. Now you don't have to include your service in the providers array of your module.
Getting Contacts/Sending HTTP GET Request Example

Let's start with the contacts API endpoint. First, we'll add a method to consume this endpoint in our global API service; next, we'll inject the API service and call the method from the corresponding component class (ContactListComponent); finally, we'll display the result (the list of contacts) in the component template.

Open src/app/services/contact.service.ts and add the following method:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {

  API_URL = 'http://localhost:8000';

  constructor(private httpClient: HttpClient) { }

  getFirstPage() {
    return this.httpClient.get(`${this.API_URL}/contacts`);
  }
}

Next, open src/app/contact-list/contact-list.component.ts, inject the ContactService, then call the getFirstPage() method:

import { Component, OnInit } from '@angular/core';
import { ContactService } from '../services/contact.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {

  displayedColumns: string[] = ['id', 'first_name', 'last_name', 'email', 'phone', 'account', 'address', 'description', 'createdBy', 'createdAt', 'isActive', 'actions'];
  dataSource = [];

  constructor(private contactService: ContactService) { }

  ngOnInit() {
    this.fetchContacts();
  }

  fetchContacts() {
    this.contactService.getFirstPage().subscribe((data: Array<object>) => {
      this.dataSource = data;
      console.log(data);
    });
  }
}

Conclusion

Throughout this Angular 8 tutorial for beginners, we've seen, by building a simple real-world CRUD example, how to use different Angular concepts to create a simple full-stack CRUD application with Angular and Django.

Django 2 Tutorial & Example: Build a CRUD REST API for A Simple CRM

In this tutorial series, you'll learn about Django 2 by creating a CRUD example application with a database, admin access, and REST API views. We'll be using MySQL as the database system.

Throughout this beginner's tutorial for Django 2.0, we are going to learn to build web applications with Python and Django. This tutorial assumes no prior experience with Django, so we'll be covering the basic concepts and elements of the Django framework by emphasizing essential theory with practice. Basically, we are going to learn the fundamental concepts of Django while building a simple CRM web application.

This tutorial doesn't only cover the fundamental basics of Django but also advanced concepts, such as how to use and integrate Django with modern front-end frameworks like Angular 2+, Vue and React. You'll learn about CRUD, the database ORM, and how to create API views and URLs.

What's Django?

Django is an open source Python-based web framework for building web applications quickly. It's a pragmatic framework designed for developers working on projects with strict deadlines. It's perfect for quickly creating prototypes and then continuing to build them after client approval. It follows a Model View Controller (MVC) design pattern.

Django uses the Python language, a general purpose, powerful and feature-rich programming language.

What's MVC?

MVC is a software architectural design pattern which encourages the separation of concerns and effective collaboration between designers and developers working on the same project. It basically divides or separates your app into three parts:

Model: responsible for data storage and management
View: responsible for representing and rendering the user interface or view
Controller: responsible for handling the logic that controls the user interface and works with the data model

Thanks to MVC, you as a developer can work on the model and controller parts without being concerned with the user interface (left to designers), so if anything changes on the designers' side of the user interface, you can rest assured that you will not be affected.

Introduction to Python

Python is a general purpose programming language that's suitable for developing all kinds of applications, including web applications. Python is known for its clean syntax and a large standard library which contains a wide range of modules that can be used by developers to build their applications instead of reinventing the wheel.

Here is a list of features and characteristics of Python:

Python is an object-oriented language, just like Java or C++. Also like Java, Python is an interpreted language that runs on top of its own virtual machine, which makes it a portable language that can run on every machine and operating system, such as Linux, Windows and MAC.

Python is especially popular among the scientific community, where it's used for creating numeric applications.

Python is also known for the great performance of its runtime environment, which makes it a good alternative to PHP for developing web applications.

For more information you can head to http://python.org/ where you can also download Python binaries for supported systems. For Linux and MAC, Python is included by default, so you don't have to install it. For Windows, just head over to the official Python website and grab your installer. Just like any normal Windows program, the installation process is easy and straightforward.

Why Use Django?

Due to its popularity and large community, Python has numerous web frameworks, among them Django.
So what makes Django the right choice for you or your next project?

Django is a batteries-included framework

Django includes a set of batteries that can be used to solve common web problems without reinventing the wheel, such as: the sites framework, the auth system, forms generation, an ORM for abstracting database systems, a very powerful templating engine, a caching system, an RSS generation framework, etc.

The Django ORM

Django has a powerful ORM (Object Relational Mapper) which allows developers to use Python OOP classes and methods instead of SQL tables and queries to work with SQL-based databases. Thanks to the Django ORM, developers can work with any database system, such as MySQL or PostgreSQL, without knowing anything about SQL. At the same time, the ORM doesn't get in the way: you can write custom SQL anytime you want, especially if you need to optimize the queries against your database server for increased performance.

Support for Internationalization: i18n

You can use Django to write web applications in languages other than English with a lot of ease thanks to its powerful support for internationalization, and you can also create multilingual websites.

The Admin Interface

Django is a very suitable framework for quickly building prototypes thanks to its auto-generated admin interface. With a few lines of code you can generate a full-fledged admin application that can be used to do all sorts of CRUD operations against the database models you have registered with the admin module.

Community and Extensive Documentation

Django has a great community that has contributed all sorts of awesome things to Django, from tutorials and books to reusable open source packages that extend the core framework with solutions for even more web development problems, without reinventing the wheel or wasting time implementing what other developers have already created. Django also has some of the most extensive and useful documentation on the web, which can get you up and running with Django in no time.

As a conclusion: if you are a Python developer looking for a web framework full of features that makes building web applications fun and easy, and that has everything you can expect from a modern framework, Django is the right choice for you.

Python is a portable programming language that can be used anywhere its runtime environment is installed. Django is a Python framework which can be installed on any system which supports the Python language. In this tutorial part, we are going to see how to install Python and Django on the major available operating systems, i.e. Windows, Linux and MAC.

Installing Python

Depending on your operating system, you may or may not need to install Python. On Linux and MAC OS, Python is included by default. You may only need to update it if the installed version is outdated.

Installing Python On Windows

Python is not installed by default on Windows, so you'll need to grab the official installer from the official Python website at http://www.python.org/download/. Next, launch the installer and follow the wizard to install Python just like any other Windows program.

Also make sure to add the Python root folder to the system path environment variable so you can execute the Python executable from any directory using the command prompt.

Next, open a command prompt and type python.
You should be presented with the Python Interactive Shell printing the current version of Python and prompting you to enter your Python commands (Python is an interpreted language).

Installing Python on Linux

If you are using a Linux system, there is a great chance that you already have Python installed, but you may have an old version. In this case you can very easily update it via your terminal, depending on your Linux distribution.

For Debian-based distributions, like Ubuntu, you can use the apt package manager:

sudo apt-get install python

This will update your Python version to the latest available version. For other Linux distributions you should look for the equivalent commands to install or update Python, which is not a daunting task: if you already use a package manager to install packages for your system, you should follow the same process to install or update Python.

Installing Python on MAC OS

Just like Linux, Python is included by default on MAC, but in case you have an old version you should be able to update it by going to http://www.python.org/download/mac/ and grabbing a Python installer for MAC.

Now, if you managed to install or update Python on your own system, or in case you have verified that you already have an updated version of Python installed, let's continue by installing Django.

Installing PIP

PIP is a Python package manager which is used to install Python packages from the Python Package Index. It's more advanced than easy_install, the default Python package manager that's installed by default when you install Python. You should use PIP instead of easy_install whenever you can, but for installing PIP itself you should use easy_install. So let's first install PIP.

Open your terminal and enter:

$ sudo easy_install pip

You can now install Django on your system using pip:

$ sudo pip install django

While you can do this to install Django globally on your system, it's strongly not recommended. Instead, you should use a virtual environment to install packages.

Creating a MySQL Database

We'll be using a MySQL database. In your terminal, invoke the mysql client using the following command:

$ mysql -u root -p

Enter your MySQL password and hit Enter. Next, run the following SQL statement to create a database:

mysql> create database crmdb;

Creating a Virtual Environment

Let's start our tutorial by creating a virtual environment. Open a new terminal, navigate to a working folder and run the following commands:

$ cd ~/demos
$ python3 -m venv .env

Next, activate the virtual environment using the following command:

$ source .env/bin/activate

Installing Django and Django REST Framework

Now that you have created and activated your virtual environment, you can install your Python packages using pip. In the terminal where you have activated the virtual environment, run the following commands to install the necessary packages:

$ pip install django
$ pip install djangorestframework

You will also need to install the MySQL client for Python using pip:

$ pip install mysqlclient

Creating a Django Project

Now, let's proceed to creating our django project. In your terminal, run the following command:

$ django-admin startproject simplecrm

This command will take care of creating a bunch of necessary files for the project. Executing the tree command in the root of our created project will show us the files that were created:

.
├── simplecrm
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

__init__.py is the Python way to mark the containing folder as a Python package, which means a Django project is a Python package.

settings.py is the project configuration file. You can use this file to specify every configuration option of your project, such as the installed apps, site language and database options, etc.

urls.py is a special Django file which maps all your web app urls to the views.

wsgi.py is necessary for starting a wsgi application server.

manage.py is another Django utility to manage the project, including creating the database and starting the local development server.

These are the basic files that you will find in every Django project. Now the next step is to set up and create the database. Open the settings.py file and update the database setting to point to our crmdb database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'crmdb',
        'USER': 'root',
        'PASSWORD': 'YOUR_DB_PASSWORD',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

Next, add rest_framework to the INSTALLED_APPS array:

INSTALLED_APPS = [
    # [...]
    'rest_framework'
]

Finally, migrate the database using the following commands:

$ cd simplecrm
$ python manage.py migrate

Once the development server is running (see below), your application will be available from the 127.0.0.1:8000 address.

Create an Admin User

Let's create an admin user using the following command:

$ python manage.py createsuperuser

Creating a Django Application

Next, let's create a Django application for encapsulating our core CRM functionality. In your terminal, run the following command:

$ python manage.py startapp crmapp

Next, you need to add it to the settings.py file:

INSTALLED_APPS = [
    # ...
    'rest_framework',
    'crmapp'
]

Creating the Database Models

Let's now proceed to create the database models for our application. We are going to create the following models:

Contact
Account
Activity
ContactStatus
ContactSource
ActivityStatus

We have three main models, which are Contact, Account and Activity. The last three models are simply lookup tables (they can be replaced by an enum).
Open the crmapp/models.py file and add the following code:

from django.db import models
from django.contrib.auth.models import User

INDCHOICES = (
    ('FINANCE', 'FINANCE'),
    ('HEALTHCARE', 'HEALTHCARE'),
    ('INSURANCE', 'INSURANCE'),
    ('LEGAL', 'LEGAL'),
    ('MANUFACTURING', 'MANUFACTURING'),
    ('PUBLISHING', 'PUBLISHING'),
    ('REAL ESTATE', 'REAL ESTATE'),
    ('SOFTWARE', 'SOFTWARE'),
)

class Account(models.Model):
    name = models.CharField("Name of Account", "Name", max_length=64)
    email = models.EmailField(blank=True, null=True)
    phone = models.CharField(max_length=20, blank=True, null=True)
    industry = models.CharField("Industry Type", max_length=255, choices=INDCHOICES, blank=True, null=True)
    website = models.URLField("Website", blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='account_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.name

class ContactSource(models.Model):
    status = models.CharField("Contact Source", max_length=20)

    def __str__(self):
        return self.status

class ContactStatus(models.Model):
    status = models.CharField("Contact Status", max_length=20)

    def __str__(self):
        return self.status

class Contact(models.Model):
    first_name = models.CharField("First name", max_length=255, blank=True, null=True)
    last_name = models.CharField("Last name", max_length=255, blank=True, null=True)
    account = models.ForeignKey(Account, related_name='lead_account_contacts', on_delete=models.CASCADE, blank=True, null=True)
    email = models.EmailField()
    phone = models.CharField(max_length=20, blank=True, null=True)
    address = models.TextField(blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='contact_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.first_name

class ActivityStatus(models.Model):
    status = models.CharField("Activity Status", max_length=20)

    def __str__(self):
        return self.status

class Activity(models.Model):
    description = models.TextField(blank=True, null=True)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    contact = models.ForeignKey(Contact, on_delete=models.CASCADE, blank=True, null=True)

    def __str__(self):
        return self.description
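
Since we created an admin user earlier, it's also convenient to register these models with the auto-generated admin interface so you can browse the data at the /admin URL. This step is not part of the original walkthrough; a minimal sketch of a crmapp/admin.py file would be:

from django.contrib import admin

from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus

# Register the CRM models so they appear in the Django admin.
admin.site.register(Account)
admin.site.register(Activity)
admin.site.register(ActivityStatus)
admin.site.register(Contact)
admin.site.register(ContactSource)
admin.site.register(ContactStatus)
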
Creating Model Serializers

After creating the models, we need to create the serializers. In the crmapp folder, create a serializers.py file:

$ cd crmapp
$ touch serializers.py

Next, open the file and add the following imports:

from rest_framework import serializers
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus

Next, add a serializer class for each model:

class AccountSerializer(serializers.ModelSerializer):
    class Meta:
        model = Account
        fields = "__all__"

class ActivitySerializer(serializers.ModelSerializer):
    class Meta:
        model = Activity
        fields = "__all__"

class ActivityStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ActivityStatus
        fields = "__all__"

class ContactSerializer(serializers.ModelSerializer):
    class Meta:
        model = Contact
        fields = "__all__"

class ContactSourceSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactSource
        fields = "__all__"

class ContactStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactStatus
        fields = "__all__"

Creating API Views

After creating the model serializers, let's now create the API views. Open the crmapp/views.py file and add the following imports:

from rest_framework import generics
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus
from .serializers import AccountSerializer, ActivitySerializer, ActivityStatusSerializer, ContactSerializer, ContactSourceSerializer, ContactStatusSerializer

Next, add the following views:

class AccountAPIView(generics.ListCreateAPIView):
    queryset = Account.objects.all()
    serializer_class = AccountSerializer

class ActivityAPIView(generics.ListCreateAPIView):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

class ActivityStatusAPIView(generics.ListCreateAPIView):
    queryset = ActivityStatus.objects.all()
    serializer_class = ActivityStatusSerializer

class ContactAPIView(generics.ListCreateAPIView):
    queryset = Contact.objects.all()
    serializer_class = ContactSerializer

class ContactStatusAPIView(generics.ListCreateAPIView):
    queryset = ContactStatus.objects.all()
    serializer_class = ContactStatusSerializer

class ContactSourceAPIView(generics.ListCreateAPIView):
    queryset = ContactSource.objects.all()
    serializer_class = ContactSourceSerializer
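
These generic views only cover the list and create operations. To also support retrieving, updating and deleting a single record, you can use DRF's RetrieveUpdateDestroyAPIView. The tutorial doesn't include these views, so treat the following as a sketch:

class AccountDetailAPIView(generics.RetrieveUpdateDestroyAPIView):
    queryset = Account.objects.all()
    serializer_class = AccountSerializer

class ContactDetailAPIView(generics.RetrieveUpdateDestroyAPIView):
    queryset = Contact.objects.all()
    serializer_class = ContactSerializer

class ActivityDetailAPIView(generics.RetrieveUpdateDestroyAPIView):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

Matching URL patterns, such as path(r'accounts/<int:pk>', views.AccountDetailAPIView.as_view()), can then be added to the urlpatterns created in the next section.
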
After creating the models and views, you need to create the database migrations using the following command:

$ python manage.py makemigrations

Next, you need to migrate your database using the following command:

$ python manage.py migrate

Creating API URLs

Let's now create the API URLs to access our API views. Open the urls.py file and add the following imports:

from django.contrib import admin
from django.urls import path
from crmapp import views

Next, add the following content:

urlpatterns = [
    path('admin/', admin.site.urls),
    path(r'accounts', views.AccountAPIView.as_view(), name='account-list'),
    path(r'contacts', views.ContactAPIView.as_view(), name='contact-list'),
    path(r'activities', views.ActivityAPIView.as_view(), name='activity-list'),
    path(r'activitystatuses', views.ActivityStatusAPIView.as_view(), name='activity-status-list'),
    path(r'contactsources', views.ContactSourceAPIView.as_view(), name='contact-source-list'),
    path(r'contactstatuses', views.ContactStatusAPIView.as_view(), name='contact-status-list')
]

Starting the local development server

Django has a local development server that can be used while developing your project. It's a simple and primitive server which is suitable only for development, not for production. To start the local server for your project, simply issue the following command inside your project's root directory:

$ python manage.py runserver

Next, navigate to the http://localhost:8000/ address with a web browser. You should see a web page with the message: It worked!

Conclusion

To conclude this tutorial, let's summarize what we have done: we have created a new Django project, created and migrated a MySQL database, built a simple CRM REST API with Django REST framework and started a local development server.

Create PHP 7 MySQL Database Tables Using MySQLi & PDO

In this tutorial we'll learn how to use MySQLi and PDO to create MySQL database tables in PHP 7.

You can use the CREATE DATABASE SQL instruction to create a database in your MySQL client, so let's start by creating a database. Open a new terminal and run the following command:

$ mysql -u root -p

Enter your MySQL instance password when prompted.

Note: The official tool for working with MySQL is the mysql client, which gets installed when you install MySQL on your machine. The MySQL client can be used through your terminal as a CLI tool.

Next, you can run the following command to create a MySQL database:

mysql> create database mydb;

That's it! We now have a database. Let's now see how you can create a MySQL table using PHP, MySQLi and PDO.

The mysqli extension is a relational database driver that allows you to access the functionality provided by MySQL 4.1 and above in PHP. It stands for MySQL Improved.

Creating a MySQL Table in PHP Using MySQLi

Let's start with the MySQLi extension. Create a server.php file and add the following variables:

<?php
$server = "localhost";
$dbuser = "root";
$dbpassword = "YOUR_DATABASE_PASSWORD";
$dbname = "mydb";

Note: Make sure to change the database user and password accordingly.

Next, create a connection to your MySQL database using the following code:

$connection = new mysqli($server, $dbuser, $dbpassword, $dbname);
if ($connection->connect_error) {
    die("Connection error: " . $connection->connect_error);
}

Next, create a SQL query to create the database table called contacts:

$sqlQuery = "CREATE TABLE contacts (
    id INT(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    firstName VARCHAR(35) NOT NULL,
    lastName VARCHAR(35) NOT NULL,
    email VARCHAR(55)
)";

Next, run the SQL query using the following code:

if ($connection->query($sqlQuery) === TRUE) {
    echo "Table created successfully!";
} else {
    echo "Error creating SQL table: " . $connection->error;
}

Finally, close your database connection using the following code:

$connection->close();
?>

Using PDO to Create a MySQL Database Table in PHP

PDO stands for PHP Data Object. It's a set of PHP extensions that provide a core PDO class and database drivers for the major database systems. You can also use PDO for connecting to and creating a MySQL database table:

<?php
$server = "localhost";
$dbuser = "root";
$dbpassword = "YOUR_DATABASE_PASSWORD";
$dbname = "mydb";

try {
    $connection = new PDO("mysql:host=$server;dbname=$dbname", $dbuser, $dbpassword);
    $connection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $sqlQuery = "CREATE TABLE contacts (
        id INT(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        firstName VARCHAR(35) NOT NULL,
        lastName VARCHAR(35) NOT NULL,
        email VARCHAR(55)
    )";

    $connection->exec($sqlQuery);
    echo "Table created successfully!";
} catch (PDOException $e) {
    echo $sqlQuery . "<br>" . $e->getMessage();
}

$connection = null;
?>

Conclusion

In this quick post, you have seen how you can create a MySQL database table in PHP using the MySQLi extension and PDO.

PHP PDO Tutorial: CRUD Example with MySQL

PDO stands for PHP Data Object and it's an extension that provides an interface for communicating with many supported popular database systems, such as MySQL, Oracle, PostgreSQL and SQLite. It has been available since PHP 5.1.

Since PDO abstracts away all the differences between the various database management systems, you only need to change the information about your database in your code in order to change the database system used in your PHP application.

Setting up PDO

PDO is included by default starting with PHP 5.1, but you need to enable the necessary database driver in the php.ini file:

extension=pdo.so
extension=pdo_mysql.so

Creating a MySQL Database

Let's start by creating a MySQL database using the mysql client. In your terminal, run the following command:

$ mysql -u root -p

Enter your MySQL database password when prompted. Next, run the following SQL instruction to create a database:

mysql> create database mydb;

That's it! We now have a database to work with.

Creating a Database Table

Next, let's create a database table. First, select your mydb database using:

mysql> use mydb;

Next, run the following SQL instruction to create a contacts table:

mysql> CREATE TABLE `contacts` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(30) NOT NULL,
    `email` varchar(50) NOT NULL,
    PRIMARY KEY (`id`)
);

Connecting to the Database Using PDO

Let's start by creating a folder for our project:

$ mkdir php-pdo-example

Next, navigate to your project's folder and create the index.php and db.php files:

$ cd php-pdo-example
$ touch index.php
$ touch db.php

Open the db.php file and add the following class that allows you to connect to your MySQL database:

class DB {
    protected $conn = null;

    public function Open() {
        try {
            $dsn = "mysql:dbname=mydb;host=localhost";
            $user = "<YOUR_DATABASE_USER>";
            $password = "<YOUR_DATABASE_PASSWORD>";
            $options = array(
                PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
                PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
            );
            $this->conn = new PDO($dsn, $user, $password, $options);
            return $this->conn;
        } catch (PDOException $e) {
            echo 'Connection error: ' . $e->getMessage();
        }
    }

    public function Close() {
        $this->conn = null;
    }
}

In our DB class we first define a protected $conn variable that will hold the PDO instance. Next, we define the Open() and Close() methods, which will be used to open and close the connection to the database.

Next, open the index.php file and include the db.php file:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        echo 'connected';
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

We include the db.php file, create an instance of the DB class, and finally call the Open() method of the DB instance.

Running Database SQL Queries

After connecting to the database, we can now run SQL queries.

Creating a Contact: SQL Insert

Let's start by adding the code to create a contact in the database by running a SQL insert query. Open the index.php file and update it accordingly:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        $query = "INSERT INTO `contacts`(`name`, `email`) VALUES ('Contact 001','contact001@email.com')";
        $conn->query($query);
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}
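
The query above interpolates values directly into the SQL string. With PDO it's generally safer to use a prepared statement with bound parameters, which handles quoting and escaping for you. Here is a minimal sketch of the same insert rewritten that way (same table and columns as above, with hard-coded example values):

// Prepared statement: the driver escapes the bound values safely.
$query = $conn->prepare("INSERT INTO `contacts` (`name`, `email`) VALUES (:name, :email)");
$query->execute([
    ':name'  => 'Contact 001',
    ':email' => 'contact001@email.com'
]);
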
Reading Data: SQL Select

Next, let's add the code for reading data from the database table. Create a read.php file and add the following code:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        $query = "SELECT * FROM contacts";
        $result = $conn->query($query);
        foreach ($result as $row) {
            echo $row['name'] . "<br>";
            echo $row['email'] . "<br>";
        }
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

You can also create update and delete operations using the following SQL queries:

$query = "UPDATE `contacts` SET `email`= 'contact002@email.com' WHERE `id` = 1";
$query = "DELETE from `contacts` WHERE `id` = 1";

Conclusion

In this quick tutorial we have seen how to create CRUD operations against a MySQL database using PDO and PHP.

Community Coding For The Web


Open source barriers

Right now, it's too hard to contribute to open source.

This leads to a few things that are bad:

  1. People who have good ideas don't contribute
  2. Open source maintainers are overworked
  3. There's less useful open source software than there could be

We want to improve this, so:

We're building a community for web developers to build & share open source modules that work on any website.

There should be a place where you don't need anything "extra" to work on open source: no git, webpack, gems, repos, terminal commands, or non-code tools. If you know the code itself, you should be able to contribute directly without learning special commands.

We're working our way toward this with the first ever online code editor that compiles modules to work on any website:

More than a playground

There are a lot of code "playgrounds" out there for building demos or proofs of concept, but that can be pretty limiting. With the new approach, you can click a button on any module.

And that brings up code to add the actual module to your website. Not a sandboxed or iframe version -- the actual, working module. The code looks like this:

<html>
<head>

  <!-- Paste right before your document </head> -->
  <script id="Anymod-script">
    (function (m,o,d,u,l,a,r,i,z,e) {
      u[m]={Project:o,rq:[],Opts:r,ready:function(j){u[m].rq.push(j)}};function j(s){return encodeURIComponent(btoa(s))};z=l.getElementById(m+'-'+a);r=u.location;
      e=[d+'/page/'+o+'/'+j(r.pathname)+'/'+j(r.host)+'?t='+Date.now(),d];e.map(function(w){i=l.createElement(a);i.defer=1;i.src=w;z.parentNode.insertBefore(i,z);});
    })('Anymod','855EM8','https://cdn.anymod.com/v2',window,document,'script',{ toolkit: true, tips: true, priority: 3 });
  </script>

</head>
<body>

  <!-- Paste where you want the module: Animate CSS grid -->
  <div id="anymod-lllddn"></div>

</body>
</html>

A world of possibilities

Our hope is that an open platform for normal web developers to find, modify, and use modules will help us all to create a huge ecosystem of free, open-source software that works on any website.

Right now there is a library of hundreds of "verified" modules that are ready to use, with thousands more that have been created by developers.

From pre-styled page sections to forms, modular CMS, galleries, team pages, ecommerce, and more, there are lots of ready-to-use modules already available.

Join the community

You can find, clone, and use modules on the platform, and we are building even more tools to help with collaboration.

Soon you will be able to see which modules are looking for contributors, make changes in the browser, and have your updates merged into the original. Both the original author and you will then be listed on the module, and you'll be able to see where it's being used.

Our hope is that we can make open source participation a real option for all web developers by lowering the barriers to contribution.

If you're interested in collaborating, using cool modules, or simply claiming your username, we'd love to have you!

https://anymod.com

Upcoming Webinar Tues 4/9: MySQL 8.0 Architecture and Enhancement

MySQL 8.0 Architecture and Enhancement

Please join Percona's Bug Analyst, Lalit Choudhary, as he presents his talk MySQL 8.0 Architecture and Enhancement on Tuesday, April 9th, 2019, at 6:00 AM PDT (UTC-7) / 9:00 AM EDT (UTC-4).

Register Now

The release of MySQL 8.0 offers much more to users compared to previous releases. There are major changes in the architecture as well as new differentiating features and improvements that help manage the database more efficiently.

In our talk, we will go through the MySQL 8.0 architecture:
* On Disk
* In-memory

Examples and use cases will be shared showing the new features introduced in MySQL 8.0.

Register for MySQL 8.0 Architecture and Enhancement to learn more.

The Perfect Server CentOS 7.6 with Apache, PHP 7.2, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3.1

This tutorial shows how to install ISPConfig 3.1 on a CentOS 7.6 (64Bit) server. ISPConfig 3 is a web hosting control panel that allows you to configure the following services through a web browser: Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman, and many more.

Session Management in Nodejs Using Redis as Session Store


We have covered session management in ExpressJS using the global variable technique, which of course will not work in the case of a shared server or concurrent execution of HTTP requests, the most common production scenario.

Codeforgeek readers requested a solution for this issue, and the optimal one is to use external session storage that does not depend on application requests. The answer is Redis, because it is a lightweight and easy-to-use NoSQL database.

In this tutorial I am going to explain how to design and code session-oriented Express web applications using Redis as the external session storage.


To get familiar with session handling in ExpressJS, I recommend reading our first article here.

Getting started with Redis :

If you have already installed Redis, please skip to the next section. For those of you who are not familiar with Redis, here is a little introduction.

Redis is a key-value cache and store. It is also referred to as a data structure server, because keys can contain Lists, Hashes, Sets, Sorted Sets, etc.

Redis is fast because it works on an in-memory data set, i.e. by default it stores data in memory rather than on disk. If you are from a CS background, you know very well that CRUD operations against memory are way faster than against disk, and so is Redis.

If you restart Redis or shut it down, you may lose all data unless you enable the option to dump the data to disk. Be careful!
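
If you do need data to survive a restart, Redis can persist to disk. These are standard redis.conf directives; the values here are just a sketch to tune to your needs:

# Dump a snapshot to disk if at least 1 key changed in the last 900 seconds
save 900 1

# Or enable the append-only file for more durable persistence
appendonly yes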

Installation:

1 : On Mac

On Mac, if you have brew installed, just open up your terminal and type:

brew install redis

Make sure you have the command line tools installed, because it needs GCC to compile.

If you don't have brew, please install it. It's awesome!

2 : On Ubuntu

Run the following command on Ubuntu and the rest will be done:

sudo apt-get install redis-server

3 : On Windows

Well, Redis does not support Windows! Hard luck.

Basic Redis commands :

I am going to mention only those commands which are needed for this tutorial. For detailed information, please visit the nice demo built by the awesome Redis team to make you a pro in Redis.

1 : Starting the Redis server.

Run this command in the terminal.

redis-server &

2 : Open the Redis CLI tool.

Run this command in the terminal.

redis-cli

3 : List all keys.
Run this command in the terminal.

KEYS *

4 : Retrieve information regarding a particular key.
Run this command in the terminal.

GET <key name>
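
Later, once the Express app below is storing sessions, you can inspect them with these same commands. connect-redis prefixes its session keys with sess: by default, so a quick check looks like this (the session id will differ on your machine):

redis-cli
127.0.0.1:6379> KEYS sess:*
127.0.0.1:6379> TTL sess:<session id>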

Once you have installed Redis, after running the first command you should see something like this.
Redis start screen

Express session with Redis

To add Redis support you have to use the redis client and connect-redis. Require express-session and pass it to the connect-redis object as a parameter; this initializes the Redis-backed session store.

Then, in the session middleware, pass the Redis store information such as the host, port and other required parameters.

Here is sample Express code with Redis support. Have a look.

Express Session using Redis :
var express = require('express');
var redis   = require("redis");
var session = require('express-session');
var redisStore = require('connect-redis')(session);
var bodyParser = require('body-parser');
var client  = redis.createClient();
var app = express();

app.set('views', __dirname + '/views');
app.engine('html', require('ejs').renderFile);

app.use(session({
    secret: 'ssshhhhh',
    // create new redis store.
    store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl :  260}),
    saveUninitialized: false,
    resave: false
}));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));

app.get('/',function(req,res){ 
    // create new session object.
    if(req.session.key) {
        // if email key is sent redirect.
        res.redirect('/admin');
    } else {
        // else go to home page.
        res.render('index.html');
    }
});

app.post('/login',function(req,res){
    // when user login set the key to redis.
    req.session.key=req.body.email;
    res.end('done');
});

app.get('/logout',function(req,res){
    req.session.destroy(function(err){
        if(err){
            console.log(err);
        } else {
            res.redirect('/');
        }
    });
});

app.listen(3000,function(){
    console.log("App Started on PORT 3000");
});

Notice the code where we are initiating the session. Have a look.

app.use(session({
    secret: 'ssshhhhh',
    // create new redis store.
    store: new redisStore({ host: 'localhost', port: 6379, client: client}),
    saveUninitialized: false,
    resave: false
}));

If the Redis server is running with default settings, this configuration works out of the box. Once you have configured it, store your session key the same way we did in the previous example.

req.session.key_name = value to set
// this will be set in Redis; the value may contain the user ID, email or any information which you need across your application.

Fetch the information from the Redis session key:

req.session.key["keyname"]
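
Since several routes will need the same logged-in check, you can also wrap it in a small Express middleware instead of repeating the if statement everywhere. This helper is not part of the original code, just a sketch:

// Reusable login guard: redirects to the home page when no session key exists.
function requireLogin(req, res, next) {
    if (req.session.key) {
        return next();
    }
    res.redirect('/');
}

// Hypothetical usage:
// app.get('/home', requireLogin, function(req, res) { ... });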

Our project:

To demonstrate this, I have developed a web application which allows you to register, log in and post a status. It's simple, but it demonstrates how to handle sessions using external storage.

Create the project folder and copy this code to package.json.

Our package.json :

package.json
{
  "name": "Node-Session-Redis",
  "version": "0.0.1",
  "scripts": {
    "start": "node ./bin"
  },
  "dependencies": {
    "async": "^1.2.1",
    "body-parser": "^1.13.0",
    "connect-redis": "^2.3.0",
    "cookie-parser": "^1.3.5",
    "ejs": "^2.3.1",
    "express": "^4.12.4",
    "express-session": "^1.11.3",
    "mysql": "^2.7.0",
    "redis": "^0.12.1"
  }
}

Install the dependencies by running the following command.

npm install

Our database:

Once that's completed, let's design a simple database to support our application. Here is the diagram.
database diagram

The database is simple and straightforward; please create the database in MySQL and run the following DDL queries.

SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES';

CREATE SCHEMA IF NOT EXISTS `redis_demo` DEFAULT CHARACTER SET latin1 ;
USE `redis_demo` ;

-- -----------------------------------------------------
-- Table `redis_demo`.`user_login`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `redis_demo`.`user_login` (
  `user_id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '',
  `user_email` VARCHAR(50) NOT NULL COMMENT '',
  `user_password` VARCHAR(50) NOT NULL COMMENT '',
  `user_name` VARCHAR(50) NOT NULL COMMENT '',
  PRIMARY KEY (`user_id`)  COMMENT '',
  UNIQUE INDEX `user_email` (`user_email` ASC)  COMMENT '')
ENGINE = InnoDB
AUTO_INCREMENT = 7
DEFAULT CHARACTER SET = latin1;

-- -----------------------------------------------------
-- Table `redis_demo`.`user_status`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `redis_demo`.`user_status` (
  `user_id` INT(11) NOT NULL COMMENT '',
  `user_status` TEXT NOT NULL COMMENT '',
  `created_date` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '',
  INDEX `user_id` (`user_id` ASC)  COMMENT '',
  CONSTRAINT `user_status_ibfk_1`
    FOREIGN KEY (`user_id`)
    REFERENCES `redis_demo`.`user_login` (`user_id`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = latin1;


SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
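
A quick caveat before wiring up the server: this schema stores user_password as plain text, which you should never do in production. A sketch using the bcryptjs package (an extra dependency, not part of this tutorial's package.json) would look like this:

var bcrypt = require('bcryptjs');

// On registration: store a salted hash instead of the raw password.
var hash = bcrypt.hashSync(req.body.user_password, 10);

// On login: compare the submitted password against the stored hash.
// storedHash is a hypothetical variable read from the user_login table.
var matches = bcrypt.compareSync(req.body.user_password, storedHash);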

Our Server code

The server file contains the application routes, database support and Redis session support. We first connect to the database and initialize Redis; then, depending on the route, the appropriate action happens.

/bin/index.js
/**
  Loading all dependencies.
**/

var express         =     require("express");
var redis           =     require("redis");
var mysql           =     require("mysql");
var session         =     require('express-session');
var redisStore      =     require('connect-redis')(session);
var bodyParser      =     require('body-parser');
var cookieParser    =     require('cookie-parser');
var path            =     require("path");
var async           =     require("async");
var client          =     redis.createClient();
var app             =     express();
var router          =     express.Router();

// Always use MySQL pooling.
// Helpful for multiple connections.

var pool    =   mysql.createPool({
    connectionLimit : 100,
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'redis_demo',
    debug    :  false
});

app.set('views', path.join(__dirname,'../','views'));
app.engine('html', require('ejs').renderFile);

// IMPORTANT
// Here we tell Express to use Redis as session store.
// We pass Redis credentials and port information.
// And express does the rest !

app.use(session({
    secret: 'ssshhhhh',
    store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl :  260}),
    saveUninitialized: false,
    resave: false
}));
app.use(cookieParser("secretSign#143_!223"));
app.use(bodyParser.urlencoded({extended: false}));
app.use(bodyParser.json());

// This is an important function.
// This function does the database handling task.
// We also use async here for control flow.

function handle_database(req,type,callback) {
   async.waterfall([
    function(callback) {
        pool.getConnection(function(err,connection){
          if(err) {
                   // if there is error, stop right away.
                   // This will stop the async code execution and goes to last function.
            callback(true);
          } else {
            callback(null,connection);
          }
        });
    },
    function(connection,callback) {
      var SQLquery;
      switch(type) {
       case "login" :
        SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"' AND `user_password`='"+req.body.user_password+"'";
        break;
            case "checkEmail" :
             SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"'";
            break;
        case "register" :
        SQLquery = "INSERT into user_login(user_email,user_password,user_name) VALUES ('"+req.body.user_email+"','"+req.body.user_password+"','"+req.body.user_name+"')";
        break;
        case "addStatus" :
        SQLquery = "INSERT into user_status(user_id,user_status) VALUES ("+req.session.key["user_id"]+",'"+req.body.status+"')";
        break;
        case "getStatus" :
        SQLquery = "SELECT * FROM user_status WHERE user_id="+req.session.key["user_id"];
        break;
        default :
        break;
    }
    callback(null,connection,SQLquery);
    },
    function(connection,SQLquery,callback) {
       connection.query(SQLquery,function(err,rows){
           connection.release();
        if(!err) {
            if(type === "login") {
              callback(rows.length === 0 ? false : rows[0]);
            } else if(type === "getStatus") {
                          callback(rows.length === 0 ? false : rows);
                        } else if(type === "checkEmail") {
                          callback(rows.length === 0 ? false : true);
                        } else {
                      callback(false);
            }
        } else {
             // if there is error, stop right away.
            // This will stop the async code execution and goes to last function.
            callback(true);
         }
    });
       }],
       function(result){
      // This function gets call after every async task finished.
      if(typeof(result) === "boolean" && result === true) {
        callback(null);
      } else {
        callback(result);
      }
    });
}

/**
    --- Router Code begins here.
**/


router.get('/',function(req,res){
    res.render('index.html');
});

router.post('/login',function(req,res){
    handle_database(req,"login",function(response){
        if(response === null) {
            res.json({"error" : "true","message" : "Database error occured"});
        } else {
            if(!response) {
              res.json({
                             "error" : "true",
                             "message" : "Login failed ! Please register"
                           });
            } else {
               req.session.key = response;
                   res.json({"error" : false,"message" : "Login success."});
            }
        }
    });
});

router.get('/home',function(req,res){
    if(req.session.key) {
        res.render("home.html",{ email : req.session.key["user_name"]});
    } else {
        res.redirect("/");
    }
});

router.get("/fetchStatus",function(req,res){
  if(req.session.key) {
    handle_database(req,"getStatus",function(response){
      if(!response) {
        res.json({"error" : false, "message" : "There is no status to show."});
      } else {
        res.json({"error" : false, "message" : response});
      }
    });
  } else {
    res.json({"error" : true, "message" : "Please login first."});
  }
});

router.post("/addStatus",function(req,res){
    if(req.session.key) {
      handle_database(req,"addStatus",function(response){
        if(!response) {
          res.json({"error" : false, "message" : "Status is added."});
        } else {
          res.json({"error" : false, "message" : "Error while adding Status"});
        }
      });
    } else {
      res.json({"error" : true, "message" : "Please login first."});
    }
});

router.post("/register",function(req,res){
    handle_database(req,"checkEmail",function(response){
      if(response === null) {
        res.json({"error" : true, "message" : "This email is already present"});
      } else {
        handle_database(req,"register",function(response){
          if(response === null) {
            res.json({"error" : true , "message" : "Error while adding user."});
          } else {
            res.json({"error" : false, "message" : "Registered successfully."});
          }
        });
      }
    });
});

router.get('/logout',function(req,res){
    if(req.session.key) {
    req.session.destroy(function(){
      res.redirect('/');
    });
    } else {
        res.redirect('/');
    }
});

app.use('/',router);

app.listen(3000,function(){
    console.log("I am running at 3000");
});

Explanation:

When the user provides login credentials, we check them against our database. If the check succeeds, we store the database response in our Redis key store. This is where the session starts.

Now, as soon as the user goes to the home page, we validate the session key and, if it is present, retrieve the user id from it to fire further MySQL queries.

When the user clicks Logout, we call the req.session.destroy() function, which in turn deletes the key from Redis and ends the session.
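
One more thing worth flagging: handle_database() builds its SQL strings by concatenating req.body values, which leaves the application open to SQL injection. The mysql module supports placeholders, so a safer version of, for example, the login query would look like this (a sketch; the surrounding callback handling stays the same):

// Placeholders (?) let the mysql driver escape user input safely.
var SQLquery = "SELECT * FROM user_login WHERE user_email = ? AND user_password = ?";
connection.query(SQLquery, [req.body.user_email, req.body.user_password], function(err, rows) {
    // ... same result handling as before ...
});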

Views ( Index.html and Home.html )

Here is our home page code.

/view/index.html
<html>
<head>
<title>Home</title>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script>
$(document).ready(function(){
    $("#username").hide();
    $('#login-submit').click(function(e){
      if($(this).attr('value') === 'Register') {
        $.post("http://localhost:3000/register",{
               user_name : $("#username").val(),
               user_email : $("#useremail").val(),
               user_password : $("#password").val()
             },function(data){
        if(data.error) {
            alert(data.message);
        } else {
            $("#username").hide();
            $("#login-submit").prop('value','Log in');
        }
    });
    } else {
        $.post("http://localhost:3000/login",{
                   user_email : $("#useremail").val(),
                   user_password : $("#password").val()
                   },function(data){
            if(!data.error) {
                window.location.href = "/home";
            } else {
                alert(data.message);
            }
        });
    }
    });
    $("#reg").click(function(event){
        $("#username").show('slow');
        $("#login-submit").prop('value','Register');
        event.preventDefault();
    });
});
</script>
    </head>
    <body>
    <nav class="navbar navbar-default navbar-fixed-top">
    <div class="navbar-header">
    <a class="navbar-brand" href="#">
        <p>Redis session demo</p>
    </a>
    </div>
  <div class="container">
    <p class="navbar-text navbar-right">Please sign in</p>
  </div>
</nav>
<div class="form-group" style="margin-top: 100px; width : 400px; margin-left : 50px;">
    <input type="text" id="username" placeholder="Name" class="form-control"><br>
    <input type="text" id="useremail" placeholder="Email" class="form-control"><br>
    <input type="password" id="password" placeholder="Password" class="form-control"><br>
    <input type="button" id="login-submit" value="Log In" class="btn btn-primary">&nbsp;<a href="" id="reg">Sign up here </a>
    </div>
    </body>
</html>

Here is how it looks.

Home page - Redis session

/view/home.html
<html>
<head>
<title>Home</title>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script type="text/javascript">
    $(document).ready(function(){
        $.get("http://localhost:3000/fetchStatus",function(res){
            $.each(res.message,function(index,value) {
                $("#showStatus").append('You have posted <br> <p>'+value.user_status+'</p><hr>');
            });
        });
    $("#addNewStatus").click(function(e){
        e.preventDefault();
            if($("#statusbox").val() !== "") {
                $.post("/addStatus",
                                  { status : $("#statusbox").val() },
                                   function(res){
                    if(!res.error) {
                        alert(res.message);
                    }
                })
            }
        });
});
</script>
</head>
<body>
<nav class="navbar navbar-default navbar-fixed-top">
<div class="navbar-header">
<a class="navbar-brand" href="#">
<p>Redis session demo</p>
</a></div><div class="container">
<p class="navbar-text navbar-right">Hi, you are logged in as <b><%= email %></b> (<a href="/logout/">Logout</a>)</p>
</div>
</nav>
<div style="margin-top:100px;margin-left:50px;width:400px">
<textarea rows="10" cols="5" id="statusbox" class="form-control"></textarea><br>
<input type="submit" id="addNewStatus" value="Post" class="btn btn-primary"><br>
<div id="showStatus" style="border : 2px grey; border-radius : 4px;">
</div>
</div>
</body>
</html>

Here is how it looks.

Home page - redis session

How to run:

Download the code from GitHub and extract the files. Make sure you have installed Redis and created the database.

If your database name or MySQL password is different, please update it in the bin/index.js file.

Once done, type npm start in the terminal to start the project.

Start the script

Now open your browser and go to localhost:3000 to view the app. Register with your email and password, then log in with the same credentials. After logging in, have a look at Redis using the commands mentioned above.

Redis key set

Now log out from the system and check for the same key. It should no longer be there.

After logout

That's it. Express sessions with Redis are working.

Conclusion:

Redis is one of the most popular key-value store databases. Using it to store sessions is not only beneficial in a production environment, it also helps improve the performance of the system.

Using MySQL Shell to create a three-node MySQL InnoDB Cluster


MySQL InnoDB Cluster was introduced in MySQL version 5.7 and consists of three parts – Group Replication, MySQL Shell and MySQL Router. MySQL InnoDB Cluster provides a complete high availability solution for MySQL. In this post, I am going to explain how to setup a three-node cluster using the MySQL Shell.

Note: Visit this page to learn more about MySQL InnoDB Cluster.
Other blogs on Group Replication and InnoDB Cluster:
MySQL 8.0 Group Replication – Three-server installation
Adding a replicated MySQL database instance using a Group Replication server as the source
Replicating data between two MySQL Group Replication sets using “regular” asynchronous replication with Global Transaction Identifiers (GTID’s)
MySQL 8.0 InnoDB Cluster – Creating a sandbox and testing MySQL Shell, Router and Group Replication

To begin, I am going to install three instances of the MySQL database and MySQL Shell (both version 8.0.15) on three separate virtual machines with the IP addresses of 192.168.1.161, 192.168.1.162 and 192.168.1.163. I will do most of the work on 192.168.1.161 via the MySQL Shell.

Let’s get started

From a terminal window on 192.168.1.161, I will open the MySQL Shell using the command: mysqlsh. MySQL Shell has three modes – SQL, JavaScript and Python. For this post, I will be using the SQL and JavaScript modes. Once you are in the MySQL Shell console, here are the commands to switch between the modes:

SQL mode - \sql
JavaScript Mode - \js
Python mode - \py

Each mode is highlighted with a different color. Here is a screenshot to show you what each mode looks like:

When you first open MySQL Shell, you aren’t connected to a database. I will start Shell (via mysqlsh), connect to the local instance, and then switch to SQL mode.


# mysqlsh
MySQL Shell 8.0.15-commercial

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.


Type '\help' or '\?' for help; '\quit' to exit.

 MySQL  JS    \connect root@localhost:3306

Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 12
Server version: 8.0.15-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

I am starting with three fresh installations of MySQL. I shouldn’t have any databases or tables other than the default ones. And, I want to double-check to make sure there haven’t been any transactions already executed. I can do this with the SHOW MASTER STATUS\G command:

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000001
         Position: 151
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.0003 sec)

Note: I installed these servers a few times, and sometimes when I ran this command, it would show binlog.000002 instead of binlog.000001. This doesn’t matter – as long as the Executed_GTID_Set is blank.

From an OS command prompt, you can take a look at the binary log file binlog.000001, which is located inside your MySQL data directory. And the binlog.index file contains a list of the active binary logs, which in this case, is only binlog.000001.


MacVM161:data root# ls -l bin*
-rw-r----- 1 _mysql _mysql 151 Apr 3 20:48 binlog.000001
-rw-r----- 1 _mysql _mysql  16 Apr 3 20:48 binlog.index
MacVM161:data root# cat binlog.index
./binlog.000001

Since this is a new installation, I should only see the four default MySQL databases, and the four default MySQL users:

 MySQL  SQL    show databases;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.0013 sec)

 MySQL  SQL    select user, host from mysql.user;

+------------------+-----------+
| user             | host      |
+------------------+-----------+
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
| root             | localhost |
+------------------+-----------+
4 rows in set (0.0004 sec)

When you use MySQL Shell to build your InnoDB Cluster, you can use pre-built administration commands to configure and manage the cluster. Before adding an instance to the cluster, I can check to see if the instance is suitable for InnoDB Cluster by running the dba.checkInstanceConfiguration command. To use the commands, I will switch to JavaScript mode:

 MySQL  SQL    \js

Switching to JavaScript mode…

 MySQL  JS    dba.checkInstanceConfiguration("root@localhost:3306")

Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |     | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "1", 
            "option": "server_id", 
            "required": ""
        }
    ], 
    "status": "error"
}

From the above output (under the header “Some configuration options need to be fixed:“), I can see I need to add four variables and their correct values to the MySQL configuration file – and I will need to reboot MySQL:

I added the following to the MySQL configuration file (my.cnf) on all three instances under the [mysqld] header – and I restarted all three instances of MySQL:

# MySQL Configuration File
[mysqld]
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
server_id=161  # change for each server

Note: You will need to have a unique value for the server_id variable on each MySQL instance, and the variables you need to add might be different from what I have above.
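
Once each server is back up, a quick sanity check from SQL mode confirms the new settings took effect (the variable list matches the options above):

-- Each server should report its own unique server_id.
SHOW VARIABLES WHERE Variable_name IN
('binlog_checksum', 'enforce_gtid_consistency', 'gtid_mode', 'server_id');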

After the reboot, I will need to re-launch mysqlsh and/or reconnect to the database. To exit Shell, use the \exit command. I can then run dba.checkInstanceConfiguration again to see if the instance is ready for InnoDB Cluster. (Remember to switch to JavaScript mode (\js) if you aren't already in it.)

 MySQL  SQL    \js

Switching to JavaScript mode…

 MySQL  JS    dba.checkInstanceConfiguration("root@localhost:3306")

Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'localhost:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

The status line of "status": "ok" above confirms this instance is ready for InnoDB Cluster.

I can configure the first server by using the dba.configureInstance command. You have the option to use root to manage the cluster, or you can choose option #2, which will create a new administrator account for the cluster. I decided to choose option #2, which does make the install a little more complicated.

 MySQL  JS    dba.configureInstance("root@localhost:3306")


Please provide the password for 'root@localhost:3306': 
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM161.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

WARNING: User 'root' can only connect from localhost.
If you need to manage this instance while connected from other hosts, new account(s) with the proper source address specification must be created.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: cluster_adm

The instance 'localhost:3306' is valid for InnoDB cluster usage.

Cluster admin user 'cluster_adm'@'%' created.
The instance 'localhost:3306' is already ready for InnoDB cluster usage.

As I am doing these installs, I like to check the master status as I go along, so I will do that again. First I need to switch to SQL mode.

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000002
         Position: 151
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.0002 sec)

The master status hasn’t changed, so nothing was written to the binary log.

One immediate problem from not using the default root user is you aren’t given the option to assign a password for the cluster_adm user. I can check this by querying the mysql.user table:

 MySQL  SQL    select user, host, authentication_string from mysql.user where user = 'cluster_adm';

+-------------+------+-----------------------+
| user        | host | authentication_string |
+-------------+------+-----------------------+
| cluster_adm | %    |                       |
+-------------+------+-----------------------+

Since there isn’t a password for cluster_adm, I will go ahead and set it. But – I don’t want to write this to the binary log, as then it would get replicated to the other servers once I start the cluster. I will need to create this user on the other two servers. I can suppress writing to the binary log with SET SQL_LOG_BIN=0; and then turn it back on with SET SQL_LOG_BIN=1;. (Don’t forget to substitute the value of new_password for your actual password)

SET SQL_LOG_BIN=0;
ALTER USER 'cluster_adm'@'%' IDENTIFIED BY 'new_password';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;


 MySQL  SQL    SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    ALTER USER 'cluster_adm'@'%' IDENTIFIED BY 'new_password';
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.0001 sec)

 MySQL  SQL    SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.0001 sec)

I can check again to see if the cluster_adm user now has a password:

 MySQL  SQL    select user, host, authentication_string from mysql.user where user = 'cluster_adm';

+-------------+------+------------------------------------------------------------------------+
| user        | host | authentication_string                                                  |
+-------------+------+------------------------------------------------------------------------+
| cluster_adm | %    | $A$005$',q+B<%=JJ|Mz.WH!XXX/iQ4rvG/3DzX/UharambelivesYp1oODqtNZk25     | 
+-------------+------+------------------------------------------------------------------------+

Since I need to create this user along with the appropriate grants (permissions) on the other two MySQL instances, I can just look at the grants which were given to the cluster_adm user when the dba.configureInstance program created the user:

 MySQL  SQL    show grants for 'cluster_adm';

+------------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for cluster_adm@%                                                                                                                       |
+------------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `cluster_adm`@`%` WITH GRANT OPTION |
| GRANT SELECT, INSERT, UPDATE, DELETE ON `mysql`.* TO `cluster_adm`@`%` WITH GRANT OPTION                                                       |
| GRANT SELECT ON `sys`.* TO `cluster_adm`@`%` WITH GRANT OPTION                                                                                 |
| GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO `cluster_adm`@`%` WITH GRANT OPTION                                               |
| GRANT SELECT ON `performance_schema`.`replication_applier_configuration` TO `cluster_adm`@`%` WITH GRANT OPTION                                |
| GRANT SELECT ON `performance_schema`.`replication_applier_status_by_coordinator` TO `cluster_adm`@`%` WITH GRANT OPTION                        |
| GRANT SELECT ON `performance_schema`.`replication_applier_status_by_worker` TO `cluster_adm`@`%` WITH GRANT OPTION                             |
| GRANT SELECT ON `performance_schema`.`replication_applier_status` TO `cluster_adm`@`%` WITH GRANT OPTION                                       |
| GRANT SELECT ON `performance_schema`.`replication_connection_configuration` TO `cluster_adm`@`%` WITH GRANT OPTION                             |
| GRANT SELECT ON `performance_schema`.`replication_connection_status` TO `cluster_adm`@`%` WITH GRANT OPTION                                    |
| GRANT SELECT ON `performance_schema`.`replication_group_member_stats` TO `cluster_adm`@`%` WITH GRANT OPTION                                   |
| GRANT SELECT ON `performance_schema`.`replication_group_members` TO `cluster_adm`@`%` WITH GRANT OPTION                                        |
| GRANT SELECT ON `performance_schema`.`threads` TO `cluster_adm`@`%` WITH GRANT OPTION                                                          |
+------------------------------------------------------------------------------------------------------------------------------------------------+
13 rows in set (0.0086 sec)

And I can edit this information to create the SQL necessary to create the cluster_adm user along with the necessary grants for the other two servers. I do not want to write this to the binary log, so I will use SET SQL_LOG_BIN=0; again. I will execute this on the other two servers, but I am not going to display the output here.

NOTICE: The cluster_adm user must have the same password on all servers in the cluster!

SET SQL_LOG_BIN=0;
CREATE USER 'cluster_adm'@'%' IDENTIFIED BY 'new_password';
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT, INSERT, UPDATE, DELETE ON `mysql`.* TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `sys`.* TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_applier_configuration` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_applier_status_by_coordinator` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_applier_status_by_worker` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_applier_status` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_connection_configuration` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_connection_status` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_group_member_stats` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`replication_group_members` TO `cluster_adm`@`%` WITH GRANT OPTION;
GRANT SELECT ON `performance_schema`.`threads` TO `cluster_adm`@`%` WITH GRANT OPTION;
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;

Again - I will check the master status to make sure there haven't been any transactions yet.

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000003
         Position: 151
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.0009 sec)

InnoDB Cluster uses Global Transaction Identifiers (GTIDs): "A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the master). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology." (Source: MySQL documentation)

Each server in the three-node cluster will have its own GTIDs. A GTID is composed of a Universally Unique Identifier (UUID), a colon (:) and an incrementing transaction number. The UUID for each server is stored in the auto.cnf file in the MySQL data directory:

 # cat auto.cnf
[auto]
server-uuid=ae1a6186-5672-11e9-99b4-80e6004d84ae

Therefore, each of these three servers will have its own UUID in its GTIDs. The cluster itself will have a separate UUID used in its own GTIDs, and this UUID is generated when the cluster is created. I have the following UUIDs for the three servers:

IP Address        Server UUID
192.168.1.161     ae1a6186-5672-11e9-99b4-80e6004d84ae
192.168.1.162     cd287ef0-5672-11e9-be9a-b79ce5a797fd
192.168.1.163     d85f6086-5672-11e9-ad64-c08d80ddd285
InnoDB Cluster    - to be determined -
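
If you would rather not dig the UUID out of auto.cnf on each box, the same value is also exposed as a system variable, which you can query from SQL mode:

-- Returns the server's UUID,
-- e.g. ae1a6186-5672-11e9-99b4-80e6004d84ae on 192.168.1.161
SELECT @@server_uuid;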

Now I am ready to create the cluster. From the first server (192.168.1.161), I will switch to JavaScript mode, and then I will want to re-connect as the cluster_adm user.

 MySQL  JS    \js

Switching to JavaScript mode...

 MySQL  JS    \connect cluster_adm@192.168.1.161:3306

Creating a session to 'cluster_adm@192.168.1.161:3306'
Please provide the password for 'cluster_adm@192.168.1.161:3306': 
Save password for 'cluster_adm@192.168.1.161:3306'? [Y]es/[N]o/Ne[v]er (default No): N
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 19
Server version: 8.0.15-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.

I can create the cluster using the dba.createCluster command:

 MySQL  JS    dba.createCluster("myCluster");

A new InnoDB cluster will be created on instance 'cluster_adm@192.168.1.161:3306'.

Validating instance at 192.168.1.161:3306...

This instance reports its own address as MacVM161.local

Instance configuration is suitable.
Creating InnoDB cluster 'myCluster' on 'cluster_adm@192.168.1.161:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

The InnoDB Cluster was successfully created. Now when I do a SHOW MASTER STATUS\G, I will see some values under the Executed_Gtid_Set section: (I will need to switch back to the SQL mode)

 MySQL  JS    \sql

Switching to SQL mode... Commands end with ;

 MySQL  SQL    show master status\G

*************************** 1. row ***************************
             File: binlog.000001
         Position: 12406
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,
c446e75c-5674-11e9-bf50-753fdb914192:1-2
1 row in set (0.0003 sec)

Under the Executed_Gtid_Set section, I have two sets of GTIDs. One is for the 192.168.1.161 server (ae1a6186-5672-11e9-99b4-80e6004d84ae), and the other is for the cluster (c446e75c-5674-11e9-bf50-753fdb914192).

Executed_Gtid_Set: ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12,
c446e75c-5674-11e9-bf50-753fdb914192:1-2

The Executed_Gtid_Set shows which transactions have been applied to this server (192.168.1.161). There have been 12 transactions for the server (ae1a6186-5672-11e9-99b4-80e6004d84ae:1-12) and two transactions for the cluster (c446e75c-5674-11e9-bf50-753fdb914192:1-2). The GTIDs for the cluster should be the same on each server as we add them to the cluster. Also, once you add servers to the cluster, the read-only servers will be set to SUPER_READ_ONLY to prevent write transactions from being applied to the read-only servers in a single-primary mode cluster. If you want to view the transactions which were executed, you can use the mysqlbinlog utility.
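
To see which nodes have been made read-only, you can check the variables directly from SQL mode on each server:

-- In a single-primary cluster, the primary returns 0 for both
-- and the secondaries return 1.
SELECT @@super_read_only, @@read_only;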

I can reference my earlier table of the UUIDs for all of the servers, and I can now add the UUID for the cluster (shown in the last row):

IP Address        Server UUID
192.168.1.161     ae1a6186-5672-11e9-99b4-80e6004d84ae
192.168.1.162     cd287ef0-5672-11e9-be9a-b79ce5a797fd
192.168.1.163     d85f6086-5672-11e9-ad64-c08d80ddd285
InnoDB Cluster    c446e75c-5674-11e9-bf50-753fdb914192

I can check the status of the cluster. (after switching back to JavaScript mode)

 MySQL  JS    \js

Switching to JavaScript mode...

 MySQL  JS    var cluster = dba.getCluster()
 MySQL  JS    cluster.status()

{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "192.168.1.161:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "192.168.1.161:3306": {
                "address": "192.168.1.161:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "MacVM161.local:3306"
}

Under the topology section above, we can see server 192.168.1.161 is in the cluster and has the status of ONLINE.

I am now ready to add 192.168.1.162 and 192.168.1.163 to the cluster. But first I need to make sure each server is ready to be added to the cluster using the dba.checkInstanceConfiguration command. I can run these commands from any one of the servers, but I am going to run this from the first server - 192.168.1.161. Prior to this, I went ahead and made the same changes to the MySQL configuration file (my.cnf or my.ini) as I did on 192.168.1.161.

 MySQL  JS    dba.checkInstanceConfiguration("cluster_adm@192.168.1.162:3306")

Please provide the password for 'cluster_adm@192.168.1.162:3306': 
Save password for 'cluster_adm@192.168.1.162:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating MySQL instance at 192.168.1.162:3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM162.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '192.168.1.162:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

 MySQL  JS    dba.checkInstanceConfiguration("cluster_adm@192.168.1.163:3306")

Please provide the password for 'cluster_adm@192.168.1.163:3306': 
Save password for 'cluster_adm@192.168.1.163:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating MySQL instance at 192.168.1.163:3306 for use in an InnoDB cluster...

This instance reports its own address as MacVM163.local
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '192.168.1.163:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

Both servers are ready to go, as indicated by the "status": "ok".

Now I can add the other two servers:

 MySQL  JS    var cluster = dba.getCluster()
 MySQL  JS    cluster.addInstance("cluster_adm@192.168.1.162:3306")

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'cluster_adm@192.168.1.162:3306': 
Save password for 'cluster_adm@192.168.1.162:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating instance at 192.168.1.162:3306...

This instance reports its own address as MacVM162.local

Instance configuration is suitable.
The instance 'cluster_adm@192.168.1.162:3306' was successfully added to the cluster.

 MySQL  JS    var cluster = dba.getCluster()
 MySQL  JS    cluster.addInstance("cluster_adm@192.168.1.163:3306")

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'cluster_adm@192.168.1.163:3306': 
Save password for 'cluster_adm@192.168.1.163:3306'? [Y]es/[N]o/Ne[v]er (default No): 
Validating instance at 192.168.1.163:3306...

This instance reports its own address as MacVM163.local

Instance configuration is suitable.
The instance 'cluster_adm@192.168.1.163:3306' was successfully added to the cluster.

Note: You only need to set var cluster = dba.getCluster() once per session. I added it here in case you wanted to add the instances to the cluster from a separate MySQL Shell session.

The SHOW MASTER STATUS\G output should now be the same on all three servers. I can also check the status of the cluster to see if all three nodes are ONLINE.

 MySQL  JS    var cluster = dba.getCluster()
 MySQL  JS    cluster.status()

{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "192.168.1.161:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "192.168.1.161:3306": {
                "address": "192.168.1.161:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "192.168.1.162:3306": {
                "address": "192.168.1.162:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "192.168.1.163:3306": {
                "address": "192.168.1.163:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "MacVM161.local:3306"
}

All three nodes have the status of ONLINE, so the cluster is up and ready to use.
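
As a cross-check against cluster.status(), Group Replication also exposes the membership through performance_schema, which you can query from SQL mode on any member:

-- All three members should report ONLINE,
-- with one PRIMARY and two SECONDARY roles.
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;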

 


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit http://2044thebook.com for more information.
Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit https://amzn.to/2oPFLI0 for more information.

Sometimes the slow database.. is not the database...

So I was recently asked to look into why an updated MySQL 5.6 was slower than the older 5.5.

So I started by poking around, looking over the standard variables, caches, etc.

The test case was a simple routine that took about twice as long to run on 5.6 as it did on 5.5.

To add to the mix, the 5.6 server had double the innodb_buffer_pool_size and, of course, more RAM overall.

So I started some tests with mysqlslap...

The mysqlslap tests showed it slower on 5.6:

5.6:
mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
Benchmark
Average number of seconds to run all queries: 0.028 seconds
Minimum number of seconds to run all queries: 0.019 seconds
Maximum number of seconds to run all queries: 0.071 seconds
Number of clients running queries: 150
Average number of queries per client: 1

5.5:
mysqlslap --defaults-file=./.my.cnf --concurrency=150 --iterations=130 --query=/test.sql --create-schema=applicationdata --verbose 
Benchmark
Average number of seconds to run all queries: 0.015 seconds
Minimum number of seconds to run all queries: 0.011 seconds
Maximum number of seconds to run all queries: 0.024 seconds
Number of clients running queries: 150
Average number of queries per client: 1


All of this goes against the public benchmarks 
http://dimitrik.free.fr/blog/archives/2013/02/mysql-performance-mysql-56-ga-vs-mysql-55-32cores.html 

So I checked at the disk level:

5.6:
# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.7401 s, 83.4 MB/s

# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 29.1527 s, 73.7 MB/s

5.5:
# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 19.9214 seconds, 108 MB/s

# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 20.0243 seconds, 107 MB/s



Here the disks on the 5.6 server are slower, regardless of MySQL. So in this case, look to fixing the disk speed: MySQL was running fine, and will continue to.
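
If you'd rather corroborate an I/O suspicion from inside MySQL instead of with dd, the InnoDB status counters are a reasonable first look. This is just a sketch; absolute numbers depend entirely on the workload:

-- Compare these between the two servers under the same test load.
-- Growing pending values point at a storage bottleneck,
-- not a MySQL regression.
SHOW GLOBAL STATUS LIKE 'Innodb_data_pending%';
SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';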

PHP, MySQL & React REST API Tutorial with Example Form

Throughout this tutorial, we'll be using PHP with React and Axios to create a simple REST API application with CRUD operations. In the backend we'll use PHP with a MySQL database. The PHP backend will expose a set of RESTful API endpoints, so we'll be using the Axios library for making Ajax calls from the React.js UI. We'll also see how to handle forms in React and how to send multipart form data with Axios using FormData.

In this tutorial, we are going to integrate React with PHP using Babel in the browser and a <script> tag. As such, we'll serve the React application from PHP, so we don't need to enable CORS in our server since both the backend and frontend are served from the same domain. We'll see the other approach of using two separate servers for the frontend and backend apps in another tutorial, which will use create-react-app to create the React project.

Prerequisites

You must have the following prerequisites in order to follow this tutorial comfortably:

  • Knowledge of PHP and MySQL,
  • Knowledge of JavaScript and React,
  • PHP and MySQL installed on your development machine.

Creating the MySQL Database

Let's start by creating a MySQL database using the MySQL client (this usually gets installed when you install the MySQL server). Open a new terminal and run the following command:

mysql -u root -p

You'll be asked for your MySQL password. Make sure to submit the correct password and press Enter on your keyboard to confirm. Next, you'll be presented with the MySQL client CLI. You can create a database using the following SQL statement:

mysql> create database reactdb;

Next, let's add a SQL table in our database. Simply run the following SQL instructions:

mysql> use reactdb;
mysql> CREATE TABLE `contacts` (
    `id` int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
    `name` varchar(100) NOT NULL,
    `email` varchar(100) NOT NULL,
    `city` varchar(100),
    `country` varchar(100),
    `job` varchar(100)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

We first run the use SQL instruction to select the reactdb database as our current working database. Next, we invoke the CREATE TABLE <name_of_table> statement to create a SQL table that has the following columns:

  • id: A unique identifier for the person,
  • name: The name of the person,
  • email: The email for the person,
  • city: The city of the person,
  • country: The country of the person,
  • job: The job occupied by the person.

Basically, this is a simple database for managing your contacts data.

Creating The PHP & MySQL RESTful API

After creating the MySQL database, table and columns, let's now proceed to create a RESTful API interface exposed by a PHP application that runs CRUD operations against our previously-created MySQL table. Head back to your terminal and start by creating a directory for your project's files:

$ cd ~
$ mkdir php-react-rest-api-crud

Create a REST API Endpoint

Now, let's create an endpoint that provides contacts data in a JSON format to our React frontend. Create an api folder inside your project's root folder:

$ mkdir api

Navigate inside the api folder, create a contacts.php file and add the following content:

<?php
$host = "localhost";
$user = "root";
$password = "YOUR_MYSQL_DB_PASSWORD";
$dbname = "reactdb";
$id = '';

$con = mysqli_connect($host, $user, $password, $dbname);

$method = $_SERVER['REQUEST_METHOD'];
$request = explode('/', trim($_SERVER['PATH_INFO'],'/'));

if (!$con) {
  die("Connection failed: " . mysqli_connect_error());
}

switch ($method) {
  case 'GET':
    $id = $_GET['id'];
    // NOTE: interpolating request values straight into SQL is vulnerable
    // to SQL injection; use prepared statements in real applications.
    $sql = "select * from contacts".($id?" where id=$id":'');
    break;
  case 'POST':
    $name = $_POST["name"];
    $email = $_POST["email"];
    $country = $_POST["country"];
    $city = $_POST["city"];
    $job = $_POST["job"];
    $sql = "insert into contacts (name, email, city, country, job) values ('$name', '$email', '$city', '$country', '$job')";
    break;
}

// run SQL statement
$result = mysqli_query($con,$sql);

// die if SQL statement failed
if (!$result) {
  http_response_code(404);
  die(mysqli_error($con));
}

if ($method == 'GET') {
  if (!$id) echo '[';
  for ($i=0 ; $i<mysqli_num_rows($result) ; $i++) {
    echo ($i>0?',':'').json_encode(mysqli_fetch_object($result));
  }
  if (!$id) echo ']';
} elseif ($method == 'POST') {
  echo json_encode($result);
} else {
  echo mysqli_affected_rows($con);
}

$con->close();

We first use the MySQLi PHP extension to create a connection to our MySQL database using the mysqli_connect() method. Next, we use $_SERVER['REQUEST_METHOD'] to retrieve the request method sent from the Axios client. If the request is GET, we create a SQL SELECT query. If the request is POST, we create a SQL INSERT query with the post data retrieved from the $_POST object. After that, we use the mysqli_query() method to run the query against our database table, either to get or to create data. Finally, we use the json_encode() method to encode the data as JSON and send it to the client.

You can serve your PHP application using the following command from the root of your project:

$ php -S 127.0.0.1:8080

Create the React App

Next, navigate to the project's root folder and add an index.php file:

$ touch index.php

Next, open the index.php file and add the following code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>PHP | MySQL | React.js | Axios Example</title>
  <script src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
  <script src="https://unpkg.com/react-dom@16/umd/react-dom.production.min.js"></script>
  <!-- Load Babel Compiler -->
  <script src="https://unpkg.com/babel-standalone@6.15.0/babel.min.js"></script>
  <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
</head>
<body>
</body>
</html>

We simply include the React, ReactDOM, Babel and Axios libraries from their CDNs. Next, in index.php, in the <body> tag, add a <div> tag where you can mount your React application:

<div id='root'></div>

Next, add a <script> tag of the text/babel type to create our React app:

<body>
  <div id='root'></div>
  <script type="text/babel">
    class App extends React.Component {
      state = {
        contacts: []
      }
      render() {
        return (
          <React.Fragment>
            <h1>Contact Management</h1>
            <table border='1' width='100%'>
              <tr>
                <th>Name</th>
                <th>Email</th>
                <th>Country</th>
                <th>City</th>
                <th>Job</th>
              </tr>
              {this.state.contacts.map((contact) => (
                <tr>
                  <td>{ contact.name }</td>
                  <td>{ contact.email }</td>
                  <td>{ contact.country }</td>
                  <td>{ contact.city }</td>
                  <td>{ contact.job }</td>
                </tr>
              ))}
            </table>
          </React.Fragment>
        );
      }
    }
    ReactDOM.render(<App />, document.getElementById('root'));
  </script>
</body>

We first create a React component called App by extending the React.Component class. Next, we add a contacts variable to the state object, which will be used to hold the contacts after we fetch them from the PHP REST endpoint using Axios. Next, we define a React render() method which returns a fragment that wraps the <h1> header and <table> elements. In the table we loop through this.state.contacts and display a <tr> for each contact's information. Finally, we use the render() method of ReactDOM to actually mount our App component to the DOM.

The contacts array is empty. Let's use the Axios client to send a GET request to fetch data from the /api/contacts.php endpoint exposed by the PHP server. In the App component, add a componentDidMount() life-cycle method, which gets called when the component is mounted in the DOM, and inside it add the code to fetch the data:

componentDidMount() {
  const url = '/api/contacts.php'
  axios.get(url).then(response => response.data)
  .then((data) => {
    this.setState({ contacts: data })
    console.log(this.state.contacts)
  })
}

Create a React Form for Submitting Data

Let's now add a React component that displays a form and handles submitting the form to the PHP backend. In your index.php file, add the following component before the App component:

class ContactForm extends React.Component {
  state = {
    name: '',
    email: '',
    country: '',
    city: '',
    job: '',
  }
  handleFormSubmit( event ) {
    event.preventDefault();
    console.log(this.state);
  }
  render(){
    return (
      <form>
        <label>Name</label>
        <input type="text" name="name" value={this.state.name}
          onChange={e => this.setState({ name: e.target.value })}/>
        <label>Email</label>
        <input type="email" name="email" value={this.state.email}
          onChange={e => this.setState({ email: e.target.value })}/>
        <label>Country</label>
        <input type="text" name="country" value={this.state.country}
          onChange={e => this.setState({ country: e.target.value })}/>
        <label>City</label>
        <input type="text" name="city" value={this.state.city}
          onChange={e => this.setState({ city: e.target.value })}/>
        <label>Job</label>
        <input type="text" name="job" value={this.state.job}
          onChange={e => this.setState({ job: e.target.value })}/>
        <input type="submit" onClick={e => this.handleFormSubmit(e)} value="Create Contact" />
      </form>
    );
  }
}

Next, include it in the App component to display it below the table:

class App extends React.Component {
  // [...]
  render() {
    return (
      <React.Fragment>
        {/* [...] */}
        <ContactForm />
      </React.Fragment>
    );
  }
}

Now let's change the handleFormSubmit() method of ContactForm to actually send the form data, using Axios and FormData, to our PHP REST endpoint, which takes care of saving it in the MySQL database:

handleFormSubmit( event ) {
  event.preventDefault();
  let formData = new FormData();
  formData.append('name', this.state.name)
  formData.append('email', this.state.email)
  formData.append('city', this.state.city)
  formData.append('country', this.state.country)
  formData.append('job', this.state.job)
  axios({
    method: 'post',
    url: '/api/contacts.php',
    data: formData,
    config: { headers: {'Content-Type': 'multipart/form-data' }}
  })
  .then(function (response) {
    //handle success
    console.log(response)
  })
  .catch(function (response) {
    //handle error
    console.log(response)
  });
}

Conclusion

In this tutorial, we've seen how to use PHP with MySQL, React and Axios to create a simple REST API CRUD example application. We have also seen how to handle forms in React and how to submit data to the server.

Removal of implicit and explicit sorting for GROUP BY


In MySQL, historically GROUP BY was used to provide sorting as well. If a query specified GROUP BY, the result was sorted as if ORDER BY was present in the query.

mysql-5.7> CREATE TABLE t (id INTEGER,  cnt INTEGER);
Query OK, 0 rows affected (0.03 sec)

mysql-5.7> INSERT INTO t VALUES (4,1),(3,2),(1,4),(2,2),(1,1),(1,5),(2,6),(2,1),(1,3),(3,4),(4,5),(3,6);
Query OK, 12 rows affected (0.02 sec)
Records: 12  Duplicates: 0  Warnings: 0

mysql-5.7> SELECT id, SUM(cnt) FROM t GROUP BY id;
+------+----------+
| id   | SUM(cnt) |
+------+----------+
|    1 |       13 |
|    2 |        9 |
|    3 |       12 |
|    4 |        6 |
+------+----------+
4 rows in set (0.00 sec)

MySQL here implicitly sorts the results from GROUP BY (i.e. as if the query had specified ORDER BY id).
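
In MySQL 8.0 this implicit sort has been removed, so a query that relies on sorted GROUP BY output must request the order explicitly. Using the same table as above:

mysql-8.0> SELECT id, SUM(cnt) FROM t GROUP BY id ORDER BY id;

Without the ORDER BY, MySQL 8.0 is free to return the groups in any order.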

How to Add More Nodes to an Existing ProxySQL Cluster


In my previous post, some time ago, I wrote about the new cluster feature of ProxySQL. For that post, we were working with three nodes, now we’ll work with even more! If you’ve installed one ProxySQL per application instance and would like to work up to more, then this post is for you. If this is new to you, though, read my earlier post first for more context.

Check the image below to understand the structure of “one ProxySQL per application”. This means you have ProxySQL installed, and your application (Java, PHP, Apache server etc) in the same VM (virtual machine).

Schematic of a ProxySQL cluster

Having taken a look at that you probably have a few questions, such as:

  • What happens if you have 20 nodes synced and you now need to add 100 or more nodes?
  • How can I sync the new nodes without introducing errors?

Don’t be scared, it’s a simple process.

Remember there are only four tables which can be synced over the cluster. These tables are:

  • mysql_query_rules
  • mysql_servers
  • mysql_users
  • proxysql_servers

From a new ProxySQL cluster installation

After a fresh installation all those four tables are empty. That means if we configure it as a new node in a cluster, all rows in those tables will be copied instantly.

Generally the installation is straightforward.

Now ProxySQL is up and all the configuration is at its default settings.

Connect to the ProxySQL console:

mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt='\u (\d)>'

Now there are two steps remaining:

  • Configure global_variables table
  • Configure proxysql_servers table

How to configure the global_variables table

Below is an example showing the minimal parameters to set – you can change the username and password according to your needs.

You can copy the usernames and passwords used by the current cluster and its monitoring process by running the next command on a node of the current cluster:

select * from global_variables where variable_name in ('admin-admin_credentials', 'admin-cluster_password', 'mysql-monitor_password', 'admin-cluster_username', 'mysql-monitor_username');

You can update the parameters of the current node by using this as a template:

update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-admin_credentials';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_password';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_password';
update global_variables set variable_value=1000 where variable_name='admin-cluster_check_interval_ms';
update global_variables set variable_value=10 where variable_name='admin-cluster_check_status_frequency';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_query_rules_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_servers_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_users_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_proxysql_servers_save_to_disk';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_query_rules_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_servers_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_users_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_proxysql_servers_diffs_before_sync';
load admin variables to RUNTIME;
save admin variables to disk;
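
To confirm the values were actually loaded to runtime, you can query the runtime_global_variables table on the ProxySQL admin interface. Here is a quick sketch:

-- Values here reflect what the running process is using,
-- not just what was saved to disk.
SELECT variable_name, variable_value
FROM runtime_global_variables
WHERE variable_name LIKE 'admin-cluster%';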

Configure "proxysql_servers" table

At this point, keep in mind that you will need to INSERT into this table "all" the IPs of the ProxySQL nodes, including the new one.

Why so? Because this table will get a new epoch time on the new node, and the sync process will overwrite the proxysql_servers table on the rest of the nodes with the contents of the most recent update.

In our example, let’s assume the IP of the new node is 10.0.1.3 (i.e. node 3, below)

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.1',6032,0,'p1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.2',6032,0,'p2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.3',6032,0,'p3');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

If you already have many ProxySQL servers in the cluster, you can run mysqldump as this will help speed up this process.

If that’s the case, you need to find the most up to date node and use mysqldump to export the data from the proxysql_servers table.

How would you find this node? There are a few stats tables in ProxySQL, and in this case we can use two of these to help identify the right node.

SELECT stats_proxysql_servers_checksums.hostname,
       stats_proxysql_servers_metrics.Uptime_s,
       stats_proxysql_servers_checksums.port,
       stats_proxysql_servers_checksums.name,
       stats_proxysql_servers_checksums.version,
       FROM_UNIXTIME(stats_proxysql_servers_checksums.epoch) epoch,
       stats_proxysql_servers_checksums.checksum,
       stats_proxysql_servers_checksums.diff_check
FROM stats_proxysql_servers_metrics
JOIN stats_proxysql_servers_checksums
  ON stats_proxysql_servers_checksums.hostname = stats_proxysql_servers_metrics.hostname
WHERE stats_proxysql_servers_metrics.Uptime_s > 0
ORDER BY epoch DESC;

Here’s an example output

+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| hostname   | Uptime_s | port | name              | version | epoch               | checksum           | diff_check |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| 10.0.1.1   | 1190     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:21 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:18 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_query_rules | 1       | 2019-04-04 12:00:07 | 0xBC63D734643857A5 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_servers     | 1       | 2019-04-04 12:00:07 | 0x0000000000000000 | 0          |
| 10.0.1.1   | 1190     | 6032 | proxysql_servers  | 1       | 2019-04-04 12:00:07 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_query_rules | 1       | 2019-04-04 11:43:13 | 0xBC63D734643857A5 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_servers     | 1       | 2019-04-04 11:43:13 | 0x0000000000000000 | 0          |
| 10.0.1.2   | 2210     | 6032 | proxysql_servers  | 1       | 2019-04-04 11:43:13 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+

For each table, we can see different versions, each related to table changes. In this case, we need to look for the latest epoch time for the table "proxysql_servers". In this example, above, we can see that the server with the IP address 10.0.1.1 has the most recent update. We can now run the next command to get a backup of all the IP data from the current cluster:

mysqldump --host=127.0.0.1 --port=6032 --skip-opt --no-create-info --no-tablespaces --skip-triggers --skip-events main proxysql_servers > proxysql_servers.sql

Now, copy the output to the new node and import into proxysql_servers table. Here’s an example: the data has been exported to the file proxysql_servers.sql which we’ll now load to the new node:

source proxysql_servers.sql
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

You can run the next SELECTs to verify that the new node has the data from the current cluster, and in doing so ensure that the nodes are in sync as expected:

select * from mysql_query_rules;
select * from mysql_servers;
select * from mysql_users ;
select * from proxysql_servers;

How can we check if there are errors in the synchronization process?

Run the next command to fetch data from the table stats_proxysql_servers_checksums:

SELECT hostname, port, name, version, FROM_UNIXTIME(epoch) epoch, checksum, diff_check, DATETIME('NOW') FROM stats_proxysql_servers_checksums ORDER BY epoch;

Using our example, here’s the data saved in stats_proxysql_servers_checksums for the proxysql_servers table

admin ((none))>SELECT hostname, port, name, version, FROM_UNIXTIME(epoch) epoch, checksum, diff_check, DATETIME('NOW') FROM stats_proxysql_servers_checksums ORDER BY epoch;
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+
| hostname   | port | name              | version | epoch               | checksum           | diff_check | DATETIME('NOW')     |
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+
...
| 10.0.1.1   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:36:17 | 0xC7D7443B96FC2A94 | 92         | 2019-03-25 13:38:31 |
| 10.0.1.2   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:36:34 | 0xC7D7443B96FC2A94 | 92         | 2019-03-25 13:38:31 |
...
| 10.0.1.3   | 6032 | proxysql_servers  | 2       | 2019-03-25 13:37:00 | 0x233638C097DE6190 | 0          | 2019-03-25 13:38:31 |
...
+------------+------+-------------------+---------+---------------------+--------------------+------------+---------------------+

As we can see in the diff_check column, there are differences in the checksums across nodes: our new node (IP 10.0.1.3) has the checksum 0x233638C097DE6190, whereas nodes 1 and 2 both show 0xC7D7443B96FC2A94.

This indicates that these nodes have different data. Is this correct? Well, yes, in this case, since in our example nodes 1 and 2 were created on the current ProxySQL cluster.

How do you fix this?

Well, you connect to the node 1 and INSERT the data for node 3 (the new node).

This change will propagate the changes over the current cluster – node 2 and node 3 – and will update the table proxysql_servers.

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.3',6032,0,'p3');
LOAD PROXYSQL SERVERS TO RUNTIME;

What happens if the new nodes already exist in the proxysql_servers table of the current cluster?

You can add the new ProxySQL node IPs to the current cluster before you start installing and configuring the new nodes.

In fact, this works fine and without any issues. The steps I went through, above, are the same, but when you check for differences in the table stats_proxysql_servers_checksums you should find there are none, and that the “diff” column should be displayed as 0.

How does ProxySQL monitor other ProxySQL nodes to sync locally?

Here’s a link to the explanation from the ProxySQL wiki page https://github.com/sysown/proxysql/wiki/ProxySQL-Cluster

Proxies monitor each other, so when the checksum of a configuration changes they immediately recognize that the configuration has changed. It's possible that the remote peer's configuration and its own configuration were changed at the same time, or within a short period of time. So the proxy must also check its own status.

When proxies find differences:

  • If its own version is 1, find the peer with version > 1 and with the highest epoch, and sync immediately
  • If its own version is greater than 1, it starts counting how many checks the configurations have differed for
    • when the number of checks in which they differ is greater than cluster_<name>_diffs_before_sync, and cluster_<name>_diffs_before_sync itself is greater than 0, find the peer with version > 1 and with the highest epoch, and sync immediately.

Note: it is possible that a difference is detected against one node, but the sync is performed against a different node. Since ProxySQL bases the sync on the node with the highest epoch, it is expected that all the nodes will converge.

To perform the syncing process:

These next steps run automatically when other nodes detect a change: this is an example for the table mysql_users

The same connection that's used to perform the health check is used to execute a series of SELECT statements in the form of SELECT _list_of_columns_ FROM runtime_module:

SELECT username, password, active, use_ssl, default_hostgroup, default_schema, schema_locked, transaction_persistent, fast_forward, backend, frontend, max_connections FROM runtime_mysql_users;

In the node where it detected a difference in the checksum, it will run the next statements automatically:

DELETE FROM mysql_users;
INSERT INTO mysql_users(username, password, active, use_ssl, default_hostgroup, default_schema, schema_locked, transaction_persistent, fast_forward, backend, frontend, max_connections) VALUES ('user_app','secret_password', 1, 0, 10, '', 0, 1, 0, 0, 1, 1000);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Be careful:

After you've added the new ProxySQL node to the cluster, all four tables will sync immediately, using the node with the highest epoch time as the basis for the sync. Any changes to the tables on the nodes previously added to the cluster will propagate and overwrite the whole cluster.

Summary

ProxySQL cluster is a powerful tool, and it’s being used more and more in production environments. If you haven’t tried it before, I’d recommend that you start testing ProxySQL cluster with a view to improving your application’s service. Hope you found this post helpful!

 

SQL Where Clause Example | SQL Where Query Tutorial


SQL Where Clause Example | SQL Where Query Tutorial is today's topic. The WHERE clause is used to filter database records: it extracts only those records that fulfill a specified condition. The SQL WHERE clause is used to specify the condition while fetching data from a single table or while joining multiple tables. Only when the given condition is satisfied are the matching rows returned from the table. You should use a WHERE clause to filter the records and fetch only the necessary ones.

SQL Where Clause Example

The WHERE clause is not only used in the SELECT statement; it is also used in UPDATE and DELETE statements. The syntax of a SELECT statement with a WHERE clause is the following.

SELECT column1, column2, columnN 
FROM table_name
WHERE [condition]

“SELECT column1, column2, column3 FROM tableName” is the standard SELECT statement.

"WHERE" is the keyword that restricts the select query's result set, and "condition" is the filter to be applied to the results. The filter can be a range, a single value, or a subquery.
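
For instance, here is a quick sketch of the subquery case, using the Apps table from the examples further below (the column names are assumed from those examples).

Select * from Apps
Where AppPrice > (Select AVG(AppPrice) from Apps)

This returns only the apps priced above the table's average price.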

Now, we will head for the example, but before that, you might check out my Create table in SQL and How to insert data in database posts.

If you have followed that tutorial then in that tutorial, I have created a table and inserted some records.

Now, we fetch the records based on the WHERE clause.

Okay, now first let’s fetch all records.

For that, I need to write the following query.

SELECT * FROM Apps

See the following output.

SQL WHERE Query

SQL requires single quotes around text values. However, numeric fields should not be enclosed in quotes.

Okay, now let’s use the WHERE query to filter the record. Let’s fetch the record by CreatorName column.

Write the following query.

Select * from Apps
Where CreatorName = 'Krunal'

The output is following.

Let’s fetch the row by filtering AppCategory.

Select * from Apps
Where AppCategory = 'Investment'

The output is following.

You can specify the condition using the comparison or logical operators like >, <, =, LIKE, NOT, etc.

SQL Where Statement with Logical Operator

We can use a logical operator with the WHERE clause to filter the data by comparing values.

Let's say we fetch the rows whose AppPrice is greater than $70. See the following query.

Select * from Apps
Where AppPrice > 70

The output is following.

Operators in The WHERE Clause

The following operators can be used in a WHERE clause.

Operator   Description
=          Equal
>          Greater than
<          Less than
>=         Greater than or equal
<=         Less than or equal
<>         Not equal (in some versions of SQL this operator may be written as !=)
BETWEEN    Between a certain range
LIKE       Search for a pattern
IN         Specify multiple possible values for a column
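
Two of the operators above, BETWEEN and LIKE, are not demonstrated below, so here is a quick sketch of both against the same Apps table (the values are illustrative).

Select * from Apps
Where AppPrice BETWEEN 40 AND 80

Select * from Apps
Where CreatorName LIKE 'K%'

The first returns apps priced in the 40 to 80 range; the second returns apps whose CreatorName starts with the letter K.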

WHERE clause combined with AND LOGICAL Operator

The WHERE clause, when used with the AND operator, only returns rows where all the specified filter criteria are met. See the following query.

Select CreatorName from Apps
Where AppPrice > 70 AND AppName = 'Moneycontrol'

So, we are fetching only the CreatorName for rows where AppPrice > 70 and AppName = 'Moneycontrol'. See the below output.

WHERE clause combined with IN Keyword

The WHERE clause, when used together with the IN keyword, only affects the rows whose values match the list of values provided in the IN keyword. IN helps to reduce the number of OR clauses you may otherwise have to use. See the following query.

Select * from Apps
Where AppPrice IN (50, 60, 110)

In the above query, we are fetching those records whose AppPrice is either 50, 60, or 110. If an AppPrice matches in the database, then that row appears in the output.

In our case, only 50 and 60 match; 110 is not in the database. So, two rows will be returned.

WHERE clause combined with OR LOGICAL Operator

The WHERE clause, when used with the OR operator, returns rows where any of the specified filter criteria are met. See the following query.

Select * from Apps
Where AppPrice = 50 OR AppCategory = 'Fashion'

In the above query, either AppPrice = 50 holds or AppCategory = 'Fashion' holds.

If both are true, it also gives the result. See the output.

WHERE clause combined with NOT IN Keyword

The WHERE clause used with the NOT IN keyword returns only the rows whose values do not match the list of values provided in the NOT IN keyword. See the following query.

Select * from Apps
Where AppPrice NOT IN(40, 50, 60)

So, it will return those rows whose AppPrice is not 40, 50, or 60. It will give us the remaining rows.

The SQL WHERE statement is used to restrict the number of rows affected by a SELECT, UPDATE or DELETE query. The WHERE clause can be used in conjunction with logical operators such as AND and OR, and comparison operators such as equal to (=). When used with the AND logical operator, all the criteria must be met. When used with the OR logical operator, any of the criteria must be met. The IN keyword is used to select rows matching a list of values.
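
Since the WHERE clause also restricts UPDATE and DELETE, here is a minimal sketch of both against the same Apps table (the new price and the category value are illustrative).

Update Apps
Set AppPrice = 75
Where AppName = 'Moneycontrol'

Delete from Apps
Where AppCategory = 'Fashion'

Without the WHERE clause, the UPDATE would change every row and the DELETE would empty the table.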

Finally, SQL Where Clause Example | SQL Where Query Tutorial is over.

