MySQL InnoDB Cluster Installation – A Technical Case Study

Reinforcing the database fault tolerance of a distributed MySQL database


Data is the currency of today’s web, mobile, social, enterprise and cloud applications, and making sure it is always available is a top priority for any digitally enabled organisation. For client-facing systems, even minutes of downtime result in significant loss of revenue and reputation. For internal data systems, any downtime can seriously affect productivity, the ability to make data-informed decisions and potentially even the viability of operations.

Data sharing methods are not new, but the next generation of web, cloud and communications services, with their requirements for uncompromising scalability, uptime and agility, has introduced new database challenges.

In this technical case study, we’ll guide you through the step-by-step process of installing and setting up a MySQL InnoDB Cluster and connecting a MySQL Router in the context of developing a big data-powered application for solar energy plant monitoring and management.

The context – big data-based PV energy monitoring & management solution

Our client is a pioneer and leading company in PV (photovoltaic) monitoring, smart energy and feed-in management across renewable power and heating. The company offers professional monitoring and management solutions for PV plants across the globe, allowing individual plants to be managed in fleets and to be sorted and viewed according to each user group’s requirements.

Detailed error analysis and quick problem-solving guard against plant yield losses, securing the investment. Users benefit from time and budget efficiencies.

Seamless communication between WEB, data loggers and the individual components ensures the timely identification of any problems, protecting yield levels. Easy-to-use user and system management offers extensive options for filtering, grouping and views tailored exactly to the needs and wishes of the respective user, be it by region, system type, service contract or any other specification.

Structured presentation of data means the user never loses the overview. And intelligent, real-time compilation of individual systems information allows for rapid error analysis, faster troubleshooting and, ultimately, a smoother workflow and yield optimisation.

PV Power Plant Monitoring and Management System

Database Reliability Is Essential

The nature of our client’s PV plant monitoring and management system means a secure and reliable flow of data is absolutely vital. Part of our job was to ensure exactly that, and the decision was made to do so using a MySQL InnoDB Cluster.

The solution? MySQL InnoDB cluster

MySQL InnoDB Cluster is Oracle’s high availability solution built on top of MySQL, with multi-master capabilities and automatic failover. It allows the database to be replicated in a way that increases fault tolerance: the failure of one part of the infrastructure is never fatal, because replicated copies of the data can be pulled from another part of the infrastructure.

Data duplication reinforces fault tolerance of a distributed MySQL database

When building highly available and fault-tolerant systems, we create points in the infrastructure where data is duplicated. Duplicating data increases its availability in the event of problems with part of the infrastructure and can speed up the process of writing or reading data.

Data duplication is most common across the following sections of infrastructure:

  • load balancers
  • storage server
  • database server
  • backup server

This technical case study addresses the question of how to organise a distributed MySQL database for fault tolerance by increasing the number of available database replicas.

Our goal was to achieve this purely through MySQL without resorting to third-party software.

The result was a cluster of 4 servers: 1 entry point (MySQL Router) and 3 data storage servers (1 primary and 2 secondaries).

MySQL InnoDB Cluster Diagram

MySQL InnoDB Cluster

For the installation we used 3 servers for MySQL itself and 1 server as the entry point to our cluster – the MySQL Router.

MySQL servers:

Shell

10.10.10.1
10.10.10.2
10.10.10.3

Router server:

Shell 

10.10.10.10

OS: CentOS 7 with SELinux and firewalld disabled. We will use MySQL version 5.7.

Add the repository:

Shell

rpm -i https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm

Disable the version 8 repository, enable 5.7 and install the packages:

Shell

yum install yum-utils
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
yum repolist
yum install mysql-community-server mysql-shell

Edit /etc/my.cnf:
Shell


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
bind-address=0.0.0.0
port=3301

# Replication part
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
plugin-load = group_replication.so

# Group replication part
transaction_write_set_extraction=XXHASH64
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = 10.10.10.1:33011
loose-group_replication_bootstrap_group = OFF
report_port = 3301
report_host = 10.10.10.1

We bind MySQL to port 3301 for client and cluster connections; in addition, the group replication members communicate with each other over port 33011, set in loose-group_replication_local_address.

We then bring /etc/my.cnf to this form on all three cluster servers, changing the IP address in loose-group_replication_local_address and report_host, as well as server_id, which must be unique for each server.
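
For example, on the second server (a sketch based on the addresses above), only these per-server values change:

Shell

# /etc/my.cnf on 10.10.10.2 – only the per-server values differ
server_id=2
loose-group_replication_local_address = 10.10.10.2:33011
report_host = 10.10.10.2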

Then start MySQL and carry out the initial set-up:

Shell

systemctl start mysqld

# the temporary root password is written to the MySQL log
grep 'password' /var/log/mysqld.log

mysql_secure_installation

We now start creating the cluster. First, we create a cladmin user on all 3 servers; for this we use the MySQL Shell (mysqlsh) console:

Shell


[root@10.10.10.1 ~] mysqlsh

> \c 127.0.0.1:3301

> dba.configureLocalInstance("127.0.0.1:3301", {mycnfPath: "/etc/my.cnf", clusterAdmin: "cladmin", clusterAdminPassword: "StrongPassword!#1"})

> \c cladmin@10.10.10.1:3301

> dba.checkInstanceConfiguration()

> cl=dba.createCluster('TestCluster', {ipWhitelist: '10.10.10.1,10.10.10.2,10.10.10.3'})

> dba.configureLocalInstance()

> cl.status()

The output of cl.status() should look something like this:

Shell


MySQL  10.10.10.1:3301 ssl  JS > cluster.status()
{
    "clusterName": "TestCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "10.10.10.1:3301",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "10.10.10.1:3301": {
                "address": "10.10.10.1:3301",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql://cladmin@10.10.10.1:3301"
}

Whenever we make changes to the cluster configuration, be sure to run the command:

Shell

> dba.configureLocalInstance()

We perform the same actions on all 3 servers, not forgetting to set the correct IP and ID in /etc/my.cnf (server_id, loose-group_replication_local_address, report_host).

Once the previous steps have been performed on the nodes being joined (the 2nd and 3rd servers in our example), we execute:

Shell


[root@10.10.10.2 ~] mysql -p

> set GLOBAL group_replication_allow_local_disjoint_gtids_join=ON;


Shell


[root@10.10.10.2 ~] mysqlsh

> \c 127.0.0.1:3301

> dba.configureLocalInstance("127.0.0.1:3301", {mycnfPath: "/etc/my.cnf", clusterAdmin: "cladmin", clusterAdminPassword: "StrongPassword!#1"})

> \c cladmin@10.10.10.2:3301

> dba.checkInstanceConfiguration()

After this procedure on the second server, we return to the first server:

Shell


[root@10.10.10.1 ~] mysqlsh --uri cladmin@10.10.10.1:3301 --cluster

> cluster.addInstance('cladmin@10.10.10.2:3301', {ipWhitelist: '10.10.10.1,10.10.10.2,10.10.10.3'})

> cluster.status()

> dba.configureLocalInstance()

Do not forget to execute dba.configureLocalInstance() on the second server as well.
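
A minimal sketch of that step on the second server (assuming the same cladmin account created above):

Shell

[root@10.10.10.2 ~] mysqlsh

> \c cladmin@10.10.10.2:3301

> dba.configureLocalInstance()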

The output of cluster.status() should be something like this:

Shell


{
    "clusterName": "TestCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "10.10.10.1:3301",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "10.10.10.1:3301": {
                "address": "10.10.10.1:3301",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "10.10.10.2:3301": {
                "address": "10.10.10.2:3301",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql://cladmin@10.10.10.1:3301"
}

Add the 3rd server in the same way.
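
For reference, joining the third server is a repeat of the same sketch, reusing the cladmin account and whitelist from above (and again running dba.configureLocalInstance() on the newly added server afterwards):

Shell

[root@10.10.10.1 ~] mysqlsh --uri cladmin@10.10.10.1:3301 --cluster

> cluster.addInstance('cladmin@10.10.10.3:3301', {ipWhitelist: '10.10.10.1,10.10.10.2,10.10.10.3'})

> cluster.status()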

If we need to expand the cluster to 5 or more servers, we will need to edit the whitelist. This can be done in mysqlsh (MySQL Shell) by switching to SQL mode, or directly in /etc/my.cnf (not forgetting to restart the mysqld daemons); both approaches are shown below.

An example using the mysqlsh console (in turn on each of the R/W nodes):

Shell


[root@10.10.10.1 ~] mysqlsh --uri cladmin@10.10.10.1:3301 --cluster

> \sql

> STOP GROUP_REPLICATION;

> SET GLOBAL group_replication_ip_whitelist="10.10.10.1,10.10.10.2,10.10.10.3,10.10.10.4,10.10.10.5";

> START GROUP_REPLICATION;
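
The equivalent change directly in /etc/my.cnf would be a single line on each cluster member, followed by a restart of mysqld (a sketch, keeping the loose- prefix used in the configuration above):

Shell

# /etc/my.cnf on every cluster member
loose-group_replication_ip_whitelist = 10.10.10.1,10.10.10.2,10.10.10.3,10.10.10.4,10.10.10.5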

Once all 3 servers have been added, cluster.status() returns the following:

Shell


[root@10.10.10.1 ~] mysqlsh --uri cladmin@10.10.10.1:3301 --cluster

MySQL  10.10.10.1:3301 ssl  JS > cluster.status()
{
    "clusterName": "TestCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "10.10.10.1:3301",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "10.10.10.1:3301": {
                "address": "10.10.10.1:3301",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "10.10.10.2:3301": {
                "address": "10.10.10.2:3301",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "10.10.10.3:3301": {
                "address": "10.10.10.3:3301",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "10.10.10.1:3301"
}

If the cluster falls apart down to a single server, that server will need to be started with the parameter loose-group_replication_bootstrap_group = ON in /etc/my.cnf. After it has started, the parameter must be turned off again, otherwise this server will always split off from the cluster and work on its own.
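
A minimal sketch of that recovery sequence on the surviving server (assuming the configuration shown earlier):

Shell

# 1. In /etc/my.cnf on the surviving server set:
#      loose-group_replication_bootstrap_group = ON
systemctl restart mysqld

# 2. Once the group is bootstrapped, revert the parameter to OFF in /etc/my.cnf
#    and also switch it off at runtime:
mysql -p -e "SET GLOBAL group_replication_bootstrap_group=OFF;"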

Configure MySQL Router

Add the repository and install the package:

Shell


[root@mysql-router ~] rpm -i https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm

[root@mysql-router ~] yum install mysql-router

Create a directory for our cluster configuration files:

Shell


[root@mysql-router ~] mkdir /etc/mysqlrouter/mysql-router

We will do the configuration using bootstrap; when specifying the connection address, you must use the address of the current R/W server:

Shell


[root@mysql-router ~] mysqlrouter --bootstrap cladmin@10.10.10.1:3301 --directory /etc/mysqlrouter/mysql-router --user=root

The output of a successful configuration:

Shell


Reconfiguring MySQL Router instance at '/etc/mysqlrouter/mysql-router'...

Checking for old Router accounts

Creating account mysql_router1_yxkccoosbuoc@'%'

MySQL Router has now been configured for the InnoDB cluster 'TestCluster'.

 

The following connection information can be used to connect to the cluster after MySQL Router has been started with the generated configuration.

Classic MySQL protocol connections to cluster 'TestCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

X protocol connections to cluster 'TestCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections:  localhost:64470

Port 6446 – for RW connections.

Port 6447 – for RO connections.
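
An application (or a quick manual test) connects through the router rather than to any cluster member directly; a sketch, assuming the cladmin account created earlier:

Shell

# read/write connection through the router (10.10.10.10)
mysql -h 10.10.10.10 -P 6446 -u cladmin -p

# read-only connection
mysql -h 10.10.10.10 -P 6447 -u cladmin -p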

Create a systemd service file to start the mysql router with our generated configuration files:

Shell


[root@mysql-router ~] nano /etc/systemd/system/mysqlrouter-cluster.service
Shell


[Unit]
Description=MySQL Router
After=syslog.target
After=network.target

[Service]
Type=simple
User=root
Group=root
PIDFile=/etc/mysqlrouter/mysql-router/mysqlrouter.pid
ExecStart=/bin/bash -c "/usr/bin/mysqlrouter -c /etc/mysqlrouter/mysql-router/mysqlrouter.conf"
Restart=on-failure
RestartPreventExitStatus=1
PrivateTmp=true

[Install]
WantedBy=multi-user.target

 

Enable it at startup and start the router:

Shell


systemctl daemon-reload

systemctl enable mysqlrouter-cluster

systemctl start mysqlrouter-cluster

systemctl status mysqlrouter-cluster

Check

MySQL InnoDB Cluster setup check
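
As a quick check (a sketch, again assuming the cladmin account), we can confirm that the two router ports land on cluster members with different roles:

Shell

# the RW port should report the primary (10.10.10.1) with read_only = 0
mysql -h 10.10.10.10 -P 6446 -u cladmin -p -e "SELECT @@hostname, @@port, @@read_only;"

# the RO port should report one of the secondaries with read_only = 1
mysql -h 10.10.10.10 -P 6447 -u cladmin -p -e "SELECT @@hostname, @@port, @@read_only;"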

Conclusion – a high-availability & fault-tolerant database

MySQL InnoDB Cluster has proven to be an extremely robust and highly effective way of achieving the levels of availability and fault tolerance necessary for the effective monitoring and management of solar plant yield. We hope the combination of the business use case and the technical step-by-step installation guide highlights when and where a MySQL InnoDB Cluster can be the right solution for solving the availability and fault tolerance challenges of a database-centric software solution.
