Fun with MariaDB Galera cluster

Written by Dominik Joe Pantůček on 2016-04-21

database

Striving for high availability of a software system requires ensuring its individual components are reliable and usually redundant. A typical example of such a component is a SQL database - MariaDB being the one we had to handle this time.

We decided to go for a MariaDB Galera cluster on CentOS and I would like to write about our experience here. We have chosen an active-active (master-master) design with two nodes called nodeA and nodeB, having IP addresses 192.168.0.1 and 192.168.0.2 respectively.


CentOS installation

Our whole infrastructure is virtualized using the libvirt stack, so installing the latest CentOS was just a matter of executing the appropriate libvirt command:

virt-install \
 --name=nodeA \
 --ram=1024 --vcpus=2 \
 --location=http://mirror.centos.org/centos/7/os/x86_64/ \
 --extra-args="console=ttyS0" --serial=pty \
 --console=pty,target_type=serial \
 --os-type=linux \
 --network=bridge=virbr0,model=virtio \
 --graphics=none \
 --virt-type=kvm \
 --noautoconsole --nodisks && \
 virsh attach-device nodeA nodeAdisk.xml --persistent

Let us not worry about where the disk device resides - rest assured it has been taken care of (we just created the device and the appropriate XML file).
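
For illustration, the XML describing such a disk device for virsh attach-device could look roughly like the sketch below - the image path, driver type and target device are just made-up examples, not the exact file we used:

cat > nodeAdisk.xml <<'EOF'
<disk type='file' device='disk'>
  <!-- hypothetical qcow2 image backing the VM's data disk -->
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/nodeA.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
EOF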

Installing the cluster software

There are up-to-date repositories with MariaDB Galera cluster packages available for CentOS, so all we need to do is add the proper repository to the yum configuration. We create the /etc/yum.repos.d/mariadb.repo file with the following content:

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1.13/centos7-amd64/
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
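
To double-check that yum has picked the new repository up, it does not hurt to refresh the metadata and list the enabled repositories:

yum clean all
yum repolist enabled | grep -i mariadb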

There are a few packages we should install beforehand - mostly for convenience (it is sad that joe is not included in CentOS by default):

yum install socat nano net-tools tmux

Afterwards we install all the required packages using yum:

yum install MariaDB-server MariaDB-client galera rsync

Now we have to set the whole thing up.

Setting up users

First we start the regular MariaDB server:

service mysql start

Then we use a script provided by the package to secure the installation:

/usr/bin/mysql_secure_installation

And set up the accounts using the MariaDB console:

mysql -u root -p

Using these SQL commands:

DELETE FROM mysql.user WHERE user='';
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
GRANT USAGE ON *.* to sst_user@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES on *.* to sst_user@'%';
FLUSH PRIVILEGES;
quit

Of course, you should choose a stronger password than 'password' in production.
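
To make sure the accounts ended up with the grants we expect, we can ask the server directly:

mysql -u root -p -e "SHOW GRANTS FOR 'sst_user'@'%'"
mysql -u root -p -e "SELECT user, host FROM mysql.user"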

Setting up the cluster

First we need to enable TCP/IP access to our cluster - which means configuring the firewalld service:

firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-service=mysql
firewall-cmd --zone=public --add-port=3306/tcp
firewall-cmd --zone=public --add-port=4567/tcp
firewall-cmd --zone=public --add-port=4568/tcp
firewall-cmd --zone=public --add-port=4444/tcp
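
A quick way to verify that both the runtime and the permanent firewall configuration now contain what we just added:

firewall-cmd --zone=public --list-all
firewall-cmd --permanent --zone=public --list-all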

We also need to stop SELinux from enforcing its policy, because with SELinux enforcing only the first node will start. For production installations you should set up proper SELinux roles and rights for the database process.

setenforce 0
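
Keep in mind that setenforce 0 only lasts until the next reboot. If you decide to keep SELinux permissive for now (again, not what you want in production), the change has to be made persistent in /etc/selinux/config as well, for example:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config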

Setting up the cluster requires some reconfiguration. We stop the server first:

systemctl stop mysql

Then we add the following lines to /etc/my.cnf.d/server.cnf, into the [galera] section:

[galera]
wsrep_on=ON
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='192.168.0.1'
wsrep_node_name='nodeA'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:password

We do this on both nodes, replacing wsrep_node_address and wsrep_node_name accordingly.
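
For nodeB the only lines that differ are:

wsrep_node_address='192.168.0.2'
wsrep_node_name='nodeB'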

With everything in place we bootstrap the cluster on the first node:

galera_new_cluster

And check the replication status - showing only the interesting values here:

mysql -u root -p -e "show status like 'wsrep%'"
wsrep_cluster_size 1
wsrep_cluster_status Primary
wsrep_evs_state OPERATIONAL
wsrep_incoming_addresses 192.168.0.1:3306
wsrep_local_state_comment Synced
wsrep_ready ON

Adding the secondary node to the cluster is just a matter of starting it normally:

systemctl start mysql
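
While the node is joining, it may be worth watching the log to see the state transfer (the rsync SST) complete - depending on how logging is set up, either via journalctl or by tailing the server error log:

journalctl -u mysql -f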

Now we should see something like this:

mysql -u root -p -e "show status like 'wsrep%'"
wsrep_cluster_size 2
wsrep_cluster_status Primary
wsrep_evs_state OPERATIONAL
wsrep_incoming_addresses 192.168.0.1:3306,192.168.0.2:3306
wsrep_local_state_comment Synced
wsrep_ready ON

And that is about it.

Verifying replication

First we look at the list of databases on nodeB:

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

Then we connect on nodeA and do some interesting stuff:

MariaDB [(none)]> create database clustertest;
Query OK, 1 row affected (0.07 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| clustertest        |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> use clustertest;
Database changed
MariaDB [clustertest]> create table test(message text);
Query OK, 0 rows affected (0.31 sec)
MariaDB [clustertest]> insert into test values('Hello cluster!');
Query OK, 1 row affected (0.03 sec)

And verify the results on nodeB again:

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| clustertest        |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [(none)]> use clustertest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [clustertest]> select * from test;
+----------------+
| message        |
+----------------+
| Hello cluster! |
+----------------+
1 row in set (0.00 sec)
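
Since the cluster is active-active, writes should replicate in the other direction as well - a quick check with an extra row inserted on nodeB (the message text is just an example) and read back on nodeA:

# on nodeB
mysql -u root -p -e "insert into clustertest.test values('Hello from nodeB!')"
# on nodeA
mysql -u root -p -e "select * from clustertest.test"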

Worked like a charm!

Conclusion

It turned out that running a fully replicated active-active MariaDB cluster is not hard at all and installing it actually was quite a bit of fun. If you enjoyed reading about it, consider coming back next week - we are going to look at a different kind of cluster then.

See ya next Thursday!