MySQL Cluster Installation/Configuring Guide
MySQL Cluster Background Information
1. What is MySQL Cluster?
MySQL Cluster is a real-time open source transactional database designed for fast, always-on access to data under high throughput conditions. MySQL Cluster utilizes a “shared nothing” architecture which does not require any additional infrastructure investment to provide 99.999% data availability with no single point of failure.

2. How many physical servers are needed to create a minimum Cluster configuration?
For good redundancy, the recommended minimum is two MySQL Server nodes, two Data nodes, and one Management node. For evaluation purposes it is possible to use a single physical server, but that serves only as a very basic evaluation.

3. Minimum system requirements (for each node)

3 Machines Minimum:

OS: Linux (Red Hat, SUSE), Solaris, AIX, HP-UX, Mac OS X
CPU: Intel/AMD x86
Memory: 512MB RAM
HDD: 3GB
Network: 1+ nodes (Standard Ethernet - TCP/IP)

4. Preferred system requirements are as follows (for each node)

4 Machines Preferred:

· OS: Linux (Red Hat, SUSE), Solaris, AIX, HP-UX, Mac OS X
· CPU: 2x Intel Xeon, Intel Itanium, AMD Opteron, Sun SPARC, IBM PowerPC
· Memory: 16GB RAM
· HDD: 4x 36GB SCSI (RAID 1 Controller)
· Network: 1-8 Nodes (Gigabit Ethernet); 8+ Nodes (Dedicated Cluster Interconnect e.g. SCI)

5. MySQL Cluster uses three different types of nodes

· Data Node (ndbd process): These nodes store the data.
· Management Node (ndb_mgmd process): used for configuration and monitoring of the cluster; it is required only during node startup.
· SQL Node (mysqld process): A MySQL server (mysqld) that connects to all of the data nodes in order to perform data storage and retrieval.

Generally, it is expected that each node will run on a separate host computer.



HOWTO set up a MySQL Cluster with Three Servers
Introduction
This guide describes how to create a MySQL Cluster with three servers, all running Ubuntu. The aim is true redundancy: either Data/SQL server can be unplugged and the site will remain up.

Note

· The data directory on each machine is /usr/local/mysql/data

Software
· OS: Ubuntu 9.04 (the Jaunty Jackalope)
· MySQL version: mysql-max-5.0.19-linux-i686-glibc23.tar.gz

Server Information
The 3 servers are:

[Server ID] [Server Type]     [Name]    [IP Address]
[Server 1]  [Data/SQL Node]   [data188] [192.168.101.188]
[Server 2]  [Data/SQL Node]   [data189] [192.168.101.189]
[Server 3]  [Management Node] [mgmt187] [192.168.101.187]

Servers 1 and 2 will be the two that end up "clustered".

Server 3 needs to have only minor changes made to it and does NOT require a MySQL install. It can be a low-end machine and can be carrying out other tasks.



Installation

STAGE 1: Install MySQL on Data/SQL Nodes
** Run the following commands on both Server 1 and Server 2 (Data/SQL Nodes):

Step 01

Create a new “mysql” group, then add a “mysql” user to that group.

script> groupadd mysql
script> useradd -g mysql mysql


Step 02
Download the MySQL installation tarball and copy it to /usr/local.

· File name: mysql-max-5.0.19-linux-i686-glibc23.tar.gz


Step 03
Change location to the directory containing the downloaded file, unpack the archive, and create a symlink named mysql pointing to the unpacked directory.

script> cd /usr/local/
script> tar -zxvf mysql-max-5.0.19-linux-i686-glibc23.tar.gz

script> ln -s /usr/local/mysql-max-5.0.19-linux-i686-glibc23 /usr/local/mysql


Step 04
Change location to the mysql directory and run the supplied script for creating the system databases.

script> cd /usr/local/mysql
script> scripts/mysql_install_db --user=mysql


Step 05
Set the necessary ownership for the MySQL server and data directories.

script> chown -R root:mysql .
script> chown -R mysql data


Step 06
Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:

script> cp support-files/mysql.server /etc/init.d/
script> chmod 755 /etc/init.d/mysql.server
script> update-rc.d mysql.server defaults


Step 07
Finally, move the MySQL binaries to /usr/bin and leave a symlink in their place so that existing paths still resolve:

script> cd /usr/local/mysql/bin
script> mv * /usr/bin
script> cd ../
script> rm -fr /usr/local/mysql/bin
script> ln -s /usr/bin /usr/local/mysql/bin


When you have finished on Server 1, go back to the start of Stage 1 and repeat exactly the same procedure on Server 2.


STAGE 2: Install and configure the Management Node

This requires only two executables to be extracted from the MySQL distribution. You need the following files from the bin directory of the MySQL archive: ndb_mgm (the management client) and ndb_mgmd (the management server).

** Run the following commands on Server 3 (Management Node):

Step 01
Create a directory called mysql-mgm under /usr/src:
script> mkdir /usr/src/mysql-mgm


Step 02

Copy the mysql-max-5.0.19-linux-i686-glibc23.tar.gz tar file to /usr/src/mysql-mgm


Step 03
Change location to the directory containing the downloaded file and unpack the archive.

script> cd /usr/src/mysql-mgm
script> tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz


Step 04

Change location to the mysql-max-5.0.19-linux-i686-glibc23 directory.

script> cd mysql-max-5.0.19-linux-i686-glibc23


Step 05
Move ndb_mgm and ndb_mgmd to the /usr/bin directory.

script> mv bin/ndb_mgm /usr/bin
script> mv bin/ndb_mgmd /usr/bin


Step 06
Set the necessary permissions.
script> chmod +x /usr/bin/ndb_mgm*


Step 07

Change to /usr/src and remove the now-unneeded source directory:

script> cd /usr/src
script> rm -rf /usr/src/mysql-mgm


Step 08
Next, create the cluster data directory /var/lib/mysql-cluster.

script> mkdir /var/lib/mysql-cluster


Step 09
You now need to set up the configuration file for this Management Node.

Within the /var/lib/mysql-cluster directory, create a file called config.ini.


Procedure:

Go inside the /var/lib/mysql-cluster directory.

Inside that directory, right-click so that a popup menu appears, then choose:

Create Document >> Empty File >> Rename it to config.ini

Now insert the following text into it.


--------- config.ini content – start ------------------------
[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster # Data directory path

# Management server
[NDB_MGMD]
HostName=192.168.101.187 # Management server IP address
DataDir=/var/lib/mysql-cluster # Data directory path

# Storage engines (data nodes)
[NDBD]
HostName=192.168.101.188 # Server 1 IP address
[NDBD]
HostName=192.168.101.189 # Server 2 IP address

# SQL nodes
[MYSQLD]
HostName=192.168.101.188 # Server 1 IP address
[MYSQLD]
HostName=192.168.101.189 # Server 2 IP address

--------- config.ini content – end -------------------------
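If you prefer the terminal to the file manager, the same file can be created non-interactively with a here-document. A sketch (the CONFIG_DIR variable is my own convention; on the real Management Node it would be /var/lib/mysql-cluster):

```shell
# Write config.ini without an editor. On the Management Node set
# CONFIG_DIR=/var/lib/mysql-cluster; it defaults to a temporary
# directory here so the sketch can be run anywhere.
CONFIG_DIR=${CONFIG_DIR:-$(mktemp -d)}

cat > "$CONFIG_DIR/config.ini" <<'EOF'
[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster

[NDB_MGMD]
HostName=192.168.101.187
DataDir=/var/lib/mysql-cluster

[NDBD]
HostName=192.168.101.188
[NDBD]
HostName=192.168.101.189

[MYSQLD]
HostName=192.168.101.188
[MYSQLD]
HostName=192.168.101.189
EOF
```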

STAGE 3: Configure Data/SQL Nodes
** Run the following commands on both Server 1 and Server 2 (Data/SQL Nodes):

Step 1
On each of the two Data/SQL Nodes, create a MySQL configuration file called my.cnf under the /etc directory.


Procedure:
Go inside the /etc directory. Inside that directory, right-click so that a popup menu appears, then choose:

Create Document >> Empty File >> Rename it to my.cnf

Now insert the following text into it.


--------- my.cnf content – start --------------------------
[mysqld]
ndbcluster
ndb-connectstring=192.168.101.187 # Management server IP address

[mysql_cluster]
ndb-connectstring=192.168.101.187 # Management server IP address

--------- my.cnf content – end ----------------------------
Do the same on the other server as well.
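As with config.ini, my.cnf can be written from the terminal instead of the file manager. A sketch (the TARGET variable is my own convention; on the real Data/SQL Nodes it would be /etc/my.cnf):

```shell
# Write my.cnf without an editor. On each Data/SQL Node set
# TARGET=/etc/my.cnf; it defaults to a temporary file here so the
# sketch can be run anywhere.
TARGET=${TARGET:-$(mktemp)}

cat > "$TARGET" <<'EOF'
[mysqld]
ndbcluster
ndb-connectstring=192.168.101.187

[mysql_cluster]
ndb-connectstring=192.168.101.187
EOF
```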


Step 2
Now create the data directory /var/lib/mysql-cluster.

script> mkdir /var/lib/mysql-cluster
When you have finished on Server 1, go back to the start of Stage 3 and repeat exactly the same procedure on Server 2.


STAGE 4: Starting MySQL Cluster

** After installing and configuring all 3 servers, make sure you restart them before commencing testing.


Step 1

First start the Management Node by issuing the following command.

script> ndb_mgmd -f /var/lib/mysql-cluster/config.ini

If you don’t get an error, the Management Node is now up and running.

** This is the MySQL management server, not the management console, so you should not expect any output (we will start the console later).


Step 2

Next, start both Data/SQL Nodes by issuing the following commands on each node.

script> cd /var/lib/mysql-cluster
script> /usr/local/mysql/bin/ndbd --initial

If you don’t get an error, the Data/SQL Nodes are now up and running.

**Note:
You should ONLY use --initial if you are starting from scratch or have changed the config.ini file on the Management Node, because --initial erases the node’s existing cluster data files and re-initializes them.
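Because --initial is destructive on restart, it is safer to require an explicit flag before adding it. A small sketch (the ndbd path matches this install; the FIRST_START variable and ndbd_cmd function are my own convention):

```shell
# Print the ndbd start command; add --initial only when FIRST_START=1.
# --initial wipes the node's cluster data files, so it must never be
# the default for routine restarts.
ndbd_cmd() {
  if [ "${FIRST_START:-0}" = "1" ]; then
    echo "/usr/local/mysql/bin/ndbd --initial"
  else
    echo "/usr/local/mysql/bin/ndbd"
  fi
}
```

On a first start, FIRST_START=1 makes ndbd_cmd print the command with --initial; on routine restarts it prints the plain command.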


STAGE 5: Check that it is working

Scenario 01:
You can now return to the Management Node and enter the management console:

script> ndb_mgm

Enter the command “show” to see what is going on.

If all went correctly, the output should look like this:


---------- Output - start ---------------------------------------
script> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.101.188 (Version: 5.0.19, Nodegroup: 0, Master)
id=3 @192.168.101.189 (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.101.187 (Version: 5.0.19)

[mysqld(API)] 2 node(s)
id=4 @192.168.101.188 (Version: 5.0.19)
id=5 @192.168.101.189 (Version: 5.0.19)

ndb_mgm>

---------- Output - end -----------------------------------
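The same check can be scripted for monitoring: ndb_mgm accepts a command via -e (for example ndb_mgm -e show), and the output can be scanned for disconnected nodes. A sketch (the cluster_healthy function name is my own convention):

```shell
# cluster_healthy: reads `ndb_mgm -e show` output on stdin and succeeds
# only if no node is reported as "not connected".
cluster_healthy() {
  ! grep -q "not connected"
}

# Intended use on the Management Node:
#   ndb_mgm -e show | cluster_healthy && echo "all nodes up"
```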


If everything is OK up to this point, it is time to test the MySQL Cluster.

For this you need to create a database and populate a table using the NDBCLUSTER engine.

First go to Server 1 and inside it, create a database called “db_sam_test_cluster”.

**Note that we have no root password yet.

SQL commands:
script> mysql
mysql> create database db_sam_test_cluster;
mysql> show databases;

**This should show the newly created database “db_sam_test_cluster”.


On Server 2 as well, create a database called “db_sam_test_cluster”.

SQL commands:

script> mysql
mysql> create database db_sam_test_cluster;
mysql> show databases;

**This should show the newly created database “db_sam_test_cluster”.


Once you are done creating the databases, enter the following on either Server 1 or 2.

I’m using Server 1 for this.

SQL commands:

script> mysql
mysql> use db_sam_test_cluster;
mysql> create table tbl_sam_test(i int) engine=ndbcluster;
mysql> show tables;
mysql> insert into tbl_sam_test () values (1);
mysql> select * from tbl_sam_test;

**You should see 1 row returned (with the value 1).

Once you are done with the table creation on Server 1, go to Server 2 and run the following SQL commands there.

SQL commands:

script> mysql
mysql> use db_sam_test_cluster;
mysql> show tables;
mysql> select * from tbl_sam_test;

**You should see 1 row returned (with the value 1).

If it works, then congratulations!!!!! MySQL Cluster is working…..




Scenario 02:

This is a good sign, but note that it does not actually prove that the data is being replicated.

Next we need to kill one node and see whether clustering still works.

For that unplug the network cable of Server 1.

Then run “show” command on Management Server. You can see that Server 1 is not connected.

A sample output looks like this:


---------- Output - start ---------------------------------

script> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.101.188)
id=3 @192.168.101.189 (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.101.187 (Version: 5.0.19)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 192.168.101.188)
id=5 @192.168.101.189 (Version: 5.0.19)

ndb_mgm>

---------- Output - end ----------------------------------


Then run the following SQL commands on Server 2.

SQL commands:
script> mysql
mysql> use db_sam_test_cluster;
mysql> insert into tbl_sam_test () values (2);
mysql> select * from tbl_sam_test;

**You should see 2 rows returned (with the values 1 and 2).


Now go back to Server 1 and plug the network cable back in.

On the Management Node, run the “show” command.

A sample output looks like this:

---------- Output - start --------------------------------------

script> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.101.188 (Version: 5.0.19, Nodegroup: 0, Master)
id=3 @192.168.101.189 (Version: 5.0.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.101.187 (Version: 5.0.19)

[mysqld(API)] 2 node(s)
id=4 @192.168.101.188 (Version: 5.0.19)
id=5 @192.168.101.189 (Version: 5.0.19)

ndb_mgm>

---------- Output - end -----------------------------------------


Then go to Server 1 and run the following SQL commands on it.

SQL commands:

script> mysql
mysql> use db_sam_test_cluster;
mysql> select * from tbl_sam_test;

**You should see 2 rows returned (with the values 1 and 2).

This means MySQL cluster is working perfectly!!!!!

Now you can relax with a cup of tea!!!!! :)



URLs

For the Wikipedia article on MySQL Cluster, visit…
http://en.wikipedia.org/wiki/MySQL_Cluster

For the detailed MySQL Cluster FAQ, visit…
http://www.mysql.com/products/database/cluster/faq.html#10

For the MySQL Cluster reference manual, visit…
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html