Mysql High Availability

MySQL HA solutions discussed include native MySQL replication as well as third party solutions like LVS/Heartbeat and DRBD. MySQL supports both asynchronous and semi-synchronous replication out of the box. The document provides details on how MySQL replication works and its advantages/disadvantages. It also gives a lab exercise example to set up MySQL master-slave replication and discusses how to configure Linux LVS and Heartbeat for automatic failover of MySQL masters.

Uploaded by Adzmely Mansor
Copyright
© Attribution Non-Commercial (BY-NC)

MYSQL HA SOLUTIONS

adzmely mansor
[email protected]

MYSQL HA

Possible HA solutions with MySQL:

replication
clusters
combinations of both

:: Third-party solutions

LVS/Heartbeat
DRBD

MYSQL REPLICATION

MYSQL :: REPLICATION

Natively supports ONE-WAY, ASYNCHRONOUS replication

One Master + N number of Slaves

:: Asynchronous

replication does not take place in real time
no guarantee that data from the master has been replicated to the slaves

MYSQL :: REPLICATION
Semi-Synchronous Replication :: MySQL 5.5+

master waits until at least one semi-synchronous slave acknowledges a committed transaction

slave acknowledges only after the event has been written to its relay log and flushed to disk

if a timeout occurs without any semi-synchronous slave acknowledgment, the master reverts to asynchronous replication

MYSQL :: REPLICATION
Semi-Synchronous Replication :: MySQL 5.5+

when a missing semi-synchronous slave catches up, the master returns to semi-synchronous replication

semi-synchronous replication must be enabled on both master and slave sides

if not enabled on either the master or at least one slave, the master uses asynchronous replication
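Semi-synchronous replication ships as a plugin in MySQL 5.5+; a minimal sketch of enabling it on both sides (the 1000 ms timeout value is an illustrative assumption):

```sql
-- on the master
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms before reverting to asynchronous

-- on each semi-synchronous slave
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql> STOP SLAVE IO_THREAD;
mysql> START SLAVE IO_THREAD;  -- restart the I/O thread so the setting takes effect
```

These statements assume a running master-slave pair, so they are shown as a command fragment rather than a runnable script.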

MYSQL :: REPLICATION
With Asynchronous Replication

master writes events to its binary log
slaves request them when they are ready
no guarantee that any event will ever reach any slave

MYSQL :: REPLICATION
With Fully Synchronous Replication

when the master commits a transaction, all slaves must also have committed the transaction before the master returns to the session that performed the transaction

there might be a lot of delay to complete a transaction

MYSQL :: REPLICATION
With Semi-Synchronous Replication - in between asynchronous and fully synchronous

master commits a transaction, then waits for at least one slave to acknowledge the transaction
no need to wait for acknowledgment from all slaves

MYSQL :: REPLICATION
Asynchronous

High Speed
Less Data Integrity

MYSQL :: REPLICATION
Fully Synchronous

Less Speed
Higher Data Integrity

MYSQL :: REPLICATION
Semi-Synchronous

in between: balanced Speed and Data Integrity

MYSQL :: REPLICATION
Advantages

operates in asynchronous mode - you can start or stop replication at any time
works across different platforms
works over slower links or partial links
suitable across geographical boundaries - DRC, etc.

MYSQL :: REPLICATION
Advantages

one master - many slaves
suitable for read-intensive applications such as Web Services, by spreading load across multiple servers

MYSQL :: REPLICATION
Disadvantages

data can only be written to the master
no guarantee that master and slaves are consistent at a given point in time
asynchronous - small delay
application must be replication-aware (write only to the master, read from the slaves)

MYSQL :: REPLICATION
Recommended Uses

Scale-out (horizontal scaling) - solutions that require a large number of reads but fewer writes
Logging/analysis of live data - by replicating to a slave, analysis will not disturb/degrade the master's operation

MYSQL :: REPLICATION
Recommended Uses

Online backup (availability)
Offline backup - run the backup on the slave; once you have a reliable snapshot, bring the slave back up and let it catch up again
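The offline-backup-from-slave idea can be sketched as follows; pausing the SQL thread keeps the data still during the dump, and the slave catches up once it is restarted (a command fragment assuming a running slave; the backup path is an illustrative assumption):

```shell
# pause replication so the data does not change during the dump
mysql -u root -e "STOP SLAVE SQL_THREAD;"

# take the snapshot
mysqldump -u root --all-databases > /backup/slave-snapshot.sql

# resume replication - the slave catches up with the master
mysql -u root -e "START SLAVE SQL_THREAD;"
```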

MYSQL :: REPLICATION
How It Works

master writes updates to its binary log files, which serve as the record of updates to be sent to the slaves

slave connects to the master and determines the position of the last successful update

slave receives the new updates taken since that last update
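The binary log coordinates that slaves read from can be inspected directly on the master; a short fragment (the log file name black-bin.000001 follows the lab configuration below, and a running master is assumed):

```sql
mysql> SHOW MASTER STATUS;                               -- current log file and position
mysql> SHOW BINLOG EVENTS IN 'black-bin.000001' LIMIT 5; -- first events recorded in the log
```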

MYSQL :: REPLICATION

MYSQL - REPLICATION
LAB EXERCISE

MYSQL :: REPLICATION

MYSQL :: REPLICATION
# mkdir /etc/mysql

/etc/mysql/master.cnf

[mysqld]
server-id=1
log-bin=black-bin.log
datadir=/home/mysql/master/data
sync_binlog=1
user=mysql

# mysql_install_db --defaults-file=/etc/mysql/master.cnf
# mysqld --defaults-file=/etc/mysql/master.cnf &

MYSQL :: REPLICATION
# mysql -u root

mysql> CREATE USER repl@'192.168.0.34';
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'192.168.0.34'
    ->     IDENTIFIED BY 'password';

MYSQL :: REPLICATION
# mkdir /etc/mysql

/etc/mysql/slave.cnf

[mysqld]
server-id=2
master-host=192.168.0.31
master-user=repl
master-password=password
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
datadir=/home/mysql/slave/data

# mysqld --defaults-file=/etc/mysql/slave.cnf &

MYSQL :: REPLICATION
# mysql -u root

mysql> start slave;
mysql> show slave status\G

replication error at this point - why?

binlog-ignore-db=dbname
excludes a database from replication

MYSQL :: REPLICATION
mysql> show slave status\G

Slave_IO_Running: Yes
Slave_SQL_Running: No
...
Last_Error: Error Duplicate entry ...

MYSQL :: REPLICATION
purging the slave relay log

mysql> stop slave;
mysql> reset slave;
mysql> show slave status\G
...
Read_Master_Log_Pos: 4
...

MYSQL :: REPLICATION
purging the master binary log

mysql> reset master;
mysql> show master status;

look at the master position value

MYSQL :: REPLICATION
restart the slave - reconnect to the master

mysql> start slave;
mysql> show slave status\G
...
Read_Master_Log_Pos: 98
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...

MYSQL :: REPLICATION
testing the replication - at the master

# mysql -u root
mysql> create database dummy;
mysql> use dummy;
mysql> create table profile (id int(3), name varchar(30));
mysql> insert into profile values (1,'Abdullah');
mysql> insert into profile values (2,'John Doe');
mysql> select * from profile;

MYSQL :: REPLICATION
testing the replication - at the slave

# mysql -u root
mysql> show databases;
mysql> use dummy;
mysql> show tables;
mysql> select * from profile;

MYSQL :: REPLICATION
testing the replication - at the slave

# rm -rf /home/mysql/slave/data/dummy
# mysql -u root
mysql> flush tables;
mysql> show databases;
mysql> show slave status\G
mysql> stop slave;
mysql> change master to master_log_pos=98;
mysql> start slave;
mysql> show slave status\G
mysql> show databases;
mysql> use dummy;
mysql> select * from profile;

MYSQL :: REPLICATION
at the master - the mysqlbinlog command can be used to view the master binary log and the related positions

# mysqlbinlog black-bin.000001 | more

purging the oldest binary logs - keep logs from black-bin.000201 onward

# mysql -p
mysql> purge master logs to 'black-bin.000201';

MYSQL :: REPLICATION
Exercise

add another slave to the master
set server-id=3
make sure that, at the master, you grant permission to the same replication user but with a different IP address
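A sketch of the second slave's configuration, mirroring slave.cnf above; the new slave's IP address 192.168.0.35 and the paths are illustrative assumptions:

```ini
# /etc/mysql/slave2.cnf
[mysqld]
server-id=3
master-host=192.168.0.31
master-user=repl
master-password=password
relay-log-index=slave2-relay-bin.index
relay-log=slave2-relay-bin
datadir=/home/mysql/slave2/data
```

At the master, the matching grant would then be GRANT REPLICATION SLAVE ON *.* TO repl@'192.168.0.35' IDENTIFIED BY 'password';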

MYSQL :: REPLICATION
How to recover in this kind of situation?

master crashed
you are running a single slave, used for both read and write during master recovery
master server now ready - how to sync?
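One common approach, sketched below under the assumption that the slave has log-bin enabled and using the lab addresses above: rebuild the old master from the slave's data, then let the recovered master replicate from the slave until the roles can be switched back.

```sql
-- on the slave (acting master): freeze writes and note its own coordinates
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;   -- note File and Position
-- copy the data directory (or a mysqldump) to the recovered master, then:
mysql> UNLOCK TABLES;

-- on the recovered master: restore the snapshot, then catch up from the slave
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.34',
    ->     MASTER_USER='repl', MASTER_PASSWORD='password',
    ->     MASTER_LOG_FILE='...', MASTER_LOG_POS=...;
mysql> START SLAVE;
```

The '...' placeholders stand for the coordinates noted from SHOW MASTER STATUS.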

MYSQL :: REPLICATION
Conclusion

learn from this exercise:

how to create a simple master-slave replication
understand how mysql replication works internally via the binary log
how to recover when the master or a slave fails

MYSQL :: REPLICATION

Master => Slave

MYSQL :: REPLICATION

Master => Slaves

MYSQL :: REPLICATION

Master => Slave => Slaves

MYSQL :: REPLICATION

Master => Slave (Multi-Source)

MYSQL :: REPLICATION

Master => Master

MYSQL :: REPLICATION

Circular (Multi-Master)

MYSQL :: REPLICATION
In a Multi-Master Environment

Never load balance writes across multiple masters

Only write to a single master; in case of failure, fail over to another master. Never write to multiple masters.

possibility of going out of sync - no conflict resolution in MySQL replication

MYSQL :: REPLICATION
Configuring a Slave to be Master-ready

make sure log-bin is configured in the configuration file:

[mysqld]
...
log-bin=black-bin.log
sync_binlog=1
...

ready at any time for slave connections
create the replication user and grant access

AUTOMATIC FAILOVER
WITH MYSQL REPLICATION

INTRODUCTION TO LINUX VIRTUAL SERVER (LVS)

AUTOMATIC FAILOVER

project started in 1998
mission: to provide high scalability, reliability and serviceability
advanced IP load balancing software
included in the standard kernel since 2.4
used to build highly available and scalable network services: web, email, media services, VoIP, etc.

INTRODUCTION TO LINUX VIRTUAL SERVER (LVS)

AUTOMATIC FAILOVER

3 types of LVS load balancing:

Network Address Translation (NAT)
IP Tunneling - redirecting to a different IP address
Direct Routing
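In the lab below, ldirectord builds the LVS virtual-server table from its configuration file; the same table can be sketched by hand with ipvsadm, using the lab's addresses (the wlc scheduler matches the ldirectord.cf shown later):

```shell
# define the virtual MySQL service on the VIP, weighted-least-connections scheduler
ipvsadm -A -t 192.168.0.1:3306 -s wlc

# add the real servers in direct-routing (gatewaying) mode
ipvsadm -a -t 192.168.0.1:3306 -r 192.168.0.11:3306 -g
ipvsadm -a -t 192.168.0.1:3306 -r 192.168.0.12:3306 -g

# list the virtual server table
ipvsadm -L -n
```

Requires root and the ip_vs kernel module, so it is shown as a command fragment.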

AUTOMATIC FAILOVER
INTRODUCTION TO LINUX HEARTBEAT

Linux HA project, also known as Linux Heartbeat
implements a heartbeat protocol using messages
messages are sent at a regular interval to one or more nodes
if not received - the node is assumed failed, triggering a failover action

AUTOMATIC FAILOVER
INTRODUCTION TO LINUX HEARTBEAT

Heartbeat Method/Medium

serial interface (/dev/ttyS[0-9])
broadcast
multicast - group (224.0.0.0 - 239.0.0.0)
unicast - udp

AUTOMATIC FAILOVER
USING LINUX LVS/HEARTBEAT + MYSQL
Linux LVS

[diagram] Linux LVS / Heartbeat - virtual IP 192.168.0.1
Master (192.168.0.31) -- Replication --> Slave (192.168.0.34)

AUTOMATIC FAILOVER
USING LINUX LVS/HEARTBEAT + MYSQL
Linux LVS + Heartbeat (HA)

[diagram] Linux LVS / Heartbeat - virtual IP 192.168.0.1
Master (192.168.0.31) -- Replication --> Slave (192.168.0.34)

USING LINUX LVS/HEARTBEAT + WEB + MYSQL


AUTOMATIC FAILOVER

Linux LVS + Heartbeat (HA)

[diagram] HTTP/WEB virtual IP 192.168.0.2 (DB Server: 192.168.0.1)
DB/MySQL virtual IP 192.168.0.1
Master -- Replication --> Slave

AUTOMATIC FAILOVER
LAB EXERCISE
USING LVS DIRECT ROUTING

USING LINUX LVS/HEARTBEAT - DIRECT ROUTING

AUTOMATIC FAILOVER

[diagram] DB connection request via Virtual IP 192.168.0.1

Active LVS: LVS Server IP 192.168.0.101, Heartbeat IP 172.16.0.101, DB Virtual IP 192.168.0.1
Passive LVS: LVS Server IP 192.168.0.102, Heartbeat IP 172.16.0.102, DB Virtual IP 192.168.0.1
Master DB: IP 192.168.0.11, lo:0 IP 192.168.0.1 (NO ARP) - the DB server replies directly to the connection/requestor, bypassing the LVS
Slave DB: IP 192.168.0.12, lo:0 IP 192.168.0.1 (NO ARP)
Master -- Replication --> Slave

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
# yum install heartbeat

/etc/ha.d/
  resource.d/
  ha.cf
  authkeys
  haresources

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
Heartbeat medium :: broadcast

/etc/ha.d/ha.cf
# udpport port
# bcast dev
udpport 694
bcast eth0

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
Heartbeat medium :: multicast

/etc/ha.d/ha.cf
# mcast [dev] [mcast group] [port] [ttl] [loop]
#   [dev]         device to send/recv heartbeat on
#   [mcast group] multicast group to join - Class D
#   [port]        udp port to sendto/recvfrom
#   [ttl]         ttl value for outbound heartbeat - how far multicast packets propagate
#   [loop]        toggle loopback for outbound multicast heartbeats - loopback to the interface it was sent on

mcast eth0 225.0.0.1 694 1 0

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
Heartbeat medium :: unicast

/etc/ha.d/ha.cf
# ucast [dev] [peer-ip-addr]
#   [dev]          device to send/recv heartbeat on
#   [peer-ip-addr] IP address of the peer to send packets to

udpport 694
ucast eth0 172.16.0.102

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
/etc/ha.d/ha.cf :: main heartbeat configuration file

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 5
# udpport 694
# bcast eth0
mcast eth0 225.0.0.1 694 1 0
# ucast eth0 172.16.0.102
auto_failback on
node primemaster.xjutsu.com
node secondmaster.xjutsu.com

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
/etc/ha.d/authkeys :: authentication method and password shared between heartbeat nodes

# chmod 600 authkeys

auth 1
1 md5 secretpassword

AUTOMATIC FAILOVER
CONFIGURING HEARTBEAT
/etc/ha.d/haresources :: node resources monitored/loaded

# node-name resource1::options resourceN::options
#   node-name: from uname -n
#   resource:  managed script - refer to /etc/ha.d/resource.d
#   options:   parameters/arguments passed to the resource script

primemaster.xjutsu.com ldirectord::ldirectord.cf \
    IPaddr2::192.168.0.1/24/eth0:0/192.168.0.255

CONFIGURING LDIRECTORD - LOAD BALANCER

/etc/ha.d/ldirectord.cf

checktimeout=10
checkinterval=20
autoreload=yes
logfile=/var/log/ldirectord.log
quiescent=yes
virtual=192.168.0.1:3306
        fallback=192.168.0.12:3306 gate
        real=192.168.0.11:3306 gate
        service=mysql
        scheduler=wlc
        protocol=tcp
        checktype=connect
# man ipvsadm for detailed scheduler options

AUTOMATIC FAILOVER

CONFIGURING LDIRECTORD - LOAD BALANCER

The linux directors must be able to route traffic to the real servers
enable IPv4 packet forwarding:

/etc/sysctl.conf
net.ipv4.ip_forward = 1

# sysctl -p

CONFIGURING REAL MYSQL SERVERS (MASTER/SLAVE)

AUTOMATIC FAILOVER

ip aliasing on the localhost interface - device lo:0 (/etc/sysconfig/network-scripts/ifcfg-lo:0)

DEVICE=lo:0
IPADDR=192.168.0.1
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback

CONFIGURING REAL MYSQL SERVERS (MASTER/SLAVE)

AUTOMATIC FAILOVER

disable ARP broadcast

/etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2

# sysctl -p
# ifup lo:0

AUTOMATIC FAILOVER
LDIRECTORD + HEARTBEAT
starting the service [start, stop, restart]:

# /etc/init.d/heartbeat start

virtual interface eth0:0 is automatically initialized - check with ifconfig

# ipvsadm -L
lists the virtual server table

HIGH AVAILABILITY
WITH MYSQL REPLICATION + DRBD

HIGH AVAILABILITY
WITH MYSQL REPLICATION + DRBD
Distributed Replicated Block Device (DRBD)

mirrors a device via the network - network-based RAID 1
integrated into the linux kernel starting from 2.6.33

HIGH AVAILABILITY
WITH MYSQL REPLICATION + DRBD
Active Server / Passive Server - Dedicated Interface

[diagram] MySQL Data on a drbd device; formatted with a cluster filesystem (GFS2/OCFS2), making two-way read/write possible with DRBD

WITH MYSQL REPLICATION + DRBD + LVS/HEARTBEAT

HIGH AVAILABILITY
Linux LVS / Heartbeat - virtual IP 192.168.0.1

Active Server / Passive Server - Dedicated Interface

[diagram] MySQL Data on a drbd device; formatted with a cluster filesystem (GFS2/OCFS2), making two-way read/write possible with DRBD

MYSQL CLUSTER

MYSQL CLUSTER
WHAT IS MYSQL CLUSTER?

Relational Database Technology
enables clustering of in-memory and disk-based tables

shared-nothing technology
protects against a single point of failure
if a node fails, the other nodes can be used to reconstruct the data
shared disk is not required

MYSQL CLUSTER
WHAT IS MYSQL CLUSTER?

Relational Database Technology
synchronous replication with a two-phase commit mechanism
guarantees that data is written to multiple nodes upon committing

MYSQL CLUSTER
CORE CONCEPTS

every part of the cluster is a node
three types of cluster nodes:

Management Node
Data Node
SQL/Application Node

MYSQL CLUSTER
CORE CONCEPTS

MYSQL CLUSTER
CORE CONCEPTS - MANAGEMENT NODE

manages the other nodes within the cluster
provides configuration data
starting/stopping nodes
should be started first, before the other nodes
started with the command ndb_mgmd

MYSQL CLUSTER
CORE CONCEPTS - DATA NODE

stores cluster data
started with the command ndbd
starting from version 7.0, ndbmtd can also be used for the data node process - the multi-threaded data node daemon
at least 2 data nodes - 1 is possible, but with no replica

MYSQL CLUSTER
CORE CONCEPTS - SQL NODE

a node that accesses the cluster data
a traditional mysql server that uses the NDBCLUSTER storage engine
mysqld started with the --ndbcluster and --ndb-connectstring options
like an API node which accesses MySQL cluster data

NODES, NODE GROUPS, REPLICAS + PARTITIONS

MYSQL CLUSTER

Partition - a portion of the data stored by the cluster
each node is responsible for keeping at least one copy of any partitions assigned to it
NODES, NODE GROUPS, REPLICAS + PARTITIONS

MYSQL CLUSTER

(Data) Node - an ndbd process
stores a replica - a copy of the partition assigned to the node group of which the node is a member

NODES, NODE GROUPS, REPLICAS + PARTITIONS

MYSQL CLUSTER

Node Group
consists of one or more nodes
stores partitions, or sets of replicas

NODES, NODE GROUPS, REPLICAS + PARTITIONS

MYSQL CLUSTER

NODES, NODE GROUPS, REPLICAS + PARTITIONS

MYSQL CLUSTER

HARDWARE, SOFTWARE + NETWORKING REQUIREMENTS

MYSQL CLUSTER

can run on commodity hardware
However, data nodes require a large amount of RAM
all live data storage is done in memory
can reduce the RAM requirement by using Disk Data Tables for non-indexed columns of NDB tables
faster CPUs can enhance performance
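The Disk Data Tables mentioned above can be sketched as follows; non-indexed columns of the table are stored on disk instead of RAM (the lg1/ts1 names and file names are illustrative assumptions, and a running SQL node is assumed):

```sql
mysql> CREATE LOGFILE GROUP lg1
    ->     ADD UNDOFILE 'undo1.log' ENGINE=NDBCLUSTER;
mysql> CREATE TABLESPACE ts1
    ->     ADD DATAFILE 'data1.dat' USE LOGFILE GROUP lg1 ENGINE=NDBCLUSTER;
mysql> CREATE TABLE t1 (id INT PRIMARY KEY, payload VARCHAR(200))
    ->     TABLESPACE ts1 STORAGE DISK ENGINE=NDBCLUSTER;
```

Indexed columns (here id) still live in memory; only the non-indexed payload column goes to disk.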

HARDWARE, SOFTWARE + NETWORKING REQUIREMENTS

MYSQL CLUSTER

communication between nodes is via TCP/IP networking
minimum expected for each host is a standard 100Mbps ethernet controller
recommended that MySQL Cluster be run on its own subnet
not sharing it with machines not forming part of the cluster
software requirement is simple: what is needed is a production release of MySQL, e.g. 5.1.51-ndb-7.0.21 or 5.1.51-ndb-7.1.10

MYSQL CLUSTER
TYPICAL USE CASES

subscriber databases for broadband
DNS/DHCP Servers
Telecoms Application Service Delivery Platforms
AAA Databases - e.g. deploying FreeRadius with a MySQL database

MYSQL CLUSTER
LAB EXERCISE

MYSQL CLUSTER
INSTALL IN ALL SERVERS

locate the downloaded tar ball and extract it using the tar command

tar xvf mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23.tar.gz
mv mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23 /usr/local
cd /usr/local
ln -s mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23 mysqlc    # short name

MYSQL CLUSTER
CONFIGURE

create the data and configuration folders

configuration folder in the mysql cluster manager server:
mkdir /etc/mysqlc

create and set ownership for the data folder in all servers:
mkdir /home/mysqlc
chown mysql.mysql /home/mysqlc

MYSQL CLUSTER
CONFIGURE

example setup with 5 nodes:

management node - 192.168.1.5
data node - 192.168.1.101
data node - 192.168.1.102
api node - 192.168.1.5
api node - 192.168.1.5

MYSQL CLUSTER
CONFIGURE
configure the cluster manager node - /etc/mysqlc/config.ini

[ndb_mgmd]
NodeId=1
Hostname=192.168.1.5

[ndbd default]
NoOfReplicas=2
Datadir=/home/mysqlc

[ndbd]
NodeId=3
Hostname=192.168.1.101

[ndbd]
NodeId=4
Hostname=192.168.1.102

[mysqld]
NodeId=5
Hostname=192.168.1.5

[mysqld]
NodeId=6
Hostname=192.168.1.5

MYSQL CLUSTER
CONFIGURE
start/reload the cluster manager node:

# ndb_mgmd -f /etc/mysqlc/config.ini --initial --configdir=/etc/mysqlc
# ndb_mgmd -f /etc/mysqlc/config.ini --reload

start the data nodes (on all configured nodes):

# ndbd -c 192.168.1.5:1186

display the cluster status from the management node:

# ndb_mgm -e show

MYSQL CLUSTER
CONFIGURE SQL NODE
/etc/mysqlc/my.cnf

[mysqld]
basedir=/usr/local/mysqlc
datadir=/home/mysqlc
ndbcluster
ndb-connectstring=192.168.1.5:1186
socket=/home/mysqlc/mysql.sock
log-error=error.log
memlock

create the mysql default database:

# cd /usr/local/mysqlc
# scripts/mysql_install_db --no-defaults --datadir=/home/mysqlc

start the SQL/API node:

# /usr/local/mysqlc/bin/mysqld --defaults-file=/etc/mysqlc/my.cnf &

MYSQL CLUSTER
TESTING SQL/API NODE

in the API server, start the mysql CLI:

# /usr/local/mysqlc/bin/mysql -h 127.0.0.1
# /usr/local/mysqlc/bin/mysql -S /home/mysqlc/mysql.sock

mysql> create database clusterdb1;
mysql> use clusterdb1;
mysql> create table dummy1 (id int(10) primary key, name varchar(100)) engine=ndbcluster;
mysql> insert into dummy1 values (1, 'John Doe');

in the mgm server - use ndb_desc to view partition information:

# ndb_desc -c 192.168.1.5 -d clusterdb1 dummy1 -p
