MySQL High Availability
Adzmely Mansor
[email protected]
MYSQL HA
Possible solutions:
- MySQL Replication
- LVS/Heartbeat
- DRBD
MYSQL :: REPLICATION
Natively, MySQL replication is one-way and asynchronous: the master does not wait for slaves to receive events.
Semi-Synchronous: the master acknowledges a commit only after the event has been written to a slave's relay log and flushed to disk; if a timeout occurs without any semi-synchronous slave acknowledgment, the master reverts to asynchronous replication.
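Semi-synchronous replication is provided as a plugin in MySQL 5.5 and later; as a sketch of how it might be enabled (not part of this lab; the plugin file names assume a Linux build):

```sql
-- on the master
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 10000;  -- ms to wait before reverting to async

-- on each semi-synchronous slave
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```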
Semi-synchronous replication must be enabled on both sides; if it is not enabled on either the master or at least one slave, the master uses asynchronous replication.
With asynchronous replication, the master writes events to its binary log and slaves request them when they are ready; the master does not wait for any slave.
With fully synchronous replication, all slaves must also have committed the transaction before the master returns to the session that performed it; there might be a lot of delay to complete a transaction.
Speed vs. data integrity:
- Asynchronous: high speed, less data integrity
- Fully synchronous: less speed, higher data integrity
- Semi-synchronous: a compromise between speed and data integrity
Advantages: slaves operate in asynchronous mode - you can start or stop them at any time; suitable for replication across slower or partial links.
Advantages: one master can feed many slaves - suitable for read-intensive applications such as Web services, by spreading the read load across multiple servers.
Disadvantages: no guarantee that data on the slaves is current (there is a small delay); the application must be replication-aware (write only to the master, and read from the slaves).
Recommended Uses: scale-out (horizontal scaling) for solutions that require a large number of reads but fewer writes; logging/data analysis - analysis of live data can be offloaded to a slave.
Recommended Uses: online/offline backups - after running the slave for a while and having reliable snapshots, take the slave down for the backup, then start it again to catch up.
How It Works: the master writes updates to its binary log files as a record of updates to be sent to slaves; a slave receives these updates and determines which statements to execute.
MYSQL - REPLICATION
LAB EXERCISE
# mkdir /etc/mysql

/etc/mysql/master.cnf

[mysqld]
server-id=1
log-bin=black-bin.log
datadir=/home/mysql/master/data
sync_binlog=1
user=mysql
# mysql -u root

mysql> CREATE USER [email protected];
mysql> GRANT REPLICATION SLAVE ON *.* TO [email protected] IDENTIFIED BY 'password';
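Before configuring the slave, it helps to note the master's current binary log coordinates; a quick check on the master (output values will differ per setup):

```sql
mysql> SHOW MASTER STATUS;  -- note File and Position for the slave setup
mysql> SHOW BINARY LOGS;
```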
# mkdir /etc/mysql

/etc/mysql/slave.cnf

[mysqld]
server-id=2
master-host=192.168.0.31
master-user=repl
master-password=password
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
datadir=/home/mysql/slave/data
# mysql -u root

mysql> start slave;
mysql> show slave status\G

To exclude a database from replication, add binlog-ignore-db=dbname on the master.
purging

mysql> stop slave;
mysql> reset slave;
mysql> show slave status\G
...
Read_Master_Log_Pos: 4
...
purging - look at the slave status again after restarting:
restart

mysql> start slave;
mysql> show slave status\G
...
Read_Master_Log_Pos: 98
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
testing

# mysql -u root

mysql> show databases;
mysql> use dummy;
mysql> show tables;
mysql> select * from profile;
testing

# rm -rf /home/mysql/slave/data/dummy
# mysql -u root

mysql> flush tables;
mysql> show databases;
mysql> show slave status\G
mysql> stop slave;
mysql> change master to master_log_pos=98;
mysql> start slave;
mysql> show slave status\G
mysql> show databases;
mysql> use dummy;
mysql> select * from profile;
at master - the mysqlbinlog command can be used to view the master binary log and related positions:

# mysqlbinlog black-bin.000001 | more

purging - remove the oldest binary log files once all slaves have read past them.
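To actually purge, MySQL provides the PURGE BINARY LOGS statement; a sketch using this lab's log base name (the exact log number and the date below are examples):

```sql
mysql> PURGE BINARY LOGS TO 'black-bin.000004';          -- purge up to, not including, this file
mysql> PURGE BINARY LOGS BEFORE '2010-01-01 00:00:00';   -- or purge by date
```

Only purge logs that every slave has already read past.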
Exercise - add a second slave:
- set a unique server-id
- make sure at the master to grant permission to the same replication user but with a different IP address
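A sketch of the second slave's configuration, following the pattern of /etc/mysql/slave.cnf above (the server-id value and paths here are assumptions for this exercise):

```ini
[mysqld]
server-id=3
master-host=192.168.0.31
master-user=repl
master-password=password
relay-log-index=slave2-relay-bin.index
relay-log=slave2-relay-bin
datadir=/home/mysql/slave2/data
```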
How to sync a recovered master: the master fails and you are running a single slave, used for both reads and writes during master recovery; the master server is now ready - how to sync?
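One possible approach (a sketch; the host is a placeholder and the log file/position are whatever SHOW MASTER STATUS reports): make the recovered master replicate from the acting master until it catches up, then swap roles back.

```sql
-- on the acting master (the old slave): note the current coordinates
mysql> SHOW MASTER STATUS;

-- on the recovered master: replicate from the acting master
mysql> CHANGE MASTER TO MASTER_HOST='<slave-ip>', MASTER_USER='repl',
    -> MASTER_PASSWORD='password', MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<pos>;
mysql> START SLAVE;
-- once caught up: stop writes, STOP SLAVE, and point clients back at the master
```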
Conclusion:
- learn how to create a simple master-slave replication
- understand how MySQL replication works internally via the binary log
Circular (Multi-Master)
In circular (multi-master) replication, only write to a single master; in case of failure, fail over to another master. Never write to multiple masters - no possibility of conflicts.
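Even with single-master writes, circular setups commonly stagger auto-increment values as a safety net, so inserts issued on different masters can never collide; a sketch for a two-master ring:

```ini
# my.cnf on master A
auto_increment_increment=2
auto_increment_offset=1

# my.cnf on master B
auto_increment_increment=2
auto_increment_offset=2
```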
Configuring - make each server ready as both a master and a slave, and create the replication users.
AUTOMATIC FAILOVER
WITH MYSQL REPLICATION
Linux Heartbeat - started in 1998; used to build highly available and scalable network services (VoIP, etc.)
On failure, services are moved by routing traffic to a different IP address.
INTRODUCTION TO LINUX HEARTBEAT

Using heartbeat messages between nodes; loss of heartbeats is used to trigger failover.
Heartbeat method/medium:
- serial interface (/dev/ttyS[0-9])
- udp (broadcast, multicast, unicast)
USING LINUX LVS/HEARTBEAT + MYSQL

[diagram: Linux LVS in front of a master (192.168.0.31) and a slave (192.168.0.34), with replication between them]
[diagram: Linux LVS + Heartbeat (HA) in front of the master (192.168.0.31) and slave (192.168.0.34), with replication between them]
LAB EXERCISE
USING LVS DIRECT ROUTING
[diagram: LVS direct routing lab setup]
- Active LVS: server IP 192.168.0.101, heartbeat IP 172.16.0.101, DB virtual IP 192.168.0.1
- Passive LVS: server IP 192.168.0.102, heartbeat IP 172.16.0.102, DB virtual IP 192.168.0.1
- Master DB: IP 192.168.0.11, virtual IP 192.168.0.1 on lo:0 (no ARP); the DB server replies directly to the connection/requestor, bypassing LVS
- Slave: replicates from the master
CONFIGURING HEARTBEAT

# yum install heartbeat
Heartbeat medium :: broadcast

/etc/ha.d/ha.cf

# udpport <port>
# bcast <dev>
udpport 694
bcast eth0
Heartbeat medium :: multicast

/etc/ha.d/ha.cf

# mcast [dev] [mcast group] [port] [ttl] [loop]
#   [dev]         device to send/recv heartbeat on
#   [mcast group] multicast group to join - Class D
#   [port]        udp port to sendto/recvfrom
#   [ttl]         ttl value for outbound heartbeat - how far multicast packets propagate
#   [loop]        toggle loopback for outbound multicast heartbeats - loopback to the interface it was sent on
Heartbeat medium :: unicast

/etc/ha.d/ha.cf

# ucast [dev] [peer-ip-addr]
#   [dev]          device to send/recv heartbeat on
#   [peer-ip-addr] IP address of peer to send packets to
udpport 694
ucast eth0 172.16.0.102
/etc/ha.d/ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 5
# udpport 694
# bcast eth0
mcast eth0 225.0.0.1 694 1 0
# ucast eth0 172.16.0.102
auto_failback on
node primemaster.xjutsu.com
node secondmaster.xjutsu.com
/etc/ha.d/authkeys

# chmod 600 /etc/ha.d/authkeys
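A minimal sketch that generates an authkeys file with the sha1 method (written to the current directory; copy it to /etc/ha.d/ on both nodes):

```shell
# generate a random sha1 key and write the two-line authkeys file
KEY=$(dd if=/dev/urandom bs=64 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys
chmod 600 authkeys   # heartbeat refuses to start if authkeys is not mode 600
cat authkeys
```

Both nodes must have identical authkeys files.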
/etc/ha.d/haresources

# node-name resource1::options resourceN::options
#   node-name: from uname -n
#   resource: managed script - refer to /etc/ha.d/resource.d
#   options: parameters/arguments passed to the resource script
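For this lab's LVS setup, a haresources line might look like the following (IPaddr and ldirectord are resource scripts shipped with heartbeat/ldirectord; the netmask and interface are assumptions):

```
primemaster.xjutsu.com IPaddr::192.168.0.1/24/eth0 ldirectord
```

This makes the primary node hold the DB virtual IP and run ldirectord, and both fail over together.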
/etc/sysctl.conf

net.ipv4.ip_forward = 1

# sysctl -p
On the real servers, keep the interface from answering/announcing ARP (broadcast) for the virtual IP:

/etc/sysctl.conf

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
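With ARP suppressed, each real server still needs the virtual IP configured on its loopback so it accepts packets addressed to it (a sketch; run as root on each DB server):

```shell
# bring up the VIP on lo:0 with a host netmask so it is never announced
ifconfig lo:0 192.168.0.1 netmask 255.255.255.255 up
```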
LDIRECTORD + HEARTBEAT
starting
HIGH AVAILABILITY
WITH MYSQL REPLICATION + DRBD
Distributed Replicated Block Device (DRBD) - mirroring of a block device via the network (network-based RAID 1); integrated into the Linux kernel starting from 2.6.33.
[diagram: active server and passive server with a dedicated interface; MySQL data on a DRBD device formatted with a cluster filesystem (GFS2/OCFS2), making two-way read/write possible with DRBD]
[diagram: Linux LVS / Heartbeat with virtual IP 192.168.0.1 in front of the active and passive servers; dedicated interface; MySQL data on a DRBD device formatted with a cluster filesystem (GFS2/OCFS2), making two-way read/write possible with DRBD]
MYSQL CLUSTER
WHAT IS MYSQL CLUSTER?
A relational database technology that protects against failure: if a node fails, the other nodes can be used to reconstruct the data; shared disk is not required (shared-nothing).
Data is replicated synchronously between data nodes; this mechanism guarantees consistency of the data when a node fails.
CORE CONCEPTS

A relational database technology in which every cluster consists of three node types:
- Management node
- Data node
- SQL/Application node
CORE CONCEPTS - MANAGEMENT NODE

Manages the cluster configuration and the starting/stopping of other nodes; it must be started first.
CORE CONCEPTS - DATA NODE

Stores the cluster's data; started with ndbd. Starting from version 7.0, ndbmtd (multi-threaded) can also be used for the data node process.
CORE CONCEPTS - SQL NODE

A mysqld process started with the --ndbcluster and --ndb-connectstring options; like any API node, it accesses MySQL Cluster data.
Partition - a portion of the data stored by the cluster; each node is responsible for keeping at least one copy of any partitions assigned to it.
Node - an ndbd process; stores a replica (copy) of the partitions assigned to the node group of which the node is a member.
Node group - consists of one or more nodes, and stores partitions, or sets of replicas.
However, all live data storage is done in memory; you can reduce the RAM requirement by using Disk Data tables for non-indexed columns of NDB tables. A faster CPU can enhance performance.
A standard 100Mbps ethernet controller is expected for each host; it is recommended that MySQL Cluster run on its own subnet, not shared with other software.
TYPICAL USE CASES

- subscriber databases
- DNS/DHCP
- delivery platforms
LAB EXERCISE
INSTALL IN ALL SERVERS

- locate and download the MySQL Cluster tarball
- extract (tar), mv into place, ln -s to create a short path
CONFIGURE

create the data directory on each server:

# mkdir /home/mysqlc
# chown mysql.mysql /home/mysqlc
CONFIGURE

example layout:

management node - 192.168.1.5
data node       - 192.168.1.101
data node       - 192.168.1.102
api node        - 192.168.1.5
api node        - 192.168.1.5
CONFIGURE

configure the management node:

[ndb_mgmd]
NodeId=1
Hostname=192.168.1.5

[ndbd default]
NoOfReplicas=2
Datadir=/home/mysqlc

[ndbd]
NodeId=3
Hostname=192.168.1.101

[ndbd]
NodeId=4
Hostname=192.168.1.102

[mysqld]
NodeId=5
Hostname=192.168.1.5

[mysqld]
NodeId=6
Hostname=192.168.1.5
CONFIGURE

start/reload the management node, then start each data node:

# ndbd -c 192.168.1.5:1186

display the cluster status:

# ndb_mgm -e show
CONFIGURE SQL NODE

/etc/mysqlc/my.cnf

[mysqld]
basedir=/usr/local/mysqlc
datadir=/home/mysqlc
ndbcluster
ndb-connectstring=192.168.1.5:1186
socket=/home/mysqlc/mysql.sock
log-error=error.log
memlock

create and start the SQL/API node.
TESTING SQL/API NODE

in the API node:

# /usr/local/mysqlc/bin/mysql -h 127.0.0.1
# /usr/local/mysqlc/bin/mysql -S /home/mysqlc/mysql.sock

mysql> create database clusterdb1;
mysql> use clusterdb1;
mysql> create table dummy1 (id int(10) key, name varchar(100)) engine=ndbcluster;
mysql> insert into dummy1 values (1, 'John Doe');

# in mgm server - ndb_desc to view partition information
# ndb_desc -c 192.168.1.5 -d clusterdb1 dummy1 -p