
MYSQL DOCUMENTATION

SYSTEM ARCHITECTURE:

There are three major layers in MySQL:

 Client Layer
 Server Layer
 Storage Layer

1] Client Layer
The client sends request instructions to the server through the command prompt or a
GUI, using MySQL commands and expressions.
Some important services of the Client Layer are:
(a) Connection Handling
◦ A connection is established between the client and the server.
Once a client connects to the server, it gets its own thread for that connection, and all
queries from that client are executed through this thread.
(b) Authentication
Authentication is performed on the server side when the client connects to the MySQL server.
Authentication is done using a user name and password.

(c) Security
After authentication, when the client has successfully connected to the server, the server
checks whether that particular client has the privileges to issue certain queries against the MySQL server.

2] SERVER LAYER
The Server layer is responsible for all the logical functionality of MySQL as an RDBMS. The
client sends request instructions to the server, and the server returns the output as soon as the
instruction is matched.

The various sub components of Server layer are:

(a) Thread handling

When a client connects to the server, it gets its own thread. The thread id is unique for
every connection.

(b) Parser
Here the input query is broken into a number of tokens.

(c) Optimizer
Here the query is optimized, i.e. rearranging the order in which tables are scanned, choosing
the right indexes, and making any other changes that are necessary.

(d) Query cache

The results of previously executed query statements are stored here. The query cache stores the
complete result set for issued query statements. Even before parsing, the MySQL server consults the
query cache. If the query written by the client is identical to one in the cache, the server simply
skips parsing and optimization and returns the cached output directly.
(e) Buffer and Cache
The cache and buffer store the results of previous queries asked by users. When a user writes a
query, it first goes to the query cache, which then checks whether the same query is already
available in the cache.

(f) Table Metadata cache

The metadata cache is a reserved area of memory used for tracking information on databases,
indexes, or objects. The greater the number of open databases, indexes, or objects, the larger the
metadata cache size.

(g) Key cache

The key cache (key buffer) caches MyISAM index blocks in memory so that frequently used
indexes can be read without going to disk.

3] STORAGE LAYER
The Storage Engine layer of the MySQL architecture makes it unique and highly preferred by
developers. Thanks to this layer, MySQL is counted among the most widely used RDBMSs.
The different types of storage engines provided by MySQL are:
a) InnoDB
b) MyISAM
c) Memory
d) CSV
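
To see which engines the server supports, and to choose an engine for a particular table, the following standard statements can be used (the table and column names are only illustrative):

SHOW ENGINES;

-- hypothetical table, shown only to illustrate selecting an engine explicitly
CREATE TABLE demo_innodb (
    id INT PRIMARY KEY,
    name VARCHAR(50)
) ENGINE=InnoDB;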

DIFFERENCE BETWEEN INNODB & MYISAM

MyISAM | InnoDB
Default engine in versions < 5.5 | Default engine from version 5.5 onwards
Transactions are not supported | Transactions are supported
SELECT statements are generally faster | SELECT statements are comparatively slower
DML operations are slow | DML operations are supported and fast
Foreign keys are not supported | Foreign keys are supported
Table-level locking | Row-level locking
File formats: .frm (structure), .MYI (indexes), .MYD (data) | File format: .ibd (data and indexes)

DEFAULT FILE LOCATIONS:

Base directory: /usr/bin
Data directory: /var/lib/mysql
Error log: /var/log/mysqld.log
Configuration file: /etc/my.cnf
Binary logs: /var/lib/mysql
Relay logs: /var/lib/mysql
PID file: /var/run/mysqld/mysqld.pid
Socket file: /var/lib/mysql/mysql.sock
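
These locations can vary between installations; the paths actually in use can be confirmed from the server itself, for example:

SHOW VARIABLES LIKE 'datadir';
SHOW VARIABLES LIKE 'log_error';
SHOW VARIABLES LIKE 'socket';
SHOW VARIABLES LIKE 'pid_file';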
INSTALLATION: MYSQL ON CENTOS

1. RPM installation

For this we first need to download the RPM bundle from the official website of MySQL
Community Server:

▪ wget https://downloads.mysql.com/archives/get/p/23/file/mysql-8.0.32-1.el9.x86_64.rpm-bundle.tar

This is in .tar format, so we first need to extract all the files and then install them.

For extracting the files we use tar:

 tar -xvf mysql-8.0.32-1.el9.x86_64.rpm-bundle.tar

For installing all the extracted files we use the following commands:
 yum install *.rpm
 rpm -ivh --nodeps *.rpm

After installing all the packages we need to check whether MySQL is installed properly, and for
that we check the /var/lib directory:
 cd /var/lib/mysql

Then we check whether the mysql service is active or not using the following command:
 systemctl status mysqld.service

If the service is not started, then start the service:

 systemctl start mysqld.service

Then log in to MySQL using the following command:

 mysql -u root -p

First we need to log in using the default password that is written in the mysqld.log file.

Once logged in, we first need to reset the password using the following query:
 ALTER USER 'username'@'localhost' IDENTIFIED BY 'new_password';
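
The temporary password mentioned above can be pulled out of the error log before the first login; assuming the default log location used in this document, one common way is:

grep 'temporary password' /var/log/mysqld.log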

2] BINARY INSTALLATION
First we need to download the tar file from the website:
https://downloads.mysql.com/archives/community/
 In CentOS, download the file using the wget command as follows:
wget https://downloads.mysql.com/archives/get/p/23/file/mysql-8.0.16-el7-x86_64.tar

▪ After the download is complete, untar the tar file using the following command:
tar -xvf mysql-8.0.16-el7-x86_64.tar
▪ Then extract the inner archives with the .tar.gz extension using the same command:
tar -xvf mysql-8.0.16-el7-x86_64.tar.gz
tar -xvf mysql-router-8.0.16-el7-x86_64.tar.gz
tar -xvf mysql-test-8.0.16-el7-x86_64.tar.gz

We need to create the data directory manually.

Open the /etc/my.cnf file and configure the data directory and related paths:

 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysqld.sock
 port=3300
 log-error=/var/log/mysqld.log
 pid-file=/var/run/mysqld/mysqld.pid

 We need to create the directory using the mkdir command:

mkdir -p /var/lib/mysql
 Once the directory is created we need to initialize the database.
The path should be the one where MySQL is installed, not the default path:
/home/asmita/mysql-8.0.16-el7-x86_64/bin/mysqld --defaults-file=/etc/my1.cnf --user=root --initialize

 Then we need to assign permissions to the data directory:

chmod -R 777 /var/lib/mysql1/

chown -R mysql:mysql mysql-files

 Then we need to start the database using the following command:

/home/asmita/mysql-8.0.16-el7-x86_64/bin/mysqld --defaults-file=/etc/my1.cnf --user=root &
 Then we need to check the mysqld.log file for the generated temporary password, which we will
use for setting the new password for MySQL.

We also need to give the socket file path.

 Setting the password using mysql_secure_installation:

/home/asmita/mysql-8.0.16-el7-x86_64/bin/mysql_secure_installation -uroot -p -S'/var/lib/mysql1/mysql.sock'

/root/mysql-8.0.16-el7-x86_64/bin/mysql_secure_installation -uroot -p -S'/var/lib/mysql1/mysql.sock'

 After the password is set, access MySQL using the following command:
/home/asmita/mysql-8.0.16-el7-x86_64/bin/mysql -uroot -p'MySQL@123' -S'/var/lib/mysql1/mysql.sock'

/root/mysql-8.0.16-el7-x86_64/bin/mysql -uroot -p'MySQL@123' -S'/var/lib/mysql/mysql.sock'

 If we want to shut down the database, use the following command:

/home/asmita/mysql-8.0.16-el7-x86_64/bin/mysqladmin -uroot -p'MySQL@123' -S'/var/lib/mysql1/mysql.sock' shutdown

MONITORING
We need to check the following:
 os-uptime – uptime

 db-uptime – \s

 disk partitions – df -h

 memory – free

 db-size –

SELECT table_schema "DB Name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;
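
For sizing individual tables rather than whole schemas, a similar query against information_schema can be used (the schema name 'mydb' is only a placeholder):

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS "Size in MB"
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY (data_length + index_length) DESC;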
Master-slave replication checks:
Here we need to check the following:
Seconds_Behind_Master = 0 / 3421 / NULL
where,
0 – slave and master are in sync
a number (e.g. 3421) – replication delay in seconds
NULL – replication is broken
show slave status \G

BACKUP

mysqldump:
used for creating backups of MySQL databases.
Syntax:
mysqldump db_name > backup_file.sql

 If we want to take a backup of all the databases we use the following command:
mysqldump --all-databases > all_databases.sql

There are various options to use with mysqldump:

mysqldump -u username -p db_name --log-error=logfile_name.log --verbose > directory_path/output_file.sql

where,
 --log-error
appends warnings and errors to the named file
 --verbose
provides detailed information about the backup process.

 To take a backup of multiple databases out of all the databases:

mysqldump -u root -p --databases db_name1 db_name2 ... --log-error=file.log --verbose > file.sql

 To take a backup of multiple tables:

mysqldump -u root -p db_name table1_name table2_name ... --log-error=file.log --verbose > file.sql
mysqldump options:

 Full database backup:

mysqldump -uroot -p --all-databases --single-transaction -R --triggers --events --log-error=full.log --verbose > /root/full.sql

o --all-databases:
This option is used when we need to take a backup of all the databases present.

mysqldump --all-databases -u username -p --log-error=filename.log > filename.sql

o --single-transaction:
Consistent backup: the --single-transaction option ensures a consistent backup of
your database by wrapping the entire mysqldump operation in a single transaction. This
guarantees that the backup reflects a consistent state of your database at the time the
transaction started.

mysqldump --single-transaction my_database > my_database.sql

o -R (--routines):
Include stored routines (procedures and functions) for the dumped databases in the output.

o --triggers
Include triggers for each dumped table in the output. This option is enabled by default;
disable it with --skip-triggers.

o --verbose
Displays additional information about the backup process as it is occurring.

o --log-error:
It is used to log warnings and errors by appending them to the named file.
(Disconnecting from localhost) – this should be the last line of the log file that is created,
which means the backup completed successfully.

o --databases:
Dump several databases. Normally, mysqldump treats the first name argument on the
command line as a database name and following names as table names. With this option,
it treats all name arguments as database names.
It is used to specify multiple databases that you want to include in the backup.

Syntax:
mysqldump -u username -p --databases database1 [database2 ...] > backup_file.sql

o --no-data:
It is used to exclude the data from the SQL dump generated by the mysqldump
command. This option is useful when you want to export only the structure (schema) of
the database objects (tables, views, procedures, functions, triggers) without including the
actual data records.

Syntax:
mysqldump -u username -p --no-data database_name > schema_backup.sql

o --no-create-info:
is used to exclude the creation statements of database objects (such as tables, views,
procedures, functions, triggers) from the SQL dump generated by the mysqldump
command. This option is useful when you want to export only the data from the database
without including the structure definitions.

Syntax:
mysqldump -u username -p --no-create-info database_name > data_backup.sql

 To take a backup of a 500GB database, or when a large amount of data is present:

mysqldump -u username -p dbname | gzip > filename.sql.gz

Then we can decompress the file when needed.

For a more compressed format we can use bzip2:

mysqldump -u username -p dbname | bzip2 > filename.sql.bz2

To check whether the backup completed:

bzcat /root/full.sql.bz2 | tail -n 10

Restoration
mysql -u username -p database_name < backup_file.sql -f
where,
-f  forcefully
It means that if any error occurs during the restoration, execution will not stop.
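
For dumps that were piped through gzip or bzip2 as shown above, the backup can be streamed straight back into mysql without first decompressing it to disk; for example (file and database names are placeholders):

gunzip < filename.sql.gz | mysql -u username -p dbname
bzcat filename.sql.bz2 | mysql -u username -p dbname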
Backup and Pipe to Remote Server

mysqldump -u username -p mydatabase | ssh user@remote-server "cat > /path/to/backup.sql"

Single table backup:


mysqldump asmita employee --single-transaction --log-error=error.log --verbose > employee.sql

Other method:
create table employee_25072024 like employee;
insert into employee_25072024 select * from employee;

USER CREATION

 To create a new user:

CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';

This creates the user with the server's default authentication plugin.

If a client requires the older native password plugin (common on MySQL 8.0, where the default plugin changed), use:

CREATE USER 'newuser'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';

Eg:
create user 'newuser'@'localhost' identified by 'Root@123';

 If you want the user to connect from any host (% wildcard), use:
create user 'username'@'%' identified by 'password';

 Once the user is created we grant specific privileges to that user using the following query:
GRANT privileges ON *.* TO 'username'@'host';
(Before version 8.0, an IDENTIFIED BY 'password' clause could also be appended to GRANT; in 8.0 the user must be created first with CREATE USER.)

where,
the first * is the database name (all databases)
the second * is the table name (all tables)

eg:
grant select on asmita.details to 'newuser'@'%';
or
grant select,update,insert on asmita.details to 'newuser'@'%';

 If we want to grant all privileges to that user:

GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';

 If we want to grant only specific privileges:

grant select,insert,update,delete on db_name.* to 'user_name'@'localhost';

 To check the grants we use the following query:

show grants for 'user_name'@'host';
eg:
show grants for newuser;

REVOKE:
 Revoke ALL PRIVILEGES from a user on a database:
REVOKE ALL PRIVILEGES ON db_name.* FROM 'user_name'@'localhost';
Eg:
revoke all privileges on asmita.details from 'newuser'@'%';

 Revoke specific privileges on a table from multiple users:

REVOKE SELECT, INSERT ON mydatabase.mytable FROM 'user1'@'localhost', 'user2'@'localhost';
Eg:
revoke SELECT ON asmita.* FROM 'newuser'@'%';

Providing column access in MySQL

 First create a view which contains only the required columns:

CREATE VIEW col_view AS
SELECT Id, Name, Age
FROM details;

 Then grant permissions on that view:

GRANT SELECT ON database_name.restricted_view TO 'newuser'@'host';
GRANT SELECT ON asmita.col_view TO 'newuser'@'%';

Other method (column-level privileges):
grant select(col1,col2) on db_name.tab_name to 'username'@'%';
Eg:
grant select(country,cust_name) on asmita.customers to 'newuser'@'%';

To check the address the server is listening on:
SHOW VARIABLES LIKE 'bind_address';

 To check the users:

SELECT user, host FROM mysql.user;

 Grant privileges WITH GRANT OPTION:

This allows a MySQL user not only to use specific privileges themselves but also to grant
those same privileges to other users.

GRANT ALL PRIVILEGES ON `mysql`.* TO `newuser`@`%` WITH GRANT OPTION;

MASTER-SLAVE REPLICATION

Relay logs store events received from the master server before they are executed on the slave
database. This temporary storage ensures that if the slave server crashes or restarts, it can resume
replication from where it left off without relying solely on the master server.
If the master becomes unavailable temporarily, the slave can continue to apply changes from the
relay logs until the master is back online.
 Binary logs are generated on the master server and contain the original database changes. Relay
logs are generated on the slave server and contain a copy of these changes after they have been
received and processed for replication.

 Relay Log Files: Similar to binary logs on the master server, relay logs are stored as files on the
slave server. They are named sequentially, typically <hostname>-relay-bin.<sequence_number>
(see the check commands after this list).

 Relay Log Index: MySQL maintains an index file (typically <hostname>-relay-bin.index) listing the
relay log files, while relay-log.info records the current relay log file and the position up to which
the slave has applied events.

 bind-address=* in MySQL allows the server to listen for client connections on all available
network interfaces.
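
As referenced in the list above, the relay log settings and the slave's current position can be checked with standard statements:

SHOW VARIABLES LIKE 'relay_log%';
SHOW SLAVE STATUS\G
-- look at Relay_Log_File, Relay_Log_Pos and Relay_Master_Log_File in the output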

Replication setup:

For master-slave replication we need two servers: one will be the master and the other will be the slave,
where the master has both read/write permissions whereas the slave has only read permission.

 First check that the server_id for the master and the slave are different; if they are the same, then
replication will not take place and we will need to change the server_id of the master or the slave.

 To check the server_id we use:

o show global variables like '%server%';

o If we want to reset the server_id, use the following:

 SET GLOBAL server_id = <number>;

 We also check whether the binary logs are enabled or not (see the my.cnf sketch below if they are not):

show binary logs;
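
If binary logging is not enabled on the master, it is typically turned on in my.cnf under [mysqld] and the server restarted; a minimal sketch (the server-id value and log basename are examples only):

[mysqld]
server-id=1
log-bin=mysql-bin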

Master server:

 First we need to take a full backup of the master server using the following command:

mysqldump -u root -p --all-databases -R --triggers --master-data=2 --verbose --log-error=newbackup.log > newbackup.sql

 Next we need to move that backup file onto the secondary, i.e. the slave server:

scp file_name root@slave_ip:/path/


Slave server:

 Restore the backup file that was copied from the master server:

mysql -u root -p < file_name

Master server:

 On the master server create a new user and grant the replication permission:

o CREATE USER 'username'@'%' IDENTIFIED WITH mysql_native_password BY 'pass';

o GRANT REPLICATION SLAVE ON *.* TO 'username'@'%';

o FLUSH PRIVILEGES;

Slave server:

o Check that the replication user can reach the master using the following command:

mysql -u newuser -p -h 'ip(master)'

o If the connection is working properly, then execute the CHANGE MASTER command as follows:

CHANGE MASTER TO MASTER_HOST='192.168.1.28',
MASTER_USER='employee',
MASTER_PASSWORD='Emp@1234',
MASTER_LOG_FILE='mysql-bin.000015',
MASTER_LOG_POS=1294277;

o Once it has executed we need to start the slave:

start slave;
o To check the status:
show slave status \G

 If we want to check the master-related information:

select * from mysql.slave_master_info \G

Seconds_Behind_Master: shows the number of seconds the slave SQL thread is behind the master.
This indicates how much replication lag exists.

Replication from a lower to a higher version is possible, but not vice versa.

CHANGE MASTER TO
MASTER_HOST = '192.168.1.160',
MASTER_USER = 'test',
MASTER_PASSWORD = 'Test@123',
MASTER_LOG_FILE = 'mysql-bin.000033',
MASTER_LOG_POS = 197;

GTID REPLICATION:

 Add the following in the my.cnf file under [mysqld]:

gtid_mode=ON

enforce_gtid_consistency=ON

 To check whether GTID mode is on or off:

SHOW VARIABLES LIKE 'gtid_mode';

CHANGE MASTER TO
MASTER_HOST='192.168.1.160',
MASTER_PORT=3300,
MASTER_USER='test',
MASTER_PASSWORD='Test@123',
MASTER_AUTO_POSITION=1;
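
Once the slave has been started with MASTER_AUTO_POSITION=1, the GTID state on either server can be inspected, for example:

SELECT @@GLOBAL.gtid_executed;
SHOW SLAVE STATUS\G
-- compare Retrieved_Gtid_Set and Executed_Gtid_Set in the output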

For checking the InnoDB buffer pool size:

SELECT CONCAT(ROUND(@@innodb_buffer_pool_size / 1024 / 1024, 2), ' MB') AS "InnoDB Buffer Pool Size";

Other useful commands/files:
mysqld_safe --skip-grant-tables &      (start the server without privilege checks, e.g. for password recovery)
nano /etc/selinux/config               (SELinux configuration file)
SELECT * FROM information_schema.schema_privileges;

· InnoDB Buffer Pool focuses on enhancing the performance of the database by keeping frequently
accessed data in memory, reducing the need for disk reads and writes.
· Binary Log (Binlog) is crucial for replication, recovery, and auditing by recording all changes made
to the database.

Percona Backup:
xtrabackup --backup --datadir=/var/lib/mysql --target-dir=/home/Asmita

xtrabackup_binlog_info – file in the backup directory that records the binary log file name and position at the time of the backup.

xtrabackup --prepare --use-memory=2G --target-dir=Fullbackup
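
A typical final step, not shown above, is restoring the prepared backup into an empty data directory and fixing ownership; a hedged sketch using xtrabackup's copy-back mode:

# stop mysqld and make sure the data directory is empty before copying back
xtrabackup --copy-back --target-dir=Fullbackup
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld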

Linux-Generic (tar file download)

We need to create the following directories:

basedir
datadir
logs
socket

my.cnf
[mysqld]
server-id
basedir
datadir
log-error
log-bin
socket

lower_case_table_names

It is a system variable that controls how MySQL handles the case sensitivity of table names,
which matters in particular on case-insensitive file systems.

xtrabackup tool

 Differential Backup: Backs up all changes made since the last full backup.
 Incremental Backup: Backs up all changes made since the last backup, whether it was a full or
incremental backup.

Changing the ownership of the data directory is mandatory after the initialization of the data directory.

rsync -av /path/to/backup/ /var/lib/mysql/

MYSQL UPGRADATION

8.0.32 -----> 8.0.33

 First take a full database backup:

mysqldump --all-databases --triggers -R --events --log-error=err.log > fullbackup.sql

 Once the backup is taken we need to stop the mysql service:

systemctl status mysqld.service
systemctl stop mysqld.service
 Next we need to copy the my.cnf file and the datadir to a new path:
rsync -av /etc/my.cnf /new/path
 Once they are copied, remove the mysql packages:
yum remove mysql*
 Then download the newer version of mysql
 Replace the my.cnf file:
rsync my.cnf /etc/my.cnf
 Start the mysql service
 Verify the changes

SESSIONS IN PROCESSLIST:

To kill sleep sessions in MySQL, you can use the following methods:

*Method 1: Kill by Process ID*

1. Identify the sleeping process IDs: `SELECT id FROM information_schema.processlist WHERE
command = 'Sleep';`
2. Kill the process: `KILL [process_id];`

*Method 2: Kill all sleep sessions*

KILL does not accept a subquery, so generate one KILL statement per sleeping session and then run them:

`SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE command = 'Sleep';`

*Method 3: KILL QUERY vs KILL CONNECTION*

`KILL QUERY [process_id];` terminates only the statement the session is currently running, while
`KILL CONNECTION [process_id];` (the default form of KILL) terminates the whole connection.

Alternatively, you can also use:

*Method 4: Using mysqladmin*


`mysqladmin kill [process_id]`

*Method 5: Using MySQL Workbench or other GUI tools*

- Open MySQL Workbench


- Go to "Server" > "Status" > "Client Connections"
- Right-click on the sleeping connection and select "Kill"
---------------------------------------------------------------------------------------------------------------------------
Because `KILL` cannot be combined directly with a `SELECT` subquery, killing sleep sessions in bulk is done by
generating the `KILL` statements first (as in Method 2 above) and then executing them.

Alternatively, you can also use:

```
SELECT CONCAT('KILL ', id, ';')
FROM information_schema.processlist
WHERE command = 'sleep'
INTO OUTFILE '/tmp/kill_sleep.sql';

SOURCE /tmp/kill_sleep.sql;
```
This method generates a SQL file with `KILL` commands for each sleeping session and then executes
the file.
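
A shell-level variant of the same idea pipes the generated statements straight back into the server; a sketch assuming root credentials (the password is a placeholder):

# -N suppresses column headers so that only the KILL statements are piped through
mysql -u root -p'password' -N -e "SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE command = 'Sleep'" | mysql -u root -p'password'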

 SELECT NOW();
o Returns the current date and time in the session time zone; to check the configured time
zone itself use SELECT @@global.time_zone, @@session.time_zone;

mysqldump mysql user --single-transaction --verbose > users.sql

AUDIT LOGS

SHOW variables LIKE 'plugin%';


INSTALL PLUGIN audit_log SONAME 'audit_log.so';
SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE '%audit%'\G
SHOW variables LIKE 'audit%'

ERROR NO: 1032 / 1024

The above error is caused by the following:
when a delete transaction is applied on the slave, it cannot find the corresponding row and the
error occurs.
Steps to follow:
 Stop slave
 SET GLOBAL sql_slave_skip_counter = 1;
 Start slave
 Check the count of the table on the master and the slave

Once the slave is started, if the error occurs on 225 we need to check the table count on both the
master server (X.100) and 225.

If the difference in count is large, then do the following:

Check the table size on the master server.
We need to restore that table from the master to the slave, so we need to take a backup on the master server,
and for that we require downtime, for which we need to inform the application team; after the
downtime is confirmed by the application team we need to do the following:
 We will check the count of the table
 Take the backup of that table on the master server
 Then on the slave (225) we need to restore it; before that we need to stop the slave and then
restore the backup
 Then check the count on both master and slave and cross-verify
 Start slave
 SELECT COUNT(*) FROM table_name;

UPGRADATION

After the upgrade we need to run the following:

mysql_upgrade -u root -p

The innodb_thread_sleep_delay is a configuration parameter in MySQL that controls the amount of
time (in microseconds) that InnoDB threads should sleep when there are no tasks to perform. It is
designed to manage thread activity in situations where there are fewer operations to execute, allowing the
system to be more efficient by preventing threads from constantly waking up and checking for new
work when nothing is available.

INDEXING

SHOW INDEXES FROM table_name;

EXPLAIN table_name;            (same as DESCRIBE table_name;)
SHOW CREATE TABLE table_name;
SHOW TABLE STATUS LIKE 'table_name'\G

OPTIMIZE TABLE table_name;


Replication types
Master slave
Master master
Group replication

Flush hosts (error no 1129)


It is used to clear the host cache. When a client connects to the MySQL server, MySQL caches
information about the host (e.g., IP address or hostname) and the status of the connection (such as
whether the client is allowed to connect). This cache is maintained to optimize subsequent connections
from the same host.
If a client host has exceeded the allowed number of connection errors, MySQL blocks that host for a
certain amount of time. Running FLUSH HOSTS can unblock the host immediately.

Collation defines how string data is compared, sorted, and ordered in the database, and is closely tied
to character sets, which define the set of characters that can be stored.

UNDO AND REDO LOGS

Undo logs and redo logs are important mechanisms used for ensuring data integrity, supporting
transaction rollback and recovery, and maintaining the consistency of the database in case of failures.
1. Undo logs:
a. Undo logs are used primarily for transaction rollback. They help to revert or "undo"
changes made by a transaction that has not been committed yet. This is important for
maintaining the ACID properties of the database, especially Atomicity and Consistency.
-- Begin a transaction
START TRANSACTION;

-- Modify data
UPDATE employees SET salary = salary * 1.1 WHERE department = 'Sales';

-- Rollback the changes


ROLLBACK;

The undo log, also known as the rollback segment, is a crucial part of the InnoDB storage engine. Its
primary purpose is to support transactional consistency and provide the ability to roll back changes
made during a transaction.

Here’s how it works:


Maintaining before-images: Whenever a transaction modifies data in InnoDB, the undo log records
the before-image of the modified data. This before-image contains the original values of the modified
rows, allowing for undoing or rolling back changes if needed.
Transaction isolation: The undo log plays a vital role in providing transaction isolation levels like
READ COMMITTED or REPEATABLE READ. It ensures that within a transaction, other concurrent
transactions can still read consistent data by using the before-images stored in the undo log.
Crash recovery: In the event of a system crash or restart, the undo log helps restore the database to a
consistent state by applying the necessary undo operations based on the recorded before-images.

2. Redo Log:
The redo log, also known as the transaction log or InnoDB log, serves a different
purpose than the undo log. Its primary function is to ensure durability and aid in crash
recovery.
Redo Log not only records the changes made to the database but also includes the
modifications that are written into the rollback segments. In a database system, rollback
segments are used to temporarily store undo data during transactions. So, in addition to
tracking database changes, the Redo Log also captures the changes made to the rollback
segments. This ensures that during recovery procedures, the system can properly restore
the database to a consistent state by applying both the database changes and the
modifications stored in the rollback segments.
Write-ahead logging: The redo log follows a write-ahead logging approach, meaning
that changes are written to the redo log before the corresponding data pages are updated.
This ensures that in the event of a crash, the changes recorded in the redo log can be replayed
to restore the database to a consistent state.
Crash recovery: During crash recovery, the redo log is crucial for replaying the logged
changes to bring the database to a consistent state. By reapplying the changes recorded
in the redo log, InnoDB can recover transactions that were committed but not yet written
to disk.
-- Begin a transaction
START TRANSACTION;

-- Modify data
UPDATE products SET stock = stock - 10 WHERE id = 123;

-- Commit the transaction


COMMIT;

 Dirty Pages: These are pages in memory that have been modified but not yet written to disk.
InnoDB periodically flushes these pages to reduce the number of dirty pages in memory.
What is a Tablespace?
A tablespace in InnoDB is a logical container for storing data. InnoDB stores its data in a
variety of tablespaces, which can either be shared or dedicated to individual tables. The
tablespace contains one or more data files that store the actual content (data and indexes) of
the tables.

Types of tablespace:
1. System Tablespace (Shared Tablespace):
 The system tablespace (usually the file ibdata1 by default) stores InnoDB's internal data,
such as metadata, undo logs, and doublewrite buffer.
 All tables and indexes that do not use the file-per-table feature are stored here.
 The system tablespace is a single large file that can grow over time as more data is added.
2. File-Per-Table Tablespaces:
 In this model, each table (or its partitions) is stored in a separate tablespace file. Each table
has its own data file (e.g., .ibd file) for its table data and associated indexes.
 This can improve performance because each table is stored independently, making it easier
to manage and optimize specific tables.
 By enabling the innodb_file_per_table configuration option, MySQL stores each table in its
own .ibd file, rather than in the shared system tablespace.
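
Whether file-per-table is in effect, and which tablespace file backs a given table, can be checked with standard statements (the schema and table names below are placeholders):

SHOW VARIABLES LIKE 'innodb_file_per_table';

-- list the tablespace file(s) backing a specific table
SELECT file_name, tablespace_name
FROM information_schema.files
WHERE file_name LIKE '%mydb/mytable%';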

INNODB ARCHITECTURE:

Common Configuration Parameters for InnoDB:


 innodb_buffer_pool_size: Determines the size of the buffer pool. Increasing this value improves
performance by allowing more data to be cached in memory.
 innodb_log_file_size: Controls the size of the redo log files. Larger log files can improve write
performance but may impact recovery time.
 innodb_flush_log_at_trx_commit: Controls the frequency with which the redo log is flushed
to disk. Setting this to 1 ensures ACID compliance but may incur a
performance penalty. Setting it to 2 or 0 may offer better performance at the cost of durability.
The innodb_flush_log_at_trx_commit parameter can be set to one of the following values:
1. 0 – Flush log to disk once per second
Behavior: The redo log is flushed to disk only once per second, regardless of
transaction commits.
When a transaction commits, InnoDB writes the redo log to memory, and the actual
flush to disk occurs at least once every second
Performance: This setting provides the best performance because it minimizes disk I/O
during each commit. However, it compromises durability in the event of a crash. If the
system crashes right before the flush occurs, transactions that were committed but not
yet flushed may be lost.
This is suitable for environments where performance is a higher priority than durability,
and the loss of a second's worth of transactions in a crash is acceptable.
2. 1 – Flush log to disk at each transaction commit
Behavior: InnoDB flushes the redo log to disk every time a transaction is committed.
This is the safest setting for durability. Each transaction’s commit is guaranteed to be
written to disk, ensuring that committed transactions are durable even in the event of a
crash. The flush happens synchronously with each commit.
Performance: This setting can significantly impact performance due to the additional
disk I/O for each commit operation. Each commit involves writing the redo log to disk,
which can be slower on systems with high transaction throughput.
This is the default and safest setting when data durability is a critical concern
3. 2 – Write the log at each transaction commit, but flush it to disk about once per second
Behavior: The redo log is written to memory on each transaction commit, but it is
flushed to disk once per second in the background.
This setting provides a balance between performance and durability. Each commit is
acknowledged quickly, but the log flushing to disk is delayed until the next second.
Performance: This option is faster than 1 because the log is not flushed to disk on every
commit, but it still maintains a level of durability by ensuring that the log is written to
disk at least once per second.
This setting is often used in systems that require a compromise between performance
and durability.

Durability:
innodb_flush_log_at_trx_commit = 1 ensures that the database is fully durable and can recover
committed transactions in case of a system crash. This is because the redo log is written to disk
and flushed on every commit.
innodb_flush_log_at_trx_commit = 0 or innodb_flush_log_at_trx_commit = 2 introduces the risk of
losing the last second's worth of transactions if the system crashes before the log is flushed to disk.
innodb_flush_log_at_trx_commit = 0 is the fastest option, as it reduces the number of disk
flushes by writing the log to disk only once per second. This significantly improves throughput
and reduces I/O overhead.
innodb_flush_log_at_trx_commit = 1 introduces the most disk I/O, as it requires a disk flush on
every commit, which can reduce performance in systems with high transaction throughput.
innodb_flush_log_at_trx_commit = 2 offers a balance between performance and durability, as it
avoids frequent disk flushes but still provides some level of durability.
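
The current value can be inspected and changed at runtime, or persisted in my.cnf; a brief sketch (the value shown is just an example):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

-- to persist it, add under [mysqld] in my.cnf:
-- innodb_flush_log_at_trx_commit = 2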

 innodb_file_per_table: Determines whether InnoDB stores each table in its own tablespace file
or uses a shared tablespace.

InnoDB is a storage engine for MySQL that provides high performance, reliability, and
ACID-compliant transactions. It is the default storage engine in MySQL and is commonly used for
applications that require strong data integrity and support for complex transactions.
The InnoDB architecture has two main kinds of structures:
 In-memory structures
 on-disk structures
In-memory structure:
The in-memory structures are responsible for managing and optimizing the storage and retrieval of
data. The in-memory structures include:
 Buffer pool
 Change buffer
 Adaptive hash index
 Log buffer
1. Buffer pool:
a. The buffer pool is an area in main memory where InnoDB caches table and index data
as it is accessed.
b. The buffer pool permits frequently used data to be accessed directly from memory,
which speeds up processing. On dedicated servers, up to 80% of physical memory is
often assigned to the buffer pool.
2. Change buffer:
a. The change buffer is in charge of caching changes to the secondary index pages when
these pages are not in the buffer pool.
b. When we execute an INSERT, UPDATE, or DELETE statement, it changes the data of
the table and the secondary index pages. The change buffer caches these changes when
the relevant pages are not in the buffer pool to avoid time-consuming I/O operations.
3. Adaptive hash index:
a. An Adaptive Hash Index (AHI) is a feature in InnoDB (MySQL's default storage
engine) that optimizes certain types of queries by dynamically creating hash indexes in
memory, based on frequently accessed data. The adaptive part of this feature means that
InnoDB adjusts the hash index in response to query patterns, particularly for frequently
accessed index pages.
b. The goal of the Adaptive Hash Index (AHI) is to improve query performance, especially
for point lookups or queries that access a small set of rows based on specific key values.
4. Log Buffer
a. The log buffer is a memory area that holds the changes to be written to transaction logs.
b. The log buffer improves performance by writing logs to memory before periodically
flushing them to the redo log on disk.
c. The default size of the log buffer is often sufficient for most applications. But if you
have a write-intensive application, you can configure the log buffer size to enhance the
MySQL server performance.

On-disk structure:
InnoDB storage engine uses the on-disk structures to store data permanently on disks. These
structures ensure data integrity, offer efficient storage and support transactional features.
The on-disk structures include:
 System tablespace
 File-per-table tablespaces
 General tablespaces
 Undo tablespaces
 Temporary tablespaces
 Doublewrite buffer
o InnoDB uses the Doublewrite Buffer to store pages that have been flushed from the
buffer pool before their actual writing to the InnoDB data files.
o The Doublewrite Buffer allows InnoDB to retrieve a reliable page copy for recovery in
case a storage issue occurs.
 Redo log
 Undo logs

mysqlpump
It provides a faster and more efficient way of performing backups compared to mysqldump. It is
specifically designed to handle parallel backups of large databases, offering better performance, especially
when dealing with large datasets.
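
A typical invocation that uses the parallelism mysqlpump is known for (the thread count and output file name are examples only):

mysqlpump -u root -p --default-parallelism=4 --all-databases > full_pump_backup.sql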
