LVM
Agenda
What is LVM
Types of LVM (linear, striped, mirrored)
PE and LE concept
Snapshots
CLVM concept
RAID
What is LVM?
LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical
volumes.
With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can
be placed on other block devices which might span two or more disks.
The physical volumes are combined into logical volumes, with the exception of the /boot partition. The /boot
partition cannot be on a logical volume because the boot loader cannot read it. If the root (/) partition is
on a logical volume, create a separate /boot partition that is not part of a volume group.
LVM Terms
• Physical Volume: A physical volume (PV) is a disk or partition that is, or will be, used by LVM; it is
marked as usable space for LVM.
• Volume Group: Any number of physical volumes (PVs) on different disk drives can be combined into a
volume group (VG). A VG is a collection of PVs; think of it as a virtual disk drive.
• Logical Volume: Volume groups are subdivided into logical volumes (LVs). Each logical volume can be
individually formatted as if it were a regular Linux partition. A logical volume is, therefore, like a virtual
partition on your virtual disk drive.
• Physical Extent (PE): an attribute of a VG; the fixed-size unit into which PV space is divided. Think of it as a virtual cylinder size.
• Logical Extent (LE): an attribute of an LV; an LV is a collection of LEs of the same size as the PEs.
LVM Components
LVM Features
• You can combine several hard disks or partitions
• You can enlarge a logical volume when free space is exhausted
• You can add hard disks to the volume group in a running system
• You can add logical volumes in a running system
• You can use several hard disks with improved performance in the RAID 0 (striping) mode
• You can add up to 256 logical volumes
• The Snapshot feature enables consistent backups
Logical Steps to configure LVM
Step 1. Use a disk utility to create a partition of any size and assign it the partition type Linux LVM (8e).
Step 2. Initialize the new partition as a PV:
I. Go to System -> Administration -> Logical Volume Management
II. Expand Uninitialized Entities in the left pane
III. Expand the disk with the new partition
IV. Select the new partition (confirm the partition type is 0x8e in the right pane)
V. Click Initialize Entity
VI. Confirm by clicking Yes; the data will be wiped
Step 3. Create the new VG using the PV just created:
I. Click the Create New Volume Group button
II. Specify the Volume Group Name (vg1)
III. Click OK
Step 4. Create an LV within the new VG:
I. Expand the new VG
II. Select Logical View
III. Click the Create New Logical Volume button
IV. Specify the LV name (lv1)
V. Specify the LV size, or click Use remaining space
VI. Specify the file system properties (type, mount point, etc.)
VII. Click OK
VIII. Confirm to create the mount point
PE and LE concepts
Create LVM by available PE
For Example
# lvcreate -l 400 -n LV1 VG1
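Since -l allocates by extent count rather than by size, the resulting LV size is the extent count times the PE size. A quick sketch of the arithmetic, assuming the default 4 MiB extent size:

```shell
# -l allocates by physical extents (PEs); with the default 4 MiB PE size,
# 400 extents yield a 1600 MiB logical volume.
PE_SIZE_MIB=4
EXTENTS=400
LV_SIZE_MIB=$((PE_SIZE_MIB * EXTENTS))
echo "lvcreate -l ${EXTENTS} -> ${LV_SIZE_MIB} MiB LV"
```

Check the actual PE size of a VG with vgdisplay before relying on the default.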
[Diagram: Logical Volumes allocated from a Volume Group (VG)]
Linear LVM
• A linear volume aggregates multiple physical volumes into one logical volume. For example, if you have
two 60GB disks, you can create a 120GB logical volume. The physical storage is concatenated.
Striped LVM
• You can control the way the data is written to the physical volumes by creating a striped logical
volume. For large sequential reads and writes, this can improve the efficiency of the data I/O.
• Striping enhances performance by writing data to a predetermined number of physical volumes in
round-robin fashion.
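As a hedged sketch (device names reused from the linear example below; the stripe count and stripe size are illustrative assumptions), a striped LV could be created with the -i (number of stripes) and -I (stripe size in KB) options:

```shell
# Create a 1 GB LV striped across two PVs, 64 KB stripe size (illustrative values)
lvcreate -i 2 -I 64 -L 1G -n stripelv VG1 /dev/hda8 /dev/hda9
```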
Mirrored LVM
• A mirror maintains identical copies of data on different devices. When data is written to one
device, it is written to a second device as well, mirroring the data. This provides protection for
device failures. When one leg of a mirror fails, the logical volume becomes a linear volume and
can still be accessed.
CREATING LINEAR LVM
Step-1 – Create two partitions of 500 MB each using fdisk and set the type to Linux LVM (8e)
Step-2 – Create Physical Volumes
pvcreate /dev/hda8 /dev/hda9
Step-3 – Create Volume Group
vgcreate VG1 /dev/hda8 /dev/hda9
Step-4 – Change Volume Group to ACTIVE
vgchange -a y VG1
Step-5 – Create Logical Volume
lvcreate -L +600M -n LV1 VG1
Step-6 – Format the Logical Volume
mkfs.ext3 /dev/VG1/LV1
Step-7 – Add an entry in /etc/fstab
/dev/VG1/LV1 /mnt/data ext3 defaults 0 0
Step-8 – Activate the new volume
mount -a
Check the newly mounted Logical Volume
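After mount -a, the result can be verified with the standard reporting commands (mount point taken from the fstab entry above):

```shell
pvs                 # PVs and their VG membership
vgs                 # VG size and free space
lvs                 # LVs, their sizes and attributes
df -h /mnt/data     # confirm the new filesystem is mounted
```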
Objectives
Upon completion of this unit, you should be able to:
understand advanced LVM topics
move and rename volume groups
set up clustered logical volumes
LVM2
An LVM2 Review
Review of LVM2 layers:
Metadata locations are stored as offset and size (in bytes). There is room in the label for about 15 locations,
but the LVM tools currently use 3: a single data area plus up to two metadata areas.
LVM Components cont.
1. Administrative convenience - It is easier to keep track of the hardware in a system if each real
disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical
volumes on a single disk may cause a kernel warning about unknown partition types at boot-up.
2. Striping performance – LVM cannot tell that two physical volumes are on the same physical disk. If
you create a striped logical volume when two physical volumes are on the same physical disk, the
stripes could be on different partitions on the same disk. This would result in a decrease in
performance rather than an increase.
LVM Components cont.
Volume Groups
Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of
which logical volumes can be allocated.
Within a volume group, the disk space available for allocation is divided into fixed-size units
called extents. An extent is the smallest unit of space that can be allocated. Within a physical
volume, extents are referred to as physical extents.
A logical volume is allocated into logical extents of the same size as the physical extents. The extent
size is thus the same for all logical volumes in the volume group. The volume group maps the
logical extents to physical extents.
LVM Components cont.
Linear Volumes
A linear volume aggregates space from one or more physical
volumes into one logical volume.
For example, if you have two 60GB disks, you can create a
120GB logical volume. The physical storage is concatenated.
Creating a linear volume assigns a range of physical extents to an area of a logical volume in order.
For example, as shown in the figure, logical extents 1 to 99 could map to one physical volume and logical extents
100 to 198 could map to a second physical volume. From the point of view of the application, there is one
device that is 198 extents in size.
LVM Components cont.
In the figure:
the first stripe of data is written to PV1
the second stripe of data is written to PV2
the third stripe of data is written to PV3
the fourth stripe of data is written to PV1
In a striped logical volume, the size of the stripe cannot exceed the size of an extent.
LVM Components cont.
Mirrored Logical Volumes
A mirror maintains identical copies of data on different
devices. When data is written to one device, it is written
to a second device as well, mirroring the data.
This provides protection against device failures. When one leg
of a mirror fails, the logical volume becomes a linear volume
and can still be accessed.
An LVM mirror divides the device being copied into regions
that are typically 512KB in size.
LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror
or mirrors.
This log can be kept on disk, which will keep it persistent across reboots, or it can be maintained in
memory.
How to create LVM
1. # lvcreate -L 100M myvg
If you run the above command, it will create an LV named "lvol0" using space from the VG
named "myvg".
2. To create the lv with specific name use the option “-n”
# lvcreate -L 100M -n mylv myvg (creates the LV named "mylv" using space from VG "myvg")
3. Create a volume as a percentage of the volume group (e.g., allocate 50% of the VG to the new
volume). Note: you need the "-l" option to specify the percentage.
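A minimal sketch of percentage-based allocation, assuming a VG named myvg (the LV names are illustrative):

```shell
# Allocate 50% of the total VG space to a new LV
lvcreate -l 50%VG -n mylv myvg
# Or allocate 50% of the remaining free space in the VG
lvcreate -l 50%FREE -n mylv2 myvg
```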
Mirror LVM
Mirrored LVM is used for data redundancy.
To create a mirrored volume, specify the number of copies of the data to make with the -m
argument of the lvcreate command
Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical
volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the
file system.
Create a mirror volume from volume group “myvg”. An LVM mirror divides the device being
copied into regions that, by default, are 512KB in size. You can use the -R argument to specify
the region size in MB.
LVM maintains a small log which it uses to keep track of which regions are in sync with the
mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots.
You can specify instead that this log be kept in memory with the --corelog argument; this
eliminates the need for an extra log device, but it requires that the entire mirror be
resynchronized at every reboot.
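A hedged sketch of the --corelog variant described above (VG and LV names are assumptions):

```shell
# In-memory mirror log: no separate log device needed,
# but a full resynchronization happens at every reboot
lvcreate -L 500M -m1 --corelog -n mirrorlv myvg
```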
How to create LVM cont.
2. To keep the log on disk: LVM will always try to keep the mirror copies and the log on different
disks, so you need three disks.
The following command creates a mirrored logical volume with a single mirror.
The volume is 500 megabytes in size, it is named mirrorlv, and it is carved out of volume group
myvg. The first leg of the mirror is on device /dev/sda1, the second leg of the mirror is on
device /dev/sdb1, and the mirror log is on /dev/sdc1.
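The command itself did not survive the slide conversion; reconstructed from the parameters stated above (500 MB, name mirrorlv, VG myvg, legs on /dev/sda1 and /dev/sdb1, log on /dev/sdc1), it would look roughly like:

```shell
lvcreate -L 500M -m1 -n mirrorlv myvg /dev/sda1 /dev/sdb1 /dev/sdc1
```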
3. LVM provides an option to specify the number of mirror copies; the number of copies can be increased by
increasing the "-m" number.
The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg.
# lvconvert -m0 /dev/vg00/lvol1
How to create LVM cont.
1. The following command creates a logical volume called yourlv that uses all of the unallocated space
in the volume group testvg.
# lvcreate -l 100%FREE -n yourlv testvg
2. To create a logical volume to be allocated from a specific physical volume in the volume group,
# lvcreate -L 1500 -n testlv testvg /dev/sdg1
creates a logical volume named testlv in volume group testvg allocated from the physical volume
/dev/sdg1
3. You can specify which extents of a physical volume are to be used for a logical volume. The following
example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and
extents 50 through 125 of physical volume /dev/sdb1 in volume group testvg
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-25 /dev/sdb1:50-125
4. The following example creates a linear logical volume out of extents 0 through 25 of physical volume
/dev/sda1 and then continues laying out the logical volume at extent 100.
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-
Advanced Concepts of LV
The underlying physical volumes used to create a logical volume can be important if the physical volume
needs to be removed, so you may need to consider this possibility when you create the logical volume.
The default policy for how the extents of a logical volume are allocated is inherit, which applies the
same policy as for the volume group. These policies can be changed using the lvchange command.
When physical volumes are used to create a volume group, the disk space is divided into 4MB extents
by default.
This extent is the minimum amount by which the logical volume may be increased or decreased in
size.
Large numbers of extents will have no impact on I/O performance of the logical volume.
To define the extent size, use the -s option if the default extent size is not suitable.
You can put limits on the number of physical or logical volumes the volume group can have by using
the -p and -l arguments of the vgcreate command.
By default, a volume group allocates physical extents according to common-sense rules such as not
placing parallel stripes on the same physical volume. This is the normal allocation policy. You can use
the --alloc argument of the vgcreate command to specify an allocation policy of contiguous,
anywhere, or cling.
The contiguous policy requires that new extents are adjacent to existing extents
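A hedged sketch combining the options above (device names and values are illustrative assumptions):

```shell
# 8 MiB extents, at most 16 PVs and 64 LVs in the VG
vgcreate -s 8M -p 16 -l 64 myvg /dev/sdb1 /dev/sdc1
# Request the contiguous allocation policy instead of normal
vgcreate --alloc contiguous myvg2 /dev/sdd1
```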
Advanced Concepts of LV
If there are sufficient free extents to satisfy an allocation request but a normal
allocation policy would not use them, the anywhere allocation policy will, even if
that reduces performance by placing two stripes on the same physical volume.
In general, allocation policies other than normal are required only in special cases
where you need to specify unusual or nonstandard extent allocation.
The maximum device size with LVM is 8 Exabytes on 64-bit CPUs.
The following command reports the progress of the move as a percentage at five second intervals.
# pvmove -i5 /dev/sdd1
Customized Reporting for LVM
The following command displays only the physical volume name and size.
# pvs -o pv_name,pv_size
The following example displays the UUID of the physical volume in addition to the default fields.
# pvs -o +pv_uuid
The --noheadings argument suppresses the headings line. This can be useful for writing scripts.
# pvs --noheadings -o pv_name
The --separator separator argument uses separator to separate each field.
# pvs --separator =
To keep the fields aligned when using the separator argument, use the separator argument in conjunction
with the --aligned argument
# pvs --separator = --aligned
# vgs -o +pv_name
Displaying Information on Failed Devices
Use the -P argument of the lvs or vgs command to display information about a failed volume that would
otherwise not appear in the output.
This argument permits some operations even though the metadata is not completely consistent
internally. For example, if one of the devices that made up the volume group vg failed, the vgs
command might show the following output.
# vgs -o +devices
Volume group "vg" not found
If you specify the -P argument of the vgs command, the volume group is still unusable but you can
see more information about the failed device.
# vgs -P -o +devices
Partial mode. Incomplete volume groups will be activated read-only.
VG #PV #LV #SN Attr VSize VFree Devices
vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(0)
vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(5120),/dev/sda1(0)
In this example, the failed device caused both a linear and a striped logical volume in the volume
group to fail. The lvs command without the -P argument shows the following output.
# lvs -a -o +devices
Volume group "vg" not found
Displaying Information on Failed Devices
Using the -P argument shows the logical volumes that have failed.
# lvs -P -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
Partial mode. Incomplete volume groups will be activated read-only.
linear vg -wi-a- 20.00G unknown device(0)
stripe vg -wi-a- 20.00G unknown device(5120),/dev/sda1(0)
The following examples show the output of the pvs and lvs commands with the -P argument
specified when a leg of a mirrored logical volume has failed.
# vgs -a -o +devices -P
Partial mode. Incomplete volume groups will be activated read-only.
VG #PV #LV #SN Attr VSize VFree Devices
corey 4 4 0 rz-pnc 1.58T 1.34T my_mirror_mimage_0(0),my_mirror_mimage_1(0)
corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdd1(0)
corey 4 4 0 rz-pnc 1.58T 1.34T unknown device(0)
corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdb1(0)
# lvs -a -o +devices -P
Partial mode. Incomplete volume groups will be activated read-only.
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
my_mirror corey mwi-a- 120.00G my_mirror_mlog 1.95 my_mirror_mimage_0(0),my_mirror_mimage_1(0)
[my_mirror_mimage_0] corey iwi-ao 120.00G unknown device(0)
[my_mirror_mimage_1] corey iwi-ao 120.00G /dev/sdb1(0)
[my_mirror_mlog] corey lwi-ao 4.00M /dev/sdd1(0)
LVM Configuration Files and Commands
For a list of LVM commands, use the built-in help (it shows the details of each LVM command):
# lvm
lvm> help
LVM snapshot
A snapshot volume is a special type of volume that presents all the data that was in the volume at
the time the snapshot was created.
This allows the administrator to create a new block device which presents an exact copy of a logical
volume, frozen at some point in time.
Used when some batch processing, a backup for instance, needs to be performed on the logical
volume, but you don't want to halt a live system that is changing the data
When the snapshot device has been finished with, the system administrator can simply remove the
device.
This facility does require that the snapshot be made at a time when the data on the logical volume
is in a consistent state - the VFS-lock patch for LVM1 makes sure that some filesystems do this
automatically when a snapshot is created, and many of the filesystems in the 2.6 kernel do this
automatically when a snapshot is created without patching.
How to take the snapshot
# lvcreate -s <original-lv-name> -n <snapshot-name> -L <size-for-snapshot>
LVM snapshot
LVM2
LVM2 has read/write snapshots.
Since the snapshot can be mounted in read/write mode, we can do our testing on the snapshot; if we
are not satisfied with it, we can simply discard the snapshot and roll back to the original logical
volume. All the data that was added to the snapshot during testing is then removed.
LVM Snapshots
LVM1: lvm1 has read-only snapshot. Read-only snapshots work by creating an exception table, which is used to keep
track of which blocks have been changed. If a block is to be changed on the origin, it is first copied to the snapshot,
marked as copied in the exception table, and then the new data is written to the original volume.
LVM1 is available up to kernel version 2.4.
LVM2: snapshots are read/write by default. Read/write snapshots work like read-only snapshots, with the additional
feature that if data is written to the snapshot, that block is marked in the exception table as used, and never gets
copied from the original volume
This opens up many new possibilities that were not possible with LVM1's read-only snapshots. One example is to
snapshot a volume, mount the snapshot, and try an experimental program that changes files on that volume. If you
don't like what it did, you can unmount the snapshot, remove it, and mount the original filesystem in its place. It is
also useful for creating volumes for use with Xen. You can create a disk image, then snapshot it and modify the
snapshot for a particular domU instance. You can then create another snapshot of the original volume, and modify
that one for a different domU instance. Since the only storage used by a snapshot is blocks that were changed on
the origin or the snapshot, the majority of the volume is shared by the domUs.
LVM2 is available from the 2.6 kernel onward.
Snapshot cont.
This is accomplished by having an exception list that is updated every time something changes
between the LVs (formally known as CoW, Copy-on-Write).
Snapshot cont.
Create a snapshot
# lvcreate -s <original-lv-name> -n <snap-shot-name> -L <size-for-snap>
To backup mount the snapshot read only and take the backup
# mount -o ro /dev/vg-name/snap-lv-name /mount-point-of-snap
2. To make a backup of a directory or file which has already been backed up with a dump level 0:
# dump -1uf databackup /home/user1/data (this command backs up all the new files added to the /home/user1/data directory
after the level-0 dump was made)
-1 -Is the dump-level [1 specifies incremental backup]
databackup -Is a dump-file [or backup-file]
/home/user1/data -Is a directory for which a backup is created
restore command
restore COMMAND:
restore - command restores the data from the dump-file or backup-file created using dump
command.
SYNTAX:
The Syntax is
restore [options]
Options:
-f Specify the backup or dump file
-C Compare the dump-file with the original file
-i Restore in interactive mode
-v Display verbose information
-e Exclude an inode while making the backup
Commands used in interactive mode:
ls      List the files and directories in the backup file
add     Add files in the dump-file to the current working directory
cd      Change the directory
pwd     Display the current working directory
extract Extract the files from the dump
quit    Quit interactive mode
restore command
restore command example:
To restore file and directories from backup-file
# restore -if databack
-i        Restore in interactive mode
-f        Restore from the backup-file specified
databack  The name of the backup-file or dump-file
LVM2 is backwards compatible with LVM1, with the exception of snapshot and cluster support. You can
convert a volume group from LVM1 format to LVM2 format with the vgconvert command
The underlying physical storage unit of an LVM logical volume is a block device such as a partition or
whole disk. This device is initialized as an LVM physical volume (PV).
To create an LVM logical volume, the physical volumes are combined into a volume group (VG). This
creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. This process is
analogous to the way in which disks are divided into partitions. A logical volume is used by file systems
and applications (such as databases)
Removing Physical Volumes from a Volume Group
To remove unused physical volumes from a volume group, use the vgreduce command.
The vgreduce command shrinks a volume group's capacity by removing one or more empty
physical volumes.
This frees those physical volumes to be used in different volume groups or to be removed from the
system.
Before removing a physical volume from a volume group, you can make sure that the physical
volume is not used by any logical volumes by using the pvdisplay command.
If the physical volume is still being used you will have to migrate the data to another physical
volume using the pvmove command.
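The sequence described above can be sketched as (device and VG names are assumptions):

```shell
pvdisplay -m /dev/sdb1     # check whether the PV still holds allocated extents
pvmove /dev/sdb1           # if so, migrate its data to other PVs in the VG
vgreduce myvg /dev/sdb1    # remove the now-empty PV from the VG
```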
When you create a volume group it is, by default, activated. This means that the logical volumes in
that group are accessible and subject to change.
There are various circumstances in which you need to make a volume group inactive and thus
unknown to the kernel. To deactivate or activate a volume group, use the -a (--available) argument
of the vgchange command.
The following example deactivates the volume group myvg.
# vgchange -a n myvg
If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one
node, or 'l' to activate or deactivate a volume group only on the local node. Logical volumes with
single-host snapshots are always activated exclusively because they can only be used on one node
at once.
Changing the Parameters of a Volume Group / Removing a Volume Group / Splitting a VG from an
Existing VG / Combining Volume Groups
Use the vgchange command to change several volume group parameters for an existing volume group.
The following command changes the maximum number of logical volumes of volume group myvg to 128.
# vgchange -l 128 myvg
Metadata backups and archives are automatically created on every volume group and logical volume
configuration change unless disabled in the lvm.conf file.
By default, the metadata backup is stored in the /etc/lvm/backup file and the metadata archives are
stored in the /etc/lvm/archive directory.
You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command.
The vgcfgrestore command restores the metadata of a volume group from the archive to all the
physical volumes in the volume group.
You may be able to recover the data on the physical volume by writing a new metadata area on the physical
volume, specifying the same UUID as the lost metadata.
NOTE: You should not attempt this procedure with a working LVM logical volume. You will lose your data
if you specify the incorrect UUID.
Backing up Volume group Metadata cont.
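The pvcreate command this slide refers to was lost in conversion; reconstructed from the device, UUID, and archive file named below (the archive path is an assumption based on the default /etc/lvm/archive location), it would look roughly like:

```shell
pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
```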
The above example labels the /dev/sdh1 device as a physical volume with the UUID indicated above,
FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
This command restores the physical volume label with the metadata information contained in
VG_00050.vg, the most recent good archived metadata for the volume group.
The restorefile argument instructs the pvcreate command to make the new physical volume
compatible with the old one on the volume group, ensuring that the new metadata will not be
placed where the old physical volume contained data (which could happen, for example, if the
original pvcreate command had used the command-line arguments that control metadata placement,
or if the physical volume was originally created using a different version of the software that used
different defaults). The pvcreate command overwrites only the LVM metadata areas and does not
affect the existing data areas.
Backing up Volume group Metadata cont.
If the on-disk LVM metadata takes at least as much space as what overrode it, this command can recover the
physical volume. If what overrode the metadata went past the metadata area, the data on the volume may
have been affected. You might be able to use the fsck command to recover that data.
Renaming a Volume Group & Moving a Volume Group to Another System
To recreate a volume group directory and logical volume special files, use the vgmknodes
command. This command checks the LVM2 special files in the /dev directory that are needed for
active logical volumes. It creates any special files that are missing and removes unused ones.
You can incorporate the vgmknodes functionality into the vgscan command by specifying the
--mknodes argument to the vgscan command.
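A short sketch of the two forms described:

```shell
vgmknodes            # recreate missing /dev special files for active LVs
vgscan --mknodes     # rescan for VGs and recreate the special files in one step
```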
Logical Volume Backup
Metadata backups and archives are automatically created on every volume group and logical volume
configuration change unless disabled in the lvm.conf file. By default, the metadata backup is stored in the
/etc/lvm/backup file and the metadata archives are stored in the /etc/lvm/archive file. How long the metadata
archives stored in the /etc/lvm/archive file are kept and how many archive files are kept is determined by
parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm
directory in the backup.
Note that a metadata backup does not back up the user and system data contained in the logical volumes.
You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. You can
restore metadata with the vgcfgrestore command
As discussed above, the VG metadata is backed up using the vgcfgbackup command.
Snapshot concept from RHEL6
The LVM snapshot feature provides the ability to create virtual images of a device at a
particular instant without causing a service interruption.
When a change is made to the original device (the origin) after a snapshot is taken, the
snapshot feature makes a copy of the changed data area as it was prior to the change
so that it can reconstruct the state of the device.
Because a snapshot copies only the data areas that change after the snapshot is
created, the snapshot feature requires a minimal amount of storage. For example, with
a rarely updated origin, 3-5% of the origin's capacity is sufficient to maintain the
snapshot.
NOTE
Snapshot copies of a file system are virtual copies, not actual media backup for a file
system. Snapshots do not provide a substitute for a backup procedure.
Snapshot concept from RHEL6
The size of the snapshot governs the amount of space set aside for storing the changes to the
origin volume. For example, if you made a snapshot and then completely overwrote the origin the
snapshot would have to be at least as big as the origin volume to hold the changes. You need to
dimension a snapshot according to the expected level of change. So for example a short-lived
snapshot of a read-mostly volume, such as /usr, would need less space than a long-lived snapshot
of a volume that sees a greater number of writes, such as /home.
If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the
origin volume. You should regularly monitor the size of the snapshot. Snapshots are fully
resizeable, however, so if you have the storage capacity you can increase the size of the snapshot
volume to prevent it from getting dropped. Conversely, if you find that the snapshot volume is
larger than you need, you can reduce the size of the volume to free up space that is needed by
other logical volumes.
When you create a snapshot file system, full read and write access to the origin stays possible. If a
chunk on a snapshot is changed, that chunk is marked and never gets copied from the original
volume.
Snapshot concept from RHEL6
Create a snapshot
# lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1
If the original logical volume contains a file system, you can mount the snapshot logical volume on
an arbitrary directory in order to access the contents of the file system to run a backup while the
original file system continues to get updated.
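A hedged backup sequence using such a snapshot (volume and mount-point names are assumptions):

```shell
mkdir -p /mnt/snap
mount -o ro /dev/vg00/snap /mnt/snap                # mount the snapshot read-only
tar -czf /root/lvol1-backup.tar.gz -C /mnt/snap .   # back up the frozen view
umount /mnt/snap
lvremove /dev/vg00/snap                             # discard the snapshot when done
```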
Merging Snapshot Volumes
For snapshot merging, the --merge option of the lvconvert command is used.
If both the origin and snapshot volume are not open, the merge will start immediately.
Otherwise, the merge will start the first time either the origin or snapshot is activated and both
are closed.
Merging a snapshot into an origin that cannot be closed, for example a root file system, is
deferred until the next time the origin volume is activated. When merging starts, the resulting
logical volume will have the origin's name, minor number, and UUID. While the merge is in
progress, reads or writes to the origin appear as if they were directed to the snapshot being
merged. When the merge finishes, the merged snapshot is removed.
# lvconvert --merge /dev/vg00/lvol1_snap (merges lvol1_snap into its origin)
Day 8 Topic 2
Advanced RAID
Advanced RAID
Objectives
Upon completion of this unit, you should be able to:
Understand the different types of RAID supported by Red Hat
Learn how to administer software RAID
Learn how to optimize software RAID
Planning for and implementing storage growth
Redundant Array of Inexpensive Disks
RAID6
Block-level striping with dual distributed parity
Comparable to RAID5, with differences:
Decreased write performance
Greater fault tolerance
Survives the failure of up to 2 array devices
Degraded mode
Protection during single-device rebuild
SATA drives become more viable
Requires 4 or more block devices
Storage efficiency: 100*(1 - 2/N)%, where N=#devices
Example:
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]1
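The efficiency formula can be checked with a little shell arithmetic; for the four-device example above:

```shell
# RAID6 usable fraction: 100*(1 - 2/N)% = 100*(N-2)/N %
N=4
EFFICIENCY=$((100 * (N - 2) / N))
echo "RAID6 with ${N} devices: ${EFFICIENCY}% usable"
```

With four devices, half the raw capacity is usable; the fraction improves as more devices are added.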
RAID6 Parity and Data Distribution
RAID10
A stripe of mirrors (nested RAID)
Increased performance and fault tolerance
Requires 4 or more block devices
Storage efficiency: (100/N)%, where N=#devices/mirror
Example:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
Stripe Parameters
RAID0, RAID5, RAID6
Chunk size (mdadm -c N):
Amount of space to use on each round-robin device before moving on to the next
64 KiB default
Stride (mke2fs -E stride=N):
(chunk size) / (filesystem block size)
Can be used to offset ext2-specific data structures across the array devices for
more even distribution
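With the 64 KiB default chunk and a typical 4 KiB ext filesystem block, the stride works out as:

```shell
# stride = chunk size / filesystem block size
CHUNK_KIB=64
BLOCK_KIB=4
STRIDE=$((CHUNK_KIB / BLOCK_KIB))
echo "mke2fs -E stride=${STRIDE} /dev/md0"   # stride=16 for these values
```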
/proc/mdstat
Lists and provides information on all active RAID arrays
Used by mdadm during --scan
Monitor array reconstruction (watch -n .5 'cat /proc/mdstat')
Examples:
Initial sync'ing of a RAID1 (mirror):
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda5[1] sdb5[0]
      987840 blocks [2/2] [UU]
      [=======>.............] resync = 35.7% (354112/987840) finish=0.9min speed=10743K/sec
Active functioning RAID1:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda5[1] sdb5[0]
      987840 blocks [2/2] [UU]
unused devices: <none>
Failed half of a RAID1:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda5[1](F) sdb5[0]
      987840 blocks [2/1] [U_]
unused devices: <none>
Event Notification
Make sure e-mail works
/etc/mdadm.conf
MAILADDR [email protected]
MAILFROM [email protected]
PROGRAM /usr/sbin/my-RAID-script
Test:
mdadm --monitor --scan --oneshot --test
Implement continuous monitoring of the array:
chkconfig mdmonitor on; service mdmonitor start
Restriping/Reshaping RAID Devices
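The body of this slide did not survive conversion; as a hedged sketch, a striped array can be reshaped onto additional devices with mdadm --grow (device names and counts are assumptions):

```shell
mdadm --add /dev/md0 /dev/sde1                   # add the new device as a spare
mdadm --grow /dev/md0 --raid-devices=5 \
      --backup-file=/root/md0-grow.backup        # restripe onto 5 devices
watch -n .5 'cat /proc/mdstat'                   # monitor the reshape
```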
Improving the Process
Growing the Size
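The body of this slide is missing; a minimal sketch of growing an array and its filesystem (assuming the component devices were replaced with larger ones and an ext filesystem on /dev/md0):

```shell
mdadm --grow /dev/md0 --size=max    # grow the array to the largest possible size
resize2fs /dev/md0                  # then grow the filesystem to match
```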
Enabling Write-Intent on a RAID1 Array
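The body of this slide is missing; a hedged sketch of enabling a write-intent bitmap on an existing RAID1:

```shell
mdadm --grow /dev/md0 --bitmap=internal   # add a write-intent bitmap
mdadm --grow /dev/md0 --bitmap=none       # remove it again if needed
```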
Write-behind on RAID1
--write-behind=256 (default)
Required:
write-intent (--bitmap= )
--write-mostly
Facilitates slow-link RAID1 mirrors
Mirror can be on a remote network
Write-intent bitmap prevents application from blocking during writes
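A hedged creation sketch (device names are assumptions): write-behind requires a write-intent bitmap, and the slow or remote leg is marked --write-mostly:

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/sda1 --write-mostly /dev/sdb1
```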
RAID Error Handling