
LVM

Agenda

 What is LVM
 Types of LVM (linear, striped, mirrored)
 PE and LE concept
 Snapshot
 CLVM concept
 RAID

LVM

• The traditional Linux partition layout is basically inflexible


• It is difficult to modify partitions on a running system
• LVM provides a virtual pool of disk space
- called a volume group
- From which logical volumes can be generated as needed
• LVM lets you resize storage during operation
• Physical volumes are combined into a super unit
– Referred to as the volume group

What is LVM?

LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical
volumes.

With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can
be placed on other block devices which might span two or more disks.

The physical volumes are combined into logical volumes, with the exception of the /boot/ partition. The /boot/
partition cannot be on a logical volume because the boot loader cannot read it. If the root (/) partition is
on a logical volume, create a separate /boot/ partition which is not part of a volume group.
LVM Terms

Physical storage (type 0x8e)

• Physical Volume: A physical volume (PV) is another name for a regular physical disk partition that is used, or will
be used, by LVM. A PV is a disk or partition marked as usable space for LVM.

• Volume Group: Any number of physical volumes (PVs) on different disk drives can be added together into a
volume group (VG). A VG is a collection of PVs; think of it as a virtual disk drive.

• Logical Volume: Volume groups must then be subdivided into logical volumes. Each logical volume can be
individually formatted as if it were a regular Linux partition. A logical volume is, therefore, like a virtual partition
on your virtual disk drive – a piece of a VG.

• Physical Extent (PE) – The fixed-size allocation unit into which the PVs of a VG are divided; think of it as a
virtual cylinder size.
• Logical Extent (LE) – An attribute of an LV. An LV is a collection of LEs, each the same size as a PE; it is the
LV's virtual cylinder size.
LVM Components

LVM Features

Features
• You can combine several hard disks or partitions
• You can enlarge a logical volume when free space is exhausted
• You can add hard disks to the volume group in a running system
• You can add logical volumes in a running system
• You can use several hard disks with improved performance in the RAID 0 (striping) mode
• You can add up to 256 logical volumes
• The Snapshot feature enables consistent backups
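As a minimal sketch of combining these features on a running system (device and volume names are examples, not from the slides): label a new disk as a PV, add it to the VG, then extend an LV and its filesystem:

# pvcreate /dev/sdc1                # mark the new partition as a PV
# vgextend vg1 /dev/sdc1            # add the PV to the volume group
# lvextend -L +2G /dev/vg1/lv1      # grow the logical volume
# resize2fs /dev/vg1/lv1            # grow the ext3/ext4 filesystem to match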

Logical Steps to configure LVM

Step 1.
Use a disk utility to create some partitions of any size and assign the partition type LVM (0x8e).

Step 2.
Initialize the new partition as a PV:
I. Go to System -> Administration -> Logical Volume Management
II. Expand Uninitialized Entities in the left pane
III. Expand the disk with the new partition
IV. Select the new partition (confirm the partition type is 0x8e in the right pane)
V. Click Initialize Entity
VI. Confirm by clicking Yes; data will be wiped

Step 3.
Create the new VG using the PV just created:
I. Click the Create New Volume Group button
II. Specify the Volume Group Name (vg1)
III. Click OK

Step 4.
Create an LV within the new VG:
I. Expand the new VG
II. Select Logical View
III. Click the Create New Logical Volume button
IV. Specify the LV name (lv1)
V. Specify the LV size, or click Use Remaining Space
VI. Specify file system properties (type, mount point, etc.)
VII. Click OK
VIII. Confirm creation of the mount point

PE and LE concepts

Create LVM by available PE



Syntax
# lvcreate -l no-of-PE -n lv-name vg-name

For Example
# lvcreate -l 400 -n LV1 VG1
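Before running lvcreate -l, you can check how many PEs are free in the VG (a quick sketch using the example VG name above):

# vgdisplay VG1 | grep Free      # shows the 'Free  PE / Size' line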

LVM

(Figure: logical volumes allocated from a volume group)
LVM Physical Volume Layout

• The LVM label is placed in the second 512-byte sector by default.


• An LVM label provides correct identification and device ordering for a physical device, since devices
can come up in any order when the system is booted. The LVM label identifies the device as an LVM
physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also
stores the size of the block device in bytes, and it records where the LVM metadata will be stored on
the device.
• LVM metadata is small and stored as ASCII.
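As an illustration (assuming a PV exists at /dev/sda5; the path is an example), the label can be inspected with standard tools, since the second 512-byte sector contains the LABELONE signature:

# dd if=/dev/sda5 bs=512 skip=1 count=1 2>/dev/null | strings
(output includes the LABELONE signature and the PV UUID; the ASCII metadata follows later on the device)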
Types of LVM

There are three types of LVM logical volumes:


1. linear volumes,
2. striped volumes,
3. mirrored volumes.
Linear LVM

• A linear volume aggregates multiple physical volumes into one logical volume. For example, if you have two
60GB disks, you can create a 120GB logical volume. The physical storage is concatenated.
Striped LVM

• You can control the way the data is written to the physical volumes by creating a striped logical
volume. For large sequential reads and writes, this can improve the efficiency of the data I/O.
• Striping enhances performance by writing data to a predetermined number of physical volumes in
round-robin fashion.
Mirrored LVM

• A mirror maintains identical copies of data on different devices. When data is written to one
device, it is written to a second device as well, mirroring the data. This provides protection against
device failures. When one leg of a mirror fails, the logical volume becomes a linear volume and
can still be accessed.
CREATING LINEAR LVM

 Step-1 – Create two Partitions of 500 MB each using FDISK and set type as LINUX LVM
 Step-2 – Create Physical Volumes
 pvcreate /dev/hda8 /dev/hda9
 Step-3 – Create Volume Group
 vgcreate VG1 /dev/hda8 /dev/hda9
 Step-4 – Change Volume Group to ACTIVE
 vgchange -a y VG1
 Step-5 – Create Logical Volume
 lvcreate -L 600M -n LV1 VG1
 Step-6 – Format the Logical Volume
 mkfs.ext3 /dev/VG1/LV1
 Step-7 – Mount in /etc/fstab
 /dev/VG1/LV1 /mnt/data ext3 defaults 0 0
 Step-8 – Activate the new volume
 mount -a
Check the newly mounted Logical Volume

For a short summary


 pvscan
 lvscan
 vgscan

For full details


 pvdisplay
 lvdisplay
 vgdisplay
RESIZING THE LVM

 Step-1 – Unmount the LVM


 umount /dev/VG1/LV1
 Step-2 – Resize the LVM
 lvextend -L +200M /dev/VG1/LV1
 Step-3 – Make the LVM active
 vgchange -a y VG1
 Step-4 – Grow the filesystem into the new space
 resize2fs /dev/VG1/LV1
 Step-5 – Remount using /etc/fstab
 mount -a
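On newer LVM versions, Steps 2 and 4 can be combined: assuming your lvextend supports the -r/--resizefs option, it resizes the filesystem in the same operation, and ext3/ext4 can usually be grown while mounted:

# lvextend -r -L +200M /dev/VG1/LV1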
ADVANCED LVM – Extras

 Renaming a Volume Group


vgrename /dev/vg02 /dev/my_volume_group
vgrename vg02 my_volume_group
 Creating LVM by percentage % of free space
lvcreate -l 60%VG -n mylv testvg
lvcreate -l 100%FREE -n yourlv testvg
 Creating Striped LVM
lvcreate -L50G -i2 -I64 -n gfslv vg0
ADVANCED LVM – Extras

 Creating Mirror LVM

lvcreate -L50G -m1 -n gfslv vg0


lvcreate -L12MB -m1 --corelog -n ondiskmirvol bigvg
 Changing LVM Type
lvconvert -m1 vg00/lvol1 ---- converting linear to mirror
lvconvert -m0 vg00/lvol1 ---- converting mirror to linear
 Changing Permission of LVM
lvchange -pr vg00/lvol1 ---- changing LVM to read-only
Cluster LVM
Logical Volume Management

 Objectives
 Upon completion of this unit, you should be able to:
 understand advanced LVM topics
 move and rename Volume groups
 set up Clustered Logical Volumes
LVM2

 An LVM2 Review
 Review of LVM2 layers (figure):
LVM2

 LVM2 - Creating a Logical Volume


 From VG vg0's free extents, "carve" out a 50GB logical volume (LV) named gfslv:
 lvcreate -L 50G -n gfslv vg0
 Create a striped LV across 2 PVs with a 64KB stripe size:
 lvcreate -L 50G -i2 -I64 -n gfslv vg0
 Allocate space for the LV from a specific PV in the VG:
 lvcreate -L 50G -i2 -I64 -n gfslv vg0 /dev/sdb
 Display LV information
 lvdisplay, lvs, lvscan
LVM2

 Changing LVM options


 pvchange
 changing allocation permission on physical volumes
 disable allocation on a physical volume: pvchange -x n device
 allocation allowed on all physical volumes (default): pvchange -a -x y
 vgchange
 mainly for activating/deactivating volume groups
 activation: vgchange -ay vgname
 deactivation: vgchange -an vgname
 lvchange
 used for controlling visibility of the logical volume for the kernel
 activation: lvchange -ay lvname
 deactivation: lvchange -an lvname
 marking a volume read-only: lvchange -pr lvname
 marking it read-write again: lvchange -pw lvname
Clustered Logical Volume Manager (CLVM)

 CLVM is the clustered version of LVM2


 Aims to provide the same functionality as single-machine LVM
 Provides for storage virtualization
 Based on LVM2
 Device mapper (kernel)
 LVM2 tools (user space)
 Relies on a cluster infrastructure
 Used to coordinate logical volume changes between nodes
 CLVMD allows LV metadata changes only if the following conditions are true:
 All nodes in the cluster are running
 Cluster is quorate
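A typical enablement sketch on RHEL-era systems (assuming the lvm2-cluster package and a working cluster infrastructure are already in place):

# lvmconf --enable-cluster     # sets locking_type = 3 in /etc/lvm/lvm.conf
# service clvmd start          # start the CLVM daemon on every node
# chkconfig clvmd on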
LVM Components
Physical Volume
 The underlying physical storage unit of an LVM
logical volume is a block device such as a partition or whole disk.
 To use the device for an LVM logical volume the device must be
initialized as a physical volume (PV).
 Initializing a block device as a physical volume places a label near
the start of the device.
 By default, the LVM label is placed in the second 512-byte sector.

The Physical Volume Label (LVM Physical Volume Layout)


 By default, the pvcreate command places the physical volume label in the 2nd 512-byte sector. This label can
optionally be placed in any of the first four sectors, since the LVM tools that scan for a physical volume label
check the first 4 sectors.
 The physical volume label begins with the string LABELONE.
The physical volume label Contains:
i. Physical volume UUID
ii. Size of block device in bytes
iii. NULL-terminated list of data area locations
iv. NULL-terminated lists of metadata area locations

 Metadata locations are stored as offset and size (in bytes). There is room in the label for about 15 locations,
but the LVM tools currently use 3: a single data area plus up to two metadata areas.
LVM Components cont.

Multiple Partitions on a Disk


It is recommended that you create a single partition that covers the whole disk to label as an LVM
physical volume for the following reasons:

1. Administrative convenience - It is easier to keep track of the hardware in a system if each real
disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical
volumes on a single disk may cause a kernel warning about unknown partition types at boot-up.

2. Striping performance – LVM cannot tell that two physical volumes are on the same physical disk. If
you create a striped logical volume when two physical volumes are on the same physical disk, the
stripes could be on different partitions on the same disk. This would result in a decrease in
performance rather than an increase.
LVM Components cont.

Volume Groups

 Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of
which logical volumes can be allocated.
 Within a volume group, the disk space available for allocation is divided into units of a fixed-size
called extents. An extent is the smallest unit of space that can be allocated. Within a physical
volume, extents are referred to as physical extents.
 A logical volume is allocated into logical extents of the same size as the physical extents. The extent
size is thus the same for all logical volumes in the volume group. The volume group maps the
logical extents to physical extents.
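The extent size is fixed when the VG is created; a short sketch (example names, overriding the default 4MB extent size with the -s option of vgcreate):

# vgcreate -s 16M vg1 /dev/sda5 /dev/sda6
# vgdisplay vg1 | grep 'PE Size'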
LVM Components cont.

LVM Logical Volumes


In LVM, a volume group is divided up into logical volumes.
There are three types of LVM logical volumes:
1. linear volumes
2. striped volumes
3. mirrored volumes.

Linear Volumes
A linear volume aggregates space from one or more physical
volumes into one logical volume.
For example, if you have two 60GB disks, you can create a
120GB logical volume. The physical storage is concatenated.

Creating a linear volume assigns a range of physical extents to an area of a logical volume in order.
For example, as shown in the figure, logical extents 1 to 99 could map to one physical volume and logical extents
100 to 198 could map to a second physical volume. From the point of view of the application, there is one
device that is 198 extents in size.
LVM Components cont.

Linear LVM cont.


The physical volumes that make up a logical volume do not have to be the same size

 This volume group includes 2 physical volumes named PV1


and PV2.
 The physical volumes are divided into 4MB units, since that
is the extent size.
 In this example, PV1 is 200 extents in size (800MB) and PV2
is 100 extents in size (400MB).
 So, you can create a linear volume any size between 1 and 300
extents (4MB to 1200MB).
 In this example, the linear volume named LV1 is 300 extents
in size.
LVM Components cont.

Linear LVM cont.


 You can configure more than one linear
logical volume of whatever size you require
from the pool of physical extents.

 The figure "Multiple Logical Volumes" shows


the same volume group as in
"Linear Volume with Unequal Physical
Volumes", but in this case two logical
volumes have been carved out of the volume group: LV1, which is 250 extents in size (1000MB),
and LV2, which is 50 extents in size (200MB).
LVM Components cont.

Striped Logical Volumes


 LVM writes the data out across the underlying
physical volumes.
 You can control the way the data is written to the
physical volumes by creating a striped logical volume.

 For large sequential reads and writes, this can


improve the efficiency of the data I/O.

 Striping enhances performance by writing data to a


predetermined number of physical volumes in
round-robin fashion.
 With striping, I/O can be done in parallel.
 In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe.

In the figure:
the first stripe of data is written to PV1
the second stripe of data is written to PV2
the third stripe of data is written to PV3
the fourth stripe of data is written to PV1
In a striped logical volume, the size of the stripe cannot exceed the size of an extent.
LVM Components cont.
Mirrored Logical Volumes
 A mirror maintains identical copies of data on different
devices. When data is written to one device, it is written
to a second device as well, mirroring the data.
 This provides protection against device failures. When one leg
of a mirror fails, the logical volume becomes a linear volume
and can still be accessed.
 An LVM mirror divides the device being copied into regions
that are typically 512KB in size.
 LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror
or mirrors.
 This log can be kept on disk, which will keep it persistent across reboots, or it can be maintained in
memory.
How to create LVM

Logical Steps Used to create LVM


1. Create some physical partitions by using the "fdisk" utility

2. Assign the partition id 8e so the Linux kernel

labels it as an LVM partition (to maintain its metadata)

3. Now convert the physical partitions into

physical volumes by using the "pvcreate" command

4. Create a volume group named myvg (any name;

this layer pools all the PV space) by using the
"vgcreate" command

5. Create a logical volume named mylv (any name)

by using the "lvcreate" command;
LV space is allocated from the VG

6. Now format the LVM with a file system

7. Mount it permanently by using "fstab"


How to create LVM cont.
Linear LVM
A linear volume aggregates space from one or more physical volumes into one logical volume

1. Create a new linear logical volume with a size of 100MB


# lvcreate -L 100M myvg

 If you run the above command, it creates an LV named "lvol0" by using space from the VG
named "myvg"
2. To create the LV with a specific name, use the option "-n"
# lvcreate -L 100M -n mylv myvg (creates the LV named "mylv" by using space from VG "myvg")

To check the details, use the "lvs" command like this:


# lvs -o +devices

3. Create a volume with a percentage of the volume group (e.g. 20% of the VG is allocated to the new
volume). Note: you need "-l" to specify the percentage.

# lvcreate -l 20%VG -n mylv1 myvg


How to create LVM cont.

Mirror LVM
Mirror lvm is used for data redundancy.
 To create a mirrored volume, specify the number of copies of the data to make with the -m
argument of the lvcreate command
 Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical
volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the
file system.

 Create a mirror volume from volume group “myvg”. An LVM mirror divides the device being
copied into regions that, by default, are 512KB in size. You can use the -R argument to specify
the region size in MB.
 LVM maintains a small log which it uses to keep track of which regions are in sync with the
mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots.
 You can specify instead that this log be kept in memory with the --corelog argument; this
eliminates the need for an extra log device, but it requires that the entire mirror be
resynchronized at every reboot.
How to create LVM cont.

Mirror LVM cont.


 The following command creates a mirrored logical volume from the volume group myvg. The logical volume is
named mirrorvol and has a single mirror. The volume is 100MB in size and keeps the mirror log in
memory (this forces the log to be kept in memory instead of on disk):
# lvcreate -L 100MB -m1 --corelog -n mirrorvol myvg
# lvs -a -o +devices

2. To keep the mirror log on disk: LVM will always try to keep the mirror copies and the log on different
disks, so you need three disks.

The following command creates a mirrored logical volume with a single mirror.
The volume is 500 megabytes in size, it is named mirrorlv, and it is carved out of volume group
vg0. The first leg of the mirror is on device /dev/sda1, the second leg of the mirror is on
device /dev/sdb1, and the mirror log is on /dev/sdc1.

# lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1


How to create LVM cont.

# lvcreate -L 100M -m1 -n mirrorvol1 myvg


# lvs -a -o +devices
LV         VG   Attr   LSize   Origin Snap% Move Log             Copy%  Devices
mirrorvol1 myvg mwi-a- 100.00m             mirrorvol1_mlog 100.00 mirrorvol1_mimage_0(0),mirrorvol1_mimage_1(0)

3. LVM provides an option to specify the number of mirror copies.Number of copies can be increased by increasing
the "m" number.

# lvcreate -L 10M --corelog -m2 -n mirrorvol myvg

Changing Mirrored Volume Configuration


Convert the linear logical volume vg00/lvol1 to a mirrored logical volume:
# lvconvert -m1 /dev/vg00/lvol1

Convert the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg:
# lvconvert -m0 /dev/vg00/lvol1
How to create LVM cont.

stripe volume in LVM


Striped volumes are very useful where you need performance. This type of volume is mostly used where
large sequential reads and writes happen.

Create a striped volume.


Options:
-i - number of stripes (not more than the number of physical volumes in the volume group)
-I - stripe size (in KB)

# lvcreate -L 100M -i 3 -I 4 -n mylv myvg

# dmsetup table /dev/myvg/mylv


0 12288000 linear 252:2 2048
0 – first sector of the device that the entry describes
12288000 – number of 512-byte sectors in the entry
linear – type of mapping (linear means 1:1 block mapping)
252:2 – major and minor numbers of the source device
2048 – starting sector on the source device
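For comparison, a striped LV maps to the device-mapper "striped" target instead of "linear"; an illustrative (not captured) table line for the 3-way stripe created above might look like:

0 204800 striped 3 8 8:5 384 8:6 384 8:7 384

where 3 is the number of stripes, 8 is the chunk size in 512-byte sectors (the 4KB from -I 4), and each major:minor/offset pair names one underlying device.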
Some Advance Command For LV

1. The following command creates a logical volume called yourlv that uses all of the unallocated space
in the volume group testvg.
# lvcreate -l 100%FREE -n yourlv testvg

2. To create a logical volume to be allocated from a specific physical volume in the volume group,
# lvcreate -L 1500 -n testlv testvg /dev/sdg1
creates a logical volume named testlv in volume group testvg allocated from the physical volume
/dev/sdg1

3. You can specify which extents of a physical volume are to be used for a logical volume. The following
example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and
extents 50 through 125 of physical volume /dev/sdb1 in volume group testvg
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-25 /dev/sdb1:50-125

4. The following example creates a linear logical volume out of extents 0 through 25 of physical volume
/dev/sda1 and then continues laying out the logical volume at extent 100.
# lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-
Advanced Concept of LV

The underlying physical volumes used to create a logical volume can be important if the physical volume
needs to be removed, so you may need to consider this possibility when you create the logical volume.

 The default policy for how the extents of a logical volume are allocated is inherit, which applies the
same policy as for the volume group. These policies can be changed using the lvchange command
(allocation policies are described below).
 When physical volumes are used to create a volume group, its disk space is divided into 4MB extents,
by default.
 This extent is the minimum amount by which the logical volume may be increased or decreased in
size.
 Large numbers of extents will have no impact on I/O performance of the logical volume.
 To define the extent size, use the -s option of vgcreate if the default extent size is not suitable.
 You can put limits on the number of physical or logical volumes the volume group can have by using
the -p and -l arguments of the vgcreate command.
 By default, a volume group allocates physical extents according to common-sense rules such as not
placing parallel stripes on the same physical volume. This is the normal allocation policy. You can use
the --alloc argument of the vgcreate command to specify an allocation policy of contiguous,
anywhere, or cling
 The contiguous policy requires that new extents are adjacent to existing extents
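A hedged sketch of selecting the policy at VG creation time (names are examples):

# vgcreate --alloc contiguous vg1 /dev/sda5 /dev/sda6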
Advanced Concept of LV

 If there are sufficient free extents to satisfy an allocation request but a normal
allocation policy would not use them, the anywhere allocation policy will, even if
that reduces performance by placing two stripes on the same physical volume.
 In general, allocation policies other than normal are required only in special cases
where you need to specify unusual or nonstandard extent allocation.
 The maximum device size with LVM is 8 Exabytes on 64-bit CPUs.

Persistent Device Numbers


Major and minor device numbers are allocated dynamically at module load. Some
applications work best if the block device always is activated with the same device
(major and minor) number.
 You can specify these with the lvcreate and the lvchange commands by using the
following arguments:
--persistent y --major major --minor minor
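For example (illustrative major/minor numbers, not prescribed values):

# lvcreate -L 100M -n mylv --persistent y --major 238 --minor 30 myvg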
Advanced Concept of LV

Online Data Relocation


 The pvmove command moves all extents allocated on the physical volume /dev/sdc1 over to /dev/sdf1 in
the background.
# pvmove -b /dev/sdc1 /dev/sdf1

 The following command reports the progress of the move as a percentage at five second intervals.
# pvmove -i5 /dev/sdd1
Customized Reporting for LVM

The following command displays only the physical volume name and size.
# pvs -o pv_name,pv_size
The following example displays the UUID of the physical volume in addition to the default fields.
# pvs -o +pv_uuid

The --noheadings argument suppresses the headings line. This can be useful for writing scripts.
# pvs --noheadings -o pv_name
The --separator separator argument uses separator to separate each field.
# pvs --separator =
To keep the fields aligned when using the separator argument, use the separator argument in conjunction
with the --aligned argument
# pvs --separator = --aligned

# vgs -o +pv_name
Displaying Information on Failed Devices
 Use the -P argument of the lvs or vgs command to display information about a failed volume that would
otherwise not appear in the output.
 This argument permits some operations even though the metadata is not completely consistent
internally. For example, if one of the devices that made up the volume group vg failed, the vgs
command might show the following output.
# vgs -o +devices
Volume group "vg" not found
 If you specify the -P argument of the vgs command, the volume group is still unusable but you can
see more information about the failed device.
# vgs -P -o +devices
Partial mode. Incomplete volume groups will be activated read-only.
VG #PV #LV #SN Attr VSize VFree Devices
vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(0)
vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(5120),/dev/sda1(0)
 In this example, the failed device caused both a linear and a striped logical volume in the volume
group to fail. The lvs command without the -P argument shows the following output.
# lvs -a -o +devices
Volume group "vg" not found
Displaying Information on Failed Devices
Using the -P argument shows the logical volumes that have failed.
# lvs -P -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
Partial mode. Incomplete volume groups will be activated read-only.
linear vg -wi-a- 20.00G unknown device(0)
stripe vg -wi-a- 20.00G unknown device(5120),/dev/sda1(0)

 The following examples show the output of the pvs and lvs commands with the -P argument
specified when a leg of a mirrored logical volume has failed.
# vgs -a -o +devices -P
Partial mode. Incomplete volume groups will be activated read-only.
VG #PV #LV #SN Attr VSize VFree Devices
corey 4 4 0 rz-pnc 1.58T 1.34T my_mirror_mimage_0(0),my_mirror_mimage_1(0)
corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdd1(0)
corey 4 4 0 rz-pnc 1.58T 1.34T unknown device(0)
corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdb1(0)

# lvs -a -o +devices -P
Partial mode. Incomplete volume groups will be activated read-only.
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
my_mirror corey mwi-a- 120.00G my_mirror_mlog 1.95 my_mirror_mimage_0(0),my_mirror_mimage_1(0)
[my_mirror_mimage_0] corey iwi-ao 120.00G unknown device(0)
[my_mirror_mimage_1] corey iwi-ao 120.00G /dev/sdb1(0)
[my_mirror_mlog] corey lwi-ao 4.00M /dev/sdd1(0)
LVM Configuration Files / and commands

File Name Description


/etc/lvm/lvm.conf Central configuration file read by the tools
/etc/lvm/lvm_hosttag.conf For each host tag, an extra configuration file is read if it exists:
lvm_hosttag.conf (In Cluster)
/etc/lvm/.cache Device name filter cache file (configurable).
/etc/lvm/backup/ Directory for automatic volume group metadata backups (configurable).
/etc/lvm/archive/ Directory for automatic volume group metadata archives (configurable
with regard to directory path and archive history depth).
/var/lock/lvm/ In single-host configuration, lock files to prevent parallel tool runs from
corrupting the metadata; in a cluster, cluster-wide DLM is used.

For a list of LVM commands, use the built-in lvm shell (its help command shows the
details of each LVM command):
# lvm
lvm> help
LVM snapshot

Snapshot – taking a photograph of something at a particular instant in time.


• Creating a block device which presents an exact copy of a logical volume, frozen at some point
in time.
• Because the snapshot is read/write, you can test applications against production data by taking a
snapshot and running tests against the snapshot, leaving the real data untouched.
• A backup process uses the read-only ``snapped'' LV which does not change. The file system or
database can continue running while the backup takes place.
• A snapshot is created as a new LV in the same volume group as the target LV. The volume
group must have free space available to allocate the snapshot LV.
LVM Snap shot cont.

 A snapshot volume is a special type of volume that presents all the data that was in the volume at
the time the snapshot was created.
 This allows the administrator to create a new block device which presents an exact copy of a logical
volume, frozen at some point in time.
 Used when some batch processing, a backup for instance, needs to be performed on the logical
volume, but you don't want to halt a live system that is changing the data
 When the snapshot is finished with, the system administrator can simply remove the
device.
 This facility does require that the snapshot be made at a time when the data on the logical volume
is in a consistent state - the VFS-lock patch for LVM1 makes sure that some filesystems do this
automatically when a snapshot is created, and many of the filesystems in the 2.6 kernel do this
automatically when a snapshot is created without patching.
How to take the snapshot:
# lvcreate -s original-lvm-name -n snapshot-name -L size-for-snapshot
LVM snapshot

LVM snapshot is of two types


LVM1 (available up to kernel version 2.4)
LVM2 (available from kernel version 2.6)

LVM snapshot

LVM2
An LVM2 snapshot is read/write.
Because the snapshot can be mounted in read/write mode, we can do our testing on the
snapshot; if we are not satisfied and want to discard the data that was edited during
testing, we can simply roll back to the original logical volume, and all the data that was
added to the snapshot is removed.

How LVM1 works


If we want to modify a particular block, the block is first copied to the snapshot and marked as used,
and then the data is modified in place.
In LVM1 we are ultimately modifying the original data, while the snapshot remains read-only.
How to use it:
Simply take a snapshot at a particular instant, take the backup from it, and remove the snapshot
when done.
This is used mainly for databases, because backing up a database takes a long time.

LVM snapshot

How LVM2 works


In LVM2 the snapshot is available in read as well as write mode.
If a block is modified in the snapshot, it is marked as used and modified directly.
Modifying something in the snapshot does not modify the original LVM volume.
After testing on the snapshot, just unmount it, mount the original LVM volume, and remove the snapshot.
We still have the original data that was present at the time the snapshot was taken.
The snapshot size depends on how much data is being modified.
The snapshot is consistent and so is its data.
Example (the full command sequence is listed below):


 # lvcreate -s /dev/mapper/vg-lv-root -n snapshot -L +500M
 # mount /dev/mapper/vg-lv-snapshot /mnt
 Boot the OS with the snapshot:
 # vim /boot/grub/menu.lst
 kernel /vmlinuz ro root=/dev/mapper/vg-lv-rootsnap
 # vim /mnt/etc/fstab
 /dev/mapper/vg-lv-snapshot / ext4 defaults 1 1
 # reboot
 # cp -pr /etc /etc.def
 # mv /etc/fstab /etc/fstab.def
 # df -h
 df: cannot read the table of mounted file systems: No such file or directory.
 Boot again from the original LVM:
 # vim /boot/grub/menu.lst
 kernel /vmlinuz ro root=/dev/mapper/vg-lv-root
 # mount /dev/mapper/vg-lv /mnt
 Check:
 # vim /mnt/etc/fstab
 # lvremove /dev/mapper/vg-lv-snapshot
LVM snapshot

Merge the LVM snapshot to original LVM

# fdisk -c /dev/sda
# partx -a /dev/sda
# pvcreate /dev/sda{5,6}
# vgcreate vg0 /dev/sda{5,6}
# lvcreate -L 100M -n lv1 vg0
# mkfs.ext4 /dev/vg0/lv1
# mkdir /lv1
# mount /dev/vg0/lv1 /lv1/
# cp -rp /etc/sysconfig/ /lv1/
# lvcreate -s /dev/vg0/lv1 -n lv1snap -L 100M
# lvs
# umount /lv1/
# mount /dev/vg0/lv1snap /lv1/
# ll /lv1/
# cp -pr /lv1/sysconfig/ /lv1/sysconfig.2
# umount /lv1/
# mount /dev/vg0/lv1 /lv1/
# ls /lv1/
# umount /lv1/
# lvconvert --merge /dev/vg0/lv1snap

LVM Snap shot

LVM1: LVM1 has read-only snapshots. Read-only snapshots work by creating an exception table, which is used to keep
track of which blocks have been changed. If a block is to be changed on the origin, it is first copied to the snapshot,
marked as copied in the exception table, and then the new data is written to the original volume.
LVM1 is available up to kernel version 2.4.

LVM2: snapshots are read/write by default. Read/write snapshots work like read-only snapshots, with the additional
feature that if data is written to the snapshot, that block is marked in the exception table as used, and never gets
copied from the original volume
This opens up many new possibilities that were not possible with LVM1's read-only snapshots. One example is to
snapshot a volume, mount the snapshot, and try an experimental program that changes files on that volume. If you
don't like what it did, you can unmount the snapshot, remove it, and mount the original filesystem in its place. It is
also useful for creating volumes for use with Xen. You can create a disk image, then snapshot it and modify the
snapshot for a particular domU instance. You can then create another snapshot of the original volume, and modify
that one for a different domU instance. Since the only storage used by a snapshot is blocks that were changed on
the origin or the snapshot, the majority of the volume is shared by the domUs.
LVM2 is available from kernel version 2.6.

Snap shot cont.

Make a backup with snapshot


 A consistent backup is achieved when no data is changed between the start and the end of the
backup process. This can be hard to guarantee without stopping the system for the time
required by the copy process.
 Linux LVM implements a feature called Snapshots that does exactly what the name says: It's
like taking a picture of a logical volume at a given moment in time. With a Snapshot, you are
provided with two copies of the same LV—one can be used for backup purposes while the
other continues in operation.
 The two great advantages of Snapshots are:
1. Snapshot creation is instantaneous; no need to stop a production environment.
2. Two copies are made, but not at twice the size. A Snapshot will use only the space needed to
accommodate the difference between the two LVs.

This is accomplished by having an exception list that is updated every time something changes
between the LVs (formally known as CoW, Copy-on-Write).
Snap shot cont.

Create a snapshot
# lvcreate -s <original-lv-name> -n <snap-shot-name> -L <size-for-snap>

To check the detail of snapshot volume


# lvdisplay /dev/vg-name/snap-lv-name

To backup mount the snapshot read only and take the backup
# mount -o ro /dev/vg-name/snap-lv-name /mount-point-of-snap

(Dump and Restore for Linux Backup )


1. For backup, use the dump command
# dump -0uaf /dump-directory/snap.dump /mount-point-of-snap
2. For restore, use the "restore" command
# mkdir /data; cd /data
# restore -f /dump-directory/snap.dump
dump command

dump command makes backup of filesystem or file and directories.


syntax:
dump [options] [dump-file] [File-system or file or directories].

Command Options:
-[level]  The dump level (any integer; 0 specifies a full backup)
-f        Make the backup in a specified file
-u        Update the /etc/dumpdates file for the backup made
-v        Display verbose information
-e        Exclude an inode while making the backup


Example of dump command
1. To make a backup for a directory or file :
# dump -0uf databackup /home/user1/data (This command creates a dump-file called databackup, which is the backup of
the /home/user1/data directory)
-0 -Is the dump-level [0 specifies full-backup]
databackup -Is a dump-file [or backup-file]
/home/user1/data -Is a directory for which a backup is created

2. To make a backup of a directory or file which is already backed up with dump level 0:
# dump -1uf databackup /home/user1/data (This command backs up all the new files added to the /home/user1/data directory
after the level-0 dump was made)
-1 -Is the dump-level [1 specifies incremental backup]
databackup -Is a dump-file [or backup-file]
/home/user1/data -Is a directory for which a backup is created
restore command

restore COMMAND:
restore - command restores the data from the dump-file or backup-file created using dump
command.

SYNTAX:
The Syntax is
restore [options]
Options:
-f  Used to specify the backup or dump file
-C  Used to compare the dump-file with the original files
-i  Restore in interactive mode
-v  Display verbose information
-e  Exclude an inode while restoring

Commands used in interactive mode:
ls       List the files and directories in the backup file
add      Add files in the dump-file to the current working directory
cd       Change the directory
pwd      Display the current working directory
extract  Extract the files from the dump
quit     Quit the interactive mode
restore command
restore command example:
To restore files and directories from a backup-file:
# restore -if databack
-i        To restore in interactive mode
-f        To restore from the backup-file specified
databack  The name of the backup-file or dump-file

This command gets you to interactive mode as follows


restore >
Now the following commands are entered to restore
restore > ls -Lists files and directories in dump file
restore > add -add files to the current directory
restore > ls -Lists the file added from the backup file to current directory
restore > extract -Extracts the file from the backup file to current directory
restore > quit -Quits from the interactive mode
To compare and display any dump-file against the original files:
# restore -Cf databack
Difference between lvm1 & lvm2

Feature                                   LVM1              LVM2
RHEL AS 2.1 support                       No                No
RHEL 3 support                            Yes               No
RHEL 4 support                            No                Yes
Transactional metadata for fast recovery  No                Yes
Shared volume mounts with GFS             No                Yes
Cluster Suite failover supported          Yes               Yes
Striped volume expansion                  No                Yes
Max number of PVs, LVs                    256 PVs, 256 LVs  2^32 PVs, 2^32 LVs
Max device size                           2 Terabytes       8 Exabytes (64-bit CPUs)
Volume mirroring support                  No                Yes, in Fall 2005
(Figure: relationship of the various elements in LVM)
LVM2
LVM2 provides the following improvements over LVM1:
 flexible capacity
 more efficient metadata storage
 better recovery format
 new ASCII metadata format
 atomic changes to metadata
 redundant copies of metadata

LVM2 is backwards compatible with LVM1, with the exception of snapshot and cluster support. You can
convert a volume group from LVM1 format to LVM2 format with the vgconvert command
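A minimal conversion sketch (myvg is an example name; deactivate the VG first and keep a metadata backup, since the conversion rewrites the metadata format):

# vgchange -an myvg
# vgconvert -M2 myvg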

 The underlying physical storage unit of an LVM logical volume is a block device such as a partition or
whole disk. This device is initialized as an LVM physical volume (PV).
 To create an LVM logical volume, the physical volumes are combined into a volume group (VG). This
creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. This process is
analogous to the way in which disks are divided into partitions. A logical volume is used by file systems
and applications (such as databases)
Removing Physical Volumes from a Volume Group

 To remove unused physical volumes from a volume group, use the vgreduce command.
 The vgreduce command shrinks a volume group's capacity by removing one or more empty
physical volumes.
 This frees those physical volumes to be used in different volume groups or to be removed from the
system.
 Before removing a physical volume from a volume group, you can make sure that the physical
volume is not used by any logical volumes by using the pvdisplay command.
 If the physical volume is still being used you will have to migrate the data to another physical
volume using the pvmove command.

# vgdisplay -vv myvg


# pvmove /dev/sda5 /dev/sda6 (moves all extents allocated on /dev/sda5 to /dev/sda6)
To remove a PV named /dev/sda5 from the VG named "myvg":
# vgreduce myvg /dev/sda5
Activating and Deactivating Volume Groups

 When you create a volume group it is, by default, activated. This means that the logical volumes in
that group are accessible and subject to change.
 There are various circumstances for which you need to make a volume group inactive and thus
unknown to the kernel. To deactivate or activate a volume group, use the -a (--available) argument
of the vgchange command.
 The following example deactivates the volume group myvg.
# vgchange -a n myvg

 If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one
node, or 'l' to activate/deactivate a volume group only on the local node. Logical volumes with
single-host snapshots are always activated exclusively because they can only be used on one node
at once.
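For example, with clustered locking enabled (myvg is the example VG):

# vgchange -aey myvg     # activate exclusively on this node
# vgchange -aly myvg     # activate only on the local node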
Changing the Parameters of a Volume Group / Removing a Volume Group / Splitting
a VG from an Existing VG / Combining Volume Groups

Use the "vgchange" command to change several volume group parameters of an existing volume group.

 The following command changes the maximum number of logical volumes of volume group myvg to 128.
 vgchange -l 128 myvg

Removing Volume Groups


# vgremove myvg

Splitting a Volume Group


Make sure no logical volume spans the physical volumes being split out.
# vgsplit myvg newvg /dev/sda5 (splits newvg off from the existing VG named myvg, moving /dev/sda5 into it)
Combining Volume Groups
The following command merges the inactive volume group myvg1 into the active or inactive volume group
myvg, giving verbose runtime information.
# vgmerge -v myvg myvg1 (merge the VG named myvg1 into myvg)
Backing up Volume group Metadata

 Metadata backups and archives are automatically created on every volume group and logical volume
configuration change unless disabled in the lvm.conf file.
 By default, the metadata backup is stored in the /etc/lvm/backup directory and the metadata archives are
stored in the /etc/lvm/archive directory.
 You can manually back up the metadata to the /etc/lvm/backup directory with the vgcfgbackup command.
 The vgcfgrestore command restores the metadata of a volume group from the archive to all the
physical volumes in the volume group.
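A short sketch (myvg is an example name; the -f option of vgcfgrestore selects a specific backup or archive file):

# vgcfgbackup myvg
# vgcfgrestore -f /etc/lvm/backup/myvg myvg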

Recovering Physical Volume Metadata


If the volume group metadata area of a physical volume is accidentally overwritten or otherwise
destroyed, you will get an error message indicating that the metadata area is incorrect, or that the
system was unable to find a physical volume with a particular UUID.

You may be able to recover the data on the physical volume by writing a new metadata area on the physical
volume specifying the same UUID as the lost metadata.

NOTE: You should not attempt this procedure with a working LVM logical volume. You will lose your data
if you specify the incorrect UUID.
Backing up Volume group Metadata cont.

Recovering Physical Volume Metadata cont.


If metadata is missing then we get error like that
# lvs -a -o +devices
Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
Couldn't find all physical volumes for volume group VG.
Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
Couldn't find all physical volumes for volume group VG.

You may be able to find the UUID by two ways


1. You may be able to find the UUID for the physical volume that was overwritten by looking in the
/etc/lvm/archive directory. Look in the file VolumeGroupName_xxxx.vg for the last known valid
archived LVM metadata for that volume group.
2. By deactivating the volume and setting the partial (-P) argument will enable you to find the UUID of
the missing corrupted physical volume.
# vgchange -an --partial
Partial mode. Incomplete volume groups will be activated read-only.
Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
Backing up Volume group Metadata cont.

Recovering Physical Volume Metadata cont.


Use the --uuid and --restorefile arguments of the pvcreate command to restore the physical volume

# pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
Physical volume "/dev/sdh1" successfully created

 The above example labels the /dev/sdh1 device as a physical volume with the UUID indicated above,
FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk.
 This command restores the physical volume label with the metadata information contained in
VG_00050.vg, the most recent good archived metadata for the volume group.
 The restorefile argument instructs the pvcreate command to make the new physical volume
compatible with the old one on the volume group, ensuring that the new metadata will not be
placed where the old physical volume contained data (which could happen, for example, if the
original pvcreate command had used the command line arguments that control metadata placement,
or if the physical volume was originally created using a different version of the software that used
different defaults). The pvcreate command overwrites only the LVM metadata areas and does not
affect the existing data areas.
Backing up Volume group Metadata cont.

Recovering Physical Volume Metadata cont.


After restoring the physical volume from its UUID, use the vgcfgrestore command to restore the volume
group's metadata.
# vgcfgrestore VG
Restored volume group VG
Now display the logical volumes.
# lvs -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
stripe VG -wi--- 300.00G /dev/sdh1 (0),/dev/sda1(0)
stripe VG -wi--- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)
The following commands activate the volumes and display the active volumes.
# lvchange -ay /dev/VG/stripe
# lvs -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Devices
stripe VG -wi-a- 300.00G /dev/sdh1 (0),/dev/sda1(0)
stripe VG -wi-a- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)

If the on-disk LVM metadata takes at least as much space as what overrode it, this command can recover the
physical volume. If what overrode the metadata went past the metadata area, the data on the volume may
have been affected. You might be able to use the fsck command to recover that data.
Renaming a Volume Group & Moving a Volume Group to Another System

Use the vgrename command to rename an existing volume group.


# vgrename vg02 my_volume_group (Rename vg02 to my_volume_group)

Moving a Volume Group to Another System


You can move an entire LVM volume group to another system by using the vgexport and vgimport commands.
1. Make sure that no users are accessing files on the active volumes in the volume group, then
unmount the logical volumes.
2. Use the -a n argument of the vgchange command to mark the volume group as inactive, which
prevents any further activity on the volume group.
3. Use the vgexport command to export the volume group. This prevents it from being accessed
by the system from which you are removing it.
NOTE: After you export the volume group, the physical volume will show up as being in an
exported volume group when you execute the pvscan command, as in the following example.
# pvscan
PV /dev/sda1 is in exported VG myvg [17.15 GB / 7.15 GB free]
PV /dev/sdc1 is in exported VG myvg [17.15 GB / 15.15 GB free]
PV /dev/sdd1 is in exported VG myvg [17.15 GB / 15.15 GB free]
4. When the system is next shut down, you can unplug the disks that constitute the volume group and connect them to the new system.
Recreating a Volume Group Directory

 To recreate a volume group directory and logical volume special files, use the vgmknodes
command. This command checks the LVM2 special files in the /dev directory that are needed for
active logical volumes. It creates any special files that are missing and removes unused ones.

 You can incorporate the vgmknodes command into the vgscan command by specifying the
mknodes argument to the vgscan command.
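For example (vgscan --mknodes additionally rescans for volume groups):

# vgmknodes
# vgscan --mknodes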
Logical Volume Backup

 Metadata backups and archives are automatically created on every volume group and logical volume
configuration change unless disabled in the lvm.conf file. By default, the metadata backup is stored in the
/etc/lvm/backup directory and the metadata archives are stored in the /etc/lvm/archive directory. How long the metadata
archives stored in the /etc/lvm/archive directory are kept and how many archive files are kept is determined by
parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm
directory in the backup.
 Note that a metadata backup does not back up the user and system data contained in the logical volumes.
 You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. You can
restore metadata with the vgcfgrestore command
 As discussed under "Backing up Volume Group Metadata", this is done with the vgcfgbackup command
Snap shot concept from RHEL6

 The LVM snapshot feature provides the ability to create virtual images of a device at a
particular instant without causing a service interruption.
 When a change is made to the original device (the origin) after a snapshot is taken, the
snapshot feature makes a copy of the changed data area as it was prior to the change
so that it can reconstruct the state of the device.
 Because a snapshot copies only the data areas that change after the snapshot is
created, the snapshot feature requires a minimal amount of storage. For example, with
a rarely updated origin, 3-5 % of the origin's capacity is sufficient to maintain the
snapshot

NOTE
Snapshot copies of a file system are virtual copies, not actual media backup for a file
system. Snapshots do not provide a substitute for a backup procedure.
Snap shot concept from RHEL6

 The size of the snapshot governs the amount of space set aside for storing the changes to the
origin volume. For example, if you made a snapshot and then completely overwrote the origin the
snapshot would have to be at least as big as the origin volume to hold the changes. You need to
dimension a snapshot according to the expected level of change. So for example a short-lived
snapshot of a read-mostly volume, such as /usr, would need less space than a long-lived snapshot
of a volume that sees a greater number of writes, such as /home.
 If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the
origin volume. You should regularly monitor the size of the snapshot. Snapshots are fully
resizeable, however, so if you have the storage capacity you can increase the size of the snapshot
volume to prevent it from getting dropped. Conversely, if you find that the snapshot volume is
larger than you need, you can reduce the size of the volume to free up space that is needed by
other logical volumes (a monitoring and resizing sketch follows this list).
 When you create a snapshot file system, full read and write access to the origin stays possible. If a
chunk on a snapshot is changed, that chunk is marked and never gets copied from the original
volume.
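A monitoring and resizing sketch (vg00/snap is an example snapshot name; the Snap% column of lvs shows how full the snapshot is):

# lvs                               # watch the Snap% column
# lvextend -L +500M /dev/vg00/snap  # give the snapshot more room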
Snap shot concept from RHEL6

There are several uses for the snapshot feature:


 Most typically, a snapshot is taken when you need to perform a backup on a logical volume without
halting the live system that is continuously updating the data.
 You can execute the fsck command on a snapshot file system to check the file system integrity and
determine whether the original file system requires file system repair.
 Because the snapshot is read/write, you can test applications against production data by taking a
snapshot and running tests against the snapshot, leaving the real data untouched.
 You can create LVM volumes for use with Red Hat virtualization. LVM snapshots can be used to
create snapshots of virtual guest images. These snapshots can provide a convenient way to modify
existing guests or create new guests with minimal additional storage. For information on creating
LVM-based storage pools with Red Hat Virtualization, see the Virtualization Administration Guide.
Snap shot concept from RHEL6

Create a snapshot
# lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1

If the original logical volume contains a file system, you can mount the snapshot logical volume on
an arbitrary directory in order to access the contents of the file system to run a backup while the
original file system continues to get updated.
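A hedged end-to-end backup sketch under that scheme (all names are examples):

# lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1
# mkdir -p /mnt/snap
# mount -o ro /dev/vg00/snap /mnt/snap
# tar -czf /backup/lvol1.tar.gz -C /mnt/snap .
# umount /mnt/snap
# lvremove /dev/vg00/snap           # discard the snapshot when done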
Merging Snapshot Volumes
For snapshot merging, the --merge option of the lvconvert command is used.
If both the origin and snapshot volume are not open, the merge will start immediately
Otherwise, the merge will start the first time either the origin or snapshot are activated and both
are closed.
Merging a snapshot into an origin that cannot be closed, for example a root file system, is
deferred until the next time the origin volume is activated. When merging starts, the resulting
logical volume will have the origin’s name, minor number and UUID. While the merge is in
progress, reads or writes to the origin appear as they were directed to the snapshot being
merged. When the merge finishes, the merged snapshot is removed.
# lvconvert --merge /dev/vg00/lvol1_snap (merges lvol1_snap into its origin, lvol1)
Day 8, Topic 2
Advanced RAID

Advanced RAID

 Objectives
 Upon completion of this unit, you should be able to:
 Understand the different types of RAID supported by Red Hat
 Learn how to administer software RAID
 Learn how to optimize software RAID
 Planning for and implementing storage growth

Redundant Array of Inexpensive Disks

 Redundant Array of Inexpensive Disks


 Software RAID
 0, 1, 5, 6, 10
 Software versus Hardware RAID
 Provides
 Data integrity
 Fault-tolerance
 Throughput
 Capacity
 mdadm
 Creates device files named /dev/md0, /dev/md1, etc...
 -a yes option for non-precreated device files (/dev/md1 and higher)

RAID6

 RAID6
 Block-level striping with dual distributed parity
 Comparable to RAID5, with differences:
 Decreased write performance
 Greater fault tolerance
 Survives the failure of up to 2 array devices
 Degraded mode
 Protection during single-device rebuild
 SATA drives become more viable
 Requires 4 or more block devices
 Storage efficiency: 100*(1 - 2/N)%, where N=#devices
 Example:
 mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]1

RAID6 Parity and Data Distribution

RAID10

 RAID10
 A stripe of mirrors (nested RAID)
 Increased performance and fault tolerance
 Requires 4 or more block devices
 Storage efficiency: (100/N)%, where N=#devices/mirror
 Example:
 mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1

Stripe Parameters

 Stripe Parameters
 RAID0, RAID5, RAID6
 Chunk size (mdadm -c N):
 Amount of space to use on each round-robin device before moving on to the next
 64 KiB default
 Stride (mke2fs -E stride=N):
 (chunk size) / (filesystem block size)
 Can be used to offset ext2-specific data structures across the array devices for
more even distribution
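A worked instance of the formula: with the default 64KiB chunk size and a 4KiB ext3 block size, stride = 64/4 = 16:

# mke2fs -j -E stride=16 /dev/md0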

/proc/mdstat

 /proc/mdstat
 Lists and provides information on all active RAID arrays
 Used by mdadm during --scan
 Monitor array reconstruction (watch -n .5 'cat /proc/mdstat')
 Examples:
 Initial syncing of a RAID1 (mirror):
Personalities : [raid1]
md0 : active raid1 sda5[1] sdb5[0]
      987840 blocks [2/2] [UU]
      [=======>.............] resync = 35.7% (354112/987840) finish=0.9min speed=10743K/sec
 Actively functioning RAID1:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda5[1] sdb5[0]
      987840 blocks [2/2] [UU]
unused devices: <none>
 Failed half of a RAID1:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda5[1](F) sdb5[0]
      987840 blocks [2/1] [U_]
unused devices: <none>

Event Notification

 Event Notification
 Make sure e-mail works
 /etc/mdadm.conf
 MAILADDR [email protected]
 MAILFROM [email protected]
 PROGRAM /usr/sbin/my-RAID-script
 Test:
 mdadm --monitor --scan --oneshot --test
 Implement continuous monitoring of the array:
 chkconfig mdmonitor on; service mdmonitor start

Restriping/Reshaping RAID Devices

 Restriping/Reshaping RAID Devices


 Re-arrange the data stored in each stripe into a new layout
 Necessary after changing:
 Number of devices
 Chunk size
 Arrangement of data
 Parity location/type
 Must back up the Critical Section

Improving the Process

 Improving the Process with a Critical Section Backup


 During the first stages of a reshape, the critical section is backed up, by default, to:
 a spare device, if one exists
 otherwise, memory
 If the critical section is backed up to memory, it is prone to loss in the event of a failure
 Backup critical section to a file during reshape:
 mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/tmp/md0.bu
 Once past the critical section, mdadm will delete the file
 In the event of a failure during the critical section:
 mdadm --assemble /dev/md0 --backup-file=/tmp/md0.bu /dev/sd[a-d]

Growing the Size

 Growing the Size of Disks in a RAID5 Array


 One at a time:
 Fail a device
 Grow its size
 Re-add to array
 Then, grow the array into the new space
 Finally, grow the filesystem into the new space
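A command-level sketch of that procedure (device names are examples; repeat the fail/grow/re-add cycle for each member, waiting for each rebuild to complete):

# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
(replace or repartition /dev/sdb1 with a larger size)
# mdadm /dev/md0 --add /dev/sdb1
# watch cat /proc/mdstat                  # wait for the resync to finish
# mdadm --grow /dev/md0 --size=max        # grow the array into the new space
# resize2fs /dev/md0                      # grow the filesystem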

Enabling Write-Intent on a RAID1 Array

 Enabling Write-Intent on a RAID1 Array


 Internal (metadata area) or external (file)
 Can be added to (or removed from) active array
 Enabling write-intent bitmap:
 RAID volume must be in sync
 Must have a persistent superblock
 Internal
 mdadm --grow /dev/mdX --bitmap=internal
 External
 mdadm --grow /dev/mdX --bitmap=/root/filename
 Filename must contain at least one slash ('/') character
 ext2/ext3 Filesystems only

Write-behind on RAID1

 Write-behind on RAID1
 --write-behind=256 (default)
 Required:

 write-intent (--bitmap= )
 --write-mostly
 Facilitates slow-link RAID1 mirrors
 Mirror can be on a remote network
 Write-intent bitmap prevents application from blocking during writes
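A creation-time sketch combining the required pieces (illustrative devices; /dev/sdc1 plays the slow or remote leg):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=internal --write-behind=256 \
    /dev/sdb1 --write-mostly /dev/sdc1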

RAID Error Handling

 RAID Error Handling and Data Consistency Checking


 RAID passively detects bad blocks
 Tries to fix read errors, evicts device from array otherwise
 The larger the disk, the more likely a bad block encounter
 Initiate consistency and bad block check:
 echo check >> /sys/block/mdX/md/sync_action
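A sketch of a full check cycle (md0 is an example array):

# echo check > /sys/block/md0/md/sync_action
# watch cat /proc/mdstat                       # monitor progress
# cat /sys/block/md0/md/mismatch_cnt           # inconsistencies found
# echo repair > /sys/block/md0/md/sync_action  # rewrite inconsistent stripes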

