Red Hat Enterprise Linux 8 Configuring and managing logical volumes
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This documentation collection provides instructions on how to manage LVM logical volumes on Red
Hat Enterprise Linux 8.
Table of Contents

MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
CHAPTER 1. LOGICAL VOLUMES
    1.1. LVM ARCHITECTURE
    1.2. PHYSICAL VOLUMES
    1.3. VOLUME GROUPS
    1.4. LVM LOGICAL VOLUMES
        1.4.1. Linear Volumes
        1.4.2. Striped Logical Volumes
        1.4.3. RAID logical volumes
        1.4.4. Thinly-provisioned logical volumes (thin volumes)
        1.4.5. Snapshot Volumes
        1.4.6. Thinly-provisioned snapshot volumes
        1.4.7. Cache Volumes
CHAPTER 2. MANAGING LOCAL STORAGE USING RHEL SYSTEM ROLES
    2.1. INTRODUCTION TO THE STORAGE ROLE
    2.2. PARAMETERS THAT IDENTIFY A STORAGE DEVICE IN THE STORAGE SYSTEM ROLE
    2.3. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AN XFS FILE SYSTEM ON A BLOCK DEVICE
    2.4. EXAMPLE ANSIBLE PLAYBOOK TO PERSISTENTLY MOUNT A FILE SYSTEM
    2.5. EXAMPLE ANSIBLE PLAYBOOK TO MANAGE LOGICAL VOLUMES
    2.6. EXAMPLE ANSIBLE PLAYBOOK TO ENABLE ONLINE BLOCK DISCARD
    2.7. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AND MOUNT AN EXT4 FILE SYSTEM
    2.8. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AND MOUNT AN EXT3 FILE SYSTEM
    2.9. EXAMPLE ANSIBLE PLAYBOOK TO RESIZE AN EXISTING EXT4 OR EXT3 FILE SYSTEM USING THE STORAGE RHEL SYSTEM ROLE
    2.10. EXAMPLE ANSIBLE PLAYBOOK TO RESIZE AN EXISTING FILE SYSTEM ON LVM USING THE STORAGE RHEL SYSTEM ROLE
    2.11. EXAMPLE ANSIBLE PLAYBOOK TO CREATE A SWAP PARTITION USING THE STORAGE RHEL SYSTEM ROLE
    2.12. CONFIGURING A RAID VOLUME USING THE STORAGE SYSTEM ROLE
    2.13. CONFIGURING AN LVM POOL WITH RAID USING THE STORAGE SYSTEM ROLE
    2.14. EXAMPLE ANSIBLE PLAYBOOK TO COMPRESS AND DEDUPLICATE A VDO VOLUME ON LVM USING THE STORAGE RHEL SYSTEM ROLE
    2.15. CREATING A LUKS ENCRYPTED VOLUME USING THE STORAGE ROLE
    2.16. EXAMPLE ANSIBLE PLAYBOOK TO EXPRESS POOL VOLUME SIZES AS PERCENTAGE USING THE STORAGE RHEL SYSTEM ROLE
    2.17. ADDITIONAL RESOURCES
CHAPTER 3. DEPLOYING LVM
    3.1. CREATING LVM PHYSICAL VOLUME
    3.2. CREATING LVM VOLUME GROUP
    3.3. CREATING LVM LOGICAL VOLUME
CHAPTER 4. CONFIGURING LVM LOGICAL VOLUMES
    4.1. USING CLI COMMANDS
        Specifying units in a command line argument
        Specifying volume groups and logical volumes
        Increasing output verbosity
        Displaying help for LVM CLI commands
    4.2. CREATING AN LVM LOGICAL VOLUME ON THREE DISKS
CHAPTER 5. MODIFYING THE SIZE OF A LOGICAL VOLUME
    5.1. GROWING A LOGICAL VOLUME AND FILE SYSTEM
    5.2. SHRINKING LOGICAL VOLUMES
    5.3. EXTENDING A STRIPED LOGICAL VOLUME
CHAPTER 6. MANAGING LVM VOLUME GROUPS
    6.1. CREATING LVM VOLUME GROUP
    6.2. COMBINING LVM VOLUME GROUPS
    6.3. REMOVING PHYSICAL VOLUMES FROM A VOLUME GROUP
    6.4. SPLITTING A LVM VOLUME GROUP
    6.5. RENAMING LVM VOLUME GROUPS
    6.6. MOVING A VOLUME GROUP TO ANOTHER SYSTEM
    6.7. REMOVING LVM VOLUME GROUPS
CHAPTER 7. MANAGING LVM PHYSICAL VOLUMES
    7.1. SCANNING FOR BLOCK DEVICES TO USE AS PHYSICAL VOLUMES
    7.2. SETTING THE PARTITION TYPE FOR A PHYSICAL VOLUME
    7.3. RESIZING AN LVM PHYSICAL VOLUME
    7.4. REMOVING LVM PHYSICAL VOLUMES
    7.5. ADDING PHYSICAL VOLUMES TO A VOLUME GROUP
    7.6. REMOVING PHYSICAL VOLUMES FROM A VOLUME GROUP
CHAPTER 8. DISPLAYING LVM COMPONENTS
    8.1. DISPLAYING LVM INFORMATION WITH THE LVM COMMAND
    8.2. DISPLAYING PHYSICAL VOLUMES
    8.3. DISPLAYING VOLUME GROUPS
    8.4. DISPLAYING LOGICAL VOLUMES
CHAPTER 9. CUSTOMIZED REPORTING FOR LVM
    9.1. CONTROLLING THE FORMAT OF THE LVM DISPLAY
    9.2. LVM OBJECT DISPLAY FIELDS
    9.3. SORTING LVM REPORTS
    9.4. SPECIFYING THE UNITS FOR AN LVM REPORT DISPLAY
    9.5. DISPLAYING LVM COMMAND OUTPUT IN JSON FORMAT
    9.6. DISPLAYING THE LVM COMMAND LOG
CHAPTER 10. CONFIGURING RAID LOGICAL VOLUMES
    10.1. RAID LOGICAL VOLUMES
    10.2. RAID LEVELS AND LINEAR SUPPORT
    10.3. LVM RAID SEGMENT TYPES
    10.4. CREATING RAID LOGICAL VOLUMES
    10.5. CREATING A RAID0 (STRIPED) LOGICAL VOLUME
    10.6. USING DM INTEGRITY WITH RAID LV
        10.6.1. Soft data corruption
        10.6.2. Creating a RAID LV with DM integrity
CHAPTER 11. SNAPSHOT LOGICAL VOLUMES
    11.1. SNAPSHOT VOLUMES
    11.2. CREATING SNAPSHOT VOLUMES
    11.3. MERGING SNAPSHOT VOLUMES
CHAPTER 12. CREATING AND MANAGING THINLY-PROVISIONED LOGICAL VOLUMES (THIN VOLUMES)
    12.1. THINLY-PROVISIONED LOGICAL VOLUMES (THIN VOLUMES)
    12.2. CREATING THINLY-PROVISIONED LOGICAL VOLUMES
    12.3. THINLY-PROVISIONED SNAPSHOT VOLUMES
    12.4. CREATING THINLY-PROVISIONED SNAPSHOT VOLUMES
    12.5. TRACKING AND DISPLAYING THIN SNAPSHOT VOLUMES THAT HAVE BEEN REMOVED
CHAPTER 13. ENABLING CACHING TO IMPROVE LOGICAL VOLUME PERFORMANCE
    13.1. CACHING METHODS IN LVM
    13.2. LVM CACHING COMPONENTS
    13.3. ENABLING DM-CACHE CACHING FOR A LOGICAL VOLUME
    13.4. ENABLING DM-CACHE CACHING WITH A CACHEPOOL FOR A LOGICAL VOLUME
    13.5. ENABLING DM-WRITECACHE CACHING FOR A LOGICAL VOLUME
    13.6. DISABLING CACHING FOR A LOGICAL VOLUME
CHAPTER 14. LOGICAL VOLUME ACTIVATION
    14.1. CONTROLLING AUTOACTIVATION OF LOGICAL VOLUMES
    14.2. CONTROLLING LOGICAL VOLUME ACTIVATION
    14.3. ACTIVATING SHARED LOGICAL VOLUMES
    14.4. ACTIVATING A LOGICAL VOLUME WITH MISSING DEVICES
CHAPTER 15. CONTROLLING LVM DEVICE SCANNING
CHAPTER 16. LAYERING LVM PHYSICAL VOLUMES ON TOP OF LOGICAL VOLUMES
CHAPTER 17. CONTROLLING LVM ALLOCATION
    17.1. LVM ALLOCATION POLICIES
    17.2. PREVENTING ALLOCATION ON A PHYSICAL VOLUME
    17.3. EXTENDING A LOGICAL VOLUME WITH THE CLING ALLOCATION POLICY
    17.4. DIFFERENTIATING BETWEEN LVM RAID OBJECTS USING TAGS
CHAPTER 18. GROUPING LVM OBJECTS WITH TAGS
    18.1. LVM OBJECT TAGS
    18.2. LISTING LVM TAGS
    18.3. ADDING LVM OBJECT TAGS
    18.4. REMOVING LVM OBJECT TAGS
    18.5. DEFINING LVM HOST TAGS
    18.6. CONTROLLING LOGICAL VOLUME ACTIVATION WITH TAGS
CHAPTER 19. TROUBLESHOOTING LVM
    19.1. GATHERING DIAGNOSTIC DATA ON LVM
    19.2. DISPLAYING INFORMATION ON FAILED LVM DEVICES
    19.3. REMOVING LOST LVM PHYSICAL VOLUMES FROM A VOLUME GROUP
    19.4. FINDING THE METADATA OF A MISSING LVM PHYSICAL VOLUME
    19.5. RESTORING METADATA ON AN LVM PHYSICAL VOLUME
    19.6. ROUNDING ERRORS IN LVM OUTPUT
    19.7. PREVENTING THE ROUNDING ERROR WHEN CREATING AN LVM VOLUME
    19.8. TROUBLESHOOTING LVM RAID
        19.8.1. Checking data coherency in a RAID logical volume (RAID scrubbing)
        19.8.2. Failed devices in LVM RAID
        19.8.3. Recovering a failed RAID device in a logical volume
        19.8.4. Replacing a failed RAID device in a logical volume
    19.9. TROUBLESHOOTING DUPLICATE PHYSICAL VOLUME WARNINGS FOR MULTIPATHED LVM DEVICES
        19.9.1. Root cause of duplicate PV warnings
        19.9.2. Cases of duplicate PV warnings
        19.9.3. The LVM device filter
        19.9.4. Example LVM device filters that prevent duplicate PV warnings
        19.9.5. Applying an LVM device filter configuration
        19.9.6. Additional resources
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
1. Make sure you are viewing the documentation in the Multi-page HTML format. In addition,
ensure you see the Feedback button in the upper right corner of the document.
2. Use your mouse cursor to highlight the part of text that you want to comment on.
3. Click the Add Feedback pop-up that appears below the highlighted text.
4. Fill in the Description field with your suggestion for improvement. Include a link to the
relevant part(s) of documentation.
CHAPTER 1. LOGICAL VOLUMES
Logical volumes provide the following advantages over using physical storage directly:
Flexible capacity
When using logical volumes, file systems can extend across multiple disks, since you can
aggregate disks and partitions into a single logical volume.
Disk striping
You can create a logical volume that stripes data across two or more disks. This can dramatically
increase throughput.
Mirroring volumes
Logical volumes provide a convenient way to configure a mirror for your data.
Volume snapshots
Using logical volumes, you can take device snapshots for consistent backups or to test the
effect of changes without affecting the real data.
Thin volumes
Logical volumes can be thinly provisioned. This allows you to create logical volumes that are
larger than the available extents.
Cache volumes
A cache logical volume uses a small logical volume consisting of fast block devices (such as SSD
drives) to improve the performance of a larger and slower logical volume by storing the
frequently used blocks on the smaller, faster logical volume.
Physical volume
A physical volume (PV) is a partition or whole disk designated for LVM use. For more information, see
Managing LVM physical volumes.
Volume group
A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out
of which logical volumes can be allocated. For more information, see Managing LVM volume groups.
Logical volume
A logical volume represents a mountable storage device. For more information, see Managing LVM
logical volumes.
By default, the LVM label is placed in the second 512-byte sector. You can override this default by
placing the label on any of the first 4 sectors when you create the physical volume. This allows LVM
volumes to co-exist with other users of these sectors, if necessary.
An LVM label provides correct identification and device ordering for a physical device, since devices can
come up in any order when the system is booted. An LVM label remains persistent across reboots and
throughout a cluster.
The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier
(the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records
where the LVM metadata will be stored on the device.
The LVM metadata contains the configuration details of the LVM volume groups on your system. By
default, an identical copy of the metadata is maintained in every metadata area in every physical volume
within the volume group. LVM metadata is small and stored as ASCII.
Currently LVM allows you to store 0, 1 or 2 identical copies of its metadata on each physical volume. The
default is 1 copy. Once you configure the number of metadata copies on the physical volume, you cannot
change that number at a later time. The first copy is stored at the start of the device, shortly after the
label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area
at the beginning of your disk by writing to a different disk than you intend, a second copy of the
metadata at the end of the device will allow you to recover the metadata.
NOTE
In the Linux kernel (and throughout this document), sectors are considered to be 512
bytes in size.
Administrative convenience
It is easier to keep track of the hardware in a system if each real disk only appears once. This
becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may
cause a kernel warning about unknown partition types at boot.
Striping performance
LVM cannot tell that two physical volumes are on the same physical disk. If you create a striped
logical volume when two physical volumes are on the same physical disk, the stripes could be on
different partitions on the same disk. This would result in a decrease in performance rather than an
increase.
Although it is not recommended, there may be specific circumstances when you will need to divide a disk
into separate LVM physical volumes. For example, on a system with few disks it may be necessary to
move data around partitions when you are migrating an existing system to LVM volumes. Additionally, if
you have a very large disk and want to have more than one volume group for administrative purposes
then it is necessary to partition the disk. If you do have a disk with more than one partition and both of
those partitions are in the same volume group, take care to specify which partitions are to be included in
a logical volume when creating striped volumes.
Note that although LVM supports using a non-partitioned disk as a physical volume (PV), it is
recommended to create a single, whole-disk partition for the following reasons:
Creating PVs on multiple partitions of the same device can result in loss of performance or
redundancy. For example, it might place striped or RAID1 layouts on different partitions that
actually exist on the same device.
Within a volume group, the disk space available for allocation is divided into units of a fixed-size called
extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents
are referred to as physical extents.
A logical volume is allocated into logical extents of the same size as the physical extents. The extent size
is thus the same for all logical volumes in the volume group. The volume group maps the logical extents
to physical extents.
You can lose data if you shrink a logical volume to a smaller capacity than the data on the volume
requires. To ensure maximum flexibility, create logical volumes to meet your current needs, and leave
excess storage capacity unallocated. You can safely extend logical volumes to use unallocated space,
depending on your needs.
IMPORTANT
On AMD, Intel, ARM systems, and IBM Power Systems servers, the boot loader cannot
read LVM volumes. You must make a standard, non-LVM disk partition for your /boot
partition. On IBM Z, the zipl boot loader supports /boot on LVM logical volumes with
linear mapping. By default, the installation process always creates the / and swap
partitions within LVM volumes, with a separate /boot partition on a physical volume.
Creating a linear volume assigns a range of physical extents to an area of a logical volume in order. For
example, as shown in Figure 1.3, “Extent Mapping” logical extents 1 to 99 could map to one physical
volume and logical extents 100 to 198 could map to a second physical volume. From the point of view of
the application, there is one device that is 198 extents in size.
The physical volumes that make up a logical volume do not have to be the same size. Figure 1.4, “Linear
volume with unequal physical volumes” shows volume group VG1 with a physical extent size of 4MB.
This volume group includes 2 physical volumes named PV1 and PV2. The physical volumes are divided
into 4MB units, since that is the extent size. In this example, PV1 is 200 extents in size (800MB) and
PV2 is 100 extents in size (400MB). You can create a linear volume of any size between 1 and 300 extents
(4MB to 1200MB). In this example, the linear volume named LV1 is 300 extents in size.
You can configure more than one linear logical volume of whatever size you require from the pool of
physical extents. Figure 1.5, “Multiple logical volumes” shows the same volume group as in Figure 1.4,
“Linear volume with unequal physical volumes”, but in this case two logical volumes have been carved
out of the volume group: LV1, which is 250 extents in size (1000MB) and LV2 which is 50 extents in size
(200MB).
When you create a striped logical volume, LVM writes the data across the underlying physical volumes in round-robin fashion. Note the following:
In a striped logical volume, the size of the stripe cannot exceed the size of an extent.
Striped logical volumes can be extended by concatenating another set of devices onto the end of the
first set. In order to extend a striped logical volume, however, there must be enough free space on the
set of underlying physical volumes that make up the volume group to support the stripe. For example, if
you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the
volume group will not enable you to extend the stripe. Instead, you must add at least two physical
volumes to the volume group.
RAID logical volumes created and managed by LVM leverage the Multiple Devices (MD) kernel
drivers.
You can temporarily split RAID1 images from the array and merge them back into the array later.
Clusters
RAID logical volumes are not cluster-aware.
Although you can create and activate RAID logical volumes exclusively on one machine, you cannot
activate them simultaneously on more than one machine.
Subvolumes
When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size
for every data or parity subvolume in the array.
For example, creating a 2-way RAID1 array results in two metadata subvolumes (lv_rmeta_0 and
lv_rmeta_1) and two data subvolumes (lv_rimage_0 and lv_rimage_1). Similarly, creating a 3-way
stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes (lv_rmeta_0, lv_rmeta_1,
lv_rmeta_2, and lv_rmeta_3) and 4 data subvolumes (lv_rimage_0, lv_rimage_1, lv_rimage_2, and
lv_rimage_3).
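For example, the following commands sketch how a 2-way RAID1 LV could be created and its data and metadata subvolumes listed; the my_vg and my_lv names are only illustrative:

# lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg
# lvs -a -o name,segtype,devices my_vg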
Integrity
You can lose data when a RAID device fails or when soft corruption occurs. Soft corruption in data
storage implies that the data retrieved from a storage device is different from the data written to that
device. Adding integrity to a RAID LV helps mitigate or prevent soft corruption. To learn more about soft
corruption and how to add integrity to a RAID LV, see Section 10.6, “Using DM integrity with RAID LV” .
NOTE
Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin
volumes must be exclusively activated on only one cluster node.
By using thin provisioning, a storage administrator can overcommit the physical storage, often avoiding
the need to purchase additional storage. For example, if ten users each request a 100GB file system for
their application, the storage administrator can create what appears to be a 100GB file system for each
user but which is backed by less actual storage that is used only when needed.
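For example, a thin pool can back thin volumes whose combined virtual size exceeds the pool. The following commands are only a sketch; the pool and volume names and sizes are illustrative:

# lvcreate -L 100G --thinpool mythinpool myvg
# lvcreate -V 1T --thin -n thinvolume myvg/mythinpool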
NOTE
When using thin provisioning, it is important that the storage administrator monitor the
storage pool and add more capacity if it starts to become full.
To make sure that all available space can be used, LVM supports data discard. This allows for re-use of
the space that was formerly used by a discarded file or other block range.
For information on creating thin volumes, see Creating thinly-provisioned logical volumes .
Thin volumes provide support for a new implementation of copy-on-write (COW) snapshot logical
volumes, which allow many virtual devices to share the same data in the thin pool. For information on
thin snapshot volumes, see Thinly-provisioned snapshot volumes .
Because a snapshot copies only the data areas that change after the snapshot is created, the snapshot
feature requires a minimal amount of storage. For example, with a rarely updated origin, 3-5 % of the
origin’s capacity is sufficient to maintain the snapshot.
NOTE
Snapshot copies of a file system are virtual copies, not an actual media backup for a file
system. Snapshots do not provide a substitute for a backup procedure.
The size of the snapshot governs the amount of space set aside for storing the changes to the origin
volume. For example, if you made a snapshot and then completely overwrote the origin the snapshot
would have to be at least as big as the origin volume to hold the changes. You need to dimension a
snapshot according to the expected level of change. So for example a short-lived snapshot of a read-
mostly volume, such as /usr, would need less space than a long-lived snapshot of a volume that sees a
greater number of writes, such as /home.
If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the origin
volume. You should regularly monitor the size of the snapshot. Snapshots are fully resizable, however, so
if you have the storage capacity you can increase the size of the snapshot volume to prevent it from
getting dropped. Conversely, if you find that the snapshot volume is larger than you need, you can
reduce the size of the volume to free up space that is needed by other logical volumes.
When you create a snapshot file system, full read and write access to the origin stays possible. If a chunk
on a snapshot is changed, that chunk is marked and never gets copied from the original volume.
Most typically, a snapshot is taken when you need to perform a backup on a logical volume
without halting the live system that is continuously updating the data.
You can execute the fsck command on a snapshot file system to check the file system integrity
and determine whether the original file system requires file system repair.
Because the snapshot is read/write, you can test applications against production data by taking
a snapshot and running tests against the snapshot, leaving the real data untouched.
You can create LVM volumes for use with Red Hat Virtualization. LVM snapshots can be used to
create snapshots of virtual guest images. These snapshots can provide a convenient way to
modify existing guests or create new guests with minimal additional storage.
You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume.
One use for this feature is to perform system rollback if you have lost data or files or otherwise need to
restore your system to a previous state. After you merge the snapshot volume, the resulting logical
volume will have the origin volume’s name, minor number, and UUID and the merged snapshot is
removed.
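For example, a snapshot named mysnap could be merged back into its origin with a command similar to the following; the volume group and snapshot names are only illustrative:

# lvconvert --merge myvg/mysnap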
As for all LVM snapshot volumes, as well as all thin volumes, thin snapshot volumes are not supported
across the nodes in a cluster. The snapshot volume must be exclusively activated on only one cluster
node.
A thin snapshot volume can reduce disk usage when there are multiple snapshots of the same
origin volume.
If there are multiple snapshots of the same origin, then a write to the origin will cause one COW
operation to preserve the data. Increasing the number of snapshots of the origin should yield no
major slowdown.
Thin snapshot volumes can be used as a logical volume origin for another snapshot. This allows
for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots…).
A snapshot of a thin logical volume also creates a thin logical volume. This consumes no data
space until a COW operation is required, or until the snapshot itself is written.
A thin snapshot volume does not need to be activated with its origin, so a user may have only the
origin active while there are many inactive snapshot volumes of the origin.
When you delete the origin of a thinly-provisioned snapshot volume, each snapshot of that
origin volume becomes an independent thinly-provisioned volume. This means that instead of
merging a snapshot with its origin volume, you may choose to delete the origin volume and then
create a new thinly-provisioned snapshot using that independent volume as the origin volume
for the new snapshot.
Although there are many advantages to using thin snapshot volumes, there are some use cases for which
the older LVM snapshot volume feature may be more appropriate to your needs:
You cannot change the chunk size of a thin pool. If the thin pool has a large chunk size (for
example, 1MB) and you require a short-living snapshot for which a chunk size that large is not
efficient, you may elect to use the older snapshot feature.
You cannot limit the size of a thin snapshot volume; the snapshot will use all of the space in the
thin pool, if necessary. This may not be appropriate for your needs.
In general, you should consider the specific requirements of your site when deciding which snapshot
format to use.
NOTE
When using thin provisioning, it is important that the storage administrator monitor the
storage pool and add more capacity if it starts to become full. For information on
configuring and displaying information on thinly-provisioned snapshot volumes, see
Creating thinly-provisioned snapshot volumes .
CHAPTER 2. MANAGING LOCAL STORAGE USING RHEL SYSTEM ROLES
Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
For more information about RHEL System Roles and how to apply them, see Introduction to RHEL
System Roles.
With the storage role you can also manage complete LVM volume groups, including their logical volumes and file systems.
storage_volumes
List of file systems on all unpartitioned disks to be managed.
Partitions are currently unsupported.
storage_pools
List of pools to be managed.
Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs).
Under each pool there is a list of volumes to be managed by the role. With LVM, each volume
corresponds to a logical volume (LV) with a file system.
WARNING
The storage role can create a file system only on an unpartitioned, whole disk or a
logical volume (LV). It cannot create the file system on a partition.
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
  roles:
    - rhel-system-roles.storage
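You can then run the playbook against your inventory in the usual way; the inventory and playbook file names below are only examples:

# ansible-playbook -i inventory.file create-xfs.yml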
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.
To create the file system on an LV, provide the LVM setup under the disks: attribute,
including the enclosing volume group. For details, see Example Ansible playbook to manage
logical volumes.
Do not provide the path to the LV device.
Additional resources
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.
If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.
Additional resources
Example 2.3. A playbook that creates a mylv logical volume in the myvg volume group
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt
  roles:
    - rhel-system-roles.storage
/dev/sda
/dev/sdb
/dev/sdc
If the myvg volume group already exists, the playbook adds the logical volume to the volume
group.
If the myvg volume group does not exist, the playbook creates it.
The playbook creates an Ext4 file system on the mylv logical volume, and persistently
mounts the file system at /mnt.
Additional resources
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_options: discard
  roles:
    - rhel-system-roles.storage
Additional resources
This section provides an example Ansible playbook. This playbook applies the storage role to create and
mount an Ext4 file system.
Example 2.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext4
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
The playbook persistently mounts the file system at the /mnt/data directory.
Additional resources
Example 2.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext3
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
The playbook persistently mounts the file system at the /mnt/data directory.
Additional resources
---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 12 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage
If the volume in the previous example already exists, you can resize it by running the same playbook with a different value for the size parameter. For example:
---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 10 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
NOTE
Using the Resizing action on other file systems can destroy the data on the device you are working on.
Additional resources
WARNING
Using the Resizing action on other file systems can destroy the data on the device you are working on.
Example 2.9. A playbook that resizes existing mylv1 and mylv2 logical volumes in the myvg
volume group
---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sda
          - /dev/sdb
          - /dev/sdc
        volumes:
          - name: mylv1
            size: 10 GiB
            fs_type: ext4
            mount_point: /opt/mount1
          - name: mylv2
            size: 50 GiB
            fs_type: ext4
            mount_point: /opt/mount2
  roles:
    - rhel-system-roles.storage
The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1, resizes to
10 GiB.
The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2, resizes to
50 GiB.
Additional resources
---
- name: Create a disk device with swap
  hosts: all
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - /dev/sdb
        size: 15 GiB
        fs_type: swap
  roles:
    - rhel-system-roles.storage
The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
Additional resources
With the storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible
Automation Platform. In this section you will learn how to set up an Ansible playbook with the available
parameters to configure a RAID volume to suit your requirements.
Prerequisites
You have Red Hat Ansible Engine installed on the system from which you want to run the
playbook.
NOTE
You do not have to have Red Hat Ansible Automation Platform installed on the
systems on which you want to deploy the storage solution.
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
You have an inventory file detailing the systems on which you want to deploy a RAID volume
using the storage System Role.
Procedure
- hosts: all
  vars:
    storage_safe_mode: false
    storage_volumes:
      - name: data
        type: raid
        disks: [sdd, sde, sdf, sdg]
        raid_level: raid0
        raid_chunk_size: 32 KiB
        mount_point: /mnt/data
        state: present
  roles:
    - name: rhel-system-roles.storage
WARNING
Device names can change in certain circumstances; for example, when you
add a new disk to a system. Therefore, to prevent data loss, we do not
recommend using specific disk names in the playbook.
Additional resources
Managing RAID.
Prerequisites
You have Red Hat Ansible Engine installed on the system from which you want to run the
playbook.
NOTE
You do not have to have Red Hat Ansible Automation Platform installed on the
systems on which you want to deploy the storage solution.
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
You have an inventory file detailing the systems on which you want to configure an LVM pool
with RAID using the storage System Role.
Procedure
- hosts: all
  vars:
    storage_safe_mode: false
    storage_pools:
      - name: my_pool
        type: lvm
        disks: [sdh, sdi]
        raid_level: raid1
        volumes:
          - name: my_pool
            size: "1 GiB"
            mount_point: "/mnt/app/shared"
            fs_type: xfs
            state: present
  roles:
    - name: rhel-system-roles.storage
NOTE
To create an LVM pool with RAID, you must specify the RAID type using the
raid_level parameter.
Additional resources
Managing RAID.
Example 2.11. A playbook that creates a mylv1 LVM VDO volume in the myvg volume group
---
- name: Create LVM VDO volume under volume group 'myvg'
  hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: mylv1
            compression: true
            deduplication: true
            vdo_pool_size: 10 GiB
            size: 30 GiB
            mount_point: /mnt/app/shared
In this example, the compression and deduplication parameters are set to true, which specifies that an LVM VDO volume is used. The following describes the usage of these parameters:
The deduplication parameter is used to deduplicate the duplicated data stored on the storage volume.
The compression parameter is used to compress the data stored on the storage volume, which results in more usable storage capacity.
The vdo_pool_size parameter specifies the actual size the volume takes on the device. The virtual size of the VDO volume is set by the size parameter. NOTE: Because the storage role uses LVM VDO, only one volume per pool can use compression and deduplication.
Prerequisites
You have Red Hat Ansible Engine installed on the system from which you want to run the
playbook.
NOTE
You do not have to have Red Hat Ansible Automation Platform installed on the
systems on which you want to create the volume.
You have an inventory file detailing the systems on which you want to deploy a LUKS encrypted
volume using the storage System Role.
Procedure
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        fs_label: label-name
        mount_point: /mnt/data
        encryption: true
        encryption_password: your-password
  roles:
    - rhel-system-roles.storage
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
Example 2.12. A playbook that expresses volume sizes as a percentage of the pool's total size
---
- name: Express volume sizes as a percentage of the pool's total size
  hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: data
            size: 60%
            mount_point: /opt/mount/data
          - name: web
            size: 30%
            mount_point: /opt/mount/web
          - name: cache
            size: 10%
            mount_point: /opt/cache/mount
This example specifies the size of LVM volumes as a percentage of the pool size, for example, "60%". You can also specify the size of LVM volumes in human-readable units of the file system, for example, "10g" or "50 GiB".
/usr/share/ansible/roles/rhel-system-roles.storage/
CHAPTER 3. DEPLOYING LVM
In this procedure, replace the /dev/vdb1, /dev/vdb2, and /dev/vdb3 with the available storage devices in
your system.
Prerequisites
Procedure
1. Create multiple physical volumes by using the space-delimited device names as arguments to
the pvcreate command:
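For example, using the devices named in this procedure:

# pvcreate /dev/vdb1 /dev/vdb2 /dev/vdb3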
This places a label on /dev/vdb1, /dev/vdb2, and /dev/vdb3, marking them as physical volumes
belonging to LVM.
2. View the created physical volumes by using any one of the following commands as per your
requirement:
a. The pvdisplay command, which provides a verbose multi-line output for each physical
volume. It displays physical properties, such as size, extents, volume group, and other
options in a fixed format:
# pvdisplay
--- NEW Physical volume ---
PV Name /dev/vdb1
VG Name
PV Size 1.00 GiB
[..]
--- NEW Physical volume ---
PV Name /dev/vdb2
VG Name
PV Size 1.00 GiB
[..]
--- NEW Physical volume ---
PV Name /dev/vdb3
VG Name
PV Size 1.00 GiB
[..]
b. The pvs command provides physical volume information in a configurable form, displaying
one line per physical volume:
# pvs
PV VG Fmt Attr PSize PFree
/dev/vdb1 lvm2 1020.00m 0
/dev/vdb2 lvm2 1020.00m 0
/dev/vdb3 lvm2 1020.00m 0
c. The pvscan command scans all supported LVM block devices in the system for physical
volumes. You can define a filter in the lvm.conf file so that this command avoids scanning
specific physical volumes:
# pvscan
PV /dev/vdb1 lvm2 [1.00 GiB]
PV /dev/vdb2 lvm2 [1.00 GiB]
PV /dev/vdb3 lvm2 [1.00 GiB]
Additional resources
Prerequisites
One or more physical volumes are created. For more information on creating physical volumes,
see Creating LVM physical volume .
Procedure
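1. Create a volume group. One example, using the physical volumes created in the previous section, is:

# vgcreate myvg /dev/vdb1 /dev/vdb2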
This creates a VG with the name of myvg. The PVs /dev/vdb1 and /dev/vdb2 are the base
storage level for the myvg VG.
2. View the created volume groups by using any one of the following commands as per your
requirement:
a. The vgs command provides volume group information in a configurable form, displaying
one line per volume group:
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 4 1 0 wz--n- 3.98g 1008.00m
b. The vgdisplay command displays volume group properties such as size, extents, number of
physical volumes, and other options in a fixed form. The following example shows the
output of the vgdisplay command for the volume group myvg. If you do not specify a
volume group, all existing volume groups are displayed:
c. The vgscan command scans all supported LVM block devices in the system for volume
groups:
# vgscan
Found volume group "myvg" using metadata type lvm2
3. Optional: Increase a volume group’s capacity by adding one or more free physical volumes:
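For example, adding the remaining free physical volume from the previous section:

# vgextend myvg /dev/vdb3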
Additional resources
Prerequisites
The volume group is created. For more information, see Creating LVM volume group .
Procedure
1. Create the logical volume. Use the -n option to set the LV name to mylv, and the -L option to set the size of the LV in units of MB, though you can use other units. The LV type is linear by default, but you can specify the desired type by using the --type option.
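For example, the following command creates a 500 MB linear LV named mylv in the myvg volume group, matching the output shown in the next step:

# lvcreate -n mylv -L 500M myvg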
IMPORTANT
The command fails if the VG does not have a sufficient number of free physical
extents for the requested size and type.
2. View the created logical volumes by using any one of the following commands as per your
requirement:
a. The lvs command provides logical volume information in a configurable form, displaying one
line per logical volume:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
mylv myvg -wi-ao---- 500.00m
b. The lvdisplay command displays logical volume properties, such as size, layout, and
mapping in a fixed format:
# lvdisplay -v /dev/myvg/mylv
--- Logical volume ---
LV Path /dev/myvg/mylv
LV Name mylv
VG Name myvg
LV UUID YTnAk6-kMlT-c4pG-HBFZ-Bx7t-ePMk-7YjhaM
LV Write Access read/write
[..]
c. The lvscan command scans for all logical volumes in the system and lists them:
# lvscan
ACTIVE '/dev/myvg/mylv' [500.00 MiB] inherit
3. Create a file system on the logical volume. The following command creates an xfs file system
on the logical volume:
# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv isize=512 agcount=4, agsize=32000 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=128000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=1368, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
4. Mount the logical volume and report the file system disk space usage:
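For example, assuming the /mnt mount point:

# mount /dev/myvg/mylv /mnt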
# df -h
Additional resources
CHAPTER 4. CONFIGURING LVM LOGICAL VOLUMES
Where commands take volume group or logical volume names as arguments, the full path name
is optional. A logical volume called lvol0 in a volume group called vg0 can be specified as
vg0/lvol0.
Where a list of volume groups is required but is left empty, a list of all volume groups will be
substituted.
Where a list of logical volumes is required but a volume group is given, a list of all the logical
volumes in that volume group will be substituted. For example, the lvdisplay vg0 command will
display all the logical volumes in volume group vg0.
For example, you can run the lvcreate command with the -v argument to increase the verbosity of its output.
The -vv, -vvv and the -vvvv arguments display increasingly more details about the command execution.
The -vvvv argument provides the maximum amount of information at this time.
To display help for any LVM CLI command, add the --help argument to the command:
# commandname --help
To display the man page for a command, execute the man command:
# man commandname
The man lvm command provides general online information about LVM.
1. To use disks in a volume group, label them as LVM physical volumes with the pvcreate
command.
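For example, assuming the same three partitions that are used in the striped example later in this chapter:

# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1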
WARNING
This command destroys any data on the devices that you specify.
2. Create a volume group that consists of the LVM physical volumes you have created. The
following command creates the volume group myvg.
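One form of this command, assuming the same three physical volumes, is:

# vgcreate myvg /dev/sda1 /dev/sdb1 /dev/sdc1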
You can use the vgs command to display the attributes of the new volume group.
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 3 0 0 wz--n- 51.45G 51.45G
3. Create the logical volume from the volume group you have created. The following command
creates the logical volume mylv from the volume group myvg. This example creates a logical
volume that uses 2 gigabytes of the volume group.
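One form of this command is:

# lvcreate -L 2G -n mylv myvg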
4. Create a file system on the logical volume. The following command creates an ext4 file system
on the logical volume.
# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 616da032-8a48-4cd7-8705-bd94b7a1c8c4
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
The following commands mount the logical volume and report the file system disk space usage.
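For example, assuming the /mnt mount point:

# mount /dev/myvg/mylv /mnt
# df -h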
This example procedure creates an LVM RAID0 logical volume called mylv that stripes data across the
disks at /dev/sda1, /dev/sdb1, and /dev/sdc1.
1. Label the disks you will use in the volume group as LVM physical volumes with the pvcreate
command.
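For example, using the three disks named at the start of this procedure:

# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1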
WARNING
This command destroys any data on the devices that you specify.
2. Create the volume group myvg. The following command creates the volume group myvg.
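One form of this command, using the physical volumes created in the previous step, is:

# vgcreate myvg /dev/sda1 /dev/sdb1 /dev/sdc1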
You can use the vgs command to display the attributes of the new volume group.
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 3 0 0 wz--n- 51.45G 51.45G
3. Create a RAID0 logical volume from the volume group you have created. The following
command creates the RAID0 volume mylv from the volume group myvg. This example creates
a logical volume that is 2 gigabytes in size, with three stripes and a stripe size of 4 kilobytes.
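A command of the following form matches this description:

# lvcreate --type raid0 -L 2G --stripes 3 --stripesize 4 -n mylv myvg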
4. Create a file system on the RAID0 logical volume. The following command creates an ext4 file
system on the logical volume.
# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 525312 4k blocks and 131376 inodes
Filesystem UUID: 9d4c0704-6028-450a-8b0a-8875358c0511
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
The following commands mount the logical volume and report the file system disk space usage.
Procedure
# umount /mnt
2. If the logical volume exists in a clustered environment, deactivate the logical volume on all
nodes where it is active. Use the following command on each such node:
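One form of this command, assuming the volume is myvg/mylv, is:

# lvchange --activate n myvg/mylv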
You can also rename the logical volume by specifying the full paths to the devices:
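For example, renaming mylv to mylv1; the names are only illustrative:

# lvrename /dev/myvg/mylv /dev/myvg/mylv1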
Additional resources
In order to remove a disk, you must first move the extents on the LVM physical volume to a different
disk or set of disks.
Procedure
1. View the used and free space of the physical volumes that the LV uses:
# pvs -o+pv_used
PV VG Fmt Attr PSize PFree Used
/dev/vdb1 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb2 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb3 myvg lvm2 a-- 1020.00m 1008.00m 12.00m
a. If there are enough free extents on the other physical volumes in the existing volume group,
use the following command to move the data:
# pvmove /dev/vdb3
/dev/vdb3: Moved: 2.0%
...
/dev/vdb3: Moved: 79.2%
...
/dev/vdb3: Moved: 100.0%
b. If there are not enough free extents on the other physical volumes in the existing volume
group, use the following commands to add a new physical volume, extend the volume group
using the newly created physical volume, and move the data to this physical volume:
# pvcreate /dev/vdb4
Physical volume "/dev/vdb4" successfully created
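For example, continuing with the devices used in this procedure:

# vgextend myvg /dev/vdb4
# pvmove /dev/vdb3 /dev/vdb4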
If a logical volume contains a physical volume that fails, you cannot use that logical volume. To
remove missing physical volumes from a volume group, you can use the --removemissing
parameter of the vgreduce command, if there are no logical volumes that are allocated on the
missing physical volumes:
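One form of this command, for a volume group named myvg, is:

# vgreduce --removemissing myvg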
Additional resources
Use a large minor number to be sure that it has not already been allocated to another device
dynamically.
If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid
the need to set a persistent device number within LVM.
You can specify the extent size with the -s option to the vgcreate command if the default extent size is
not suitable. You can put limits on the number of physical or logical volumes the volume group can have
by using the -p and -l arguments of the vgcreate command.
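For example, the following command sketch creates a volume group with a 16 MB extent size and limits it to a maximum of 10 physical volumes and 20 logical volumes; the values and device names are only illustrative:

# vgcreate -s 16M -p 10 -l 20 myvg /dev/vdb1 /dev/vdb2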
Create an ext4 file system with a given label on the logical volume.
Prerequisites
Example 4.1. A playbook that creates a mylv logical volume in the myvg volume group
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt
  roles:
    - rhel-system-roles.storage
/dev/sda
/dev/sdb
/dev/sdc
If the myvg volume group already exists, the playbook adds the logical volume to the volume
group.
If the myvg volume group does not exist, the playbook creates it.
The playbook creates an Ext4 file system on the mylv logical volume, and persistently
mounts the file system at /mnt.
Additional resources
For more information about the storage role, see Managing local storage using RHEL System
Roles.
Procedure
# umount /mnt
2. If the logical volume exists in a clustered environment, deactivate the logical volume on all
nodes where it is active. Use the following command on each such node:
# lvremove /dev/myvg/mylv1
NOTE
In this case, the logical volume has not been deactivated. If you explicitly
deactivated the logical volume before removing it, you would not see the prompt
verifying whether you want to remove an active logical volume.
Additional resources
CHAPTER 5. MODIFYING THE SIZE OF A LOGICAL VOLUME
To increase the size of a logical volume, use the lvextend command. When you extend the logical
volume, you can indicate how much you want to extend the volume, or how large you want it to be after
you extend it.
Prerequisites
1. You have an existing logical volume (LV) with a file system on it. Determine the file system type
by using the df -Th command.
For more information on creating LV and a file system, see Creating LVM logical volume.
2. You have sufficient space in the volume group to grow your LV and file system. Use the vgs -o
name,vgfree command to determine the available space.
Procedure
1. Optional: If the volume group has insufficient space to grow your LV, then add a new physical
volume to the volume group by using the following command:
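For example, assuming a spare device /dev/vdb3 that has been initialized as a physical volume:

# vgextend myvg /dev/vdb3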
2. Now that the volume group is large enough, execute any one of the following steps as per your
requirement:
a. To extend the LV with the provided size, use the following command:
# lvextend -L 3G /dev/myvg/mylv
Size of logical volume myvg/mylv changed from 2.00 GiB (512 extents) to 3.00 GiB (768
extents).
Logical volume myvg/mylv successfully resized.
NOTE
You can use the -r option of the lvextend command to extend the logical
volume and resize the underlying file system with a single command:
# lvextend -r -L 3G /dev/myvg/mylv
WARNING
You can also extend the logical volume using the lvresize command
with the same arguments, but this command does not guarantee
against accidental shrinkage.
b. To extend the mylv logical volume to fill all of the unallocated space in the myvg volume
group, use the following command:
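One form of this command is:

# lvextend -l +100%FREE /dev/myvg/mylv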
As with the lvcreate command, you can use the -l argument of the lvextend command to
specify the number of extents by which to increase the size of the logical volume. You can
also use this argument to specify a percentage of the volume group, or a percentage of the
remaining free space in the volume group.
3. If you are not using the -r option with the lvextend command to extend the LV and resize the file
system with a single command, then resize the file system on the logical volume by using the
following command:
# xfs_growfs /mnt/mnt1/
meta-data=/dev/mapper/myvg-mylv isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 262144 to 524288
NOTE
Without the -D option, xfs_growfs grows the file system to the maximum size
supported by the underlying device. For more information, see Increasing the
size of an XFS file system.
For resizing an ext4 file system, see Resizing an ext4 file system .
Verification
# df -Th
Additional resources
NOTE
Shrinking is not supported on a GFS2 or XFS file system, so you cannot reduce the size of
a logical volume that contains a GFS2 or XFS file system.
If the logical volume you are reducing contains a file system, to prevent data loss you must ensure that
the file system is not using the space in the logical volume that is being reduced. For this reason, it is
recommended that you use the --resizefs option of the lvreduce command when the logical volume
contains a file system.
When you use this option, the lvreduce command attempts to reduce the file system before shrinking
the logical volume. If shrinking the file system fails, as can occur if the file system is full or the file system
does not support shrinking, then the lvreduce command will fail and not attempt to shrink the logical
volume.
WARNING
In most cases, the lvreduce command warns about possible data loss and asks for a
confirmation. However, you should not rely on these confirmation prompts to
prevent data loss because in some cases you will not see these prompts, such as
when the logical volume is inactive or the --resizefs option is not used.
Note that using the --test option of the lvreduce command does not indicate whether
the operation is safe, as this option does not check the file system or test the file
system resize.
Procedure
To shrink the mylv logical volume in myvg volume group to 64 megabytes, use the following
command:
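Assuming the --resizefs option is used as recommended above, the command takes this form:
# lvreduce --resizefs -L 64M myvg/mylv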
Size of logical volume myvg/mylv changed from 100.00 MiB (25 extents) to 64.00 MiB (16
extents).
Logical volume myvg/mylv successfully resized.
In this example, mylv contains a file system, which this command resizes together with the logical
volume.
Specifying the - sign before the resize value indicates that the value will be subtracted from the
logical volume's actual size.
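For example, a command of this form shrinks the same volume by 64 megabytes rather than to an absolute size of 64 megabytes:
# lvreduce --resizefs -L -64M myvg/mylv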
Additional resources
For example, consider a volume group vg that consists of two underlying physical volumes, as displayed
with the following vgs command.
# vgs
VG #PV #LV #SN Attr VSize VFree
vg 2 0 0 wz--n- 271.31G 271.31G
You can create a stripe using the entire amount of space in the volume group.
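For example, a command of this general form creates a two-stripe volume named stripe1 that consumes the full 271.31 GB; the exact size and any physical extent ranges depend on your layout:
# lvcreate -n stripe1 -L 271.31G -i 2 vg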
Note that the volume group now has no more free space.
# vgs
VG #PV #LV #SN Attr VSize VFree
vg 2 1 0 wz--n- 271.31G 0
The following command adds another physical volume to the volume group, which then has 135
gigabytes of additional space.
# vgextend vg /dev/sdc1
Volume group "vg" successfully extended
# vgs
VG #PV #LV #SN Attr VSize VFree
vg 3 1 0 wz--n- 406.97G 135.66G
At this point you cannot extend the striped logical volume to the full size of the volume group, because
two underlying devices are needed in order to stripe the data.
To extend the striped logical volume, add another physical volume and then extend the logical volume.
In this example, having added two physical volumes to the volume group we can extend the logical
volume to the full size of the volume group.
# vgextend vg /dev/sdd1
Volume group "vg" successfully extended
# vgs
VG #PV #LV #SN Attr VSize VFree
vg 4 1 0 wz--n- 542.62G 271.31G
# lvextend vg/stripe1 -L 542G
Using stripesize of last segment 64.00 KB
Extending logical volume stripe1 to 542.00 GB
Logical volume stripe1 successfully resized
If you do not have enough underlying physical devices to extend the striped logical volume, it is possible
to extend the volume anyway if it does not matter that the extension is not striped, which may result in
uneven performance. When adding space to the logical volume, the default operation is to use the same
striping parameters of the last segment of the existing logical volume, but you can override those
parameters. The following example extends the existing striped logical volume to use the remaining free
space after the initial lvextend command fails.
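For example, a command of this form extends stripe1 into the remaining free space as a single, non-striped segment; the -i1 option overrides the striping parameters of the last segment:
# lvextend -i1 -l +100%FREE vg/stripe1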
CHAPTER 6. MANAGING LVM VOLUME GROUPS
Within a volume group, the disk space available for allocation is divided into units of a fixed-size called
extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents
are referred to as physical extents.
A logical volume is allocated into logical extents of the same size as the physical extents. The extent size
is thus the same for all logical volumes in the volume group. The volume group maps the logical extents
to physical extents.
Prerequisites
One or more physical volumes are created. For more information on creating physical volumes,
see Creating LVM physical volume .
Procedure
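1. Create the volume group. For example, assuming the physical volume device names described below, a vgcreate command of this form creates it:
# vgcreate myvg /dev/vdb1 /dev/vdb2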
This creates a VG named myvg. The PVs /dev/vdb1 and /dev/vdb2 are the base
storage level for the myvg VG.
2. View the created volume groups by using any one of the following commands as per your
requirement:
a. The vgs command provides volume group information in a configurable form, displaying
one line per volume group:
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 4 1 0 wz--n- 3.98g 1008.00m
b. The vgdisplay command displays volume group properties such as size, extents, number of
physical volumes, and other options in a fixed form. The following example shows the
output of the vgdisplay command for the volume group myvg. If you do not specify a
volume group, all existing volume groups are displayed:
Metadata Sequence No 6
VG Access read/write
[..]
c. The vgscan command scans all supported LVM block devices in the system for volume
group:
# vgscan
Found volume group "myvg" using metadata type lvm2
3. Optional: Increase a volume group’s capacity by adding one or more free physical volumes:
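For example, assuming the free physical volume is /dev/vdb3:
# vgextend myvg /dev/vdb3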
Additional resources
Procedure
Merge the inactive volume group databases into the active or inactive volume group myvg
giving verbose runtime information:
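A vgmerge command of this form performs the merge, with the destination volume group listed first:
# vgmerge -v myvg databases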
Additional resources
Procedure
1. If the physical volume is still being used, migrate the data to another physical volume from the
same volume group:
# pvmove /dev/vdb3
/dev/vdb3: Moved: 2.0%
...
2. If there are not enough free extents on the other physical volumes in the existing volume group:
a. Create a new physical volume:
# pvcreate /dev/vdb4
Physical volume "/dev/vdb4" successfully created
b. Add the newly created physical volume to the myvg volume group:
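Assuming /dev/vdb4 is the new device, a command of this form adds it; the original physical volume can then be removed from the volume group with the vgreduce command:
# vgextend myvg /dev/vdb4
# vgreduce myvg /dev/vdb3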
Verification
Verify that the /dev/vdb3 physical volume is removed from the myvg volume group:
# pvs
PV VG Fmt Attr PSize PFree Used
/dev/vdb1 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb2 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb3 lvm2 a-- 1020.00m 1008.00m 12.00m
Additional resources
In the initial setup, the volume group myvg consists of /dev/vdb1, /dev/vdb2, and /dev/vdb3. After
completing this procedure, the volume group myvg will consist of /dev/vdb1 and /dev/vdb2, and the
second volume group, yourvg, will consist of /dev/vdb3.
Prerequisites
You have sufficient space in the volume group. Use the vgs command to determine how
much free space is currently available in the volume group.
Depending on the free capacity in the existing physical volume, move all the used physical
extents to another physical volume by using the pvmove command. For more information, see
Removing physical volumes from a volume group.
Procedure
1. Split the existing volume group myvg to the new volume group yourvg:
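For example, assuming the /dev/vdb3 physical volume is to move to the new group, a command of this form performs the split:
# vgsplit myvg yourvg /dev/vdb3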
NOTE
If you have created a logical volume using the existing volume group, use the
following command to deactivate the logical volume:
# lvchange -a n /dev/myvg/mylv
For more information on creating logical volumes, see Managing LVM logical
volumes.
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 2 1 0 wz--n- 34.30G 10.80G
yourvg 1 0 0 wz--n- 17.15G 17.15G
Verification
Verify that the newly created volume group yourvg consists of the /dev/vdb3 physical volume:
# pvs
PV VG Fmt Attr PSize PFree Used
/dev/vdb1 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb2 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb3 yourvg lvm2 a-- 1020.00m 1008.00m 12.00m
Additional resources
Procedure
1. Deactivate the volume group. If it is a clustered volume group, deactivate the volume group on
all nodes where it is active by using the following command on each such node:
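For example, assuming the volume group is currently named vg-original and the new name is vg-new, the deactivation and rename commands take this form:
# vgchange --activate n vg-original
2. Rename the volume group:
# vgrename vg-original vg-new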
You can also rename the volume group by specifying the full paths to the devices:
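Using the same assumed names:
# vgrename /dev/vg-original /dev/vg-new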
Additional resources
NOTE
You can use the --force argument of the vgimport command. This allows you to import
volume groups that are missing physical volumes and subsequently run the vgreduce --
removemissing command.
The vgexport command makes an inactive volume group inaccessible to the system, which allows you to
detach its physical volumes. The vgimport command makes a volume group accessible to a machine
again after the vgexport command has made it inactive.
To move a volume group from one system to another, perform the following steps:
1. Make sure that no users are accessing files on the active volumes in the volume group, then
unmount the logical volumes.
2. Use the -a n argument of the vgchange command to mark the volume group as inactive, which
prevents any further activity on the volume group.
3. Use the vgexport command to export the volume group. This prevents it from being accessed
by the system from which you are removing it.
After you export the volume group, the physical volume will show up as being in an exported
volume group when you execute the pvscan command, as in the following example.
# pvscan
PV /dev/sda1 is in exported VG myvg [17.15 GB / 7.15 GB free]
PV /dev/sdc1 is in exported VG myvg [17.15 GB / 15.15 GB free]
PV /dev/sdd1 is in exported VG myvg [17.15 GB / 15.15 GB free]
...
When the system is next shut down, you can unplug the disks that constitute the volume group
and connect them to the new system.
4. When the disks are plugged into the new system, use the vgimport command to import the
volume group, making it accessible to the new system.
5. Activate the volume group with the -a y argument of the vgchange command.
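For example, assuming the volume group is named myvg, the full sequence looks like this; the first two commands run on the old system and the last two on the new system:
# vgchange -a n myvg
# vgexport myvg
# vgimport myvg
# vgchange -a y myvg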
Prerequisites
The volume group contains no logical volumes. To remove logical volumes from a volume group,
see Removing LVM logical volumes.
Procedure
1. If the volume group exists in a clustered environment, stop the lockspace of the volume group
on all other nodes. Use the following command on all nodes except the node where you are
performing the removal:
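For example, assuming a shared volume group named vg-name:
# vgchange --lockstop vg-name
2. Remove the volume group: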
# vgremove vg-name
Volume group "vg-name" successfully removed
Additional resources
CHAPTER 7. MANAGING LVM PHYSICAL VOLUMES
You can scan for block devices that can be used as LVM physical volumes with the lvmdiskscan command, as shown in the following example:
# lvmdiskscan
/dev/ram0 [ 16.00 MB]
/dev/sda [ 17.15 GB]
/dev/root [ 13.69 GB]
/dev/ram [ 16.00 MB]
/dev/sda1 [ 17.14 GB] LVM physical volume
/dev/VolGroup00/LogVol01 [ 512.00 MB]
/dev/ram2 [ 16.00 MB]
/dev/new_vg/lvol0 [ 52.00 MB]
/dev/ram3 [ 16.00 MB]
/dev/pkl_new_vg/sparkie_lv [ 7.14 GB]
/dev/ram4 [ 16.00 MB]
/dev/ram5 [ 16.00 MB]
/dev/ram6 [ 16.00 MB]
/dev/ram7 [ 16.00 MB]
/dev/ram8 [ 16.00 MB]
/dev/ram9 [ 16.00 MB]
/dev/ram10 [ 16.00 MB]
/dev/ram11 [ 16.00 MB]
/dev/ram12 [ 16.00 MB]
/dev/ram13 [ 16.00 MB]
/dev/ram14 [ 16.00 MB]
/dev/ram15 [ 16.00 MB]
/dev/sdb [ 17.15 GB]
/dev/sdb1 [ 17.14 GB] LVM physical volume
/dev/sdc [ 17.15 GB]
/dev/sdc1 [ 17.14 GB] LVM physical volume
/dev/sdd [ 17.15 GB]
/dev/sdd1 [ 17.14 GB] LVM physical volume
7 disks
17 partitions
0 LVM physical volume whole disks
4 LVM physical volumes
Procedure
1. Remove the physical volume:
# pvremove /dev/vdb3
Labels on physical volume "/dev/vdb3" successfully wiped.
2. View the existing physical volumes and verify that the required volume is removed:
# pvs
PV VG Fmt Attr PSize PFree
/dev/vdb1 lvm2 1020.00m 0
/dev/vdb2 lvm2 1020.00m 0
If the physical volume you want to remove is currently part of a volume group, you must remove it from
the volume group with the vgreduce command. For more information, see Removing physical volumes
from a volume group
Additional resources
The following command adds the physical volume /dev/sdf1 to the volume group vg1.
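A command of this form performs the addition:
# vgextend vg1 /dev/sdf1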
Procedure
1. If the physical volume is still being used, migrate the data to another physical volume from the
same volume group:
# pvmove /dev/vdb3
/dev/vdb3: Moved: 2.0%
...
/dev/vdb3: Moved: 79.2%
...
/dev/vdb3: Moved: 100.0%
2. If there are not enough free extents on the other physical volumes in the existing volume group:
a. Create a new physical volume:
# pvcreate /dev/vdb4
Physical volume "/dev/vdb4" successfully created
b. Add the newly created physical volume to the myvg volume group:
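Assuming /dev/vdb4 is the new device, a command of this form adds it; the original physical volume can then be removed from the volume group with the vgreduce command:
# vgextend myvg /dev/vdb4
# vgreduce myvg /dev/vdb3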
Verification
Verify that the /dev/vdb3 physical volume is removed from the myvg volume group:
# pvs
PV VG Fmt Attr PSize PFree Used
/dev/vdb1 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb2 myvg lvm2 a-- 1020.00m 0 1020.00m
/dev/vdb3 lvm2 a-- 1020.00m 1008.00m 12.00m
Additional resources
The lvm command provides several built-in options that you can use to display information about LVM support and configuration:
lvm devtypes
Displays the recognized built-in block device types
lvm formats
Displays recognized metadata formats.
lvm help
Displays LVM help text.
lvm segtypes
Displays recognized logical volume segment types.
lvm tags
Displays any tags defined on this host.
lvm version
Displays the current version information.
The pvs command provides physical volume information in a configurable form, displaying one line per
physical volume. The pvs command provides a great deal of format control, and is useful for scripting.
The pvdisplay command provides a verbose multi-line output for each physical volume. It displays
physical properties (size, extents, volume group, and so on) in a fixed format.
The following example shows the output of the pvdisplay command for a single physical volume.
# pvdisplay
--- Physical volume ---
PV Name /dev/sdc1
VG Name new_vg
PV Size 17.14 GB / not usable 3.40 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 4388
Free PE 4375
Allocated PE 13
PV UUID Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe
The pvscan command scans all supported LVM block devices in the system for physical volumes.
# pvscan
PV /dev/sdb2 VG vg0 lvm2 [964.00 MB / 0 free]
PV /dev/sdc1 VG vg0 lvm2 [964.00 MB / 428.00 MB free]
PV /dev/sdc2 lvm2 [964.84 MB]
Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB]
You can define a filter in the lvm.conf file so that this command will avoid scanning specific physical
volumes.
The vgs command provides volume group information in a configurable form, displaying one line per
volume group. The vgs command provides a great deal of format control, and is useful for scripting.
The vgdisplay command displays volume group properties (such as size, extents, number of physical
volumes, and so on) in a fixed form. The following example shows the output of the vgdisplay
command for the volume group new_vg. If you do not specify a volume group, all existing volume groups
are displayed.
# vgdisplay new_vg
--- Volume group ---
VG Name new_vg
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 51.42 GB
PE Size 4.00 MB
Total PE 13164
Alloc PE / Size 13 / 52.00 MB
Free PE / Size 13151 / 51.37 GB
VG UUID jxQJ0a-ZKk0-OpMO-0118-nlwO-wwqd-fD5D32
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "new_vg" using metadata type lvm2
Found volume group "officevg" using metadata type lvm2
The lvs command provides logical volume information in a configurable form, displaying one line per
logical volume. The lvs command provides a great deal of format control, and is useful for scripting.
The lvdisplay command displays logical volume properties (such as size, layout, and mapping) in a fixed
format.
The following command shows the attributes of lvol2 in vg00. If snapshot logical volumes have been
created for this original logical volume, this command shows a list of all snapshot logical volumes and
their status (active or inactive) as well.
# lvdisplay -v /dev/vg00/lvol2
The lvscan command scans for all logical volumes in the system and lists them, as in the following
example.
# lvscan
ACTIVE '/dev/vg0/gfslv' [1.46 GB] inherit
CHAPTER 9. CUSTOMIZED REPORTING FOR LVM
You can produce concise and customizable reports of LVM objects with the pvs, lvs, and vgs
commands. The reports that these commands generate include one line of output for each object. Each
line contains an ordered list of fields of properties related to the object. There are five ways to select the
objects to be reported: by physical volume, volume group, logical volume, physical volume segment, and
logical volume segment.
You can report information about physical volumes, volume groups, logical volumes, physical volume
segments, and logical volume segments all at once with the lvm fullreport command. For information
on this command and its capabilities, see the lvm-fullreport(8) man page.
LVM supports log reports, which contain a log of operations, messages, and per-object status with
complete object identification collected during LVM command execution. For further information about
the LVM log report, see the lvmreport(7) man page.
You can change what fields are displayed to something other than the default by using the -o
argument. For example, the following command displays only the physical volume name and
size.
# pvs -o pv_name,pv_size
PV PSize
/dev/sdb1 17.14G
/dev/sdc1 17.14G
/dev/sdd1 17.14G
You can append a field to the output with the plus sign (+), which is used in combination with the
-o argument.
The following example displays the UUID of the physical volume in addition to the default fields.
# pvs -o +pv_uuid
PV VG Fmt Attr PSize PFree PV UUID
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G onFF2w-1fLC-ughJ-D9eB-M7iv-6XqA-dqGeXY
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G Joqlch-yWSj-kuEn-IdwM-01S9-X08M-mcpsVe
/dev/sdd1 new_vg lvm2 a- 17.14G 17.14G yvfvZK-Cf31-j75k-dECm-0RZ3-0dGW-UqkCS
Adding the -v argument to a command includes some extra fields. For example, the pvs -v
command will display the DevSize and PV UUID fields in addition to the default fields.
# pvs -v
Scanning for physical volume names
PV VG Fmt Attr PSize PFree DevSize PV UUID
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G 17.14G onFF2w-1fLC-ughJ-D9eB-M7iv-6XqA-dqGeXY
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G 17.14G Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe
/dev/sdd1 new_vg lvm2 a- 17.14G 17.14G 17.14G yvfvZK-Cf31-j75k-dECm-0RZ3-0dGW-tUqkCS
The --noheadings argument suppresses the headings line. This can be useful for writing scripts.
The following example uses the --noheadings argument in combination with the pv_name
argument, which will generate a list of all physical volumes.
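A command of this form produces the list:
# pvs --noheadings -o pv_name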
The --separator separator argument separates each field with the character you specify. The following example separates the default pvs output fields with an equals sign:
# pvs --separator =
PV=VG=Fmt=Attr=PSize=PFree
/dev/sdb1=new_vg=lvm2=a-=17.14G=17.14G
/dev/sdc1=new_vg=lvm2=a-=17.14G=17.09G
/dev/sdd1=new_vg=lvm2=a-=17.14G=17.14G
To keep the fields aligned when using the separator argument, use the separator argument in
conjunction with the --aligned argument.
You can use the -P argument of the lvs or vgs command to display information about a failed volume
that would otherwise not appear in the output.
For a full listing of display arguments, see the pvs(8), vgs(8) and lvs(8) man pages.
Volume group fields can be mixed with either physical volume (and physical volume segment) fields or
with logical volume (and logical volume segment) fields, but physical volume and logical volume fields
cannot be mixed. For example, the following command will display one line of output for each physical
volume.
# vgs -o +pv_name
VG #PV #LV #SN Attr VSize VFree PV
new_vg 3 1 0 wz--n- 51.42G 51.37G /dev/sdc1
new_vg 3 1 0 wz--n- 51.42G 51.37G /dev/sdd1
new_vg 3 1 0 wz--n- 51.42G 51.37G /dev/sdb1
For convenience, a field name prefix can be dropped if it matches the default for the command. For
example, with the pvs command, name means pv_name, but with the vgs command, name is
interpreted as vg_name.
# pvs -o free
PFree
17.14G
17.09G
17.14G
NOTE
The number of characters in the attribute fields in pvs, vgs, and lvs output may increase
in later releases. The existing character fields will not change position, but new fields may
be added to the end. You should take this into account when writing scripts that search
for particular attribute characters, searching for the character based on its relative
position to the beginning of the field, but not for its relative position to the end of the
field. For example, to search for the character p in the ninth bit of the lv_attr field, you
could search for the string "^/........p/", but you should not search for the string "/*p$/".
Table 9.1, “The pvs Command Display Fields” lists the display arguments of the pvs command, along
with the field name as it appears in the header display and a description of the field.
dev_size DevSize The size of the underlying device on which the physical
volume was created
pe_start 1st PE Offset to the start of the first physical extent in the
underlying device
pv_fmt Fmt The metadata format of the physical volume ( lvm2 or lvm1)
pvseg_start Start The starting physical extent of the physical volume segment
pv_used Used The amount of space currently used on the physical volume
The pvs command displays the following fields by default: pv_name, vg_name, pv_fmt, pv_attr,
pv_size, pv_free. The display is sorted by pv_name.
# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G
/dev/sdd1 new_vg lvm2 a- 17.14G 17.13G
Using the -v argument with the pvs command adds the following fields to the default display: dev_size,
pv_uuid.
# pvs -v
Scanning for physical volume names
PV VG Fmt Attr PSize PFree DevSize PV UUID
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G 17.14G onFF2w-1fLC-ughJ-D9eB-M7iv-6XqA-dqGeXY
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G 17.14G Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe
/dev/sdd1 new_vg lvm2 a- 17.14G 17.13G 17.14G yvfvZK-Cf31-j75k-dECm-0RZ3-0dGW-tUqkCS
You can use the --segments argument of the pvs command to display information about each physical
volume segment. A segment is a group of extents. A segment view can be useful if you want to see
whether your logical volume is fragmented.
The pvs --segments command displays the following fields by default: pv_name, vg_name, pv_fmt,
pv_attr, pv_size, pv_free, pvseg_start, pvseg_size. The display is sorted by pv_name and pvseg_size
within the physical volume.
# pvs --segments
PV VG Fmt Attr PSize PFree Start SSize
/dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 0 1172
/dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 1172 16
/dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 1188 1
/dev/sda1 vg lvm2 a- 17.14G 16.75G 0 26
/dev/sda1 vg lvm2 a- 17.14G 16.75G 26 24
/dev/sda1 vg lvm2 a- 17.14G 16.75G 50 26
/dev/sda1 vg lvm2 a- 17.14G 16.75G 76 24
/dev/sda1 vg lvm2 a- 17.14G 16.75G 100 26
/dev/sda1 vg lvm2 a- 17.14G 16.75G 126 24
/dev/sda1 vg lvm2 a- 17.14G 16.75G 150 22
/dev/sda1 vg lvm2 a- 17.14G 16.75G 172 4217
/dev/sdb1 vg lvm2 a- 17.14G 17.14G 0 4389
You can use the pvs -a command to see devices detected by LVM that have not been initialized as LVM
physical volumes.
# pvs -a
PV VG Fmt Attr PSize PFree
/dev/VolGroup00/LogVol01 -- 0 0
/dev/new_vg/lvol0 -- 0 0
/dev/ram -- 0 0
/dev/ram0 -- 0 0
/dev/ram2 -- 0 0
/dev/ram3 -- 0 0
/dev/ram4 -- 0 0
/dev/ram5 -- 0 0
/dev/ram6 -- 0 0
/dev/root -- 0 0
/dev/sda -- 0 0
/dev/sdb -- 0 0
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G
/dev/sdc -- 0 0
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G
/dev/sdd -- 0 0
/dev/sdd1 new_vg lvm2 a- 17.14G 17.14G
Table 9.2, “vgs Display Fields” lists the display arguments of the vgs command, along with the field
name as it appears in the header display and a description of the field.
lv_count #LV The number of logical volumes the volume group contains
pv_count #PV The number of physical volumes that define the volume
group
vg_extent_size Ext The size of the physical extents in the volume group
vg_fmt Fmt The metadata format of the volume group (lvm2 or lvm1)
vg_free VFree Size of the free space remaining in the volume group
The vgs command displays the following fields by default: vg_name, pv_count, lv_count, snap_count,
vg_attr, vg_size, vg_free. The display is sorted by vg_name.
# vgs
VG #PV #LV #SN Attr VSize VFree
new_vg 3 1 1 wz--n- 51.42G 51.36G
Using the -v argument with the vgs command adds the following fields to the default display:
vg_extent_size, vg_uuid.
# vgs -v
Finding all volume groups
Finding volume group "new_vg"
VG Attr Ext #PV #LV #SN VSize VFree VG UUID
new_vg wz--n- 4.00M 3 1 1 51.42G 51.36G jxQJ0a-ZKk0-OpMO-0118-nlwO-wwqd-fD5D32
Table 9.3, “lvs Display Fields” lists the display arguments of the lvs command, along with the field name
as it appears in the header display and a description of the field.
NOTE
In later releases of Red Hat Enterprise Linux, the output of the lvs command may differ,
with additional fields in the output. The order of the fields, however, will remain the same
and any additional fields will appear at the end of the display.
* chunk_size
devices Devices The underlying devices that make up the logical volume: the
physical volumes, logical volumes, and start physical extents
and logical extents
lv_ancestors Ancestors For thin pool snapshots, the ancestors of the logical volume
lv_descendants Descendants For thin pool snapshots, the descendants of the logical volume
lv_attr Attr The status of the logical volume. The logical volume attribute
bits are as follows:
lv_kernel_major KMaj Actual major device number of the logical volume (-1 if
inactive)
lv_kernel_minor KMIN Actual minor device number of the logical volume (-1 if
inactive)
lv_major Maj The persistent major device number of the logical volume (-1
if not specified)
lv_minor Min The persistent minor device number of the logical volume (-1
if not specified)
* region_size
seg_tags Seg Tags LVM tags attached to the segments of the logical volume
segtype Type The segment type of a logical volume (for example: mirror,
striped, linear)
* stripe_size
The lvs command provides the following display by default. The default display is sorted by vg_name
and lv_name within the volume group.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
origin VG owi-a-s--- 1.00g
snap VG swi-a-s--- 100.00m origin 0.00
A common use of the lvs command is to append devices to the command to display the underlying
devices that make up the logical volume. This example also specifies the -a option to display the internal
volumes that are components of the logical volumes, such as RAID mirrors, enclosed in brackets. This
example includes a RAID volume, a striped volume, and a thinly-pooled volume.
# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
Devices
raid1 VG rwi-a-r--- 1.00g 100.00
raid1_rimage_0(0),raid1_rimage_1(0)
[raid1_rimage_0] VG iwi-aor--- 1.00g /dev/sde1(7041)
Using the -v argument with the lvs command adds the following fields to the default display:
seg_count, lv_major, lv_minor, lv_kernel_major, lv_kernel_minor, lv_uuid.
# lvs -v
Finding all logical volumes
LV VG #Seg Attr LSize Maj Min KMaj KMin Origin Snap% Move Copy% Log Convert LV UUID
lvol0 new_vg 1 owi-a- 52.00M -1 -1 253 3 LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78
newvgsnap1 new_vg 1 swi-a- 8.00M -1 -1 253 5 lvol0 0.20 1ye1OU-1cIu-o79k-20h2-ZGF0-qCJm-CfbsIx
You can use the --segments argument of the lvs command to display information with default columns
that emphasize the segment information. When you use the segments argument, the seg prefix is
optional. The lvs --segments command displays the following fields by default: lv_name, vg_name,
lv_attr, stripes, segtype, seg_size. The default display is sorted by vg_name, lv_name within the
volume group, and seg_start within the logical volume. If the logical volumes were fragmented, the
output from this command would show that.
# lvs --segments
LV VG Attr #Str Type SSize
LogVol00 VolGroup00 -wi-ao 1 linear 36.62G
LogVol01 VolGroup00 -wi-ao 1 linear 512.00M
lv vg -wi-a- 1 linear 104.00M
lv vg -wi-a- 1 linear 104.00M
lv vg -wi-a- 1 linear 104.00M
lv vg -wi-a- 1 linear 88.00M
Using the -v argument with the lvs --segments command adds the following fields to the default
display: seg_start, stripesize, chunksize.
# lvs -v --segments
Finding all logical volumes
LV VG Attr Start SSize #Str Type Stripe Chunk
lvol0 new_vg owi-a- 0 52.00M 1 linear 0 0
newvgsnap1 new_vg swi-a- 0 8.00M 1 linear 0 8.00K
The following example shows the default output of the lvs command on a system with one logical
volume configured, followed by the default output of the lvs command with the segments argument
specified.
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
lvol0 new_vg -wi-a- 52.00M
# lvs --segments
LV VG Attr #Str Type SSize
lvol0 new_vg -wi-a- 1 linear 52.00M
To specify an alternative ordered list of columns to sort on, use the -O argument of any of the reporting
commands. It is not necessary to include these fields within the output itself.
The following example shows the output of the pvs command that displays the physical volume name,
size, and free space.
# pvs -o pv_name,pv_size,pv_free
PV PSize PFree
/dev/sdb1 17.14G 17.14G
/dev/sdc1 17.14G 17.09G
/dev/sdd1 17.14G 17.14G
The following example shows the same output, sorted by the free space field.
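With the -O argument, the command takes this form:
# pvs -o pv_name,pv_size,pv_free -O pv_free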
The following example shows that you do not need to display the field on which you are sorting.
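For example, sorting by free space without displaying it:
# pvs -o pv_name,pv_size -O pv_free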
To display a reverse sort, precede a field you specify after the -O argument with the - character.
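For example, reversing the sort on the free space field:
# pvs -o pv_name,pv_size,pv_free -O -pv_free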
To specify the units for the LVM report display, use the --units argument of the report command. You can
specify (b)ytes, (k)ilobytes, (m)egabytes, (g)igabytes, (t)erabytes, (e)xabytes, (p)etabytes, and
(h)uman-readable. The default display is human-readable. You can override the default by setting the
units parameter in the global section of the /etc/lvm/lvm.conf file.
The following example specifies the output of the pvs command in megabytes rather than the default
gigabytes.
# pvs --units m
PV VG Fmt Attr PSize PFree
/dev/sda1 lvm2 -- 17555.40M 17555.40M
/dev/sdb1 new_vg lvm2 a- 17552.00M 17552.00M
/dev/sdc1 new_vg lvm2 a- 17552.00M 17500.00M
/dev/sdd1 new_vg lvm2 a- 17552.00M 17552.00M
By default, units are displayed in powers of 2 (multiples of 1024). You can specify that units be displayed
in multiples of 1000 by capitalizing the unit specification (B, K, M, G, T, H).
The following command displays the output as a multiple of 1024, the default behavior.
# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 new_vg lvm2 a- 17.14G 17.14G
/dev/sdc1 new_vg lvm2 a- 17.14G 17.09G
/dev/sdd1 new_vg lvm2 a- 17.14G 17.14G
# pvs --units G
PV VG Fmt Attr PSize PFree
/dev/sdb1 new_vg lvm2 a- 18.40G 18.40G
/dev/sdc1 new_vg lvm2 a- 18.40G 18.35G
/dev/sdd1 new_vg lvm2 a- 18.40G 18.40G
You can also specify (s)ectors (defined as 512 bytes) or custom units.
The following example displays the output of the pvs command as a number of sectors.
# pvs --units s
PV VG Fmt Attr PSize PFree
/dev/sdb1 new_vg lvm2 a- 35946496S 35946496S
/dev/sdc1 new_vg lvm2 a- 35946496S 35840000S
/dev/sdd1 new_vg lvm2 a- 35946496S 35946496S
The following example displays the output of the pvs command in units of 4 MB.
# pvs --units 4m
PV VG Fmt Attr PSize PFree
/dev/sdb1 new_vg lvm2 a- 4388.00U 4388.00U
/dev/sdc1 new_vg lvm2 a- 4388.00U 4375.00U
/dev/sdd1 new_vg lvm2 a- 4388.00U 4388.00U
You can use the --reportformat option of the LVM display commands to display the output in JSON
format.
The following example shows the output of the lvs in standard default format.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
my_raid my_vg Rwi-a-r--- 12.00m 100.00
root rhel_host-075 -wi-ao---- 6.67g
swap rhel_host-075 -wi-ao---- 820.00m
The following command shows the output of the same LVM configuration when you specify JSON
format.
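The command takes this form:
# lvs --reportformat json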
You can also set the report format as a configuration option in the /etc/lvm/lvm.conf file, using the
output_format setting. The --reportformat setting of the command line, however, takes precedence
over this setting.
The following example configures LVM to generate a complete log report for LVM commands. In this
example, you can see that both logical volumes lvol0 and lvol1 were successfully processed, as was the
volume group VG that contains the volumes.
# lvs
Logical Volume
==============
LV LSize Cpy%Sync
lvol1 4.00m 100.00
lvol0 4.00m
Command Log
===========
Seq LogType Context ObjType ObjName ObjGrp Msg Errno RetCode
1 status processing lv lvol0 vg success 0 1
2 status processing lv lvol1 vg success 0 1
3 status processing vg vg success 0 1
For further information on configuring LVM reports and command logs, see the lvmreport man page.
CHAPTER 10. CONFIGURING RAID LOGICAL VOLUMES
RAID logical volumes created and managed by LVM leverage the Multiple Devices (MD) kernel
drivers.
You can temporarily split RAID1 images from the array and merge them back into the array later.
Clusters
RAID logical volumes are not cluster-aware.
Although you can create and activate RAID logical volumes exclusively on one machine, you cannot
activate them simultaneously on more than one machine.
Subvolumes
When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size
for every data or parity subvolume in the array.
For example, creating a 2-way RAID1 array results in two metadata subvolumes (lv_rmeta_0 and
lv_rmeta_1) and two data subvolumes (lv_rimage_0 and lv_rimage_1). Similarly, creating a 3-way
stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes (lv_rmeta_0, lv_rmeta_1,
lv_rmeta_2, and lv_rmeta_3) and 4 data subvolumes (lv_rimage_0, lv_rimage_1, lv_rimage_2, and
lv_rimage_3).
Integrity
You can lose data when a RAID device fails or when soft corruption occurs. Soft corruption in data
storage implies that the data retrieved from a storage device is different from the data written to that
device. Adding integrity to a RAID LV helps mitigate or prevent soft corruption. To learn more about soft
corruption and how to add integrity to a RAID LV, see Section 10.6, “Using DM integrity with RAID LV” .
Level 0
RAID level 0, often called striping, is a performance-oriented striped data mapping technique. This
means the data being written to the array is broken down into stripes and written across the member
disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy.
Many RAID level 0 implementations only stripe the data across the member devices up to the size of
the smallest device in the array. This means that if you have multiple devices with slightly different
sizes, each device gets treated as though it was the same size as the smallest drive. Therefore, the
common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a
Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the
number of disks or partitions in the array.
Level 1
RAID level 1, or mirroring, provides redundancy by writing identical data to each member disk of the
array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high
level of data availability. Level 1 operates with two or more disks, and provides very good data
reliability and improves performance for read-intensive applications but at a relatively high cost.
RAID level 1 comes at a high cost because you write the same information to all of the disks in the
array. This provides data reliability, but in a much less space-efficient manner than parity-based RAID
levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-
based RAID levels consume considerably more CPU power in order to generate the parity while RAID
level 1 simply writes the same data more than once to the multiple RAID members with very little
CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines
where software RAID is employed and CPU resources on the machine are consistently taxed with
operations other than RAID activities.
The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in
a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the
highest possible among all RAID types, with the array being able to operate with only a single disk
present.
Level 4
Level 4 uses parity concentrated on a single disk drive to protect data. Parity information is
calculated based on the content of the rest of the member disks in the array. This information can
then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be
used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk
after it has been replaced.
Because the dedicated parity disk represents an inherent bottleneck on all write transactions to the
RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or
in specific circumstances where the system administrator is intentionally designing the software
RAID device with this bottleneck in mind (such as an array that will have little to no write transactions
once the array is populated with data). RAID level 4 is so rarely used that it is not available as an
option in Anaconda. However, it could be created manually by the user if truly needed.
The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member
partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array is
always asymmetrical, meaning reads outperform writes. This is because writes consume extra CPU
and main memory bandwidth when generating parity, and then also consume extra bus bandwidth
when writing the actual data to disks because you are writing not only the data, but also the parity.
Reads need only read the data and not the parity unless the array is in a degraded state. As a result,
reads generate less traffic to the drives and across the buses of the computer for the same amount
of data transfer under normal operating conditions.
Level 5
This is the most common type of RAID. By distributing parity across all the member disk drives of an
array, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance
bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is
usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have
a sufficiently large number of member devices in a software RAID5 array such that the combined
aggregate data transfer speed across all devices is high enough, then this bottleneck can start to
come into play.
As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes.
The storage capacity of RAID level 5 is calculated the same way as with level 4.
Level 6
This is a common level of RAID when data redundancy and preservation, and not performance, are
the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a
complex parity scheme to be able to recover from the loss of any two drives in the array. This
complex parity scheme creates a significantly higher CPU burden on software RAID devices and also
imposes an increased burden during write transactions. As such, level 6 is considerably more
asymmetrical in performance than levels 4 and 5.
The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you
must subtract 2 devices (instead of 1) from the device count for the extra parity storage space.
Level 10
This RAID level attempts to combine the performance advantages of level 0 with the redundancy of
level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices.
With level 10, it is possible for instance to create a 3-drive array configured to store only 2 copies of
each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest
devices instead of only equal to the smallest device (like it would be with a 3-device, level 1 array).
This avoids CPU process usage to calculate parity like with RAID level 6, but it is less space efficient.
The creation of RAID level 10 is not supported during installation. It is possible to create one manually
after installation.
Linear RAID
Linear RAID is a grouping of drives to create a larger virtual drive.
In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive
only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely
that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and
decreases reliability. If any one member drive fails, the entire array cannot be used. The capacity is
the total of all member disks.
For most users, specifying one of the five available primary types (raid1, raid4, raid5, raid6, raid10)
should be sufficient.
raid1 RAID1 mirroring. This is the default value for the --type argument of the
lvcreate command when you specify the -m argument but you do not specify striping.
raid5_la
RAID5 left asymmetric.
raid5_ra
RAID5 right asymmetric.
raid5_ls
RAID5 left symmetric.
raid5_rs
RAID5 right symmetric.
raid6_zr
RAID6 zero restart
raid6_nr
RAID6 N restart
raid6_nc
RAID6 N continue
raid10
Striped mirrors. This is the default value for the --type argument of the lvcreate command if you specify the -m argument and a number of stripes that is greater than 1.
raid0/raid0_meta Striping. RAID0 spreads logical volume data across multiple data
subvolumes in units of stripe size. This is used to increase performance.
Logical volume data will be lost if any of the data subvolumes fail.
You can create RAID1 arrays with different numbers of copies according to the value you specify for the
-m argument. Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the -i
argument. You can also specify the stripe size with the -I argument.
The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is
one gigabyte in size.
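A command of this form creates it:
# lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg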
The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the
volume group my_vg that is one gigabyte in size. Note that you specify the number of stripes just as you
do for an LVM striped volume; the correct number of parity drives is added automatically.
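A command of this form creates it:
# lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg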
The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the
volume group my_vg that is one gigabyte in size.
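A command of this form creates it:
# lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg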
This example procedure creates an LVM RAID0 logical volume called mylv that stripes data across the
disks at /dev/sda1, /dev/sdb1, and /dev/sdc1.
1. Label the disks you will use in the volume group as LVM physical volumes with the pvcreate
command.
WARNING
This command destroys any data on /dev/sda1, /dev/sdb1, and /dev/sdc1.
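Assuming the three devices named above:
# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1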
2. Create the volume group myvg. The following command creates the volume group myvg.
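A command of this form creates it from the physical volumes labeled in the previous step:
# vgcreate myvg /dev/sda1 /dev/sdb1 /dev/sdc1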
You can use the vgs command to display the attributes of the new volume group.
# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 3 0 0 wz--n- 51.45G 51.45G
3. Create a RAID0 logical volume from the volume group you have created. The following
command creates the RAID0 volume mylv from the volume group myvg. This example creates
a logical volume that is 2 gigabytes in size, with three stripes and a stripe size of 4 kilobytes.
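A command of this form creates it:
# lvcreate --type raid0 -L 2G --stripes 3 --stripesize 4 -n mylv myvg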
4. Create a file system on the RAID0 logical volume. The following command creates an ext4 file
system on the logical volume.
# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 525312 4k blocks and 131376 inodes
Filesystem UUID: 9d4c0704-6028-450a-8b0a-8875358c0511
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
The following commands mount the logical volume and report the file system disk space usage.
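For example, assuming the /mnt mount point:
# mount /dev/myvg/mylv /mnt
# df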
Depending on the type of configuration, a Redundant Array of Independent Disks (RAID) LV prevents
data loss when a device fails. If a device comprising a RAID array fails, the data can be recovered from
other devices that are part of that RAID LV. However, a RAID configuration does not ensure the
integrity of the data itself. Soft corruption, silent corruption, soft errors, and silent errors are terms that
describe data that has become corrupted, even if the system design and software continues to function
as expected.
DM integrity is used with RAID levels 1, 4, 5, 6, and 10 to help mitigate or prevent data loss due to soft
corruption. The RAID layer ensures that a non-corrupted copy of the data can fix the soft corruption
errors. The integrity layer sits above each RAID image while an extra sub LV stores the integrity
metadata (data checksums) for each RAID image. When you retrieve data from a RAID LV with
integrity, the integrity data checksums analyze the data for corruption. If corruption is detected, the
integrity layer returns an error message, and the RAID layer retrieves a non-corrupted copy of the data
from another RAID image. The RAID layer automatically rewrites non-corrupted data over the corrupted
data to repair the soft corruption.
When creating a new RAID LV with DM integrity or adding integrity to an existing RAID LV, consider the
following:
The integrity metadata requires additional storage space. For each RAID image, every 500MB
data requires 4MB of additional storage space because of the checksums that get added to the
data.
While some RAID configurations are impacted more than others, adding DM integrity impacts
performance due to latency when accessing the data. A RAID1 configuration typically offers
better performance than RAID5 or its variants.
The RAID integrity block size also impacts performance. Configuring a larger RAID integrity
block size offers better performance. However, a smaller RAID integrity block size offers greater
backward compatibility.
There are two integrity modes available: bitmap or journal. The bitmap integrity mode typically
offers better performance than journal mode.
TIP
If you experience performance issues, Red Hat recommends that you either use RAID1 with integrity or
that you test the performance of a particular RAID configuration to ensure that it meets your
requirements.
Prerequisites
Procedure
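To create the RAID LV with integrity, an lvcreate command of this general form can be used, with the placeholders described below:
# lvcreate --type <raid-level> --raidintegrity y -L <usable-size> -n <logical-volume> <volume-group>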
where
<raid-level>
Specifies the RAID level of the RAID LV that you want to create.
<usable-size>
Specifies the usable size in MB.
<logical-volume>
Specifies the name of the LV that you want to create.
<volume-group>
Specifies the name of the volume group that you want to create the RAID LV under.
In the following example, we create a RAID LV with integrity named test-lv in the test-vg volume group,
with a usable size of 256M and RAID level 1.
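The command takes this form:
# lvcreate --type raid1 --raidintegrity y -L 256M -n test-lv test-vg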
Prerequisites
Procedure
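To add DM integrity to an existing RAID LV, an lvconvert command of this general form can be used, with the placeholders described below:
# lvconvert --raidintegrity y <volume-group>/<logical-volume>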
where
<volume-group>
Specifies the name of the volume group that you want to create the RAID LV under.
<logical-volume>
Specifies the name of the LV that you want to create.
Prerequisites
Procedure
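To remove integrity from a RAID LV, an lvconvert command of this general form can be used, with the placeholders described below:
# lvconvert --raidintegrity n <volume-group>/<logical-volume>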
where
<volume-group>
Specifies the name of the volume group that you want to create the RAID LV under.
<logical-volume>
Specifies the name of the logical volume.
To view information about a RAID LV that has integrity, run the following command:
# lvs -a <volume-group>
where <volume-group> is the name of the volume group that contains the RAID LV with integrity.
The following example shows information about the test-lv RAID LV that was created in the test-vg
volume group.
# lvs -a test-vg
LV VG Attr LSize Origin Cpy%Sync
test-lv test-vg rwi-a-r--- 256.00m 2.10
[test-lv_rimage_0] test-vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 93.75
[test-lv_rimage_0_imeta] test-vg ewi-ao---- 8.00m
[test-lv_rimage_0_iorig] test-vg -wi-ao---- 256.00m
[test-lv_rimage_1] test-vg gwi-aor--- 256.00m [test-lv_rimage_1_iorig] 85.94
[test-lv_rimage_1_imeta] test-vg ewi-ao---- 8.00m
[test-lv_rimage_1_iorig] test-vg -wi-ao---- 256.00m
[test-lv_rmeta_0] test-vg ewi-aor--- 4.00m
[test-lv_rmeta_1] test-vg ewi-aor--- 4.00m
Synchronization
When you create a RAID LV with integrity or add integrity to an existing RAID LV, we recommend that
you wait for the integrity synchronization and the RAID metadata to complete before using the LV.
Otherwise, the background initialization might impact the LV’s performance. The Cpy%Sync column
indicates the synchronization progress for both the top level RAID LV and for each RAID image. The
RAID image is indicated in the LV column by raid_image_N. Refer to the LV column to ensure that the
synchronization progress displays 100% for the top level RAID LV and for each RAID image.
To display the type for each RAID LV, add the -o+segtype option to the lvs command:
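For example, for the test-vg volume group used above:
# lvs -a -o+segtype test-vg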
Integrity mismatches
There is an incremental counter that counts the number of mismatches detected on each RAID image.
To view the data mismatches detected by integrity on a particular RAID image, run the following
command:
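Assuming the image subvolumes follow the rimage_N naming shown earlier, the command takes this general form, with the placeholders described below:
# lvs -o+integritymismatches <volume-group>/<logical-volume>_rimage_<n>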
where
<volume-group>
Specifies the name of the volume group that contains the RAID LV with integrity.
<logical-volume>
Specifies the name of the RAID LV with integrity.
<n>
Specifies the RAID image that you want to view the integrity mismatch information for.
You must run the command for each RAID image that you want to view. In the following example, we will
view the data mismatches from rimage_0 under test-vg/test-lv.
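Under those assumptions, the command is:
# lvs -o+integritymismatches test-vg/test-lv_rimage_0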
We can see that integrity has not detected any data mismatches and thus the IntegMismatches
counter shows zero (0).
You can control the rate at which a RAID logical volume is initialized by implementing recovery throttling.
You control the rate at which sync operations are performed by setting the minimum and maximum I/O
rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate
command. You specify these options as follows.
--maxrecoveryrate Rate[bBsSkKmMgG]
Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal
I/O operations. The Rate is specified as an amount per second for each device in the array. If no
suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be
unbounded.
--minrecoveryrate Rate[bBsSkKmMgG]
Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync
operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate
is specified as an amount per second for each device in the array. If no suffix is given, then
kiB/sec/device is assumed.
The following command creates a 2-way RAID10 array with 3 stripes that is 10 gigabytes in size with a
maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group
my_vg.
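A command of this form creates it:
# lvcreate --type raid10 -i 3 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg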
You can also specify minimum and maximum recovery rates for a RAID scrubbing operation.
The following command converts the linear logical volume my_lv in volume group my_vg to a 2-way
RAID1 array.
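The conversion command takes this form:
# lvconvert --type raid1 -m 1 my_vg/my_lv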
Since RAID logical volumes are composed of metadata and data subvolume pairs, when you convert a
linear device to a RAID1 array, a new metadata subvolume is created and associated with the original
logical volume on (one of) the same physical volumes that the linear volume is on. The additional images
are added in metadata/data subvolume pairs. For example, if the original device is as follows:
After conversion to a 2-way RAID1 array the device contains the following data and metadata subvolume
pairs:
If the metadata image that pairs with the original logical volume cannot be placed on the same physical
volume, the lvconvert will fail.
The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device.
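The command takes this form, dropping the mirror count to zero:
# lvconvert -m 0 my_vg/my_lv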
When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical
volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of
two images: /dev/sda1 and /dev/sdb1. In this example, the lvconvert command specifies that you want
to remove /dev/sda1, leaving /dev/sdb1 as the physical volume that makes up the linear device.
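The command takes this form:
# lvconvert -m 0 my_vg/my_lv /dev/sda1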
The following example shows the layout of a mirrored logical volume my_vg/my_lv.
The following command converts the mirrored logical volume my_vg/my_lv to a RAID1 logical volume.
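The command takes this form:
# lvconvert --type raid1 my_vg/my_lv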
You can increase the size of a RAID logical volume of any type with the lvresize or lvextend
command. This does not change the number of RAID images. For striped RAID logical volumes
the same stripe rounding constraints apply as when you create a striped RAID logical volume.
You can reduce the size of a RAID logical volume of any type with the lvresize or lvreduce
command. This does not change the number of RAID images. As with the lvextend command,
the same stripe rounding constraints apply as when you create a striped RAID logical volume.
You can change the number of stripes on a striped RAID logical volume (raid4/5/6/10) with the -
-stripes N parameter of the lvconvert command. This increases or reduces the size of the RAID
logical volume by the capacity of the stripes added or removed. Note that raid10 volumes are
capable only of adding stripes. This capability is part of the RAID reshaping feature that allows
you to change attributes of a RAID logical volume while keeping the same RAID level. For
information on RAID reshaping and examples of using the lvconvert command to reshape a
RAID logical volume, see the lvmraid(7) man page.
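For example, a command of this form requests a change to 5 stripes; the volume name here is illustrative, and additional physical volumes may be needed to hold the new stripes:
# lvconvert --stripes 5 my_vg/my_lv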
When you add images to a RAID1 device with the lvconvert command, you can specify the total number
of images for the resulting device, or you can specify how many images to add to the device. You can
also optionally specify on which physical volumes the new metadata/data image pairs will reside.
Metadata subvolumes (named rmeta) always exist on the same physical devices as their data
subvolume counterparts (named rimage). The metadata/data subvolume pairs will not be created on the same
physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere).
The format for the command to add images to a RAID1 volume is as follows:
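A sketch of the general form, with placeholder names (the optional physical volume list restricts where the new images are placed), is:
lvconvert -m new_absolute_count vg/lv [PVs]
lvconvert -m +num_additional_images vg/lv [PVs]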
For example, the following command displays the LVM device my_vg/my_lv, which is a 2-way RAID1
array:
The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device:
# lvconvert -m 2 my_vg/my_lv
# lvs -a -o name,copy_percent,devices my_vg
LV Copy% Devices
my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sde1(0)
[my_lv_rimage_1] /dev/sdf1(1)
[my_lv_rimage_2] /dev/sdg1(1)
[my_lv_rmeta_0] /dev/sde1(256)
[my_lv_rmeta_1] /dev/sdf1(0)
[my_lv_rmeta_2] /dev/sdg1(0)
When you add an image to a RAID1 array, you can specify which physical volumes to use for the image.
The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device,
specifying that the physical volume /dev/sdd1 be used for the array:
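For example (a sketch using the names above):
# lvconvert -m 2 my_vg/my_lv /dev/sdd1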
To remove images from a RAID1 array, use the following command. When you remove images from a
RAID1 device with the lvconvert command, you can specify the total number of images for the resulting
device, or you can specify how many images to remove from the device. You can also optionally specify
the physical volumes from which to remove the device.
Additionally, when an image and its associated metadata subvolume are removed, any higher-
numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1
array that consists of lv_rimage_0, lv_rimage_1, and lv_rimage_2, this results in a RAID1 array that
consists of lv_rimage_0 and lv_rimage_1. The subvolume lv_rimage_2 will be renamed and take over
the empty slot, becoming lv_rimage_1.
The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv.
The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume.
The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume,
specifying the physical volume that contains the image to remove as /dev/sde1.
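For example, commands of the following form perform these conversions (sketches using the names above):
# lvconvert -m1 my_vg/my_lv
# lvconvert -m1 my_vg/my_lv /dev/sde1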
Just as when you are removing a RAID image from an existing RAID1 logical volume, when you remove a
RAID data subvolume (and its associated metadata subvolume) from the middle of the device any
higher numbered images will be shifted down to fill the slot. The index numbers on the logical volumes
that make up a RAID array will thus be an unbroken sequence of integers.
NOTE
You cannot split off a RAID image if the RAID1 array is not yet in sync.
The following example splits a 2-way RAID1 logical volume, my_lv, into two linear logical volumes, my_lv
and new.
The following example splits a 3-way RAID1 logical volume, my_lv, into a 2-way RAID1 logical volume,
my_lv, and a linear logical volume, new.
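For both examples, a command of the following form splits off one image into a new linear volume named new (a sketch using the names above):
# lvconvert --splitmirrors 1 -n new my_vg/my_lv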
You can temporarily split off an image of a RAID1 array for read-only use while keeping track of any
changes by using the --trackchanges argument in conjunction with the --splitmirrors argument of the
lvconvert command. This allows you to merge the image back into the array at a later time while
resyncing only those portions of the array that have changed since the image was split.
The format for the lvconvert command to split off a RAID image is as follows.
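A sketch of the general form, with placeholder names, is:
lvconvert --splitmirrors count --trackchanges vg/lv [removable_PVs]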
When you split off a RAID image with the --trackchanges argument, you can specify which image to
split but you cannot change the name of the volume being split. In addition, the resulting volumes have
the following constraints.
You can activate the new volume and the remaining array independently.
You can merge an image that was split off with the --trackchanges argument specified by executing a
subsequent lvconvert command with the --merge argument. When you merge the image, only the
portions of the array that have changed since the image was split are resynced.
The format for the lvconvert command to merge a RAID image is as follows.
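A sketch of the general form, with a placeholder name, is:
lvconvert --merge raid_image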
The following example creates a RAID1 logical volume and then splits off an image from that volume
while tracking changes to the remaining array.
[my_lv_rmeta_0] /dev/sdb1(0)
[my_lv_rmeta_1] /dev/sdc1(0)
[my_lv_rmeta_2] /dev/sdd1(0)
The following example splits off an image from a RAID1 volume while tracking changes to the remaining
array, then merges the volume back into the array.
If the raid_fault_policy field is set to allocate, the system will attempt to replace the failed
device with a spare device from the volume group. If there is no available spare device, this will
be reported to the system log.
If the raid_fault_policy field is set to warn, the system will produce a warning and the log will
indicate that a device has failed. This allows the user to determine the course of action to take.
As long as there are enough devices remaining to support usability, the RAID logical volume will continue
to operate.
[my_lv_rmeta_0] /dev/sde1(0)
[my_lv_rmeta_1] /dev/sdf1(0)
[my_lv_rmeta_2] /dev/sdg1(0)
If the /dev/sde device fails, the system log will display error messages.
Since the raid_fault_policy field has been set to allocate, the failed device is replaced with a new
device from the volume group.
# lvs -a -o name,copy_percent,devices vg
Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
LV Copy% Devices
lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
[lv_rimage_0] /dev/sdh1(1)
[lv_rimage_1] /dev/sdf1(1)
[lv_rimage_2] /dev/sdg1(1)
[lv_rmeta_0] /dev/sdh1(0)
[lv_rmeta_1] /dev/sdf1(0)
[lv_rmeta_2] /dev/sdg1(0)
Note that even though the failed device has been replaced, the display still indicates that LVM could not
find the failed device. This is because, although the failed device has been removed from the RAID
logical volume, the failed device has not yet been removed from the volume group. To remove the failed
device from the volume group, you can execute vgreduce --removemissing VG.
If the raid_fault_policy has been set to allocate but there are no spare devices, the allocation will fail,
leaving the logical volume as it is. If the allocation fails, you have the option of fixing the drive, then
initiating recovery of the failed device with the --refresh option of the lvchange command. Alternatively,
you can replace the failed device.
[my_lv_rimage_2] /dev/sdg1(1)
[my_lv_rmeta_0] /dev/sdh1(0)
[my_lv_rmeta_1] /dev/sdf1(0)
[my_lv_rmeta_2] /dev/sdg1(0)
If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will
not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the
device has failed you can replace the device with the --repair argument of the lvconvert command.
If there has been no failure on the RAID device, follow Section 10.16.1, “Replacing a RAID device
that has not failed”.
If the RAID device has failed, follow Section 10.16.4, “Replacing a failed RAID device in a logical
volume”.
Prerequisites
The RAID device has not failed. The following commands will not work if the RAID device has
failed.
Procedure
Replace dev_to_remove with the path to the physical volume that you want to replace.
Replace vg/lv with the volume group and logical volume name of the RAID array.
Replace possible_replacements with the path to the physical volume that you want to use as
a replacement.
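Putting these together, the replacement command has the following general form (a sketch with the placeholders described above):
# lvconvert --replace dev_to_remove vg/lv [possible_replacements]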
The following example creates a RAID1 logical volume and then replaces a device in that volume.
LV Copy% Devices
my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sdb1(1)
[my_lv_rimage_1] /dev/sdb2(1)
[my_lv_rimage_2] /dev/sdc1(1)
[my_lv_rmeta_0] /dev/sdb1(0)
[my_lv_rmeta_1] /dev/sdb2(0)
[my_lv_rmeta_2] /dev/sdc1(0)
LV Copy% Devices
my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sdb1(1)
[my_lv_rimage_1] /dev/sdc2(1)
[my_lv_rimage_2] /dev/sdc1(1)
[my_lv_rmeta_0] /dev/sdb1(0)
[my_lv_rmeta_1] /dev/sdc2(0)
[my_lv_rmeta_2] /dev/sdc1(0)
The following example creates a RAID1 logical volume and then replaces a device in that volume,
specifying which physical volume to use for the replacement.
LV Copy% Devices
my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0)
[my_lv_rimage_0] /dev/sda1(1)
[my_lv_rimage_1] /dev/sdb1(1)
[my_lv_rmeta_0] /dev/sda1(0)
[my_lv_rmeta_1] /dev/sdb1(0)
# pvs
LV Copy% Devices
my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0)
[my_lv_rimage_0] /dev/sda1(1)
[my_lv_rimage_1] /dev/sdd1(1)
[my_lv_rmeta_0] /dev/sda1(0)
[my_lv_rmeta_1] /dev/sdd1(0)
You can replace more than one RAID device at a time by specifying multiple replace arguments, as
in the following example.
LV Copy% Devices
my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sda1(1)
[my_lv_rimage_1] /dev/sdb1(1)
[my_lv_rimage_2] /dev/sdc1(1)
[my_lv_rmeta_0] /dev/sda1(0)
[my_lv_rmeta_1] /dev/sdb1(0)
[my_lv_rmeta_2] /dev/sdc1(0)
LV Copy% Devices
my_lv 60.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sda1(1)
[my_lv_rimage_1] /dev/sdd1(1)
[my_lv_rimage_2] /dev/sde1(1)
[my_lv_rmeta_0] /dev/sda1(0)
[my_lv_rmeta_1] /dev/sdd1(0)
[my_lv_rmeta_2] /dev/sde1(0)
Therefore, rather than removing a failed device unconditionally and potentially allocating a replacement,
LVM allows you to replace a failed device in a RAID volume in a one-step solution by using the --repair
argument of the lvconvert command.
Prerequisites
Procedure
Verification steps
Prerequisites
The volume group includes a physical volume that provides enough free capacity to replace the
failed device.
If no physical volume with sufficient free extents is available on the volume group, add a new,
sufficiently large physical volume using the vgextend utility.
Procedure
LV Cpy%Sync Devices
my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sde1(1)
[my_lv_rimage_1] /dev/sdc1(1)
[my_lv_rimage_2] /dev/sdd1(1)
[my_lv_rmeta_0] /dev/sde1(0)
[my_lv_rmeta_1] /dev/sdc1(0)
[my_lv_rmeta_2] /dev/sdd1(0)
2. If the /dev/sdc device fails, the output of the lvs command is as follows:
Optional: To manually specify the physical volume that replaces the failed device, add the
physical volume at the end of the command:
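For example, a command of the following form repairs the volume and places the replacement on a specific physical volume (a sketch; /dev/sdf1 is a hypothetical replacement device):
# lvconvert --repair my_vg/my_lv /dev/sdf1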
Until you remove the failed device from the volume group, LVM utilities still indicate that LVM
cannot find the failed device.
# vgreduce --removemissing VG
Procedure
1. Optional: Limit the I/O bandwidth that the scrubbing process uses.
When you perform a RAID scrubbing operation, the background I/O required by the sync
operations can crowd out other I/O to LVM devices, such as updates to volume group metadata.
This might cause the other LVM operations to slow down. You can control the rate of the
scrubbing operation by implementing recovery throttling.
Add the following options to the lvchange --syncaction commands in the next steps:
--maxrecoveryrate Rate[bBsSkKmMgG]
Sets the maximum recovery rate so that the operation does not crowd out nominal I/O
operations. Setting the recovery rate to 0 means that the operation is unbounded.
--minrecoveryrate Rate[bBsSkKmMgG]
Sets the minimum recovery rate to ensure that I/O for sync operations achieves a minimum
throughput, even when heavy nominal I/O is present.
Specify the Rate value as an amount per second for each device in the array. If you provide no
suffix, the options assume kiB per second per device.
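For example, a scrubbing command with a throttled recovery rate might look like the following (a sketch; the 128 kiB/sec/device value is illustrative):
# lvchange --syncaction check --maxrecoveryrate 128 my_vg/my_lv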
NOTE
The lvchange --syncaction repair operation does not perform the same function as the lvconvert
--repair operation: lvchange --syncaction repair initiates a background synchronization operation on
the array, whereas lvconvert --repair repairs or replaces failed devices in a RAID logical volume.
The raid_sync_action field displays the current synchronization operation that the RAID
volume is performing. It can be one of the following values:
idle
All sync operations complete (doing nothing)
resync
Initializing an array or recovering after a machine failure
recover
Replacing a device in the array
check
Looking for array inconsistencies
repair
Looking for and repairing inconsistencies
The lv_attr field provides additional indicators. Bit 9 of this field displays the health of the
logical volume, and it supports the following indicators:
m (mismatches) indicates that there are discrepancies in a RAID logical volume. This
character is shown after a scrubbing operation has detected that portions of the RAID
are not coherent.
r (refresh) indicates that a device in a RAID array has suffered a failure and the kernel
regards it as failed, even though LVM can read the device label and considers the
device to be operational. Refresh the logical volume to notify the kernel that the device
is now available, or replace the device if you suspect that it failed.
Additional resources
For more information, see the lvchange(8) and lvmraid(7) man pages.
--[raid]writemostly PhysicalVolume[:{t|y|n}]
Marks a device in a RAID1 logical volume as write-mostly. All reads to these drives will be
avoided unless necessary. Setting this parameter keeps the number of I/O operations to the
drive to a minimum. By default, the write-mostly attribute is set to yes for the specified physical
volume in the logical volume. It is possible to remove the write-mostly flag by appending :n to
the physical volume or to toggle the value by specifying :t. The --writemostly argument can be
specified more than one time in a single command, making it possible to toggle the write-mostly
attributes for all the physical volumes in a logical volume at once.
--[raid]writebehind IOCount
Specifies the maximum number of outstanding writes that are allowed to devices in a RAID1
logical volume that are marked as write-mostly. Once this value is exceeded, writes become
synchronous, causing all writes to the constituent devices to complete before the array signals
the write has completed. Setting the value to zero clears the preference and allows the system
to choose the value arbitrarily.
After you have created a RAID logical volume, you can change the region size of the volume with the
-R option of the lvconvert command. The following example changes the region size of logical volume
vg/raidlv to 4096K. The RAID volume must be synced in order to change the region size.
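For example (a sketch using the volume named in the text):
# lvconvert -R 4096K vg/raidlv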
CHAPTER 11. SNAPSHOT LOGICAL VOLUMES
NOTE
Because a snapshot copies only the data areas that change after the snapshot is created, the snapshot
feature requires a minimal amount of storage. For example, with a rarely updated origin, 3-5 % of the
origin’s capacity is sufficient to maintain the snapshot.
NOTE
Snapshot copies of a file system are virtual copies, not an actual media backup for a file
system. Snapshots do not provide a substitute for a backup procedure.
The size of the snapshot governs the amount of space set aside for storing the changes to the origin
volume. For example, if you made a snapshot and then completely overwrote the origin the snapshot
would have to be at least as big as the origin volume to hold the changes. You need to dimension a
snapshot according to the expected level of change. So for example a short-lived snapshot of a read-
mostly volume, such as /usr, would need less space than a long-lived snapshot of a volume that sees a
greater number of writes, such as /home.
If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the origin
volume. You should regularly monitor the size of the snapshot. Snapshots are fully resizable, however, so
if you have the storage capacity you can increase the size of the snapshot volume to prevent it from
getting dropped. Conversely, if you find that the snapshot volume is larger than you need, you can
reduce the size of the volume to free up space that is needed by other logical volumes.
When you create a snapshot file system, full read and write access to the origin stays possible. If a chunk
on a snapshot is changed, that chunk is marked and never gets copied from the original volume.
Most typically, a snapshot is taken when you need to perform a backup on a logical volume
without halting the live system that is continuously updating the data.
You can execute the fsck command on a snapshot file system to check the file system integrity
and determine whether the original file system requires file system repair.
Because the snapshot is read/write, you can test applications against production data by taking
a snapshot and running tests against the snapshot, leaving the real data untouched.
You can create LVM volumes for use with Red Hat Virtualization. LVM snapshots can be used to
create snapshots of virtual guest images. These snapshots can provide a convenient way to
modify existing guests or create new guests with minimal additional storage.
You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume.
One use for this feature is to perform system rollback if you have lost data or files or otherwise need to
restore your system to a previous state. After you merge the snapshot volume, the resulting logical
volume will have the origin volume’s name, minor number, and UUID and the merged snapshot is
removed.
NOTE
LVM snapshots are not supported across the nodes in a cluster. You cannot create a
snapshot volume in a shared volume group. However, if you need to create a consistent
backup of data on a shared logical volume you can activate the volume exclusively and
then create the snapshot.
NOTE
LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus
needed metadata for the volume. If you specify a snapshot volume that is larger than this, the system
will create a snapshot volume that is only as large as will be needed for the size of the origin.
The following procedure creates an origin logical volume named origin and a snapshot volume of the
original volume named snap.
1. Create a logical volume named origin from the volume group VG.
# lvcreate -L 1G -n origin VG
Logical volume "origin" created.
2. Create a snapshot logical volume of /dev/VG/origin that is 100 MB in size named snap. If the
original logical volume contains a file system, you can mount the snapshot logical volume on an
arbitrary directory in order to access the contents of the file system to run a backup while the
original file system continues to get updated.
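For example, a command of the following form creates the snapshot (a sketch using the names above):
# lvcreate --size 100M --snapshot --name snap /dev/VG/origin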
3. Display the status of logical volume /dev/VG/origin, showing all snapshot logical volumes and
their status (active or inactive).
# lvdisplay /dev/VG/origin
--- Logical volume ---
LV Path /dev/VG/origin
LV Name origin
VG Name VG
LV UUID EsFoBp-CB9H-Epl5-pUO4-Yevi-EdFS-xtFnaF
LV Write Access read/write
LV Creation host, time host-083.virt.lab.msp.redhat.com, 2019-04-11 14:45:06 -0500
LV snapshot status source of
snap [active]
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:6
4. The lvs command, by default, displays the origin volume and the current percentage of the
snapshot volume being used. The following example shows the default output for the lvs
command after you have created the snapshot volume, with a display that includes the devices
that constitute the logical volumes.
# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
Devices
origin VG owi-a-s--- 1.00g /dev/sde1(0)
snap VG swi-a-s--- 100.00m origin 0.00 /dev/sde1(256)
WARNING
Because the snapshot increases in size as the origin volume changes, it is important
to monitor the percentage of the snapshot volume regularly with the lvs command
to be sure it does not fill. A snapshot that is 100% full is lost completely, as a write to
unchanged parts of the origin would be unable to succeed without corrupting the
snapshot.
In addition to the snapshot itself being invalidated when full, any mounted file systems on that snapshot
device are forcibly unmounted, avoiding the inevitable file system errors upon access to the mount
point. In addition, you can specify the snapshot_autoextend_threshold option in the lvm.conf file. This
option allows automatic extension of a snapshot whenever the remaining snapshot space drops below
the threshold you set. This feature requires that there be unallocated space in the volume group.
LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus
needed metadata for the volume. Similarly, automatic extension of a snapshot will not increase the size
of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a
snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension.
The following command merges snapshot volume vg00/lvol1_snap into its origin.
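For example (a sketch using the names above):
# lvconvert --merge vg00/lvol1_snap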
You can specify multiple snapshots on the command line, or you can use LVM object tags to specify
that multiple snapshots be merged to their respective origins. In the following example, logical volumes
vg00/lvol1, vg00/lvol2, and vg00/lvol3 are all tagged with the tag @some_tag. The following command
merges the snapshot logical volumes for all three volumes serially: vg00/lvol1, then vg00/lvol2, then
vg00/lvol3. If the --background option were used, all snapshot logical volume merges would start in
parallel.
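For example, a tag-based merge might look like the following (a sketch using the tag above):
# lvconvert --merge @some_tag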
For further information on the lvconvert --merge command, see the lvconvert(8) man page.
CHAPTER 12. CREATING AND MANAGING THINLY-PROVISIONED LOGICAL VOLUMES (THIN VOLUMES)
NOTE
Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin
volumes must be exclusively activated on only one cluster node.
By using thin provisioning, a storage administrator can overcommit the physical storage, often avoiding
the need to purchase additional storage. For example, if ten users each request a 100GB file system for
their application, the storage administrator can create what appears to be a 100GB file system for each
user but that is backed by less actual storage, which is used only when needed.
NOTE
When using thin provisioning, it is important that the storage administrator monitor the
storage pool and add more capacity if it starts to become full.
To make sure that all available space can be used, LVM supports data discard. This allows for re-use of
the space that was formerly used by a discarded file or other block range.
For information on creating thin volumes, see Creating thinly-provisioned logical volumes .
Thin volumes provide support for a new implementation of copy-on-write (COW) snapshot logical
volumes, which allow many virtual devices to share the same data in the thin pool. For information on
thin snapshot volumes, see Thinly-provisioned snapshot volumes .
3. Create a thin volume in the thin pool with the lvcreate command.
You can use the -T (or --thin) option of the lvcreate command to create either a thin pool or a thin
volume. You can also use the -T option of the lvcreate command to create both a thin pool and a thin
volume in that pool at the same time with a single command.
The following command uses the -T option of the lvcreate command to create a thin pool named
mythinpool that is 100M in size in the volume group vg001. Note that since you are creating a pool
of physical space, you must specify the size of the pool. The -T option of the lvcreate command does
not take an argument; it deduces what type of device is to be created from the other options the
command specifies.
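For example (a sketch using the names above):
# lvcreate -L 100M -T vg001/mythinpool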
The following command uses the -T option of the lvcreate command to create a thin volume named
thinvolume in the thin pool vg001/mythinpool. Note that in this case you are specifying a virtual size, and
that the virtual size you specify for the volume is greater than the size of the pool that contains it.
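For example (a sketch using the names above; the 1G virtual size is illustrative):
# lvcreate -V 1G -T vg001/mythinpool -n thinvolume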
The following command uses the -T option of the lvcreate command to create a thin pool and a thin
volume in that pool by specifying both a size and a virtual size argument for the lvcreate command. This
command creates a thin pool named mythinpool in the volume group vg001 and it also creates a thin
volume named thinvolume in that pool.
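For example (a sketch using the names above; the sizes are illustrative):
# lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume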
You can also create a thin pool by specifying the --thinpool parameter of the lvcreate command. Unlike
the -T option, the --thinpool parameter requires an argument, which is the name of the thin pool logical
volume that you are creating. The following example specifies the --thinpool parameter of the lvcreate
command to create a thin pool named mythinpool that is 100M in size in the volume group vg001:
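For example (a sketch using the names above):
# lvcreate -L 100M --thinpool mythinpool vg001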
A smaller chunk size requires more metadata and hinders performance, but provides better
space utilization with snapshots.
A bigger chunk size requires less metadata manipulation, but makes the snapshot less space
efficient.
By default, lvm2 starts with a 64KiB chunk size and increases its value when the resulting size of the thin
pool metadata device grows above 128MiB, which keeps the metadata size compact. However, this may
result in some big chunk size values, which are less space efficient for snapshot usage. In such cases, a
smaller chunk size and a bigger metadata size is a better option.
If the volume data size is in the range of TiB, use ~15.8GiB as the metadata size, which is the maximum
supported size, and set the chunk size according to your requirements. Note, however, that it is not possible to
increase the metadata size if you need to extend the volume’s data size and have a small chunk size.
WARNING
Red Hat does not recommend setting a chunk size smaller than the default value. If
the chunk size is too small and your volume runs out of space for metadata, the
volume is unable to create data. Monitor your logical volumes to ensure that they
are expanded, or create more storage before the metadata volumes become
completely full. Ensure that you set up your thin pool with a large enough chunk size
so that it does not run out of room for the metadata.
Striping is supported for pool creation. The following command creates a 100M thin pool named pool in
volume group vg001 with two 64 kB stripes and a chunk size of 256 kB. It also creates a 1T thin volume,
vg001/thin_lv.
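For example, a command of the following form creates such a striped pool and thin volume (a sketch using the names above):
# lvcreate -i 2 -I 64 -c 256 -L 100M -T vg001/pool -V 1T --name thin_lv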
You can extend the size of a thin volume with the lvextend command. You cannot, however, reduce the
size of a thin pool.
The following command resizes an existing thin pool that is 100M in size by extending it another 100M.
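For example (a sketch using the pool created above):
# lvextend -L+100M vg001/mythinpool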
As with other types of logical volumes, you can rename the volume with the lvrename command, remove
the volume with the lvremove command, and display information about the volume with the lvs and
lvdisplay commands.
By default, the lvcreate command sets the size of the thin pool’s metadata logical volume according to
the formula (Pool_LV_size / Pool_LV_chunk_size * 64). If you will have large numbers of snapshots or if
you have small chunk sizes for your thin pool and thus expect significant growth of the size of the
thin pool at a later time, you may need to increase the default value of the thin pool’s metadata volume
with the --poolmetadatasize parameter of the lvcreate command. The supported value for the thin
pool’s metadata logical volume is in the range between 2MiB and 16GiB.
You can use the --thinpool parameter of the lvconvert command to convert an existing logical volume
to a thin pool volume. When you convert an existing logical volume to a thin pool volume, you must use
the --poolmetadata parameter in conjunction with the --thinpool parameter of the lvconvert command to
convert an existing logical volume to the thin pool volume’s metadata volume.
NOTE
Converting a logical volume to a thin pool volume or a thin pool metadata volume
destroys the content of the logical volume, since in this case the lvconvert does not
preserve the content of the devices but instead overwrites the content.
The following example converts the existing logical volume lv1 in volume group vg001 to a thin pool
volume and converts the existing logical volume lv2 in volume group vg001 to the metadata volume for
that thin pool volume.
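For example (a sketch using the names above):
# lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2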
As for all LVM snapshot volumes, as well as all thin volumes, thin snapshot volumes are not supported
across the nodes in a cluster. The snapshot volume must be exclusively activated on only one cluster
node.
A thin snapshot volume can reduce disk usage when there are multiple snapshots of the same
origin volume.
If there are multiple snapshots of the same origin, then a write to the origin will cause one COW
operation to preserve the data. Increasing the number of snapshots of the origin should yield no
major slowdown.
Thin snapshot volumes can be used as a logical volume origin for another snapshot. This allows
for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots…).
A snapshot of a thin logical volume also creates a thin logical volume. This consumes no data
space until a COW operation is required, or until the snapshot itself is written.
A thin snapshot volume does not need to be activated with its origin, so a user may have only the
origin active while there are many inactive snapshot volumes of the origin.
When you delete the origin of a thinly-provisioned snapshot volume, each snapshot of that
origin volume becomes an independent thinly-provisioned volume. This means that instead of
merging a snapshot with its origin volume, you may choose to delete the origin volume and then
create a new thinly-provisioned snapshot using that independent volume as the origin volume
for the new snapshot.
Although there are many advantages to using thin snapshot volumes, there are some use cases for which
the older LVM snapshot volume feature may be more appropriate to your needs:
You cannot change the chunk size of a thin pool. If the thin pool has a large chunk size (for
example, 1MB) and you require a short-living snapshot for which a chunk size that large is not
efficient, you may elect to use the older snapshot feature.
You cannot limit the size of a thin snapshot volume; the snapshot will use all of the space in the
thin pool, if necessary. This may not be appropriate for your needs.
In general, you should consider the specific requirements of your site when deciding which snapshot
format to use.
NOTE
When using thin provisioning, it is important that the storage administrator monitor the
storage pool and add more capacity if it starts to become full. For information on
configuring and displaying information on thinly-provisioned snapshot volumes, see
Creating thinly-provisioned snapshot volumes .
NOTE
This section provides an overview of the basic commands you use to create and grow
thinly-provisioned snapshot volumes. For detailed information on LVM thin provisioning
as well as information on using the LVM commands and utilities with thinly-provisioned
logical volumes, see the lvmthin(7) man page.
IMPORTANT
When creating a thin snapshot volume, you do not specify the size of the volume. If you
specify a size parameter, the snapshot that will be created will not be a thin snapshot
volume and will not use the thin pool for storing data. For example, the command
lvcreate -s vg/thinvolume -L10M will not create a thin snapshot, even though the origin
volume is a thin volume.
Thin snapshots can be created for thinly-provisioned origin volumes, or for origin volumes that are not
thinly-provisioned.
You can specify a name for the snapshot volume with the --name option of the lvcreate command. The
following command creates a thinly-provisioned snapshot volume of the thinly-provisioned logical
volume vg001/thinvolume that is named mysnapshot1.
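For example (a sketch using the names above):
# lvcreate -s --name mysnapshot1 vg001/thinvolume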
NOTE
When using thin provisioning, it is important that the storage administrator monitor the
storage pool and add more capacity if it starts to become full. For information on
extending the size of a thin volume, see Creating thinly-provisioned logical volumes .
A thin snapshot volume has the same characteristics as any other thin volume. You can independently
activate the volume, extend the volume, rename the volume, remove the volume, and even snapshot
the volume.
By default, a snapshot volume is skipped during normal activation commands. For information on
controlling the activation of a logical volume, see Logical volume activation .
You can also create a thinly-provisioned snapshot of a non-thinly-provisioned logical volume. Since the
non-thinly-provisioned logical volume is not contained within a thin pool, it is referred to as an external
origin. External origin volumes can be used and shared by many thinly-provisioned snapshot volumes,
even from different thin pools. The external origin must be inactive and read-only at the time the thinly-
provisioned snapshot is created.
To create a thinly-provisioned snapshot of an external origin, you must specify the --thinpool option.
The following command creates a thin snapshot volume of the read-only inactive volume
origin_volume. The thin snapshot volume is named mythinsnap. The logical volume origin_volume
then becomes the thin external origin for the thin snapshot volume mythinsnap in volume group vg001
that will use the existing thin pool vg001/pool. Because the origin volume must be in the same volume
group as the snapshot volume, you do not need to specify the volume group when specifying the origin
logical volume.
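For example (a sketch using the names above):
# lvcreate -s --thinpool vg001/pool origin_volume --name mythinsnap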
You can create a second thinly-provisioned snapshot volume of the first snapshot volume, as in the
following command.
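For example (a sketch; my2ndthinsnap is a hypothetical name for the second snapshot):
# lvcreate -s vg001/mythinsnap --name my2ndthinsnap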
You can display a list of all ancestors and descendants of a thin snapshot logical volume by specifying
the lv_ancestors and lv_descendants reporting fields of the lvs command.
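For example (a sketch; vg001 is assumed to be the volume group that holds the snapshot chain):
# lvs -o name,lv_ancestors,lv_descendants vg001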
Additionally:
NOTE
The lv_ancestors and lv_descendants fields display existing dependencies but do not
track removed entries which can break a dependency chain if the entry was removed from
the middle of the chain. For example, if you remove the logical volume stack3 from this
sample configuration, the display is as follows.
You can configure your system to track and display logical volumes that have been
removed, and you can display the full dependency chain that includes those volumes by
specifying the lv_full_ancestors and lv_full_descendants fields.
You can configure your system to retain historical volumes for a defined period of time by specifying the
retention time, in seconds, with the lvs_history_retention_time metadata option in the lvm.conf
configuration file.
A historical logical volume retains a simplified representation of the logical volume that has been
removed, including the following reporting fields for the volume:
When a volume is removed, the historical logical volume name acquires a hyphen as a prefix. For example,
when you remove the logical volume lvol1, the name of the historical volume is -lvol1. A historical logical
volume cannot be reactivated.
Even when the record_lvs_history metadata option is enabled, you can prevent the retention of historical
logical volumes on an individual basis when you remove a logical volume by specifying the --nohistory
option of the lvremove command.
To include historical logical volumes in volume display, you specify the -H|--history option of an LVM
display command. You can display a full thin snapshot dependency chain that includes historical volumes
by specifying the lv_full_ancestors and lv_full_descendants reporting fields along with the -H option.
The following series of commands provides examples of how you can display and manage historical
logical volumes.
1. Ensure that historical logical volumes are retained by setting record_lvs_history=1 in the
lvm.conf file. This metadata option is not enabled by default.
# lvs -H -o name,full_ancestors,full_descendants
LV FAncestors FDescendants
lvol1 lvol2,lvol3,lvol4,lvol5
lvol2 lvol1 lvol3,lvol4,lvol5
lvol3 lvol2,lvol1 lvol4,lvol5
lvol4 lvol3,lvol2,lvol1
lvol5 lvol3,lvol2,lvol1
pool
3. Remove logical volume lvol3 from the snapshot chain, then run the following lvs command
again to see how historical logical volumes are displayed, along with their ancestors and
descendants.
# lvremove -f vg/lvol3
Logical volume "lvol3" successfully removed
# lvs -H -o name,full_ancestors,full_descendants
LV FAncestors FDescendants
lvol1 lvol2,-lvol3,lvol4,lvol5
lvol2 lvol1 -lvol3,lvol4,lvol5
-lvol3 lvol2,lvol1 lvol4,lvol5
lvol4 -lvol3,lvol2,lvol1
lvol5 -lvol3,lvol2,lvol1
pool
4. You can use the lv_time_removed reporting field to display the time a historical volume was
removed.
# lvs -H -o name,full_ancestors,full_descendants,time_removed
LV FAncestors FDescendants RTime
lvol1 lvol2,-lvol3,lvol4,lvol5
lvol2 lvol1 -lvol3,lvol4,lvol5
-lvol3 lvol2,lvol1 lvol4,lvol5 2016-03-14 14:14:32 +0100
lvol4 -lvol3,lvol2,lvol1
lvol5 -lvol3,lvol2,lvol1
pool
5. You can reference historical logical volumes individually in a display command by specifying the
vgname/lvname format, as in the following example. Note that the fifth bit in the lv_attr field is
set to h to indicate the volume is a historical volume.
# lvs -H vg/-lvol3
LV VG Attr LSize
-lvol3 vg ----h----- 0
6. LVM does not keep historical logical volumes if the volume has no live descendant. This means
that if you remove a logical volume at the end of a snapshot chain, the logical volume is not
retained as a historical logical volume.
# lvremove -f vg/lvol5
Automatically removing historical logical volume vg/-lvol5.
Logical volume "lvol5" successfully removed
# lvs -H -o name,full_ancestors,full_descendants
LV FAncestors FDescendants
lvol1 lvol2,-lvol3,lvol4
lvol2 lvol1 -lvol3,lvol4
-lvol3 lvol2,lvol1 lvol4
lvol4 -lvol3,lvol2,lvol1
pool
7. Run the following commands to remove the volumes lvol1 and lvol2 and to see how the lvs
command displays the volumes once they have been removed.
-lvol1 -lvol2,-lvol3,lvol4
-lvol2 -lvol1 -lvol3,lvol4
-lvol3 -lvol2,-lvol1 lvol4
lvol4 -lvol3,-lvol2,-lvol1
pool
8. To remove a historical logical volume completely, you can run the lvremove command again,
specifying the name of the historical volume that now includes the hyphen, as in the following
example.
# lvremove -f vg/-lvol3
Historical logical volume "lvol3" successfully removed
# lvs -H -o name,full_ancestors,full_descendants
LV FAncestors FDescendants
-lvol1 -lvol2,lvol4
-lvol2 -lvol1 lvol4
lvol4 -lvol2,-lvol1
pool
9. A historical logical volume is retained as long as there is a chain that includes live volumes in its
descendants. This means that removing a historical logical volume also removes all of the logical
volumes in the chain if no existing descendant is linked to them, as shown in the following
example.
# lvremove -f vg/lvol4
Automatically removing historical logical volume vg/-lvol1.
Automatically removing historical logical volume vg/-lvol2.
Automatically removing historical logical volume vg/-lvol4.
Logical volume "lvol4" successfully removed
CHAPTER 13. ENABLING CACHING TO IMPROVE LOGICAL VOLUME PERFORMANCE
The following procedures create a special LV from the fast device, and attach this special LV to the
original LV to improve the performance.
dm-cache
This method speeds up access to frequently used data by caching it on the faster volume. The
method caches both read and write operations.
The dm-cache method creates logical volumes of the type cache.
dm-writecache
This method caches only write operations. The faster volume stores the write operations and then
migrates them to the slower disk in the background. The faster volume is usually an SSD or a
persistent memory (PMEM) disk.
The dm-writecache method creates logical volumes of the type writecache.
Additional resources
For information on cache modes and other details, see the lvmcache(7) man page.
The cachevol and cachepool options expose different levels of control over the placement of the
caching components:
With the cachevol option, the faster device stores both the cached copies of data blocks and
the metadata for managing the cache.
With the cachepool option, separate devices can store the cached copies of data blocks and
the metadata for managing the cache.
The dm-writecache method is not compatible with cachepool.
In all configurations, LVM exposes a single resulting device, which groups together all the caching
components. The resulting device has the same name as the original slow logical volume.
Prerequisites
A slow logical volume that you want to speed up using dm-cache exists on your system.
The volume group that contains the slow logical volume also contains an unused physical volume
on a fast block device.
Procedure
cachevol-size
The size of the cachevol volume, such as 5G
fastvol
A name for the cachevol volume
vg
The volume group name
/dev/fast-pv
The path to the fast block device, such as /dev/sdf1
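For example, a command of the following form creates the cachevol volume described by the placeholders above (a sketch):
# lvcreate --size cachevol-size --name fastvol vg /dev/fast-pv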
2. Attach the cachevol volume to the main logical volume to begin caching:
fastvol
The name of the cachevol volume
vg
The volume group name
main-lv
The name of the slow logical volume
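For example (a sketch using the placeholders above):
# lvconvert --type cache --cachevol fastvol vg/main-lv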
Verification steps
Additional resources
For information on this procedure and other details, including administrative examples, see the
lvmcache(7) man page.
Prerequisites
A slow logical volume that you want to speed up using dm-cache exists on your system.
The volume group that contains the slow logical volume also contains an unused physical volume
on a fast block device.
Procedure
cachepool-size
The size of the cachepool, such as 5G
fastpool
A name for the cachepool volume
vg
The volume group name
/dev/fast
The path to the fast block device, such as /dev/sdf1
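For example, a command of the following form creates the cachepool volume described by the placeholders above (a sketch):
# lvcreate --type cache-pool --size cachepool-size --name fastpool vg /dev/fast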
NOTE
You can use --poolmetadata option to specify the location of the pool
metadata when creating the cache-pool.
fastpool
The name of the cachepool volume
vg
The volume group name
main
The name of the slow logical volume
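For example, attaching the cachepool to the main logical volume might look like the following (a sketch using the placeholders above):
# lvconvert --type cache --cachepool fastpool vg/main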
Verification steps
Additional resources
Prerequisites
A slow logical volume that you want to speed up using dm-writecache exists on your system.
The volume group that contains the slow logical volume also contains an unused physical volume
on a fast block device.
Procedure
vg
The volume group name
main-lv
The name of the slow logical volume
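A deactivation command for the slow logical volume, which dm-writecache requires before the cache is attached, might look like the following (a sketch using the placeholders above):
# lvchange --activate n vg/main-lv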
cachevol-size
The size of the cachevol volume, such as 5G
fastvol
A name for the cachevol volume
vg
The volume group name
/dev/fast-pv
The path to the fast block device, such as /dev/sdf1
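For example, creating the cachevol volume on the fast device might look like the following (a sketch using the placeholders above):
# lvcreate --size cachevol-size --name fastvol vg /dev/fast-pv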
3. Attach the cachevol volume to the main logical volume to begin caching:
fastvol
The name of the cachevol volume
vg
The volume group name
main-lv
The name of the slow logical volume
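For example (a sketch using the placeholders above):
# lvconvert --type writecache --cachevol fastvol vg/main-lv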
vg
The volume group name
main-lv
The name of the slow logical volume
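For example, reactivating the resulting logical volume might look like the following (a sketch using the placeholders above):
# lvchange --activate y vg/main-lv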
Verification steps
Additional resources
For information, including administrative examples, see the lvmcache(7) man page.
Prerequisites
Procedure
vg
The volume group name
main-lv
The name of the logical volume where caching is enabled
vg
The volume group name
main-lv
The name of the logical volume where caching is enabled
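For example, detaching the cache might look like one of the following (sketches using the placeholders above; --splitcache keeps the cache volume after detaching it, while --uncache removes it):
# lvconvert --splitcache vg/main-lv
# lvconvert --uncache vg/main-lv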
Verification steps
Additional resources
CHAPTER 14. LOGICAL VOLUME ACTIVATION
There are various circumstances for which you need to make an individual logical volume inactive and
thus unknown to the kernel. You can activate or deactivate individual logical volumes with the -a option of
the lvchange command.
The format for the command to deactivate an individual logical volume is as follows.
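A sketch of the general form, with placeholder names, is:
lvchange -an vg/lv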
The format for the command to activate an individual logical volume is as follows.
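A sketch of the general form, with placeholder names, is:
lvchange -ay vg/lv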
You can activate or deactivate all of the logical volumes in a volume group with the -a option of the
vgchange command. This is the equivalent of running the lvchange -a command on each individual
logical volume in the volume group.
The format for the command to deactivate all of the logical volumes in a volume group is as follows.
vgchange -an vg
The format for the command to activate all of the logical volumes in a volume group is as follows.
vgchange -ay vg
You can use the following configuration options in the /etc/lvm/lvm.conf configuration file to control
autoactivation of logical volumes.
global/event_activation
When event_activation is disabled, systemd/udev will autoactivate logical volumes only on
whichever physical volumes are present during system startup. If all physical volumes have not
appeared yet, then some logical volumes may not be autoactivated.
activation/auto_activation_volume_list
Setting auto_activation_volume_list to an empty list disables autoactivation entirely. Setting
auto_activation_volume_list to specific logical volumes and volume groups limits
autoactivation to those logical volumes.
For information on setting these options, see the /etc/lvm/lvm.conf configuration file.
Through the activation/volume_list setting in the /etc/lvm/lvm.conf file. This allows you to specify
which logical volumes are activated. For information on using this option, see the
/etc/lvm/lvm.conf configuration file.
By means of the activation skip flag for a logical volume. When this flag is set for a logical
volume, the volume is skipped during normal activation commands.
You can set the activation skip flag on a logical volume in the following ways.
You can turn off the activation skip flag when creating a logical volume by specifying the -kn or
--setactivationskip n option of the lvcreate command.
You can turn off the activation skip flag for an existing logical volume by specifying the -kn or --
setactivationskip n option of the lvchange command.
You can turn the activation skip flag on again for a volume where it has been turned off with
the -ky or --setactivationskip y option of the lvchange command.
To determine whether the activation skip flag is set for a logical volume, run the lvs command, which
displays the k attribute as in the following example.
# lvs vg/thin1s1
LV VG Attr LSize Pool Origin
thin1s1 vg Vwi---tz-k 1.00t pool0 thin1
You can activate a logical volume with the k attribute set by using the -K or --ignoreactivationskip
option in addition to the standard -ay or --activate y option.
By default, thin snapshot volumes are flagged for activation skip when they are created. You can control
the default activation skip setting on new thin snapshot volumes with the auto_set_activation_skip
setting in the /etc/lvm/lvm.conf file.
The following command activates a thin snapshot logical volume that has the activation skip flag set.
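For example (a sketch; VG/SnapLV is a placeholder for the snapshot volume):
# lvchange -ay -K VG/SnapLV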
The following command creates a thin snapshot without the activation skip flag.
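For example (a sketch; the volume names are placeholders):
# lvcreate -s -kn -n SnapLV VG/ThinLV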
The following command removes the activation skip flag from a snapshot logical volume.
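For example (a sketch; VG/SnapLV is a placeholder):
# lvchange -kn VG/SnapLV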
Command Activation
lvchange -ay|e Activate the shared logical volume in exclusive mode, allowing only a
single host to activate the logical volume. If the activation fails, as would
happen if the logical volume is active on another host, an error is
reported.
lvchange -asy Activate the shared logical volume in shared mode, allowing multiple
hosts to activate the logical volume concurrently. If the activation fails,
as would happen if the logical volume is active exclusively on another
host, an error is reported. If the logical volume type prohibits shared access,
such as a snapshot, the command will report an error and fail. Logical
volume types that cannot be used concurrently from multiple hosts
include thin, cache, raid, and snapshot.
partial Allows any logical volume with missing physical volumes to be activated.
This option should be used for recovery or repair only.
The filter is a list of patterns that LVM applies to each device found by a scan of the /dev directory, or
the directory specified by the dir keyword in the /etc/lvm/lvm.conf file. Patterns are regular expressions
delimited by any character and preceded by a (for accept) or r (for reject). The first regular expression in
the list that matches a device determines if LVM accepts or rejects (ignores) the device. LVM accepts
devices that do not match any patterns.
The following is the default configuration of the filter, which scans all devices:
filter = [ "a/.*/" ]
WARNING
Some of the examples presented here might unintentionally match extra devices on
the system and might not represent recommended practice for your system. For
example, a/loop/ is equivalent to a/.*loop.*/ and would match
/dev/solooperation/lvol1.
The following filter adds all discovered devices, which is the default behavior because no filter is
configured in the configuration file:
filter = [ "a/.*/" ]
The following filter removes the cdrom device in order to avoid delays if the drive contains no
media:
filter = [ "r|^/dev/cdrom$|" ]
The following filter adds all loop devices and removes all other block devices:
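A sketch of such a filter, following the pattern syntax described above, is:
filter = [ "a/loop/", "r/.*/" ]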
The following filter adds all loop and IDE devices and removes all other block devices:
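A sketch of such a filter is:
filter = [ "a|loop|", "a|/dev/hd.*|", "r|.*|" ]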
CHAPTER 15. CONTROLLING LVM DEVICE SCANNING
The following filter adds just partition 8 on the first IDE drive and removes all other block
devices:
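A sketch of such a filter is:
filter = [ "a|^/dev/hda8$|", "r|.*|" ]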
Prerequisites
Procedure
1. Test your device filter pattern without modifying the /etc/lvm/lvm.conf file.
Use an LVM command with the --config 'devices{ filter = [ your device filter pattern ] }'
option. For example:
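A test invocation might look like the following (a sketch; the pattern shown is a hypothetical filter that accepts only /dev/sdb devices):
# lvs --config 'devices{ filter = [ "a|/dev/sdb.*|", "r|.*|" ] }'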
2. Edit the filter option in the /etc/lvm/lvm.conf configuration file to use your new device filter
pattern.
3. Check that no physical volumes or volume groups that you want to use are missing with the new
configuration:
# pvscan
# vgscan
4. Rebuild the initramfs file system so that LVM scans only the necessary devices upon reboot:
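For example (a sketch; this rebuilds the initramfs for the running kernel):
# dracut --force --verbose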
By default, LVM commands do not scan the logical volumes on your system. This default behavior
provides the following advantages:
If there are many active logical volumes on the system, every LVM command would require
additional time, negatively impacting performance and causing unwanted delays or timeouts.
If logical volumes contain physical volumes from a guest VM image, the host usually does not
want to scan or use those layered physical volumes which belong to the guest. Note, however,
that in the cases where a guest VM's physical volume exists directly on a SCSI device visible to
the host, in order to prevent LVM on the host from accessing those physical volumes you will
need to configure a filter, as described in Chapter 15, Controlling LVM device scanning.
Scanning logical volumes may be necessary when layering physical volumes on top of logical volumes is
intentional. This will allow the pvcreate command to be run on a logical volume. To configure LVM to
scan all logical volumes, set the scan_lvs configuration option in the /etc/lvm/lvm.conf file to
scan_lvs=1. To restrict which logical volumes LVM commands scan, you can then set up device filters in
the /etc/lvm/lvm.conf configuration file, as described in Chapter 15, Controlling LVM device scanning.
CHAPTER 17. CONTROLLING LVM ALLOCATION
The complete set of unallocated physical extents in the volume group is generated for
consideration. If you supply any ranges of physical extents at the end of the command line, only
unallocated physical extents within those ranges on the specified physical volumes are
considered.
Each allocation policy is tried in turn, starting with the strictest policy (contiguous) and ending
with the allocation policy specified using the --alloc option or set as the default for the
particular logical volume or volume group. For each policy, working from the lowest-numbered
logical extent of the empty logical volume space that needs to be filled, as much space as
possible is allocated, according to the restrictions imposed by the allocation policy. If more
space is needed, LVM moves on to the next policy.
An allocation policy of contiguous requires that the physical location of any logical extent that
is not the first logical extent of a logical volume is adjacent to the physical location of the logical
extent immediately preceding it.
When a logical volume is striped or mirrored, the contiguous allocation restriction is applied
independently to each stripe or mirror image (leg) that needs space.
An allocation policy of cling requires that the physical volume used for any logical extent be
added to an existing logical volume that is already in use by at least one logical extent earlier in
that logical volume. If the configuration parameter allocation/cling_tag_list is defined, then
two physical volumes are considered to match if any of the listed tags is present on both
physical volumes. This allows groups of physical volumes with similar properties (such as their
physical location) to be tagged and treated as equivalent for allocation purposes.
When a Logical Volume is striped or mirrored, the cling allocation restriction is applied
independently to each stripe or mirror image (leg) that needs space.
An allocation policy of normal will not choose a physical extent that shares the same physical
volume as a logical extent already allocated to a parallel logical volume (that is, a different stripe
or mirror image/leg) at the same offset within that parallel logical volume.
When allocating a mirror log at the same time as logical volumes to hold the mirror data, an
allocation policy of normal will first try to select different physical volumes for the log and the
data. If that is not possible and the allocation/mirror_logs_require_separate_pvs
configuration parameter is set to 0, it will then allow the log to share physical volume(s) with
part of the data.
Similarly, when allocating thin pool metadata, an allocation policy of normal will follow the same
considerations as for allocation of a mirror log, based on the value of the
allocation/thin_pool_metadata_require_separate_pvs configuration parameter.
If there are sufficient free extents to satisfy an allocation request but a normal allocation policy
would not use them, the anywhere allocation policy will, even if that reduces performance by
placing two stripes on the same physical volume.
NOTE
If you rely upon any layout behavior beyond that documented in this section according to
the defined allocation policies, you should note that this might change in future versions
of the code. For example, if you supply on the command line two empty physical volumes
that have an identical number of free physical extents available for allocation, LVM
currently considers using each of them in the order they are listed; there is no guarantee
that future releases will maintain that property. If it is important to obtain a specific layout
for a particular Logical Volume, then you should build it up through a sequence of
lvcreate and lvconvert steps such that the allocation policies applied to each step leave
LVM no discretion over the layout.
To view the way the allocation process currently works in any specific case, you can read the debug
logging output, for example by adding the -vvvv option to a command.
The following command disallows the allocation of physical extents on the physical volume /dev/sdk1:
# pvchange -x n /dev/sdk1
You can also use the -xy arguments of the pvchange command to allow allocation where it had
previously been disallowed.
For example, if you have logical volumes that are mirrored between two sites within a single volume
group, you can tag the physical volumes according to where they are situated with @site1 and @site2
tags. You can then specify the following line in the lvm.conf file:
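A sketch of that line, placed in the allocation section of lvm.conf:
cling_tag_list = [ "@site1", "@site2" ]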
In the following example, the lvm.conf file has been modified to contain the following line:
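A sketch of that line, matching the A and B tags used in the example below:
cling_tag_list = [ "@A", "@B" ]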
Also in this example, a volume group taft has been created that consists of the physical volumes
/dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, /dev/sdf1, /dev/sdg1, and /dev/sdh1. These physical
volumes have been tagged with tags A, B, and C. The example does not use the C tag, which
demonstrates that LVM uses the tags to select which physical volumes to use for the mirror legs.
The following command creates a 10 gigabyte mirrored volume from the volume group taft.
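For example, a RAID1 volume named mirror of that size can be created with a command of this
general form:
# lvcreate --type raid1 -m 1 -L 10G -n mirror taft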
The following command shows which devices are used for the mirror legs and RAID metadata
subvolumes.
# lvs -a -o +devices
LV VG Attr LSize Log Cpy%Sync Devices
mirror taft Rwi-a-r--- 10.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0)
[mirror_rimage_0] taft iwi-aor--- 10.00g /dev/sdb1(1)
[mirror_rimage_1] taft iwi-aor--- 10.00g /dev/sdc1(1)
[mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0)
[mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)
The following command extends the size of the mirrored volume, using the cling allocation policy to
indicate that the mirror legs should be extended using physical volumes with the same tag.
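For example, a command of this general form extends the volume by 10 gigabytes with the cling
allocation policy:
# lvextend --alloc cling -L +10G taft/mirror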
The following display command shows that the mirror legs have been extended using physical volumes
with the same tag as the leg. Note that the physical volumes with a tag of C were ignored.
# lvs -a -o +devices
LV VG Attr LSize Log Cpy%Sync Devices
mirror taft Rwi-a-r--- 20.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0)
[mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdb1(1)
[mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdg1(0)
[mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdc1(1)
[mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdd1(0)
[mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0)
[mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)
Physical volume (PV) tags, rather than logical volume (LV) or volume group (VG) tags, control
allocation in LVM RAID, because allocation in LVM occurs at the PV level based on allocation policies.
To distinguish storage types by their different properties, tag them appropriately (for example, NVMe,
SSD, or HDD). Red Hat recommends that you tag each new PV appropriately after you add it to a VG.
This procedure adds object tags to your logical volumes, assuming /dev/sda is an SSD, and /dev/sd[b-f]
are HDDs with one partition.
Prerequisites
Procedure
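A minimal sketch of such tagging commands, assuming the illustrative tag names ssd and hdd:
# pvchange --addtag ssd /dev/sda
# pvchange --addtag hdd /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1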
Additional resources
CHAPTER 18. GROUPING LVM OBJECTS WITH TAGS
Tags are given on the command line in place of PV, VG or LV arguments. Tags should be prefixed with @
to avoid ambiguity. Each tag is expanded by replacing it with all objects possessing that tag which are of
the type expected by its position on the command line.
LVM tags are strings of up to 1024 characters. LVM tags cannot start with a hyphen.
A valid tag consists of a limited range of characters only. The allowed characters are A-Z a-z 0-9 _ + . - /
= ! : # &.
Only objects in a volume group can be tagged. Physical volumes lose their tags if they are removed from
a volume group; this is because tags are stored as part of the volume group metadata and that is
deleted when a physical volume is removed.
Procedure
Use the following command to list all the logical volumes with the database tag:
# lvs @database
Use the following command to list the currently active host tags:
# lvm tags
Prerequisites
One or more physical volumes, volume groups, or logical volumes are created.
Procedure
To add tags to physical volumes, add the --addtag option to the pvchange command.
To add tags to volume groups, add the --addtag option to the vgchange or vgcreate
commands.
To add tags to logical volumes, add the --addtag option to the lvchange or lvcreate
commands.
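For example, assuming a tag named database, a volume group myvg, a logical volume myvg/mylv, and
a physical volume /dev/sdb1:
# pvchange --addtag database /dev/sdb1
# vgchange --addtag database myvg
# lvchange --addtag database myvg/mylv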
Prerequisites
Object tags on physical volumes, volume groups, or logical volumes are created.
Procedure
To delete tags from physical volumes, add the --deltag option to the pvchange command.
To delete tags from volume groups, add the --deltag option to the vgchange command.
To delete tags from logical volumes, add the --deltag option to the lvchange command.
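For example, using the same illustrative names as above:
# pvchange --deltag database /dev/sdb1
# vgchange --deltag database myvg
# lvchange --deltag database myvg/mylv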
Procedure
Set hosttags = 1 in the tags section of the /etc/lvm/lvm.conf file to automatically define a host
tag using the machine's host name.
This allows you to replicate a common configuration file across all your machines so that they
hold identical copies of the file, while the behavior can still differ between machines according
to the host name.
For each host tag, an extra configuration file is read if it exists: lvm_hosttag.conf, where hosttag is the
name of the tag. If that file defines new tags, then further configuration files are appended to the list of
files to read in.
For example, the following entry in the configuration file always defines tag1, and defines tag2 if the
host name is host1:
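A sketch of such an entry, using the host_list keyword from the tags section of lvm.conf:
tags {
    tag1 { }
    tag2 { host_list = [ "host1" ] }
}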
Procedure
For example, the following entry acts as a filter for activation requests (such as vgchange -ay) and only
activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on
that host:
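Such a filter uses the volume_list setting in the activation section of lvm.conf; a sketch:
activation {
    volume_list = [ "vg1/lvol0", "@database" ]
}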
The special match @* causes a match only if any metadata tag matches any host tag on that
machine.
As another example, consider a situation where every machine in the cluster has the following entry in
the configuration file:
tags { hosttags = 1 }
If you want to activate vg1/lvol2 only on the host named db2:
1. Run lvchange --addtag @db2 vg1/lvol2 from any host in the cluster.
2. Run lvchange -ay vg1/lvol2.
This solution involves storing host names inside the volume group metadata.
CHAPTER 19. TROUBLESHOOTING LVM
Procedure
Add the -v argument to any LVM command to increase the verbosity level of the command
output. Verbosity can be further increased by adding additional v’s. A maximum of four
such v’s is allowed, for example, -vvvv.
In the log section of the /etc/lvm/lvm.conf configuration file, increase the value of the level
option. This causes LVM to provide more details in the system log.
If the problem is related to the logical volume activation, enable LVM to log messages
during the activation:
i. Set the activation = 1 option in the log section of the /etc/lvm/lvm.conf configuration
file.
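A sketch of the relevant settings in the log section of /etc/lvm/lvm.conf (the values are illustrative):
log {
    level = 7
    activation = 1
}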
Capture the diagnostic information with the lvmdump utility:
# lvmdump
Display additional logical volume information:
# lvs -v
Display information about all devices, including those that are not initialized as physical
volumes:
# pvs --all
Examine the last backup of the LVM metadata in the /etc/lvm/backup/ directory and
archived versions in the /etc/lvm/archive/ directory.
Display the current LVM configuration:
# lvmconfig
Check the /run/lvm/hints cache file for a record of which devices have physical volumes on
them.
Additional resources
Procedure
In this example, one of the devices that made up the volume group myvg failed. The volume
group is unusable but you can see information about the failed device.
In this example, a failed device caused a logical volume in the volume group to fail. The
command output shows the failed logical volumes.
LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
mylv myvg -wi-a---p- 20.00g                                                     [unknown](0)
                                                                                [unknown](5120),/dev/sdc1(0)
The following examples show the command output from the vgs and lvs utilities when a leg
of a mirrored logical volume has failed.
Procedure
3. Remove all the logical volumes that used the lost physical volume from the volume group:
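For example, assuming the affected logical volume is named mylv:
# lvremove myvg/mylv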
4. Optional: If you accidentally removed logical volumes that you wanted to keep, you can reverse
the vgreduce operation:
# vgcfgrestore myvg
This procedure finds the latest archived metadata of a physical volume that is missing or corrupted.
Procedure
1. Find the archived metadata file of the volume group that contains the physical volume. The
archived metadata files are located at the /etc/lvm/archive/volume-group-name_backup-
number.vg path:
# cat /etc/lvm/archive/myvg_00000-1248998876.vg
Replace 00000-1248998876 with the backup-number. Select the last known valid metadata
file, which has the highest number for the volume group.
2. Find the UUID of the physical volume. Use one of the following methods.
Examine the archived metadata file. Find the UUID as the value labeled id = in the
physical_volumes section of the volume group configuration.
This procedure restores metadata on a physical volume that is either corrupted or replaced with a new
device. You might be able to recover the data from the physical volume by rewriting the metadata area
on the physical volume.
WARNING
Do not attempt this procedure on a working LVM logical volume. You will lose your
data if you specify the incorrect UUID.
Prerequisites
You have identified the metadata of the missing physical volume. For details, see Finding the
metadata of a missing LVM physical volume.
Procedure
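Restore the metadata area on the physical volume. A minimal sketch of the command, using
placeholder values for the UUID, the archive file, and the block device:
# pvcreate --uuid physical-volume-uuid --restorefile /etc/lvm/archive/volume-group-name_backup-number.vg block-device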
NOTE
The command overwrites only the LVM metadata areas and does not affect the
existing data areas.
The following example labels the /dev/vdb1 device as a physical volume with the following
properties:
The metadata information contained in VG_00050.vg, which is the most recent good
archived metadata for the volume group
...
Physical volume "/dev/vdb1" successfully created
# vgcfgrestore myvg
4. If the segment type of the logical volumes is RAID, resynchronize the logical volumes:
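For example, assuming a logical volume named mylv in the volume group myvg:
# lvchange --resync myvg/mylv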
6. If the on-disk LVM metadata takes at least as much space as what overrode it, this procedure
can recover the physical volume. If what overrode the metadata went past the metadata area,
the data on the volume may have been affected. You might be able to use the fsck command to
recover that data.
Verification steps
As a result of the rounding, the reported value of free space might be larger than what the physical
extents on the volume group provide. If you attempt to create a logical volume the size of the reported
free space, you might get the following error:
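The error has the following general form (the numbers depend on your volume group):
Insufficient free extents (<free-extents>) in volume group <volume-group>: <required-extents> required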
To work around the error, you must examine the number of free physical extents on the volume group,
which is the accurate value of free space. You can then use the number of extents to create the logical
volume successfully.
Procedure
1. Check the actual number of free physical extents in the volume group:
# vgdisplay myvg
For example, the following volume group has 8780 free physical extents:
2. Create the logical volume. Enter the volume size in extents rather than bytes.
Example 19.6. Creating a logical volume by specifying the number of extents
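A sketch of such a command, using the 8780 free extents from the example above and an
illustrative volume name:
# lvcreate --extents 8780 --name mylv myvg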
Example 19.7. Creating a logical volume to occupy all the remaining space
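A sketch of such a command (100%FREE allocates all remaining free extents; the names are
illustrative):
# lvcreate --extents 100%FREE --name mylv myvg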
Alternatively, you can extend the logical volume to use a percentage of the remaining free
space in the volume group. For example:
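A sketch of such a command (the percentage and names are illustrative):
# lvextend --extents +20%FREE myvg/mylv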
Verification steps
Check the number of extents that the volume group now uses:
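For example (the vg_free_count and vg_extent_count fields report the free and total physical
extents):
# vgs --options +vg_free_count,vg_extent_count myvg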
Procedure
1. Optional: Limit the I/O bandwidth that the scrubbing process uses.
When you perform a RAID scrubbing operation, the background I/O required by the sync
operations can crowd out other I/O to LVM devices, such as updates to volume group metadata.
This might cause the other LVM operations to slow down. You can control the rate of the
scrubbing operation by implementing recovery throttling.
Add the following options to the lvchange --syncaction commands in the next steps:
--maxrecoveryrate Rate[bBsSkKmMgG]
Sets the maximum recovery rate so that the operation does not crowd out nominal I/O
operations. Setting the recovery rate to 0 means that the operation is unbounded.
--minrecoveryrate Rate[bBsSkKmMgG]
Sets the minimum recovery rate to ensure that I/O for sync operations achieves a minimum
throughput, even when heavy nominal I/O is present.
Specify the Rate value as an amount per second for each device in the array. If you provide no
suffix, the options assume kiB per second per device.
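For example, a scrubbing command with a throttled recovery rate might look like this (the rate and
names are illustrative):
# lvchange --syncaction check --maxrecoveryrate 128 my_vg/my_lv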
NOTE
The lvchange --syncaction repair operation does not perform the same function as the
lvconvert --repair operation. The lvchange --syncaction repair operation initiates a
background synchronization operation on the array, whereas the lvconvert --repair
operation repairs or replaces failed devices in a mirror or RAID logical volume.
The raid_sync_action field displays the current synchronization operation that the RAID
volume is performing. It can be one of the following values:
idle
All sync operations complete (doing nothing)
resync
Initializing an array or recovering after a machine failure
recover
Replacing a device in the array
check
Looking for array inconsistencies
repair
Looking for and repairing inconsistencies
The lv_attr field provides additional indicators. Bit 9 of this field displays the health of the
logical volume, and it supports the following indicators:
m (mismatches) indicates that there are discrepancies in a RAID logical volume. This
character is shown after a scrubbing operation has detected that portions of the RAID
are not coherent.
r (refresh) indicates that a device in a RAID array has suffered a failure and the kernel
regards it as failed, even though LVM can read the device label and considers the
device to be operational. Refresh the logical volume to notify the kernel that the device
is now available, or replace the device if you suspect that it failed.
Additional resources
For more information, see the lvchange(8) and lvmraid(7) man pages.
Rather than removing a failed device unconditionally and potentially allocating a replacement, LVM
allows you to replace a failed device in a RAID volume in a one-step solution by using the --repair
argument of the lvconvert command.
Prerequisites
Procedure
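A minimal sketch of the repair command, assuming a RAID logical volume my_lv in the volume group
my_vg:
# lvconvert --repair my_vg/my_lv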
Verification steps
Prerequisites
The volume group includes a physical volume that provides enough free capacity to replace the
failed device.
If no physical volume with sufficient free extents is available on the volume group, add a new,
sufficiently large physical volume using the vgextend utility.
Procedure
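You can display the devices in the RAID logical volume with a command such as the following (the
volume name my_vg/my_lv is assumed to match the output below):
# lvs --all --options name,copy_percent,devices my_vg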
LV Cpy%Sync Devices
my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
[my_lv_rimage_0] /dev/sde1(1)
[my_lv_rimage_1] /dev/sdc1(1)
[my_lv_rimage_2] /dev/sdd1(1)
[my_lv_rmeta_0] /dev/sde1(0)
[my_lv_rmeta_1] /dev/sdc1(0)
[my_lv_rmeta_2] /dev/sdd1(0)
2. If the /dev/sdc device fails, the output of the lvs command is as follows:
[my_lv_rimage_1] [unknown](1)
[my_lv_rimage_2] /dev/sdd1(1)
[my_lv_rmeta_0] /dev/sde1(0)
[my_lv_rmeta_1] [unknown](0)
[my_lv_rmeta_2] /dev/sdd1(0)
Optional: To manually specify the physical volume that replaces the failed device, add the
physical volume at the end of the command:
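For example, assuming /dev/sdf1 is the replacement physical volume (the device name is illustrative):
# lvconvert --repair my_vg/my_lv /dev/sdf1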
Until you remove the failed device from the volume group, LVM utilities still indicate that LVM
cannot find the failed device.
# vgreduce --removemissing VG
You can troubleshoot these warnings to understand why LVM displays them, or to hide the warnings.
When you use LVM with multipathed storage, the same LUN is reachable through multiple paths, such
as /dev/sdd and /dev/sdf. The multipath software then creates a single new device that maps to those
individual paths, for example a device such as /dev/sddlmab when Hitachi Dynamic Link Manager
(HDLM) provides the multipathing. Because each LUN has multiple device nodes in the /dev directory
that point to the same underlying data, all the device nodes contain the same LVM metadata.
As a result of the multiple device nodes, LVM tools find the same metadata multiple times and report
them as duplicates.
If you list the current DM Multipath topology using the multipath -ll command, you can find both
/dev/sdd and /dev/sdf under the same multipath map.
These duplicate messages are only warnings and do not mean that the LVM operation has failed.
Rather, they are alerting you that LVM uses only one of the devices as a physical volume and ignores
the others.
If the messages indicate that LVM chooses the incorrect device or if the warnings are disruptive to
users, you can apply a filter. The filter configures LVM to search only the necessary devices for
physical volumes, and to leave out any underlying paths to multipath devices. As a result, the
duplicate PV warnings do not appear.
Multipath maps
The following examples show a duplicate PV warning for two devices that are both multipath maps.
The duplicate physical volumes are located on two different devices rather than on two different
paths to the same device; the two devices reported in the warning are both multipath maps.
This situation is more serious than duplicate warnings for devices that are both single paths to the
same device. These warnings often mean that the machine is accessing devices that it should not
access: for example, LUN clones or mirrors.
Unless you clearly know which devices you should remove from the machine, this situation might be
unrecoverable. Red Hat recommends that you contact Red Hat Technical Support to address this
issue.
The filter is a list of patterns that LVM applies to each device found by a scan of the /dev directory, or
the directory specified by the dir keyword in the /etc/lvm/lvm.conf file. Patterns are regular expressions
delimited by any character and preceded by a (accept) or r (reject). The first regular expression in
the list that matches a device determines if LVM accepts or rejects (ignores) the device. LVM accepts
devices that do not match any patterns.
The following is the default configuration of the filter, which scans all devices:
filter = [ "a/.*/" ]
The filter that you configure must include all devices that LVM needs to check for metadata, such as
the local hard drive with the root volume group on it and any multipathed devices. By rejecting the
underlying paths to a multipath device (such as /dev/sdb, /dev/sdd, and so on), you can avoid these
duplicate PV warnings, because LVM finds each unique metadata area once on the multipath device
itself.
This filter accepts the second partition on the first hard drive and any DM Multipath devices, but
rejects everything else:
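A sketch of such a filter (the device patterns are illustrative):
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]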
This filter accepts all HP SmartArray controllers and any EMC PowerPath devices:
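A sketch of such a filter, assuming SmartArray devices appear under /dev/cciss/ and PowerPath
devices as /dev/emcpower*:
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]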
This filter accepts any partitions on the first IDE drive and any multipath devices:
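A sketch of such a filter, assuming the first IDE drive is /dev/hda:
filter = [ "a|/dev/hda[0-9]+|", "a|/dev/mapper/.*|", "r|.*|" ]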
Prerequisites
Procedure
1. Test your device filter pattern without modifying the /etc/lvm/lvm.conf file.
Use an LVM command with the --config 'devices{ filter = [ your device filter pattern ] }'
option. For example:
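For example (the filter pattern shown is illustrative):
# lvs --config 'devices{ filter = [ "a|/dev/emcpower.*|", "r|.*|" ] }'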
2. Edit the filter option in the /etc/lvm/lvm.conf configuration file to use your new device filter
pattern.
3. Check that no physical volumes or volume groups that you want to use are missing with the new
configuration:
# pvscan
# vgscan
4. Rebuild the initramfs file system so that LVM scans only the necessary devices upon reboot:
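You can rebuild it with the dracut utility, for example:
# dracut --force --verbose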