Red Hat Enterprise Linux 8
Configuring and Managing Virtualization
Setting up your host, creating and administering virtual machines, and understanding
virtualization features in Red Hat Enterprise Linux 8
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document describes how to manage virtualization in Red Hat Enterprise Linux 8 (RHEL 8). In
addition to general information about virtualization, it describes how to manage virtualization using
command-line utilities, as well as using the web console.
Table of Contents
MAKING OPEN SOURCE MORE INCLUSIVE 8
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION 9
CHAPTER 1. INTRODUCING VIRTUALIZATION IN RHEL 10
1.1. WHAT IS VIRTUALIZATION? 10
1.2. ADVANTAGES OF VIRTUALIZATION 10
1.3. VIRTUAL MACHINE COMPONENTS AND THEIR INTERACTION 11
1.4. TOOLS AND INTERFACES FOR VIRTUALIZATION MANAGEMENT 12
1.5. RED HAT VIRTUALIZATION SOLUTIONS 13
CHAPTER 2. GETTING STARTED WITH VIRTUALIZATION 15
2.1. ENABLING VIRTUALIZATION 15
2.2. CREATING VIRTUAL MACHINES 17
2.2.1. Creating virtual machines using the command-line interface 17
2.2.2. Creating virtual machines and installing guest operating systems using the web console 20
2.2.2.1. Creating virtual machines using the web console 20
2.2.2.2. Creating virtual machines by importing disk images using the web console 22
2.2.2.3. Installing guest operating systems using the web console 23
2.3. STARTING VIRTUAL MACHINES 24
2.3.1. Starting a virtual machine using the command-line interface 24
2.3.2. Starting virtual machines using the web console 25
2.3.3. Starting virtual machines automatically when the host starts 26
2.4. CONNECTING TO VIRTUAL MACHINES 27
2.4.1. Interacting with virtual machines using the web console 28
2.4.1.1. Viewing the virtual machine graphical console in the web console 28
2.4.1.2. Viewing the graphical console in a remote viewer using the web console 29
2.4.1.3. Viewing the virtual machine serial console in the web console 31
2.4.2. Opening a virtual machine graphical console using Virt Viewer 32
2.4.3. Connecting to a virtual machine using SSH 33
2.4.4. Opening a virtual machine serial console 35
2.4.5. Setting up easy access to remote virtualization hosts 36
2.5. SHUTTING DOWN VIRTUAL MACHINES 38
2.5.1. Shutting down a virtual machine using the command-line interface 38
2.5.2. Shutting down and restarting virtual machines using the web console 39
2.5.2.1. Shutting down virtual machines in the web console 39
2.5.2.2. Restarting virtual machines using the web console 39
2.5.2.3. Sending non-maskable interrupts to VMs using the web console 40
2.6. DELETING VIRTUAL MACHINES 41
2.6.1. Deleting virtual machines using the command line interface 41
2.6.2. Deleting virtual machines using the web console 41
2.7. ADDITIONAL RESOURCES 42
CHAPTER 3. GETTING STARTED WITH VIRTUALIZATION ON IBM POWER 43
3.1. ENABLING VIRTUALIZATION ON IBM POWER 43
3.2. HOW VIRTUALIZATION ON IBM POWER DIFFERS FROM AMD64 AND INTEL 64 44
CHAPTER 4. GETTING STARTED WITH VIRTUALIZATION ON IBM Z 47
4.1. ENABLING VIRTUALIZATION ON IBM Z 47
4.2. HOW VIRTUALIZATION ON IBM Z DIFFERS FROM AMD64 AND INTEL 64 48
4.3. ADDITIONAL RESOURCES 50
CHAPTER 5. MANAGING VIRTUAL MACHINES IN THE WEB CONSOLE 52
CHAPTER 6. VIEWING INFORMATION ABOUT VIRTUAL MACHINES 57
6.1. VIEWING VIRTUAL MACHINE INFORMATION USING THE COMMAND-LINE INTERFACE 57
6.2. VIEWING VIRTUAL MACHINE INFORMATION USING THE WEB CONSOLE 59
6.2.1. Viewing a virtualization overview in the web console 59
6.2.2. Viewing storage pool information using the web console 60
6.2.3. Viewing basic virtual machine information in the web console 62
6.2.4. Viewing virtual machine resource usage in the web console 64
6.2.5. Viewing virtual machine disk information in the web console 65
6.2.6. Viewing and editing virtual network interface information in the web console 66
6.3. SAMPLE VIRTUAL MACHINE XML CONFIGURATION 67
CHAPTER 7. SAVING AND RESTORING VIRTUAL MACHINES 73
7.1. HOW SAVING AND RESTORING VIRTUAL MACHINES WORKS 73
7.2. SAVING A VIRTUAL MACHINE USING THE COMMAND LINE INTERFACE 73
7.3. STARTING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE 74
7.4. STARTING VIRTUAL MACHINES USING THE WEB CONSOLE 75
CHAPTER 8. CLONING VIRTUAL MACHINES 77
8.1. HOW CLONING VIRTUAL MACHINES WORKS 77
8.2. CREATING VIRTUAL MACHINE TEMPLATES 77
8.2.1. Creating a virtual machine template using virt-sysprep 77
8.2.2. Creating a virtual machine template manually 79
8.3. CLONING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE 81
8.4. CLONING A VIRTUAL MACHINE USING THE WEB CONSOLE 83
CHAPTER 9. MIGRATING VIRTUAL MACHINES 84
9.1. HOW MIGRATING VIRTUAL MACHINES WORKS 84
9.2. BENEFITS OF MIGRATING VIRTUAL MACHINES 85
9.3. LIMITATIONS FOR MIGRATING VIRTUAL MACHINES 85
9.4. SHARING VIRTUAL MACHINE DISK IMAGES WITH OTHER HOSTS 86
9.5. MIGRATING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE 87
9.6. LIVE MIGRATING A VIRTUAL MACHINE USING THE WEB CONSOLE 90
9.7. SUPPORTED HOSTS FOR VIRTUAL MACHINE MIGRATION 92
9.8. ADDITIONAL RESOURCES 92
CHAPTER 10. MANAGING VIRTUAL DEVICES 93
10.1. HOW VIRTUAL DEVICES WORK 93
10.2. VIEWING DEVICES ATTACHED TO VIRTUAL MACHINES USING THE WEB CONSOLE 94
10.3. ATTACHING DEVICES TO VIRTUAL MACHINES 95
10.4. MODIFYING DEVICES ATTACHED TO VIRTUAL MACHINES 96
10.5. REMOVING DEVICES FROM VIRTUAL MACHINES 98
10.6. TYPES OF VIRTUAL DEVICES 99
10.7. MANAGING VIRTUAL USB DEVICES 100
10.7.1. Attaching USB devices to virtual machines 101
10.7.2. Removing USB devices from virtual machines 102
10.7.3. Additional resources 102
10.8. MANAGING VIRTUAL OPTICAL DRIVES 102
10.8.1. Attaching optical drives to virtual machines 103
CHAPTER 11. MANAGING STORAGE FOR VIRTUAL MACHINES 115
11.1. UNDERSTANDING VIRTUAL MACHINE STORAGE 115
11.1.1. Introduction to storage pools 115
11.1.2. Introduction to storage volumes 116
11.1.3. Storage management using libvirt 116
11.1.4. Overview of storage management 116
11.1.5. Supported and unsupported storage pool types 117
11.2. VIEWING VIRTUAL MACHINE STORAGE INFORMATION USING THE CLI 117
11.2.1. Viewing storage pool information using the CLI 118
11.2.2. Viewing storage volume information using the CLI 118
11.3. CREATING AND ASSIGNING STORAGE POOLS FOR VIRTUAL MACHINES USING THE CLI 118
11.3.1. Creating directory-based storage pools using the CLI 119
11.3.2. Creating disk-based storage pools using the CLI 120
11.3.3. Creating filesystem-based storage pools using the CLI 122
11.3.4. Creating GlusterFS-based storage pools using the CLI 125
11.3.5. Creating iSCSI-based storage pools using the CLI 126
11.3.6. Creating LVM-based storage pools using the CLI 128
11.3.7. Creating NFS-based storage pools using the CLI 130
11.3.8. Creating SCSI-based storage pools with vHBA devices using the CLI 131
11.4. PARAMETERS FOR CREATING STORAGE POOLS 133
11.4.1. Directory-based storage pool parameters 133
11.4.2. Disk-based storage pool parameters 133
11.4.3. Filesystem-based storage pool parameters 134
11.4.4. GlusterFS-based storage pool parameters 136
11.4.5. iSCSI-based storage pool parameters 136
11.4.6. LVM-based storage pool parameters 138
11.4.7. NFS-based storage pool parameters 139
11.4.8. Parameters for SCSI-based storage pools with vHBA devices 140
11.5. CREATING AND ASSIGNING STORAGE VOLUMES USING THE CLI 143
11.6. DELETING STORAGE FOR VIRTUAL MACHINES USING THE CLI 144
11.6.1. Deleting storage pools using the CLI 144
11.6.2. Deleting storage volumes using the CLI 145
11.7. MANAGING STORAGE FOR VIRTUAL MACHINES USING THE WEB CONSOLE 146
11.7.1. Viewing storage pool information using the web console 147
11.7.2. Creating storage pools using the web console 148
11.7.3. Removing storage pools using the web console 150
11.7.4. Deactivating storage pools using the web console 151
11.7.5. Creating storage volumes using the web console 152
11.7.6. Removing storage volumes using the web console 154
11.7.7. Viewing virtual machine disk information in the web console 156
11.7.8. Adding new disks to virtual machines using the web console 157
11.7.9. Attaching existing disks to virtual machines using the web console 159
11.7.10. Detaching disks from virtual machines using the web console 160
CHAPTER 12. MANAGING GPU DEVICES IN VIRTUAL MACHINES 166
12.1. ASSIGNING A GPU TO A VIRTUAL MACHINE 166
12.2. MANAGING NVIDIA VGPU DEVICES 169
12.2.1. Setting up NVIDIA vGPU devices 169
12.2.2. Removing NVIDIA vGPU devices 171
12.2.3. Obtaining NVIDIA vGPU information about your system 172
12.2.4. Remote desktop streaming services for NVIDIA vGPU 174
12.2.5. Additional resources 174
CHAPTER 13. CONFIGURING VIRTUAL MACHINE NETWORK CONNECTIONS 175
13.1. UNDERSTANDING VIRTUAL NETWORKING 175
13.1.1. How virtual networks work 175
13.1.2. Virtual networking default configuration 176
13.2. USING THE WEB CONSOLE FOR MANAGING VIRTUAL MACHINE NETWORK INTERFACES 177
13.2.1. Viewing and editing virtual network interface information in the web console 177
13.2.2. Adding and connecting virtual network interfaces in the web console 178
13.2.3. Disconnecting and removing virtual network interfaces in the web console 179
13.3. RECOMMENDED VIRTUAL MACHINE NETWORKING CONFIGURATIONS USING THE COMMAND-LINE INTERFACE 180
13.3.1. Configuring externally visible virtual machines using the command-line interface 180
13.3.2. Isolating virtual machines from each other using the command-line interface 181
13.4. RECOMMENDED VIRTUAL MACHINE NETWORKING CONFIGURATIONS USING THE WEB CONSOLE 183
13.4.1. Configuring externally visible virtual machines using the web console 183
13.4.2. Isolating virtual machines from each other using the web console 185
13.5. TYPES OF VIRTUAL MACHINE NETWORK CONNECTIONS 186
13.5.1. Virtual networking with network address translation 187
13.5.2. Virtual networking in routed mode 187
13.5.3. Virtual networking in bridged mode 189
13.5.4. Virtual networking in isolated mode 190
13.5.5. Virtual networking in open mode 191
13.5.6. Direct attachment of the virtual network device 191
13.5.7. Comparison of virtual machine connection types 191
13.6. BOOTING VIRTUAL MACHINES FROM A PXE SERVER 192
13.6.1. Setting up a PXE boot server on a virtual network 192
13.6.2. Booting virtual machines using PXE and a virtual network 193
13.6.3. Booting virtual machines using PXE and a bridged network 194
13.7. ADDITIONAL RESOURCES 195
CHAPTER 14. SHARING FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES 196
14.1. SHARING FILES BETWEEN THE HOST AND LINUX VIRTUAL MACHINES 196
14.2. SHARING FILES BETWEEN THE HOST AND WINDOWS VIRTUAL MACHINES 198
CHAPTER 15. SECURING VIRTUAL MACHINES 203
15.1. HOW SECURITY WORKS IN VIRTUAL MACHINES 203
15.2. BEST PRACTICES FOR SECURING VIRTUAL MACHINES 204
15.3. CREATING A SECUREBOOT VIRTUAL MACHINE 205
15.4. LIMITING WHAT ACTIONS ARE AVAILABLE TO VIRTUAL MACHINE USERS 206
15.5. AUTOMATIC FEATURES FOR VIRTUAL MACHINE SECURITY 207
15.6. VIRTUALIZATION BOOLEANS 208
15.7. SETTING UP IBM SECURE EXECUTION ON IBM Z 209
CHAPTER 16. OPTIMIZING VIRTUAL MACHINE PERFORMANCE 218
16.1. WHAT INFLUENCES VIRTUAL MACHINE PERFORMANCE 218
The impact of virtualization on system performance 218
Reducing VM performance loss 218
16.2. OPTIMIZING VIRTUAL MACHINE PERFORMANCE USING TUNED 219
16.3. CONFIGURING VIRTUAL MACHINE MEMORY 220
16.3.1. Adding and removing virtual machine memory using the web console 220
16.3.2. Adding and removing virtual machine memory using the command-line interface 221
16.3.3. Additional resources 223
16.4. OPTIMIZING VIRTUAL MACHINE I/O PERFORMANCE 223
16.4.1. Tuning block I/O in virtual machines 223
16.4.2. Disk I/O throttling in virtual machines 224
16.4.3. Enabling multi-queue virtio-scsi 225
16.5. OPTIMIZING VIRTUAL MACHINE CPU PERFORMANCE 225
16.5.1. Adding and removing virtual CPUs using the command-line interface 226
16.5.2. Managing virtual CPUs using the web console 227
16.5.3. Configuring NUMA in a virtual machine 228
16.5.4. Sample vCPU performance tuning scenario 230
16.5.5. Deactivating kernel same-page merging 236
16.6. OPTIMIZING VIRTUAL MACHINE NETWORK PERFORMANCE 236
16.7. VIRTUAL MACHINE PERFORMANCE MONITORING TOOLS 237
16.8. ADDITIONAL RESOURCES 239
CHAPTER 17. INSTALLING AND MANAGING WINDOWS VIRTUAL MACHINES 240
17.1. INSTALLING WINDOWS VIRTUAL MACHINES 240
17.2. OPTIMIZING WINDOWS VIRTUAL MACHINES 241
17.2.1. Installing KVM paravirtualized drivers for Windows virtual machines 241
17.2.1.1. How Windows virtio drivers work 241
17.2.1.2. Preparing virtio driver installation media on a host machine 242
17.2.1.3. Installing virtio drivers on a Windows guest 243
17.2.2. Enabling Hyper-V enlightenments 246
17.2.2.1. Enabling Hyper-V enlightenments on a Windows virtual machine 246
17.2.2.2. Configurable Hyper-V enlightenments 247
17.2.3. Configuring NetKVM driver parameters 250
17.2.4. NetKVM driver parameters 251
17.2.5. Optimizing background processes on Windows virtual machines 252
17.3. SHARING FILES BETWEEN THE HOST AND WINDOWS VIRTUAL MACHINES 253
17.4. ENABLING STANDARD HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES 257
17.5. ENABLING ENHANCED HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES 258
17.6. ADDITIONAL RESOURCES 259
CHAPTER 18. CREATING NESTED VIRTUAL MACHINES 260
18.1. CREATING A NESTED VIRTUAL MACHINE ON INTEL 260
18.2. CREATING A NESTED VIRTUAL MACHINE ON AMD 261
18.3. CREATING A NESTED VIRTUAL MACHINE ON IBM Z 263
18.4. CREATING A NESTED VIRTUAL MACHINE ON IBM POWER9 263
18.5. RESTRICTIONS AND LIMITATIONS FOR NESTED VIRTUALIZATION 265
CHAPTER 19. DIAGNOSING VIRTUAL MACHINE PROBLEMS 267
19.1. GENERATING VIRTUAL MACHINE DEBUG LOGS 267
CHAPTER 20. FEATURE SUPPORT AND LIMITATIONS IN RHEL 8 VIRTUALIZATION 273
20.1. HOW RHEL 8 VIRTUALIZATION SUPPORT WORKS 273
20.2. RECOMMENDED FEATURES IN RHEL 8 VIRTUALIZATION 273
20.3. UNSUPPORTED FEATURES IN RHEL 8 VIRTUALIZATION 275
20.4. RESOURCE ALLOCATION LIMITS IN RHEL 8 VIRTUALIZATION 278
20.5. AN OVERVIEW OF VIRTUALIZATION FEATURES SUPPORT 279
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
1. Make sure you are viewing the documentation in the Multi-page HTML format. In addition,
ensure you see the Feedback button in the upper right corner of the document.
2. Use your mouse cursor to highlight the part of text that you want to comment on.
3. Click the Add Feedback pop-up that appears below the highlighted text.
4. Fill in the Description field with your suggestion for improvement. Include a link to the
relevant part(s) of documentation.
In other words, virtualization makes it possible to have operating systems within operating systems.
VMs enable you to safely test software configurations and features, run legacy software, or optimize the
workload efficiency of your hardware. For more information on the benefits, see Section 1.2,
“Advantages of virtualization”.
For more information on what virtualization is, see the Red Hat Customer Portal .
Next steps
To try out virtualization in Red Hat Enterprise Linux 8, see Chapter 2, Getting started with
virtualization.
In addition to Red Hat Enterprise Linux 8 virtualization, Red Hat offers a number of specialized
virtualization solutions, each with a different user focus and features. For more information, see
Section 1.5, “Red Hat virtualization solutions” .
For example, what the guest OS sees as its disk can be represented as a file on the host file
system, and the size of that disk is less constrained than the available sizes for physical disks.
Software-controlled configurations
The entire configuration of a VM is saved as data on the host, and is under software control.
Therefore, a VM can easily be created, removed, cloned, migrated, operated remotely, or
connected to remote storage.
Space and cost efficiency
A single physical machine can host a large number of VMs. Therefore, it avoids the need for
multiple physical machines to do the same tasks, and thus lowers the space, power, and
maintenance requirements associated with physical hardware.
Software compatibility
Because a VM can use a different OS than its host, virtualization makes it possible to run
applications that were not originally released for your host OS. For example, using a RHEL 7
guest OS, you can run applications released for RHEL 7 on a RHEL 8 host system.
NOTE
Not all operating systems are supported as a guest OS in a RHEL 8 host. For
details, see Section 20.2, “Recommended features in RHEL 8 virtualization” .
Hypervisor
The basis of creating virtual machines (VMs) in RHEL 8 is the hypervisor, a software layer that controls
hardware and enables running multiple operating systems on a host machine.
The hypervisor includes the Kernel-based Virtual Machine (KVM) module and virtualization kernel
drivers, such as virtio and vfio. These components ensure that the Linux kernel on the host machine
provides resources for virtualization to user-space software.
At the user-space level, the QEMU emulator simulates a complete virtualized hardware platform that
the guest operating system can run in, and manages how resources are allocated on the host and
presented to the guest.
In addition, the libvirt software suite serves as a management and communication layer, making QEMU
easier to interact with, enforcing security rules, and providing a number of additional tools for
configuring and running VMs.
XML configuration
A host-based XML configuration file (also known as a domain XML file) determines all settings and
devices in a specific VM. The configuration includes:
Metadata such as the name of the VM, time zone, and other information about the VM.
A description of the devices in the VM, including virtual CPUs (vCPUS), storage devices,
input/output devices, network interface cards, and other hardware, real and virtual.
VM settings such as the maximum amount of memory it can use, restart settings, and other
settings about the behavior of the VM.
For more information on the contents of an XML configuration, see Section 6.3, “Sample virtual
machine XML configuration”.
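For example, you can display the XML configuration of an existing VM with the virsh dumpxml command. The following is a heavily abridged sketch of such output; the VM name demo-guest1 and the values shown are illustrative:
# virsh dumpxml demo-guest1
<domain type='kvm'>
  <name>demo-guest1</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.0.0'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-guest1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    [...]
  </devices>
</domain>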
Component interaction
When a VM is started, the hypervisor uses the XML configuration to create an instance of the VM as a
user-space process on the host. The hypervisor also makes the VM process accessible to the host-
based interfaces, such as the virsh, virt-install, and guestfish utilities, or the web console GUI.
When these virtualization tools are used, libvirt translates their input into instructions for QEMU. QEMU
communicates the instructions to KVM, which ensures that the kernel appropriately assigns the
resources necessary to carry out the instructions. As a result, QEMU can execute the corresponding
user-space changes, such as creating or modifying a VM, or performing an action in the VM’s guest
operating system.
NOTE
For more information on the host-based interfaces, see Section 1.4, “Tools and interfaces for
virtualization management”.
Command-line interface
The CLI is the most powerful method of managing virtualization in RHEL 8. Prominent CLI commands
for virtual machine (VM) management include:
virsh - A versatile virtualization command-line utility and shell with a great variety of purposes,
depending on the provided arguments. For example:
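The commands below are a small illustrative sample; the VM name testguest is a placeholder:
# virsh list --all
Id   Name        State
------------------------------
-    testguest   shut off

# virsh start testguest
Domain testguest started

# virsh shutdown testguest
Domain testguest is being shutdown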
virt-install - A CLI utility for creating new VMs. For more information, see the virt-install(1)
man page.
guestfish - A utility for examining and modifying VM disk images. For more information, see the
guestfish(1) man page.
Graphical interfaces
You can use the following GUIs to manage virtualization in RHEL 8:
The RHEL 8 web console, also known as Cockpit, provides a remotely accessible and easy-to-use graphical user interface for managing VMs and virtualization hosts.
For instructions on basic virtualization management with the web console, see Chapter 5,
Managing virtual machines in the web console .
The Virtual Machine Manager (virt-manager) application provides a specialized GUI for
managing VMs and virtualization hosts.
IMPORTANT
Although still supported in RHEL 8, virt-manager has been deprecated. The web
console is intended to become its replacement in a subsequent release. It is,
therefore, recommended that you get familiar with the web console for
managing virtualization in a GUI.
However, in RHEL 8, some features may only be accessible from either virt-
manager or the command line. For details, see Section 5.4, “Differences between
virtualization features in Virtual Machine Manager and the web console”.
The Gnome Boxes application is a lightweight graphical interface to view and access VMs and
remote systems. Gnome Boxes is primarily designed for use on desktop systems.
1.5. RED HAT VIRTUALIZATION SOLUTIONS
RHV is designed for enterprise-class scalability and performance, and enables the management of
your entire virtual infrastructure, including hosts, virtual machines, networks, storage, and users from
a centralized graphical interface.
Red Hat Virtualization can be used by enterprises running large deployments or mission-critical
applications. Examples of large deployments suited to Red Hat Virtualization include databases,
trading platforms, and messaging systems that must run continuously without any downtime.
For more information about Red Hat Virtualization, see the Red Hat Customer Portal or the Red Hat
Virtualization documentation suite.
To download a fully supported 60-day evaluation version of Red Hat Virtualization, see the Red Hat
Customer Portal.
NOTE
For details on virtualization features not supported on RHEL but supported on RHV or
RHOSP, see Section 20.3, “Unsupported features in RHEL 8 virtualization” .
In addition, specific Red Hat products provide operating-system-level virtualization, also known as
containerization:
Containers are isolated instances of the host OS and operate on top of an existing OS kernel.
For more information on containers, see the Red Hat Customer Portal .
Containers do not have the versatility of KVM virtualization, but are more lightweight and
flexible to handle. For a more detailed comparison, see the Introduction to Linux Containers .
CHAPTER 2. GETTING STARTED WITH VIRTUALIZATION
1. Enable the virtualization module and install the virtualization packages - see Section 2.1,
“Enabling virtualization”.
2. Create virtual machines:
For CLI, see Section 2.2.1, “Creating virtual machines using the command-line interface”.
For GUI, see Section 2.2.2, “Creating virtual machines and installing guest operating
systems using the web console”.
3. Start virtual machines:
For CLI, see Section 2.3.1, “Starting a virtual machine using the command-line interface”.
For GUI, see Section 2.3.2, “Starting virtual machines using the web console” .
4. Connect to virtual machines:
For CLI, see Connecting to a virtual machine using SSH or Opening a virtual machine
graphical console using Virt Viewer.
For GUI, see Section 2.4.1, “Interacting with virtual machines using the web console” .
NOTE
The web console currently provides only a subset of VM management functions, so using
the command line is recommended for advanced use of virtualization in RHEL 8.
2.1. ENABLING VIRTUALIZATION
Prerequisites
Red Hat Enterprise Linux 8 is installed and registered on your host machine.
Your system meets the following hardware requirements to work as a virtualization host:
6 GB free disk space for the host, plus another 6 GB for each intended VM.
2 GB of RAM for the host, plus another 2 GB for each intended VM.
4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat
recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive
during high load.
Notably, RHEL 8 does not support virtualization on the 64-bit ARM architecture (ARM 64).
The procedure below applies to the AMD64 and Intel 64 architecture (x86_64). To
enable virtualization on a host with a different supported architecture, see one of the
following sections:
Section 3.1, “Enabling virtualization on IBM POWER”
Section 4.1, “Enabling virtualization on IBM Z”
Procedure
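As a sketch, the core steps on an AMD64 or Intel 64 host are the following:
1. Install the packages in the RHEL 8 virtualization module:
# yum module install virt
2. Install the virt-install and virt-viewer packages:
# yum install virt-install virt-viewer
3. Start the libvirtd service:
# systemctl start libvirtd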
Verification
1. Verify that your system is prepared to be a virtualization host:
# virt-host-validate
[...]
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be
disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
LXC: Checking for Linux >= 2.6.26 : PASS
[...]
LXC: Checking for cgroup 'blkio' controller mount-point : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : FAIL (Load the 'fuse' module to
enable /proc/ overrides)
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
NOTE
If your host CPU does not support hardware virtualization, virt-host-validate displays the following:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are
available, performance will be significantly limited)
However, attempting to create VMs on such a host system will fail, rather than
have performance problems.
2.2. CREATING VIRTUAL MACHINES
2.2.1. Creating virtual machines using the command-line interface
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
An operating system (OS) installation source is available locally or on a network. This can be one of the following:
An ISO image of an installation medium
A disk image of an existing VM installation
Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
To create a VM and start its OS installation, use the virt-install command, along with the following
mandatory arguments:
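As a sketch, the mandatory arguments typically include the following; see the virt-install(1) man page for authoritative details:
--name - the name of the new machine
--memory - the amount of memory to allocate to the VM, in MiB
--vcpus - the number of virtual CPUs to allocate to the VM
--disk - the storage to configure for the VM
--cdrom or --location - the OS installation source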
Based on the chosen installation method, the necessary options and values can vary. See below for
examples:
The following creates a VM named demo-guest1 that installs the Windows 10 OS from an ISO
image locally stored in the /home/username/Downloads/Win10install.iso file. This VM is also
allocated with 2048 MiB of RAM and 2 vCPUs, and an 80 GiB qcow2 virtual disk is automatically
configured for the VM.
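A command along these lines produces that result (a sketch; --os-variant win10 is assumed from the OS being installed):
# virt-install --name demo-guest1 --memory 2048 --vcpus 2 --disk size=80 --os-variant win10 --cdrom /home/username/Downloads/Win10install.iso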
# virt-install --name demo-guest2 --memory 4096 --vcpus 4 --disk none --livecd --os-
variant rhel8.0 --cdrom /home/username/Downloads/rhel8.iso
The following creates a RHEL 8 VM named demo-guest3 that connects to an existing disk
image, /home/username/backup/disk.qcow2. This is similar to physically moving a hard drive
between machines, so the OS and data available to demo-guest3 are determined by how the
image was handled previously. In addition, this VM is allocated with 2048 MiB of RAM and 2
vCPUs.
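A matching command, as a sketch:
# virt-install --name demo-guest3 --memory 2048 --vcpus 2 --os-variant rhel8.0 --import --disk /home/username/backup/disk.qcow2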
Note that the --os-variant option is highly recommended when importing a disk image. If it is not
provided, the performance of the created VM will be negatively affected.
The following creates a VM named demo-guest5 that installs from a RHEL8.iso image file in
text-only mode, without graphics. It connects the guest console to the serial console. The VM
has 16384 MiB of memory, 16 vCPUs, and 280 GiB disk. This kind of installation is useful when
connecting to a host over a slow network link.
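A matching command, as a sketch; console=ttyS0 attaches the guest console to the serial console:
# virt-install --name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel8.0 --location /home/username/Downloads/rhel8.iso --graphics none --extra-args console=ttyS0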
The following creates a VM named demo-guest6, which has the same configuration as demo-
guest5, but resides on the 10.0.0.1 remote host.
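As a sketch, assuming root SSH access to the remote host:
# virt-install --connect qemu+ssh://[email protected]/system --name demo-guest6 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel8.0 --location /home/username/Downloads/rhel8.iso --graphics none --extra-args console=ttyS0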
Verification
If the VM is created successfully, a virt-viewer window opens with a graphical console of the VM
and starts the guest OS installation.
Troubleshooting
If the virt-install command fails with a cannot find default network error:
a. Ensure that the libvirt-daemon-config-network package is installed.
b. Verify that the libvirt default network is active and configured to start automatically:
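A minimal check, as a sketch:
# virsh net-list --all
Name      State    Autostart   Persistent
--------------------------------------------
default   active   yes         yes
If the default network is inactive, activate it and set it to start automatically:
# virsh net-start default
# virsh net-autostart default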
i. If activating the default network fails with the following error, the libvirt-daemon-
config-network package has not been installed correctly.
ii. If activating the default network fails with an error similar to the following, a conflict has
occurred between the default network’s subnet and an existing interface on the host.
To fix this, use the virsh net-edit default command and change the 192.168.122.*
values in the configuration to a subnet not already in use on the host.
Additional resources
man virt-install
2.2.2. Creating virtual machines and installing guest operating systems using the
web console
To manage virtual machines (VMs) in a GUI on a RHEL 8 host, use the web console. The following
sections provide information on how to use the RHEL 8 web console to create VMs and install guest
operating systems on them.
2.2.2.1. Creating virtual machines using the web console
To create a virtual machine (VM) on the host machine to which the web console is connected, follow the
instructions below.
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
Procedure
1. In the Virtual Machines interface of the web console, click Create VM.
The Create new virtual machine dialog appears.
Connection - The type of libvirt connection, system or session. For more details, see
System and session connections .
Installation type - The installation can use a local installation medium, a URL, a PXE
network boot, or download an OS from a limited set of operating systems.
Operating system - The VM’s operating system. Note that Red Hat provides support only
for a limited set of guest operating systems .
Size - The amount of storage space with which to configure the VM.
Run unattended installation - Whether or not to run the installation unattended. This
option is available only when the Installation type is Download an OS.
Immediately Start VM - Whether or not the VM will start immediately after it is created.
3. Click Create.
The VM is created. If the Immediately Start VM checkbox is selected, the VM will immediately
start and begin installing the guest operating system.
Additional resources
For information on installing an operating system on a VM, see Section 2.2.2.3, “Installing guest
operating systems using the web console”.
2.2.2.2. Creating virtual machines by importing disk images using the web console
To create a virtual machine (VM) by importing a disk image of an existing VM installation, follow the
instructions below.
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks
and workload of the VMs.
Procedure
1. In the Virtual Machines interface of the web console, click Import VM.
The Import a virtual machine dialog appears.
Connection - The type of libvirt connection, system or session. For more details, see
System and session connections .
Disk image - The path to the existing disk image of a VM on the host system.
Operating system - The VM’s operating system. Note that Red Hat provides support only
for a limited set of guest operating systems .
Immediately start VM - Whether or not the VM will start immediately after it is created.
3. Click Import.
2.2.2.3. Installing guest operating systems using the web console
The first time a virtual machine (VM) loads, you must install an operating system on the VM.
NOTE
If the Immediately Start VM checkbox in the Create New Virtual Machine dialog is
checked, the installation routine of the operating system starts automatically when the
VM is created.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM on which you want to install a guest OS.
A new page opens with basic information about the selected VM and controls for managing
various aspects of the VM.
NOTE
You can change the firmware only if you did not select the Immediately Start
VM check box in the Create New Virtual Machine dialog, and the OS has not
already been installed on the VM.
c. Click Save.
3. Click Install.
The installation routine of the operating system runs in the VM console.
2.3. STARTING VIRTUAL MACHINES
2.3.1. Starting a virtual machine using the command-line interface
Prerequisites
Before a VM can be started, it must be created and, ideally, also installed with an OS. For instructions to do so, see Section 2.2, “Creating virtual machines”.
Procedure
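For a VM located on the local host, use the virsh start utility. For example, assuming a local VM named demo-guest1:
# virsh start demo-guest1
Domain demo-guest1 started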
For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host.
For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host.
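# virsh -c qemu+ssh://[email protected]/system start demo-guest1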
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Additional resources
For simplifying VM management on remote hosts, see modifying your libvirt and SSH
configuration.
You can use the virsh autostart utility to configure a VM to start automatically when the host
boots up. For more information about autostart, see starting virtual machines automatically
when the host starts.
2.3.2. Starting virtual machines using the web console
Prerequisites
Procedure
1. In the Virtual Machines interface, click the row of the VM that you want to start.
2. Click Run.
The VM starts, and you can connect to its console or graphical output .
3. Optional: To set up the VM to start automatically when the host starts, click the Autostart
checkbox.
If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start, see
starting virtual machines automatically when the host starts .
Additional resources
For information on shutting down a VM, see Section 2.5.2.1, “Shutting down virtual machines in
the web console”.
For information on restarting a VM, see Section 2.5.2.2, “Restarting virtual machines using the web console”.
2.3.3. Starting virtual machines automatically when the host starts
Prerequisites
Procedure
1. Use the virsh autostart utility to configure the VM to start automatically when the host starts.
For example, the following command configures the demo-guest1 VM to start automatically.
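# virsh autostart demo-guest1
Domain demo-guest1 marked as autostarted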
2. If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start.
NOTE
# mkdir -p /etc/systemd/system/libvirtd.service.d/
# touch /etc/systemd/system/libvirtd.service.d/10-network-online.conf
c. Add the following lines to the 10-network-online.conf file. This configuration change
ensures systemd starts libvirtd service only after the network on the host is ready.
[Unit]
After=network-online.target
Verification
1. View the VM configuration, and check that the autostart option is enabled.
For example, the following command displays basic information about the demo-guest1 VM, including the autostart option.
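# virsh dominfo demo-guest1
Id:             2
Name:           demo-guest1
[...]
Autostart:      enable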
2. If you use network interfaces that are not managed by libvirt, check if the content of the 10-
network-online.conf file matches the following output.
$ cat /etc/systemd/system/libvirtd.service.d/10-network-online.conf
[Unit]
After=network-online.target
Additional resources
You can also enable autostart using the web console, see starting virtual machines using the
web console.
2.4. CONNECTING TO VIRTUAL MACHINES
To interact with a virtual machine (VM) in RHEL 8, you can connect to it in one of the following ways:
When using the web console interface, use the Virtual Machines pane in the web console
interface. For more information, see Section 2.4.1, “Interacting with virtual machines using the
web console”.
If you need to interact with a VM graphical display without using the web console, use the Virt
Viewer application. For details, see Section 2.4.2, “Opening a virtual machine graphical console
using Virt Viewer”.
When a graphical display is not possible or not necessary, use an SSH terminal connection .
When the virtual machine is not reachable from your system by using a network, use the virsh
console.
If the VMs to which you are connecting are on a remote host rather than a local one, you can optionally
configure your system for more convenient access to remote hosts .
2.4.1. Interacting with virtual machines using the web console
Prerequisites
The VMs you want to interact with are installed and started.
To interact with the VM’s graphical interface in the web console, use the graphical console.
To interact with the VM’s graphical interface in a remote viewer, use the graphical console in
remote viewers.
To interact with the VM’s CLI in the web console, use the serial console.
2.4.1.1. Viewing the virtual machine graphical console in the web console
Using the virtual machine (VM) console interface, you can view the graphical output of a selected VM in
the RHEL 8 web console.
Prerequisites
Ensure that both the host and the VM support a graphical interface.
Procedure
1. In the Virtual Machines interface, click the VM whose graphical console you want to view.
A new page opens with an Overview and a Console section for the VM.
3. Click Expand.
You can now interact with the VM console using the mouse and keyboard in the same manner you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.
NOTE
The host on which the web console is running may intercept specific key combinations,
such as Ctrl+Alt+Del, preventing them from being sent to the VM.
To send such key combinations, click the Send key menu and select the key sequence to
send.
For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu and
select the Ctrl+Alt+Del menu entry.
Additional resources
For instructions on viewing the graphical console in a remote viewer, see Section 2.4.1.2,
“Viewing the graphical console in a remote viewer using the web console”.
For instructions on viewing the serial console in the web console, see Section 2.4.1.3, “Viewing
the virtual machine serial console in the web console”.
2.4.1.2. Viewing the graphical console in a remote viewer using the web console
Using the web console interface, you can display the graphical console of a selected virtual machine
(VM) in a remote viewer, such as Virt Viewer.
NOTE
You can launch Virt Viewer from within the web console. Other VNC and SPICE remote
viewers can be launched manually.
Prerequisites
Ensure that both the host and the VM support a graphical interface.
Before you can view the graphical console in Virt Viewer, you must install Virt Viewer on the
machine to which the web console is connected.
Procedure
1. In the Virtual Machines interface, click the VM whose graphical console you want to view.
A new page opens with an Overview and a Console section for the VM.
You can interact with the VM console using the mouse and keyboard in the same manner you
interact with a real machine. The display in the VM console reflects the activities being
performed on the VM.
NOTE
The server on which the web console is running can intercept specific key combinations,
such as Ctrl+Alt+Del, preventing them from being sent to the VM.
To send such key combinations, click the Send key menu and select the key sequence to
send.
For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu
and select the Ctrl+Alt+Del menu entry.
Troubleshooting
If launching the Remote Viewer in the web console does not work or is not optimal, you can
manually connect with any viewer application using the following protocols:
Address - The default address is 127.0.0.1. You can modify the vnc_listen or the
spice_listen parameter in /etc/libvirt/qemu.conf to change it to the host’s IP address.
Additional resources
For instructions on viewing the graphical console in the web console, see Section 2.4.1.1,
“Viewing the virtual machine graphical console in the web console”.
For instructions on viewing the serial console in the web console, see Section 2.4.1.3, “Viewing
the virtual machine serial console in the web console”.
2.4.1.3. Viewing the virtual machine serial console in the web console
You can view the serial console of a selected virtual machine (VM) in the RHEL 8 web console. This is
useful when the host machine or the VM is not configured with a graphical interface.
For more information about the serial console, see Section 2.4.4, “Opening a virtual machine serial
console”.
Prerequisites
Procedure
1. In the Virtual Machines pane, click the VM whose serial console you want to view.
A new page opens with an Overview and a Console section for the VM.
You can disconnect and reconnect the serial console from the VM.
Additional resources
For instructions on viewing the graphical console in the web console, see Section 2.4.1.1,
“Viewing the virtual machine graphical console in the web console”.
For instructions on viewing the graphical console in a remote viewer, see Section 2.4.1.2,
“Viewing the graphical console in a remote viewer using the web console”.
2.4.2. Opening a virtual machine graphical console using Virt Viewer
Prerequisites
Your system, as well as the VM you are connecting to, must support graphical displays.
If the target VM is located on a remote host, connection and root access privileges to the host
are needed.
Optional: If the target VM is located on a remote host, set up your libvirt and SSH for more
convenient access to remote hosts.
Procedure
To connect to a local VM, use the following command and replace guest-name with the name of
the VM you want to connect to:
# virt-viewer guest-name
To connect to a remote VM, use the virt-viewer command with the SSH protocol. For example,
the following command connects as root to a VM called guest-name, located on remote system
10.0.0.1. The connection also requires root authentication for 10.0.0.1.
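As a sketch:
# virt-viewer --direct --connect qemu+ssh://[email protected]/system guest-name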
If the connection works correctly, the VM display is shown in the Virt Viewer window.
You can interact with the VM console using the mouse and keyboard in the same manner you interact
with a real machine. The display in the VM console reflects the activities being performed on the VM.
Additional resources
For more information on using Virt Viewer, see the virt-viewer man page.
Connecting to VMs on a remote host can be simplified by modifying your libvirt and SSH
configuration.
For management of VMs in an interactive GUI in RHEL 8, you can use the web console
interface. For more information, see Section 2.4.1, “Interacting with virtual machines using the
web console”.
2.4.3. Connecting to a virtual machine using SSH
To interact with the terminal of a virtual machine (VM) using the SSH connection protocol, follow the
procedure below:
Prerequisites
You have network connection and root access privileges to the target VM.
If the target VM is located on a remote host, you also have connection and root access
privileges to that host.
The libvirt-nss component is installed and enabled on the VM’s host. If it is not, do the
following:
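a. Install the libvirt-nss package:
# yum install libvirt-nss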
b. Edit the /etc/nsswitch.conf file and add libvirt_guest to the hosts line:
[...]
passwd: compat
shadow: compat
group: compat
hosts: files libvirt_guest dns
[...]
Procedure
1. Optional: When connecting to a remote VM, SSH into its physical host first. The following
example demonstrates connecting to a host machine 10.0.0.1 using its root credentials:
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2018
root~#
2. Use the VM’s name and user access credentials to connect to it. For example, the following
connects to the "testguest1" VM using its root credentials:
# ssh root@testguest1
root@testguest1's password:
Last login: Wed Sep 12 12:05:36 2018
root~]#
Troubleshooting
If you do not know the VM’s name, you can list all VMs available on the host using the virsh list --all command:
2.4.4. Opening a virtual machine serial console
Using the virsh console command, you can connect to the serial console of a virtual machine (VM). This is useful when the VM:
Does not provide VNC or SPICE protocols, and thus does not offer video display for GUI tools.
Does not have a network connection, and thus cannot be interacted with using SSH.
Prerequisites
The VM must have the serial console configured in its kernel command line. To verify this, the
cat /proc/cmdline command output on the VM should include console=ttyS0. For example:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-948.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0
console=ttyS0,9600n8 rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb
If the serial console is not set up properly on a VM, using virsh console to connect to the VM
connects you to an unresponsive guest console. However, you can still exit the unresponsive
console by using the Ctrl+] shortcut.
a. On the VM, edit the /etc/default/grub file and add console=ttyS0 to the line that starts
with GRUB_CMDLINE_LINUX.
b. Clear the kernel options that may prevent your changes from taking effect.
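A sketch of the usual command:
# grub2-editenv - unset kernelopts
c. Reload the GRUB configuration: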
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-948.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-948.el7.x86_64.img
[...]
done
Procedure
1. On your host system, use the virsh console command. The following example connects to the
guest1 VM, if the libvirt driver supports safe console handling:
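# virsh console guest1 --safe
Connected to domain guest1
Escape character is ^]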
Subscription-name
localhost login:
2. You can interact with the virsh console in the same way as with a standard command-line
interface.
Additional resources
For more information about the VM serial console, see the virsh man page.
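2.4.5. Setting up easy access to remote virtualization hosts
When managing VMs on a remote host system, libvirt utilities ordinarily require the full connection URI. For example, listing the VMs on the 10.0.0.1 host can look as follows (a sketch):
# virsh -c qemu+ssh://[email protected]/system list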
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Id Name State
---------------------------------
1 remote-guest running
However, for convenience, you can remove the need to specify the connection details in full by
modifying your SSH and libvirt configuration. For example, you will be able to do:
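# virsh -c qemu-host-alias list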
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Id Name State
---------------------------------
1 remote-guest running
Procedure
1. Edit or create the ~/.ssh/config file and add the following to it, where host-alias is a shortened
name associated with a specific remote host, and hosturl is the URL address of the host.
Host host-alias
User root
Hostname hosturl
For example, the following sets up the tyrannosaurus alias for [email protected]:
Host tyrannosaurus
User root
Hostname 10.0.0.1
2. Edit or create the /etc/libvirt/libvirt.conf file, and add the following, where qemu-host-alias is a
host alias that QEMU and libvirt utilities will associate with the intended host:
uri_aliases = [
"qemu-host-alias=qemu+ssh://host-alias/system",
]
For example, the following uses the tyrannosaurus alias configured in the previous step to set up
the t-rex alias, which stands for qemu+ssh://10.0.0.1/system:
uri_aliases = [
"t-rex=qemu+ssh://tyrannosaurus/system",
]
3. As a result, you can manage remote VMs by using libvirt-based utilities on the local system with
an added -c qemu-host-alias parameter. This automatically performs the commands over SSH
on the remote host.
For example, the following lists VMs on the 10.0.0.1 remote host, the connection to which was
set up as t-rex in the previous steps:
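# virsh -c t-rex list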
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Id Name State
---------------------------------
1 velociraptor running
4. Optional: If you want to use libvirt utilities exclusively on a single remote host, you can also set a
specific connection as the default target for libvirt-based utilities. To do so, edit the
/etc/libvirt/libvirt.conf file and set the value of the uri_default parameter to qemu-host-alias.
For example, the following uses the t-rex host alias set up in the previous steps as a default
libvirt target.
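uri_default = "t-rex"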
As a result, all libvirt-based commands will automatically be performed on the specified remote
host.
$ virsh list
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Id Name State
---------------------------------
1 velociraptor running
However, this is not recommended if you also want to manage VMs on your local host or on
different remote hosts.
Additional resources
When connecting to a remote host, you can avoid having to provide the root password to the
remote system. To do so, use one or more of the following methods:
Set up a Kerberos authentication ticket on the remote system. For instructions, see
Kerberos authentication in Identity Management .
Utilities that can use the -c (or --connect) option and the remote host access configuration
described above include:
virt-install
virt-viewer
virsh
virt-manager
2.5. SHUTTING DOWN VIRTUAL MACHINES
2.5.1. Shutting down a virtual machine using the command-line interface
To shut down a responsive virtual machine (VM), do one of the following:
Use a shutdown command appropriate to the guest OS while connected to the guest.
Use the virsh shutdown command on the host.
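For example, assuming a local VM named demo-guest1:
# virsh shutdown demo-guest1
Domain demo-guest1 is being shutdown
For a VM on a remote host, for example 10.0.0.1, run the same command over a QEMU+SSH connection:
# virsh -c qemu+ssh://[email protected]/system shutdown demo-guest1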
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Domain demo-guest1 is being shutdown
To force a VM to shut down, for example if it has become unresponsive, use the virsh destroy command
on the host:
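# virsh destroy demo-guest1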
NOTE
The virsh destroy command does not actually delete or remove the VM configuration or
disk images. It only destroys the running VM instance. However, in rare cases, this
command may cause corruption of the VM’s file system, so using virsh destroy is only
recommended if all other shutdown methods have failed.
2.5.2. Shutting down and restarting virtual machines using the web console
Using the RHEL 8 web console, you can shut down or restart running virtual machines. You can also send
a non-maskable interrupt to an unresponsive virtual machine.
2.5.2.1. Shutting down virtual machines in the web console
If a virtual machine (VM) is in the running state, you can shut it down using the RHEL 8 web console.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the row of the VM you want to shut down.
The row expands to reveal the Overview pane with basic information about the selected VM and
controls for shutting down and deleting the VM.
2. Click Shut Down.
The VM shuts down.
Troubleshooting
If the VM does not shut down, click the Menu button ⋮ next to the Shut Down button and
select Force Shut Down.
To shut down an unresponsive VM, you can also send a non-maskable interrupt. For more
information, see Section 2.5.2.3, “Sending non-maskable interrupts to VMs using the web
console”.
Additional resources
For information on starting a VM, see Section 2.3.2, “Starting virtual machines using the web
console”.
For information on restarting a VM, see Section 2.5.2.2, “Restarting virtual machines using the
web console”.
2.5.2.2. Restarting virtual machines using the web console
If a virtual machine (VM) is in the running state, you can restart it using the RHEL 8 web console.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the row of the VM you want to restart.
The row expands to reveal the Overview pane with basic information about the selected VM and
controls for shutting down and deleting the VM.
2. Click Restart.
The VM shuts down and restarts.
Troubleshooting
If the VM does not restart, click the Menu button ⋮ next to the Restart button and select
Force Restart.
To restart an unresponsive VM, you can also send a non-maskable interrupt. For more
information, see Section 2.5.2.3, “Sending non-maskable interrupts to VMs using the web
console”.
Additional resources
For information on starting a VM, see Section 2.3.2, “Starting virtual machines using the web
console”.
For information on shutting down a VM, see Section 2.5.2.1, “Shutting down virtual machines in
the web console”.
2.5.2.3. Sending non-maskable interrupts to VMs using the web console
Sending a non-maskable interrupt (NMI) may cause an unresponsive running virtual machine (VM) to
respond or shut down. For example, you can send the Ctrl+Alt+Del NMI to a VM that is not responding
to standard input.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the row of the VM to which you want to send an NMI.
The row expands to reveal the Overview pane with basic information about the selected VM and
controls for shutting down and deleting the VM.
2. Click the Menu button ⋮ next to the Shut Down button and select Send Non-Maskable
Interrupt.
An NMI is sent to the VM.
Additional resources
For information on starting a VM, see Section 2.3.2, “Starting virtual machines using the web
console”.
For information on restarting a VM, see Section 2.5.2.2, “Restarting virtual machines using the
web console”.
For information on shutting down a VM, see Section 2.5.2.1, “Shutting down virtual machines in
the web console”.
Prerequisites
Procedure
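For example, the following sequence (a minimal sketch, assuming a hypothetical VM named guest1) forces the VM off and then deletes it, including its storage volumes:
# virsh destroy guest1
# virsh undefine guest1 --remove-all-storage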
Additional resources
For other virsh undefine arguments, use virsh undefine --help or see the virsh man page.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the Menu button ⋮ of the VM that you want to delete.
A drop down menu appears with controls for various VM operations.
2. Click Delete.
A confirmation dialog appears.
3. Optional: To delete all or some of the storage files associated with the VM, select the
checkboxes next to the storage files you want to delete.
4. Click Delete.
The VM and any selected storage files are deleted.
CHAPTER 3. GETTING STARTED WITH VIRTUALIZATION ON IBM POWER
Apart from the information in the following sections, using virtualization on IBM POWER works the same
as on AMD64 and Intel 64. Therefore, refer to the other RHEL 8 virtualization documentation for more
information when using virtualization on IBM POWER.
Prerequisites
6 GB free disk space for the host, plus another 6 GB for each intended VM.
2 GB of RAM for the host, plus another 2 GB for each intended VM.
4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat
recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive
during high load.
To check your machine type, use the following command:
# grep ^platform /proc/cpuinfo
If the output of this command includes the PowerNV entry, you are running a PowerNV machine
type and can use virtualization on IBM POWER.
Procedure
1. Load the KVM-HV kernel module:
# modprobe kvm_hv
Verification
1. Verify that your system meets the requirements for virtualization:
# virt-host-validate
[...]
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
[...]
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
QEMU: Checking if IOMMU is enabled by kernel : PASS
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
NOTE
If the hardware virtualization check returns the following output, the host CPU does not
support hardware virtualization:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are
available, performance will be significantly limited)
In that case, attempting to create VMs on the host system will fail, rather than
merely have performance problems.
Memory requirements
VMs on IBM POWER consume more memory. Therefore, the recommended minimum memory
allocation for a virtual machine (VM) on an IBM POWER host is 2 GB of RAM.
Display protocols
The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a
VM, use the VNC protocol. In addition, only the following virtual graphics card devices are supported:
vga - only supported in -vga std mode and not in -vga cirrus mode.
virtio-vga
virtio-gpu
SMBIOS
SMBIOS configuration is not available.
Memory allocation errors
POWER8 VMs, including compatibility mode VMs, may fail with an error similar to:
qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate
memory
This is significantly more likely to occur on VMs that use RHEL 7.3 and prior as the guest OS.
To fix the problem, increase the CMA memory pool available for the guest’s hashed page table (HPT)
by adding kvm_cma_resv_ratio=memory to the host’s kernel command line, where memory is the
percentage of the host memory that should be reserved for the CMA pool (defaults to 5).
Huge pages
Transparent huge pages (THPs) do not provide any notable performance benefits on IBM POWER8
VMs. However, IBM POWER9 VMs can benefit from THPs as expected.
In addition, the sizes of static huge pages on IBM POWER8 systems are 16 MiB and 16 GiB, as
opposed to 2 MiB and 1 GiB on AMD64, Intel 64, and IBM POWER9. As a consequence, to migrate a
VM configured with static huge pages from an IBM POWER8 host to an IBM POWER9 host, you must
first set up 1 GiB huge pages on the VM.
kvm-clock
The kvm-clock service does not have to be configured for time management in VMs on IBM
POWER9.
pvpanic
IBM POWER9 systems do not support the pvpanic device. However, an equivalent functionality is
available and activated by default on this architecture. To enable it in a VM, use the <on_crash>
XML configuration element with the preserve value.
In addition, make sure to remove the <panic> element from the <devices> section, as its presence
can lead to the VM failing to boot on IBM POWER systems.
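For example, a minimal snippet in the VM's XML configuration that enables this behavior:
<on_crash>preserve</on_crash>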
Single-threaded host
On IBM POWER8 systems, the host machine must run in single-threaded mode to support VMs.
This is automatically configured if the qemu-kvm packages are installed. However, VMs running on
single-threaded hosts can still use multiple threads.
Peripheral devices
A number of peripheral devices supported on AMD64 and Intel 64 systems are not supported on IBM
POWER systems, or a different device is supported as a replacement.
Devices used for PCI-E hierarchy, including ioh3420 and xio3130-downstream, are not
supported. This functionality is replaced by multiple independent PCI root bridges provided
by the spapr-pci-host-bridge device.
UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead.
IDE devices, including the virtual IDE CD-ROM (ide-cd) and the virtual IDE disk ( ide-hd), are
not supported. Use the virtio-scsi and virtio-blk devices instead.
Emulated PCI NICs (rtl8139) are not supported. Use the virtio-net device instead.
Sound devices, including intel-hda, hda-output, and AC97, are not supported.
USB redirection devices, including usb-redir and usb-tablet, are not supported.
Additional resources
For a comparison of selected supported and unsupported virtualization features across system
architectures supported by Red Hat, see Section 20.5, “An overview of virtualization features
support”.
CHAPTER 4. GETTING STARTED WITH VIRTUALIZATION ON IBM Z
Apart from the information in the following sections, using virtualization on IBM Z works the same as on
AMD64 and Intel 64. Therefore, refer to the other RHEL 8 virtualization documentation for more
information when using virtualization on IBM Z.
Prerequisites
6 GB free disk space for the host, plus another 6 GB for each intended VM.
2 GB of RAM for the host, plus another 2 GB for each intended VM.
4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat
recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive
during high load.
RHEL 8 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-
interpretive execution (SIE) virtualization functions.
To verify this, search for sie in your /proc/cpuinfo file:
# grep sie /proc/cpuinfo
The Red Hat Enterprise Linux Advanced Virtualization for IBM Z repository is enabled on the host.
Procedure
1. Load the KVM kernel module:
# modprobe kvm
3. Remove any pre-existing virtualization packages and modules that your system already
contains:
Verification
1. Verify that your system meets the requirements for virtualization:
# virt-host-validate
[...]
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
[...]
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
NOTE
If the hardware virtualization check returns the following output, the host CPU does not
support hardware virtualization:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are
available, performance will be significantly limited)
In that case, attempting to create VMs on the host system will fail, rather than
merely have performance problems.
Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices
are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw
instead of virtio-net-pci.
Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.
Supported guest OS
Red Hat only supports VMs hosted on IBM Z if they use RHEL 7 or RHEL 8 as their guest operating
system.
Device boot order
IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot
order, use the <boot order='number'> element in the <devices> section of the XML. For example:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/guest1.qcow2'/>
<target dev='vda' bus='virtio'/>
<boot order='2'/>
</disk>
Watchdog devices
When using watchdog devices for VMs on IBM Z, use the diag288 model. For example:
<devices>
<watchdog model='diag288' action='poweroff'/>
</devices>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be
configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and
are not provided on IBM Z.
Nested virtualization
Creating nested VMs requires different settings on IBM Z than on AMD64 and Intel 64. For details,
see Chapter 18, Creating nested virtual machines .
No graphical output in earlier releases
When using RHEL 8.3 or an earlier minor version on your host, displaying the VM graphical output is
not possible when connecting to the VM using the VNC protocol. This is because the gnome-
desktop utility was not supported in earlier RHEL versions on IBM Z. In addition, the SPICE display
protocol does not work on IBM Z.
Migrations
To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the
hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are
not recommended, as they are generally not migration-safe.
If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:
To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of
QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base
at the end.
If you have both the source host and the destination host running, you can instead use the
virsh cpu-baseline command on the destination host to obtain a suitable CPU model.
Additional resources
For a comparison of selected supported and unsupported virtualization features across system
architectures supported by Red Hat, see Section 20.5, “An overview of virtualization features
support”.
This makes the ppa15 and bpb features available to the guest if the host supports them.
If using a specific host model, add the ppa15 and bpb features. The following example uses
the zEC12 CPU model:
<cpu mode='custom' match='exact'>
<model fallback='allow'>zEC12</model>
<feature policy='require' name='ppa15'/>
<feature policy='require' name='bpb'/>
</cpu>
Note that when using the ppa15 feature with the z114 and z196 CPU models on a host
machine that uses a z12 CPU, you also need to use the latest microcode level (bundle 95 or
later).
For information on attaching DASD devices to VMs on IBM Z hosts, see Section 10.10,
“Attaching DASD devices to virtual machines on IBM Z”.
For instructions on using IBM Z hardware encryption in VMs, see Section 15.8, “Attaching
cryptographic coprocessors to virtual machines on IBM Z”.
For instructions on configuring IBM Z Secure Execution for your VMs, see Section 15.7, “Setting
up IBM Secure Execution on IBM Z”.
For information on configuring nested virtualization on IBM Z hosts, see Section 18.3, “Creating
a nested virtual machine on IBM Z”.
CHAPTER 5. MANAGING VIRTUAL MACHINES IN THE WEB CONSOLE
Note that to use the web console to manage your VMs on RHEL 8, you must first install a web console
plug-in for virtualization.
Next steps
For instructions on enabling VMs management in your web console, see Setting up the web
console to manage virtual machines.
For a comprehensive list of VM management actions that the web console provides, see Virtual
machine management features available in the web console.
For a list of features that are currently not available in the web console but can be used in the
virt-manager application, see Differences between virtualization features in Virtual Machine
Manager and the web console.
Prerequisites
Ensure that the web console is installed and enabled on your machine:
# systemctl status cockpit.socket
If this command returns Unit cockpit.socket could not be found, follow the Installing the web
console document to enable the web console.
Procedure
1. Install the cockpit-machines plug-in:
# yum install cockpit-machines
Verification
1. Access the web console, for example by entering the https://localhost:9090 address in your
browser.
2. Log in.
3. If the installation was successful, Virtual Machines appears in the web console side menu.
Additional resources
For instructions on connecting to the web console, as well as other information on using the web
console, see the Managing systems using the RHEL 8 web console document.
Table 5.1. VM management tasks that can be performed in the RHEL 8 web console
Create a VM and install it with a guest operating system: see Creating virtual machines and installing guest operating systems using the web console.
Start, shut down, and restart the VM: see Starting virtual machines using the web console and Shutting down and restarting virtual machines using the web console.
Connect to and interact with a VM using a variety of consoles: see Interacting with virtual machines using the web console.
View a variety of information about the VM: see Viewing virtual machine information using the web console.
Adjust the host memory allocated to a VM: see Adding and removing virtual machine memory using the web console.
Manage network connections for the VM: see Using the web console for managing virtual machine network interfaces.
Manage the VM storage available on the host and attach virtual disks to the VM: see Managing storage for virtual machines using the web console.
Configure the virtual CPU settings of the VM: see Section 16.5.2, “Managing virtual CPUs using the web console”.
However, in RHEL 8, some VM management tasks can only be performed in virt-manager or the
command line. The following table highlights the features that are available in virt-manager but not
available in the RHEL 8.0 web console.
If a feature is available in a later minor version of RHEL 8, the minimum RHEL 8 version appears in the
Support in web console introduced column.
Table 5.2. VM management tasks that cannot be performed using the web console in RHEL 8.0
Adding a new virtual network: support in the web console introduced in RHEL 8.1; CLI alternative: virsh net-create or virsh net-define.
Additional resources
CHAPTER 6. VIEWING INFORMATION ABOUT VIRTUAL MACHINES
Procedure
<uuid>a973434f-2f6e-4533-8949-76a7a98569e1</uuid>
<metadata>
[...]
For instructions on managing a VM’s storage, see Chapter 11, Managing storage for virtual
machines.
To view information about the vCPUs of a VM, use the virsh vcpuinfo command, for example:
# virsh vcpuinfo testguest1
VCPU: 1
CPU: 0
State: running
CPU time: 88.6s
CPU Affinity: yyyy
To configure and optimize the vCPUs in your VM, see Section 16.5, “Optimizing virtual machine
CPU performance”.
To view basic information about a virtual network on the host, use the virsh net-info command, for example:
# virsh net-info default
Name: default
UUID: c699f9f6-9202-4ca8-91d0-6b8cb9024116
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr0
For details about network interfaces, VM networks, and instructions for configuring them, see
Chapter 13, Configuring virtual machine network connections .
For instructions on viewing information about storage pools and storage volumes on your host,
see Section 11.2, “Viewing virtual machine storage information using the CLI” .
6.2.1. Viewing a virtualization overview in the web console
Prerequisites
Procedure
Storage Pools - The number of storage pools, active or inactive, that can be accessed by the
web console and their state.
Networks - The number of networks, active or inactive, that can be accessed by the web
console and their state.
Additional resources
For instructions on viewing detailed information about the storage pools the web console
session can access, see Section 6.2.2, “Viewing storage pool information using the web console” .
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web
console”.
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
6.2.2. Viewing storage pool information using the web console
Prerequisites
Procedure
Size - The current allocation and the total capacity of the storage pool.
2. Click the row of the storage whose information you want to see.
The row expands to reveal the Overview pane with detailed information about the selected
storage pool.
Target path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Persistent - Indicates whether or not the storage pool has a persistent configuration.
Autostart - Indicates whether or not the storage pool starts automatically when the system
boots up.
3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web
console”.
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
6.2.3. Viewing basic virtual machine information in the web console
Prerequisites
Procedure
CPU Type - The architecture of the virtual CPUs configured for the VM.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing information about the storage pools to which the web console
session is connected, see Section 6.2.2, “Viewing storage pool information using the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web
console”.
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
To see more detailed virtual CPU information and configure the virtual CPUs configured for a
VM, see Section 16.5.2, “Managing virtual CPUs using the web console” .
6.2.4. Viewing virtual machine resource usage in the web console
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Usage.
The Usage section displays information about the memory and virtual CPU usage of the VM.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing information about the storage pools to which the web console
session is connected, see Section 6.2.2, “Viewing storage pool information using the web
console”.
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web
console”.
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
6.2.5. Viewing virtual machine disk information in the web console
Prerequisites
Procedure
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the
access to Writeable and shared.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing information about the storage pools to which the web console
session is connected, see Section 6.2.2, “Viewing storage pool information using the web
console”.
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
6.2.6. Viewing and editing virtual network interface information in the web console
Using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected
virtual machine (VM):
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Type - The type of network interface for the VM. The types include virtual network, bridge
to LAN, and direct attachment.
NOTE
Source - The source of the network interface. This is dependent on the network type.
3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings
dialog opens.
NOTE
Changes to the virtual network interface settings take effect only after restarting
the VM.
Additionally, the MAC address can only be modified when the VM is shut off.
To obtain the XML configuration of a VM, you can use the virsh dumpxml command followed by the
VM’s name.
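For example, the following (assuming a VM named testguest1) prints the configuration and redirects it to a file for later inspection:
# virsh dumpxml testguest1 > testguest1-config.xml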
CHAPTER 7. SAVING AND RESTORING VIRTUAL MACHINES
This section provides information about saving VMs, as well as about restoring them to the same state
without a full VM boot-up.
This process frees up RAM and CPU resources on the host system in exchange for disk space, which
may improve the host system performance. When the VM is restored, because the guest OS does not
need to be booted, the long boot-up period is avoided as well.
To save a VM, you can use the command-line interface (CLI). For instructions, see Saving virtual
machines using the command line interface.
To restore a VM, you can use the CLI or the web console GUI.
7.2. Saving virtual machines using the command line interface
Prerequisites
Make sure you have sufficient disk space to save the VM and its configuration. Note that the
space occupied by the VM depends on the amount of RAM allocated to that VM.
Procedure
1. Use the virsh managedsave utility to save a running VM. For example, the following saves the demo-guest1 VM:
# virsh managedsave demo-guest1
Domain demo-guest1 state saved by libvirt
The next time the VM is started, it will automatically restore the saved state from this file.
Verification
You can make sure that the VM is in a saved state or shut off using the virsh list utility.
To list the VMs that have managed save enabled, use the following command. The VMs listed as
saved have their managed save enabled:
# virsh list --managed-save --all
Note that to list the saved VMs that are in a shut off state, you must use the --all or --inactive
options with the command.
Troubleshooting
If the saved VM file becomes corrupted or unreadable, restoring the VM will initiate a standard
VM boot instead.
Additional resources
For more virsh managedsave arguments, use virsh managedsave --help or see the virsh
man page.
For instructions on restoring a saved VM using the command-line interface, see Section 7.3,
“Starting a virtual machine using the command-line interface”.
For instructions on restoring a saved VM using the web console, see Section 7.4, “Starting
virtual machines using the web console”.
7.3. Starting a virtual machine using the command-line interface
Prerequisites
Procedure
1. For a VM located on your local host, use the virsh start utility. For example, the following command starts the demo-guest1 VM:
# virsh start demo-guest1
Domain demo-guest1 started
2. For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH
connection to the host.
For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host:
# virsh -c qemu+ssh://[email protected]/system start demo-guest1
[email protected]'s password:
Last login: Mon Feb 18 07:28:55 2019
Domain demo-guest1 started
Additional resources
For simplifying VM management on remote hosts, see modifying your libvirt and SSH
configuration.
You can use the virsh autostart utility to configure a VM to start automatically when the host
boots up. For more information about autostart, see starting virtual machines automatically
when the host starts.
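For example, the following (reusing the demo-guest1 VM from the previous example) enables autostart for the VM, and the --disable option turns it off again:
# virsh autostart demo-guest1
# virsh autostart --disable demo-guest1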
7.4. Starting virtual machines using the web console
Prerequisites
Procedure
1. In the Virtual Machines interface, click the row of the VM that you want to start.
2. Click Run.
The VM starts, and you can connect to its console or graphical output .
3. Optional: To set up the VM to start automatically when the host starts, click the Autostart
checkbox.
If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start. For details,
see starting virtual machines automatically when the host starts.
Additional resources
For information on shutting down a VM, see Section 2.5.2.1, “Shutting down virtual machines in
the web console”.
For information on restarting a VM, see Section 2.5.2.2, “Restarting virtual machines using the
web console”.
CHAPTER 8. CLONING VIRTUAL MACHINES
Cloning creates a new VM that uses its own disk image for storage, but most of the clone’s configuration
and stored data is identical to the source VM. This makes it possible to prepare a number of VMs
optimized for a certain task without the need to optimize each VM individually.
This process is faster than creating a new VM and installing it with a guest operating system, and can be
used to rapidly generate VMs with a specific configuration and content.
If you are planning to create multiple clones of a VM, first create a VM template that does not contain:
Unique settings, such as persistent network MAC configuration, which can prevent the clones
from working correctly.
To clone a VM, you can use the RHEL 8 CLI. For details, see Section 8.3, “Cloning a virtual machine using
the command-line interface”.
8.2. Creating virtual machine templates
You can create VM templates using the virt-sysprep utility, or you can create them manually based on
your requirements.
Prerequisites
You must know where the disk image for the source VM is located, and be the owner of the
VM’s disk image file.
Note that disk images for VMs created in the system connection of libvirt are by default located
in the /var/lib/libvirt/images directory and owned by the root user:
# ls -la /var/lib/libvirt/images
-rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
Optional: Any important data on the VM’s disk has been backed up. If you want to preserve the
source VM intact, clone it first and edit the clone to create a template.
Procedure
1. Ensure you are logged in as the owner of the VM’s disk image:
# whoami
root
2. Optional: Create a backup copy of the VM's disk image:
# cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-
important-vm-original.qcow2
This is used later to verify the VM was successfully turned into a template.
3. Use virt-sysprep to unconfigure the VM's disk image:
# virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2
[ 0.0] Examining the guest ...
[ 7.3] Performing "abrt-data" ...
[ 7.3] Performing "backup-files" ...
[ 9.6] Performing "bash-history" ...
[ 9.6] Performing "blkid-tab" ...
[...]
Verification
To confirm that the process was successful, compare the modified disk image to the original
one. The following example shows a successful creation of a template:
# virt-diff -a /var/lib/libvirt/images/a-really-important-vm-original.qcow2 -A
/var/lib/libvirt/images/a-really-important-vm.qcow2
- - 0644 1001 /etc/group-
- - 0000 797 /etc/gshadow-
= - 0444 33 /etc/machine-id
[...]
- - 0600 409 /home/username/.bash_history
- d 0700 6 /home/username/.ssh
- - 0600 868 /root/.bash_history
[...]
Additional resources
Using the virt-sysprep command as shown above performs the standard VM template
preparation. For more information, see the OPERATIONS section in the virt-sysprep man
page.
To customize which specific operations you want virt-sysprep to perform, use the --operations
option, and specify the intended operations as a comma-separated list.
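For example, the following sketch (using operation names listed in the virt-sysprep man page) clears only the bash history and the SSH host keys of the image:
# virt-sysprep --operations bash-history,ssh-hostkeys -a /var/lib/libvirt/images/a-really-important-vm.qcow2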
For instructions on cloning a VM template, see Section 8.3, “Cloning a virtual machine using the
command-line interface”.
Prerequisites
Ensure that you know the location of the disk image for the source VM and are the owner of the
VM’s disk image file.
Note that disk images for VMs created in the system connection of libvirt are by default located
in the /var/lib/libvirt/images directory and owned by the root user:
# ls -la /var/lib/libvirt/images
-rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
Optional: Any important data on the VM’s disk has been backed up. If you want to preserve the
source VM intact, clone it first and edit the clone to create a template.
Procedure
1. Remove any persistent udev rules:
# rm -f /etc/udev/rules.d/70-persistent-net.rules
NOTE
If udev rules are not removed, the name of the first NIC might be eth1
instead of eth0.
NOTE
If the HWADDR does not match the new guest’s MAC address, the ifcfg
will be ignored.
ii. Configure DHCP but do not include HWADDR or any other unique information:
c. Ensure the following files also contain the same content, if they exist on your system:
/etc/sysconfig/networking/devices/ifcfg-eth[x]
/etc/sysconfig/networking/profiles/default/ifcfg-eth[x]
NOTE
If you had used NetworkManager or any special settings with the VM,
ensure that any additional unique information is removed from the ifcfg
scripts.
2. Remove the VM registration details:
# rm /etc/sysconfig/rhn/systemid
# subscription-manager clean
NOTE
The original RHSM profile remains in the Portal along with your ID code.
Use the following command to reactivate your RHSM registration on the
VM after it is cloned:
# subscription-manager register --consumerid=consumer_id
4. Remove SSH host keys:
# rm -rf /etc/ssh/ssh_host_example
5. Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard
on the next boot:
# rm ~/.config/gnome-initial-setup-done
NOTE
The wizard that runs on the next boot depends on the configurations that have
been removed from the VM. In addition, on the first boot of the clone, it is
recommended that you change the hostname.
8.3. Cloning a virtual machine using the command-line interface
Prerequisites
Ensure that there is sufficient disk space to store the cloned disk images.
Optional: When creating multiple VM clones, remove unique data and settings from the source
VM to ensure the cloned VMs work properly. For instructions, see Section 8.2, “Creating virtual
machine templates”.
Procedure
1. Use the virt-clone utility with options that are appropriate for your environment and use case.
Sample use cases
The following command clones a local VM named doppelganger and creates the
doppelganger-clone VM. It also creates the doppelganger-clone.qcow2 disk image in the
same location as the disk image of the original VM, and with the same data:
# virt-clone --original doppelganger --auto-clone
The following command clones a VM named geminus1, and creates a local VM named
geminus2, which uses only two of geminus1's multiple disks:
# virt-clone --original geminus1 --name geminus2 --file /var/lib/libvirt/images/disk1-clone.qcow2 --file /var/lib/libvirt/images/disk2-clone.qcow2
To clone your VM to a different host, migrate the VM without undefining it on the local host.
For example, the following commands clone the previously created geminus2 VM to the
10.0.0.1 remote system, including its local disks. Note that using these commands also
requires root privileges for 10.0.0.1:
# virsh migrate --offline --persistent geminus2 qemu+ssh://[email protected]/system
[email protected]'s password:
# scp /var/lib/libvirt/images/disk1-clone.qcow2 [email protected]:/var/lib/libvirt/images/
# scp /var/lib/libvirt/images/disk2-clone.qcow2 [email protected]:/var/lib/libvirt/images/
Verification
To verify the VM has been successfully cloned and is working correctly:
1. Confirm the clone has been added to the list of VMs on your host.
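For example, assuming the doppelganger-clone VM created earlier:
# virsh list --all
Id Name State
----------------------------------
- doppelganger-clone shut off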
Additional resources
For additional options for cloning VMs, see the virt-clone man page.
For details on moving the VM clones to a different host, including troubleshooting information,
see Chapter 9, Migrating virtual machines.
Prerequisites
Procedure
1. In the Virtual Machines interface of the web console, click the Menu button ⋮ of the VM that
you want to clone.
A drop down menu appears with controls for various VM operations.
2. Click Clone.
The Create a clone VM dialog appears.
3. Optional: Enter a new name for the VM clone.
4. Click Clone.
A new VM is created based on the source VM.
Verification
Confirm whether the cloned VM appears in the list of VMs available on your host.
CHAPTER 9. MIGRATING VIRTUAL MACHINES
By default, the migrated VM is transient on the destination host, and also remains defined on the source
host.
You can migrate a running VM using live or non-live migrations. To migrate a shut-off VM, you must use
an offline migration. For details, see the following table.
Live migration: The VM continues to run on the source host machine while KVM is transferring the VM's memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host.
Use case: Useful for VMs that require constant uptime. However, VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, cannot be live-migrated, and non-live migration must be used instead.
Requirements: The VM's disk images must be located on a shared network, accessible both to the source host and the destination host.

Non-live migration: Suspends the VM, copies its configuration and its memory to the destination host, and resumes the VM.
Use case: Creates downtime for the VM, but is generally more reliable than live migration. Recommended for VMs under heavy I/O load.
Requirements: The VM's disk images must be located on a shared network, accessible both to the source host and the destination host.

Offline migration: Moves the VM's configuration to the destination host.
Use case: Recommended for shut-off VMs.
Requirements: The VM's disk images do not have to be available on a shared network, and can be copied or moved manually to the destination host instead.
Additional resources
For more information on the benefits of VM migration, see Section 9.2, “Benefits of migrating
virtual machines”.
For instructions on setting up shared storage for migrating VMs, see Section 9.4, “Sharing
virtual machine disk images with other hosts”.
9.2. Benefits of migrating virtual machines
Load balancing
VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another
host is under-utilized.
Hardware independence
When you need to upgrade, add, or remove hardware devices on the host machine, you can safely
relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware
improvements.
Energy saving
VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to
save energy and cut costs during low usage periods.
Geographic migration
VMs can be moved to another physical location for lower latency or when required for other reasons.
Live storage migration cannot be performed on RHEL 8, but you can migrate storage while the
VM is powered down. Note that live storage migration is available on Red Hat Virtualization.
Migrating VMs from or to a session connection of libvirt is unreliable and therefore not
recommended.
VMs with assigned host devices will not work correctly if migrated, or the migration will fail. Such
configurations include:
Device passthrough
A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if
the hosts have similar topology. However, the performance on running workloads might be
negatively affected by the migration.
The emulated CPUs, both on the source VM and the destination VM, must be identical,
otherwise the migration might fail. Any differences between the VMs in the following CPU
related areas can cause problems with the migration:
CPU model
Firmware settings
Microcode version
BIOS version
BIOS settings
QEMU version
Kernel version
9.4. Sharing virtual machine disk images with other hosts
Prerequisites
Optional: A host system is available for hosting the storage that is not the source or destination
host, but both the source and the destination host can reach it through the network. This is the
optimal solution for shared storage and is recommended by Red Hat.
Make sure that NFS file locking is not used as it is not supported in KVM.
The NFS is installed and enabled on the source and destination hosts. If it is not:
a. Install the NFS packages, for example:
# yum install nfs-utils
b. Make sure that the ports for NFS, such as 2049, are open in the firewall.
Procedure
1. Connect to the host that will provide shared storage. In this example, it is the cargo-bay host:
# ssh root@cargo-bay
root@cargo-bay's password:
Last login: Mon Sep 24 12:05:36 2019
root~#
2. Create a directory that will hold the disk image and will be shared with the migration hosts.
# mkdir /var/lib/libvirt/shared-images
3. Copy the disk image of the VM from the source host to the newly created directory. For
example, the following copies the disk image of the wanderer1 VM to the
/var/lib/libvirt/shared-images/ directory on the cargo-bay host:
# scp /var/lib/libvirt/images/wanderer1.qcow2 root@cargo-bay:/var/lib/libvirt/shared-images/wanderer1.qcow2
4. On the host that you want to use for sharing the storage, add the sharing directory to the
/etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory
with the source-example and dest-example hosts:
/var/lib/libvirt/shared-images source-example(rw,no_root_squash) dest-example(rw,no_root_squash)
5. On both the source and destination host, mount the shared directory in the
/var/lib/libvirt/images directory:
# mount cargo-bay:/var/lib/libvirt/shared-images /var/lib/libvirt/images
Verification
To verify the process was successful, start the VM on the source host and observe if it boots
correctly.
Additional resources
Prerequisites
The source host and the destination host both use the KVM hypervisor.
The source host and the destination host are able to reach each other over the network. Use the
ping utility to verify this.
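For example, assuming a destination host named dest-example:
# ping -c 4 dest-example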
For the migration to be supportable by Red Hat, the source host and destination host must be
using specific operating systems and machine types. To ensure this is the case, see Section 9.7,
“Supported hosts for virtual machine migration”.
The disk images of VMs that will be migrated are located on a separate networked location
accessible to both the source host and the destination host. This is optional for offline
migration, but required for migrating a running VM.
For instructions to set up such shared VM storage, see Section 9.4, “Sharing virtual machine
disk images with other hosts”.
When migrating a running VM, your network bandwidth must be higher than the rate in which the
VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following:
a. Monitor the rate of dirty page generation of the VM for a short period of time.
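One possible way to do this (a sketch, assuming a VM named wanderer1 and a libvirt version that provides the domdirtyrate-calc command, such as libvirt 7.2 or later):
# virsh domdirtyrate-calc wanderer1 30
# virsh domstats wanderer1 --dirtyrate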
In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting
to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live
migration not to progress if you do not pause the VM or lower its workload.
To ensure that the live migration finishes successfully, Red Hat recommends that your
network bandwidth is significantly greater than the VM’s dirty page generation rate.
When migrating an existing VM in a public bridge tap network, the source and destination hosts
must be located on the same network. Otherwise, the VM network will not operate after
migration.
Procedure
1. Use the virsh migrate command with options appropriate for your migration requirements.
The following migrates the wanderer1 VM from your local host to the system connection of
the dest-example host using an SSH tunnel. The VM will remain running during the
migration:
# virsh migrate --persistent --live wanderer1 qemu+ssh://dest-example/system
The following enables you to make manual adjustments to the configuration of the
wanderer2 VM running on your local host, and then migrates the VM to the dest-example
host. The migrated VM will automatically use the updated configuration:
# virsh dumpxml --migratable wanderer2 > wanderer2.xml
# vi wanderer2.xml
# virsh migrate --live --persistent --xml wanderer2.xml wanderer2 qemu+ssh://dest-example/system
This procedure can be useful for example when the destination host needs to use a different
path to access the shared VM storage or when configuring a feature specific to the
destination host.
The following suspends the wanderer3 VM from the source-example host, migrates it to
the dest-example host, and instructs it to use the adjusted XML configuration, provided by
the wanderer3-alt.xml file. When the migration is completed, libvirt resumes the VM on the
destination host:
# virsh migrate wanderer3 qemu+ssh://source-example/system qemu+ssh://dest-example/system --xml wanderer3-alt.xml
After the migration, the VM is in the shut off state on the source host, and the migrated
copy is deleted after it is shut down.
The following deletes the shut-down wanderer4 VM from the source-example host, and
moves its configuration to the dest-example host:
# virsh migrate --offline --persistent --undefinesource wanderer4 qemu+ssh://source-example/system qemu+ssh://dest-example/system
Note that this type of migration does not require moving the VM's disk image to shared
storage. However, for the VM to be usable on the destination host, you also need to migrate
the VM's disk image. For example:
# scp root@source-example:/var/lib/libvirt/images/wanderer4.qcow2 root@dest-example:/var/lib/libvirt/images/wanderer4.qcow2
2. Wait for the migration to complete. The process may take some time depending on network
bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh
migrate, the CLI does not display any progress indicators except errors.
When the migration is in progress, you can use the virsh domjobinfo utility to display the
migration statistics.
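For example, assuming the wanderer1 VM from the previous example:
# virsh domjobinfo wanderer1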
Verification
On the destination host, list the available VMs to verify if the VM has been migrated:
# virsh list
Id Name State
----------------------------------
10 wanderer1 running
If the migration is still running, this command will list the VM state as paused.
Troubleshooting
In some cases, the target host will not be compatible with certain values of the migrated VM’s
XML configuration, such as the network name or CPU type. As a result, the VM will fail to boot
on the target host. To fix these problems, you can update the problematic values by using the
virsh edit command. After updating the values, you must restart the VM for the changes to be
applied.
If a live migration is taking a long time to complete, this may be because the VM is under heavy
load and too many memory pages are changing for live migration to be possible. To fix this
problem, change the migration to a non-live one by suspending the VM.
Additional resources
For further options and examples for virtual machine migration, use virsh migrate --help or see
the virsh man page.
WARNING
For tasks that modify memory pages faster than KVM can transfer them, such as
heavy I/O load tasks, it is recommended that you do not live migrate the VM.
Prerequisites
The VM’s disk images are located on a shared storage that is accessible to the source host as
well as the destination host.
When migrating a running VM, your network bandwidth must be higher than the rate in which the
VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following in
your command-line interface:
a. Monitor the rate of dirty page generation of the VM for a short period of time.
In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting
to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live
migration not to progress if you do not pause the VM or lower its workload.
To ensure that the live migration finishes successfully, Red Hat recommends that your
network bandwidth is significantly greater than the VM’s dirty page generation rate.
Procedure
1. In the Virtual Machines interface of the web console, click the Menu button ⋮ of the VM that
you want to migrate.
A drop down menu appears with controls for various VM operations.
2. Click Migrate.
The Migrate VM to another host dialog appears.
3. In the dialog, enter the URI of the destination host, for example qemu+ssh://dest-example/system.
4. Select the duration of the migration:
Permanent - Do not check the box if you wish to migrate the VM permanently. Permanent
migration completely removes the VM configuration from the source host.
Temporary - Temporary migration migrates a copy of the VM to the destination host. This
copy is deleted from the destination host when the VM is shut down. The original VM
remains on the source host.
5. Click Migrate
Your VM is migrated to the destination host.
Verification
To verify whether the VM has been successfully migrated and is working correctly:
Confirm whether the VM appears in the list of VMs available on the destination host.
9.7. Supported hosts for virtual machine migration
Both the source host and the destination host must use a supported platform. On supported RHEL 8
systems, this means machine type q35 on both the source and the destination host.
Additional resources
For information on the currently supported versions of RHEL 7 and RHEL 8, see Red Hat
Knowledgebase.
CHAPTER 10. MANAGING VIRTUAL DEVICES
The following sections provide a general overview of what virtual devices are, and instructions on how
they can be attached, modified, or removed from a VM.
Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on
an existing VM. Generally, virtual devices can be attached to or detached from a VM only when the VM is
shut off, but some can be added or removed when the VM is running. This feature is referred to as
device hot plug and hot unplug.
When creating a new VM, libvirt automatically creates and configures a default set of essential virtual
devices, unless specified otherwise by the user. These are based on the host system architecture and
machine type, and usually include:
the CPU
memory
a keyboard
a video card
a sound card
To manage virtual devices after the VM is created, use the command-line interface (CLI). However, to
manage virtual storage devices and NICs, you can also use the RHEL 8 web console.
Performance or flexibility
For some types of devices, RHEL 8 supports multiple implementations, often with a trade-off between
performance and flexibility.
For example, the physical storage used for virtual disks can be represented by files in various formats,
such as qcow2 or raw, and presented to the VM using a variety of controllers:
an emulated controller
virtio-scsi
virtio-blk
An emulated controller is slower than a virtio controller, because virtio devices are designed specifically
for virtualization purposes. On the other hand, emulated controllers make it possible to run operating
systems that have no drivers for virtio devices. Similarly, virtio-scsi offers a more complete support for
SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk
provides better performance than both virtio-scsi and emulated controllers, but a more limited range of
use-cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using
virtio-blk.
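For illustration, a sketch of how this choice appears in a VM's XML configuration: the first disk below uses virtio-blk (bus='virtio'), while the second is exposed through a virtio-scsi controller (bus='scsi'); the file paths are placeholders:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/disk1.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/disk2.qcow2'/>
<target dev='sda' bus='scsi'/>
</disk>
<controller type='scsi' model='virtio-scsi'/>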
For more information on types of virtual devices, see Section 10.6, “Types of virtual devices”.
Additional resources
For instructions how to attach, remove, or modify VM storage devices using the CLI, see
Chapter 11, Managing storage for virtual machines .
For instructions how to manage VM disks using the web console, see Section 11.7, “Managing
storage for virtual machines using the web console”.
For instructions how to manage VM NICs using the web console, see Section 13.2, “Using the
web console for managing virtual machine network interfaces”.
For instructions how to create and manage NVIDIA vGPUs, see Section 12.2, “Managing NVIDIA
vGPU devices”.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with detailed information about the VM.
Additional resources
For information about attaching or removing virtual devices, see Chapter 10, Managing virtual
devices.
The following procedure demonstrates how to create and attach virtual devices to your virtual machines
(VMs) using the command-line interface (CLI). Some devices can also be attached to VMs using the
RHEL 8 web console.
Prerequisites
Obtain the required options for the device you intend to attach to a VM. To see the available
options for a specific device, use the virt-xml --device=? command. For example:
# virt-xml --network=?
--network options:
[...]
address.unit
boot_order
clearxml
driver_name
[...]
Procedure
1. To attach a device to a VM, use the virt-xml --add-device command, including the definition of
the device and the required options:
For example, the following command creates a 20GB newdisk qcow2 disk image in the
/var/lib/libvirt/images/ directory, and attaches it as a virtual disk to the running testguest
VM on the next start-up of the VM:
# virt-xml testguest --add-device --disk /var/lib/libvirt/images/newdisk.qcow2,format=qcow2,size=20
Domain 'testguest' defined successfully.
Changes will take effect after the domain is fully powered off.
The following attaches a USB flash drive, attached as device 004 on bus 002 on the host,
to the testguest2 VM while the VM is running:
# virt-xml testguest2 --add-device --update --hostdev 002.004
Device hotplug successful.
Domain 'testguest2' defined successfully.
The bus-device combination for defining the USB can be obtained using the lsusb
command.
Verification
To verify the device has been added, do any of the following:
Use the virsh dumpxml command and see if the device’s XML definition has been added to the
<devices> section in the VM’s XML configuration.
For example, the following output shows the configuration of the testguest VM and confirms
that the 002.004 USB flash disk device has been added:
# virsh dumpxml testguest
[...]
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<address bus='2' device='4'/>
</source>
</hostdev>
[...]
Run the VM and test if the device is present and works properly.
Additional resources
For further information on using the virt-xml command, use man virt-xml.
The following procedure provides general instructions for modifying virtual devices using the command-
line interface (CLI). Some devices attached to your VM, such as disks and NICs, can also be modified
using the RHEL 8 web console.
Prerequisites
Obtain the required options for the device you intend to attach to a VM. To see the available
options for a specific device, use the virt-xml --device=? command. For example:
# virt-xml --network=?
--network options:
[...]
address.unit
boot_order
clearxml
driver_name
[...]
Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and
sending the output to a file. For example, the following backs up the configuration of your
Motoko VM as the motoko.xml file:
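For example, a sketch following the description above:
# virsh dumpxml Motoko > motoko.xml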
Procedure
1. Use the virt-xml --edit command, including the definition of the device and the required
options:
For example, the following clears the <cpu> configuration of the shut-off testguest VM and sets
it to host-model:
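A sketch of the command; the suboption syntax is an assumption:
# virt-xml testguest --edit --cpu host-model,clearxml=yes
Domain 'testguest' defined successfully.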
Verification
To verify the device has been modified, do any of the following:
Run the VM and test if the device is present and reflects the modifications.
Use the virsh dumpxml command and see if the device’s XML definition has been modified in
the VM’s XML configuration.
For example, the following output shows the configuration of the testguest VM and confirms
that the CPU mode has been configured as host-model.
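A sketch of the verification (output abridged):
# virsh dumpxml testguest
[...]
<cpu mode='host-model' check='partial'>
  <model fallback='allow'/>
</cpu>
[...]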
Troubleshooting
If modifying a device causes your VM to become unbootable, use the virsh define utility to
restore the XML configuration by reloading the XML configuration file you backed up previously.
NOTE
For small changes to the XML configuration of your VM, you can use the virsh edit
command - for example virsh edit testguest. However, do not use this method for more
extensive changes, as it is more likely to break the configuration in ways that could
prevent the VM from booting.
Additional resources
The following procedure demonstrates how to remove virtual devices from your virtual machines (VMs)
using the command-line interface (CLI). Some devices, such as disks or NICs, can also be removed from
VMs using the RHEL 8 web console.
Prerequisites
Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and
sending the output to a file. For example, the following backs up the configuration of your
Motoko VM as the motoko.xml file:
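For example, a sketch following the description above:
# virsh dumpxml Motoko > motoko.xml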
Procedure
1. Use the virt-xml --remove-device command, including a definition of the device. For example:
The following removes the storage device marked as vdb from the running testguest VM
after it shuts down:
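A sketch of the command:
# virt-xml testguest --remove-device --disk target=vdb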
The following immediately removes a USB flash drive device from the running testguest2
VM:
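A sketch, assuming live removal with the --update argument:
# virt-xml testguest2 --remove-device --update --hostdev type=usb
Device hotunplug successful.
Domain 'testguest2' defined successfully.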
Troubleshooting
If removing a device causes your VM to become unbootable, use the virsh define utility to
restore the XML configuration by reloading the XML configuration file you backed up previously.
Additional resources
Emulated devices
Emulated devices are software implementations of widely used physical devices. Drivers designed for
physical devices are also compatible with emulated devices. Therefore, emulated devices can be
used very flexibly.
However, since they need to faithfully emulate a particular type of hardware, emulated devices may
suffer a significant performance loss compared with the corresponding physical devices or more
optimized virtual devices.
Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact
of emulation depends significantly on the differences between the host CPU and the
emulated vCPU.
Paravirtualized devices
Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs.
Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus
significantly increase device performance. RHEL 8 provides paravirtualized devices to VMs using the
virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it
requires a specific device driver in the guest operating system.
It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever
possible, notably if they are running I/O-intensive applications. Paravirtualized devices decrease I/O
latency and increase I/O throughput, in some cases bringing them very close to bare-metal
performance. Other paravirtualized devices also add functionality to VMs that is not otherwise
available.
The balloon device (virtio-balloon), used to share information about guest memory usage
with the hypervisor.
Note, however, that the balloon device also requires the balloon service to be installed.
Nevertheless, some devices can be shared across multiple VMs. For example, a single physical device
can in certain cases provide multiple mediated devices, which can then be assigned to distinct VMs.
Virtual Function I/O (VFIO) device assignment - safely exposes devices to applications or
VMs using hardware-enforced DMA and interrupt isolation.
USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs
in order to make their specific features available to guest software.
N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host
bus adapter (HBA) with multiple virtual ports.
GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some
GPUs can be attached directly to a VM, while certain types also offer the ability to create
virtual GPUs (vGPUs) that share the underlying physical hardware.
The following sections provide information about using the command line to:
Prerequisites
Ensure the device you want to pass through to the VM is attached to the host.
Procedure
1. Locate the bus and device values of the USB that you want to attach to the VM.
For example, the following command displays a list of USB devices attached to the host. The
device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]
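2. Attach the USB device to the VM. A sketch, using the bus and device values from the previous step (the VM name testguest1 is illustrative):
# virt-xml testguest1 --add-device --hostdev 001.005
Domain 'testguest1' defined successfully.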
NOTE
To attach a USB device to a running VM, add the --update argument to the previous
command.
Verification
Run the VM and test if the device is present and works as expected.
Use the virsh dumpxml command to see if the device’s XML definition has been added to the
<devices> section in the VM’s XML configuration file.
Additional resources
Procedure
1. Locate the bus and device values of the USB that you want to remove from the VM.
For example, the following command displays a list of USB devices attached to the host. The
device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]
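2. Remove the USB device from the VM. A sketch, using the bus and device values from the previous step (the VM name testguest1 is illustrative):
# virt-xml testguest1 --remove-device --hostdev 001.005
Domain 'testguest1' defined successfully.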
NOTE
To remove a USB device from a running VM, add the --update argument to the previous
command.
Verification
Run the VM and check if the device has been removed from the list of devices.
Additional resources
When using a virtual machine (VM), you can access information stored in an ISO image on the host. To
do so, attach the ISO image to the VM as a virtual optical drive, such as a CD drive or a DVD drive.
The following sections provide information about using the command line to:
Prerequisites
Procedure
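1. Attach the ISO image as a virtual optical drive. A sketch, assuming an ISO image at /home/username/Downloads/DN1_image.iso and a VM named DN1 (both names are illustrative):
# virt-xml DN1 --add-device --disk /home/username/Downloads/DN1_image.iso,device=cdrom
Domain 'DN1' defined successfully.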
Verification
Run the VM and test if the device is present and works as expected.
Additional resources
Prerequisites
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
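A sketch of the command and the relevant output (abridged; the ISO path is illustrative):
# virsh dumpxml DN1
[...]
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/home/username/Downloads/DN1_image.iso'/>
  <target dev='sda' bus='sata'/>
</disk>
[...]
2. Specify the path of the replacement ISO image. A sketch, assuming a replacement image at /home/username/Downloads/NewISO.iso:
# virt-xml DN1 --edit target=sda --disk /home/username/Downloads/NewISO.iso
Domain 'DN1' defined successfully.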
Verification
Run the VM and test if the device is replaced and works as expected.
Additional resources
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
For example, the following command removes the DrDN ISO image from the CD drive attached
to the DN1 VM.
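A sketch, assuming that clearing the disk path ejects the image:
# virt-xml DN1 --edit target=sda --disk path=
Domain 'DN1' defined successfully.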
Verification
Additional resources
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
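2. Remove the optical drive from the VM. A sketch:
# virt-xml DN1 --remove-device --disk target=sda
Domain 'DN1' defined successfully.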
Verification
Confirm that the device is no longer listed in the XML configuration file of the VM.
Additional resources
For information about managing other types of devices, see Section 10.3, “Attaching devices to
virtual machines”.
Each virtual function (VF) is able to provide the same or similar service as the original PCIe device.
For example, a single SR-IOV capable network device can present VFs to multiple VMs. While all of the
VFs use the same physical card, the same network connection, and the same network cable, each of the
VMs directly controls its own hardware network device, and uses no extra resources from the host.
Physical functions (PFs) - A PCIe function that provides the functionality of its device (for
example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV
capable device has one or more PFs.
Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each
VF is derived from a PF. The maximum number of VFs a device can have depends on the device
hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple
VFs assigned to it.
VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears
as a network card to a VM to which it is assigned, in the same way as a physical network card appears to
the host system.
Benefits
The primary advantages of using SR-IOV VFs rather than emulated devices are:
Improved performance
For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and
much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used
simultaneously on a single host, the performance benefits can be significant.
Disadvantages
To modify the configuration of a PF, you must first change the number of VFs exposed by the
PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VM
to which they are assigned.
In addition, VFIO-assigned devices require pinning of VM memory, which increases the memory
consumption of the VM and prevents the use of memory ballooning on the VM.
Additional resources
For a list of device types that support SR-IOV, see Section 10.9.3, “Supported devices for SR-
IOV assignment”.
Prerequisites
The CPU and the firmware of your host support the I/O Memory Management Unit (IOMMU).
If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O
(VT-d).
The host system uses Access Control Service (ACS) to provide direct memory access (DMA)
isolation for PCIe topology. Verify this with the system vendor.
For additional information, see Hardware Considerations for Implementing SR-IOV .
The physical network device supports SR-IOV. To verify if any network devices on your system
support SR-IOV, use the lspci -v command and look for Single Root I/O Virtualization (SR-
IOV) in the output.
# lspci -v
[...]
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
Memory at fcba0000 (32-bit, non-prefetchable) [size=128K]
[...]
Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
Kernel driver in use: igb
Kernel modules: igb
[...]
The host network interface you want to use for creating VFs is running. For example, to activate
the eth1 interface and verify it is running:
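A sketch, assuming the ip utility:
# ip link set eth1 up
# ip link show eth1
8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
   link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff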
For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS
and kernel. To do so:
a. If your host uses an Intel CPU, edit the /etc/default/grub file and add the intel_iommu=on
and iommu=pt parameters at the end of the GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel_dell-
per730-27-swap rd.lvm.lv=rhel_dell-per730-27/root rd.lvm.lv=rhel_dell-per730-
27/swap console=ttyS0,115200n81 intel_iommu=on iommu=pt"
b. Regenerate the GRUB configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
a. If your host uses an AMD CPU, edit the /etc/default/grub file and add the iommu=pt and
amd_iommu=on parameters at the end of the GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel_dell-
per730-27-swap rd.lvm.lv=rhel_dell-per730-27/root rd.lvm.lv=rhel_dell-per730-
27/swap console=ttyS0,115200n81 iommu=pt amd_iommu=on"
b. Regenerate the GRUB configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Procedure
1. Optional: Confirm the maximum number of VFs your network device can use. To do so, use the
following command and replace eth1 with your SR-IOV compatible network device.
# cat /sys/class/net/eth1/device/sriov_totalvfs
7
2. Create the VFs by writing their number to the sriov_numvfs file of the network interface, replacing:
VF-number with the number of VFs you want to create on the PF.
network-interface with the name of the network interface for which the VFs will be created.
The following example creates 2 VFs from the eth1 network interface:
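A sketch, assuming the standard sriov_numvfs sysfs interface:
# echo 2 > /sys/class/net/eth1/device/sriov_numvfs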
4. Make the created VFs persistent by creating a udev rule for the network interface you used to
create the VFs. For example, for the eth1 interface, create the /etc/udev/rules.d/eth1.rules file,
and add the following line:
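A sketch of the rule, assuming the eth1 PF uses the ixgbe driver referenced below:
ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"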
This ensures that the two VFs that use the ixgbe driver will automatically be available for the
eth1 interface when the host starts.
WARNING
Currently, this configuration does not work correctly when attempting to make
VFs persistent on Broadcom NetXtreme II BCM57810 adapters. In addition,
attaching VFs based on these adapters to Windows VMs is currently not
reliable.
5. Use the virsh nodedev-list command to verify that libvirt recognizes the added VF devices.
For example, the following output shows that VFs have been created for the 01:00.0 and 07:00.0
PFs from the previous example:
pci_0000_01_00_0
pci_0000_01_00_1
pci_0000_07_10_0
pci_0000_07_10_1
[...]
6. Obtain the bus, slot, and function values of a PF and one of its corresponding VFs. For
example, for pci_0000_01_00_0 and pci_0000_01_00_1:
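A sketch of obtaining the values with virsh nodedev-dumpxml (output abridged):
# virsh nodedev-dumpxml pci_0000_01_00_1
<device>
  <name>pci_0000_01_00_1</name>
  <capability type='pci'>
    <domain>0</domain>
    <bus>1</bus>
    <slot>0</slot>
    <function>1</function>
  </capability>
</device>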
7. Create a temporary XML file and add a configuration into it, using the bus, slot, and function
values you obtained in the previous step. For example:
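A sketch of such a file, mirroring the values from the previous step:
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</interface>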
8. Add the VF to a VM using the temporary XML file. For example, the following attaches a VF
saved in the /tmp/holdmyfunction.xml to a running testguest1 VM and ensures it is available
after the VM restarts:
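A sketch, assuming the temporary file is saved as /tmp/holdmyfunction.xml:
# virsh attach-device testguest1 /tmp/holdmyfunction.xml --live --config
Device attached successfully.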
Verification
If the procedure is successful, the guest operating system detects a new network interface card.
Networking devices
Prerequisites
Your host system is using the IBM Z hardware architecture and supports the FICON protocol.
The necessary kernel modules have been loaded on the host. To verify, use:
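For example, a sketch using lsmod; the output should include the modules listed below:
# lsmod | grep vfio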
vfio_ccw
vfio_mdev
vfio_iommu_type1
You have a spare DASD device for exclusive use by the VM, and you know the device’s
identifier.
This procedure uses 0.0.002c as an example. When performing the commands, replace 0.0.002c
with the identifier of your DASD device.
Procedure
1. Obtain the subchannel identifier of the DASD device:
# lscss -d 0.0.002c
Device Subchan. DevType CU Type Use PIM PAM POM CHPIDs
----------------------------------------------------------------------
0.0.002c 0.0.29a8 3390/0c 3990/e9 yes f0 f0 ff 02111221 00000000
In this example, the subchannel identifier is detected as 0.0.29a8. In the following commands of
this procedure, replace 0.0.29a8 with the detected subchannel identifier of your device.
2. If the lscss command in the previous step only displayed the header output and no device
information, perform the following steps:
a. Make the device visible to the host by removing it from the list of ignored devices:
# cio_ignore -r 0.0.002c
b. Edit the kernel command line of the host and add the device identifier with a ! mark to the
line that starts with cio_ignore=, if it is not present already.
cio_ignore=all,!condev,!0.0.002c
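3. Bind the I/O subchannel to the vfio_ccw passthrough driver. A sketch, assuming the driverctl utility:
# driverctl -b css set-override 0.0.29a8 vfio_ccw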
NOTE
This binds the 0.0.29a8 subchannel to vfio_ccw persistently, which means the
DASD will not be usable on the host. If you need to use the device on the host,
you must first remove the automatic binding to 'vfio_ccw' and rebind the
subchannel to the default driver:
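A sketch of the rebinding, also assuming driverctl:
# driverctl -b css unset-override 0.0.29a8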
4. Generate a UUID.
# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba12345a
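5. Create a vfio_ccw mediated device based on the UUID. A sketch, assuming the standard sysfs mdev interface; the exact path is an assumption:
# echo "30820a6f-b1a5-4503-91ca-0c10ba12345a" > /sys/bus/css/devices/0.0.29a8/mdev_supported_types/vfio_ccw-io/create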
7. Attach the mediated device to the VM. To do so, use the virsh edit utility to edit the XML
configuration of the VM, add the following section to the XML, and replace the uuid value with
the UUID you generated in the previous step.
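A sketch of the section, mirroring the verification output below:
<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba12345a'/>
  </source>
</hostdev>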
Verification
1. Obtain the identifier that libvirt assigned to the mediated DASD device. To do so, display the
XML configuration of the VM and look for a vfio-ccw device.
<domain>
[...]
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'>
<source>
<address uuid='10620d2f-ed4d-437b-8aff-beda461541f9'/>
</source>
<alias name='hostdev0'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
</hostdev>
[...]
</domain>
2. Log in to the guest operating system of the VM and confirm that the device is listed. For
example:
# chccwdev -e 0.0.0009
Setting device 0.0.0009 online
Done
Additional resources
CHAPTER 11. MANAGING STORAGE FOR VIRTUAL MACHINES
The following sections provide information about the different types of VM storage, how they work, and
how you can manage them using the CLI or the web console.
Storage pools
Storage volumes
Overview of VM storage
Furthermore, multiple VMs can share the same storage pool, allowing for better allocation of storage
resources.
A persistent storage pool survives a system restart of the host machine. You can use the
virsh pool-define command to create a persistent storage pool.
A transient storage pool only exists until the host reboots. You can use the virsh pool-
create command to create a transient storage pool.
Local storage pools are useful for development, testing, and small deployments that do not
require migration or have a large number of VMs.
On the host machine, a storage volume is referred to by its name and an identifier for the storage pool
from which it derives. On the virsh command line, this takes the form --pool storage_pool
volume_name.
For example, to display information about a volume named firstimage in the guest_images pool:
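A sketch of the command and output (values are illustrative):
# virsh vol-info --pool guest_images firstimage
Name:           firstimage
Type:           block
Capacity:       20.00 GiB
Allocation:     20.00 GiB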
You can use the libvirt API to query the list of volumes in a storage pool or to get information regarding
the capacity, allocation, and available storage in that storage pool. For storage pools that support it, you
can also use the libvirt API to create, clone, resize, and delete storage volumes. Furthermore, you can
use the libvirt API to upload data to storage volumes, download data from storage volumes, or wipe
data from storage volumes.
As a storage administrator:
You can define an NFS storage pool on the virtualization host to describe the exported server
path and the client target path. Consequently, libvirt can mount the storage either
automatically when libvirt is started or as needed while libvirt is running.
You can simply add the storage pool and storage volume to a VM by name. You do not need to
add the target path to the volume. Therefore, even if the target client path changes, it does not
affect the VM.
You can configure storage pools to autostart. When you do so, libvirt automatically mounts the
NFS shared disk on the directory which is specified when libvirt is started. libvirt mounts the
share on the specified directory, similar to the command mount
nfs.example.com:/path/to/share /vmdata.
You can query the storage volume paths using the libvirt API. These storage volumes are
basically the files present in the NFS shared disk. You can then copy these paths into the section
of a VM’s XML definition that describes the source storage for the VM’s block devices.
In the case of NFS, you can use an application that uses the libvirt API to create and delete
storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool
(the storage capacity of the share).
Note that not all storage pool types support creating and deleting volumes.
You can stop a storage pool when no longer required. Stopping a storage pool (pool-destroy)
undoes the start operation, in this case, unmounting the NFS share. The data on the share is not
modified by the destroy operation, despite what the name of the command suggests. For more
information, see man virsh.
The following is a list of libvirt storage pool types not supported by RHEL:
Procedure
Additional resources
For information on the available virsh pool-list options, use the virsh pool-list --help
command.
Procedure
1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.
2. Use the virsh vol-info command to view information about a specified storage volume.
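A sketch of both commands (pool and volume names are illustrative):
# virsh vol-list --pool RHEL-SP
 Name         Path
------------------------------------------------
 vm-disk1     /home/VirtualMachines/vm-disk1

# virsh vol-info --pool RHEL-SP vm-disk1
Name:           vm-disk1
Type:           file
Capacity:       20.00 GiB
Allocation:     14.45 GiB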
You can create one or more storage pools from available storage media. For a list of supported storage
pool types, see Supported storage pool types .
To create persistent storage pools, use the virsh pool-define-as and virsh pool-define
commands.
The virsh pool-define-as command places the options in the command line. The virsh pool-
define command uses an XML file for the pool options.
To create temporary storage pools, use the virsh pool-create and virsh pool-create-as
commands.
The virsh pool-create-as command places the options in the command line. The virsh pool-
create command uses an XML file for the pool options.
The following sections provide information on creating and assigning various types of storage pools
using the CLI:
Prerequisites
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.1, “Directory-based storage pool
parameters”.
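For example, a sketch of defining and building a pool that targets the /guest_images directory shown below (the pool name is illustrative):
# virsh pool-define-as guest_images_dir dir --target "/guest_images"
Pool guest_images_dir defined

# virsh pool-build guest_images_dir
Pool guest_images_dir built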
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Prerequisites
Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for
example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or
block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups
on it. This can result in system errors on the host.
However, if you require using an entire block device for the storage pool, Red Hat recommends
protecting any important partitions on the device from GRUB’s os-prober function. To do so,
edit the /etc/default/grub file and apply one of the following configurations:
Disable os-prober.
GRUB_DISABLE_OS_PROBER=true
Prevent os-prober from discovering a specific partition. For example:
GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
Back up any data on the selected storage device before creating a storage pool. Depending on
the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all
data currently stored on the disk device.
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.2, “Disk-based storage pool
parameters”.
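For example, a sketch of defining a disk-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as phy_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
Pool phy_disk defined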
NOTE
Building the target path is only necessary for disk-based, file system-based, and
logical storage pools. If libvirt detects that the source storage device’s data
format differs from the selected storage pool type, the build fails, unless the
overwrite option is specified.
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
When you want to create a storage pool on a file system that is not mounted, use the filesystem-based
storage pool. This storage pool is based on a given file-system mountpoint. You can use the virsh utility
to create filesystem-based storage pools.
Prerequisites
Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for
example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or
block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups
on it. This can result in system errors on the host.
However, if you require using an entire block device for the storage pool, Red Hat recommends
protecting any important partitions on the device from GRUB’s os-prober function. To do so,
edit the /etc/default/grub file and apply one of the following configurations:
Disable os-prober.
GRUB_DISABLE_OS_PROBER=true
Prevent os-prober from discovering a specific partition. For example:
GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.3, “Filesystem-based storage
pool parameters”.
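For example, a sketch of defining a filesystem-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
Pool guest_images_fs defined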
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
1. Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
2. Verify there is a lost+found directory in the target path on the file system, indicating that the
device is mounted.
# ls -la /guest_images
total 24
Prerequisites
Before you can create a GlusterFS-based storage pool on a host, prepare a Gluster server.
a. Obtain the IP address of the Gluster server by listing its status with the following command:
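A sketch, assuming the gluster CLI is available on the server:
# gluster volume status
Status of volume: gluster-vol1
[...]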
b. On the virtualization host, allow access to FUSE-mounted file systems, such as Gluster, by enabling the virt_use_fusefs SELinux boolean:
# setsebool virt_use_fusefs on
# getsebool virt_use_fusefs
virt_use_fusefs --> on
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.4, “GlusterFS-based storage
pool parameters”.
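For example, a sketch of defining a GlusterFS-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as --name Gluster_pool --type gluster --source-host 111.222.111.222 --source-name gluster-vol1 --source-path /
Pool Gluster_pool defined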
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Internet Small Computer Systems Interface (iSCSI) is an IP-based storage networking standard for
linking data storage facilities. If you want to have a storage pool on an iSCSI server, you can use the virsh
utility to create iSCSI-based storage pools.
Prerequisites
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.5, “iSCSI-based storage pool
parameters”.
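For example, a sketch of defining an iSCSI-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as --name iSCSI_pool --type iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path
Pool iSCSI_pool defined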
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
By default, a storage pool defined with the virsh command is not set to automatically start each
time libvirtd starts. Use the virsh pool-autostart command to configure the storage pool to
autostart.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Recommendations
Be aware of the following before creating an LVM-based storage pool:
libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
LVM-based storage pools are volume groups. You can create volume groups using the virsh
utility, but this way you can only have one device in the created volume group. To create a
volume group with multiple devices, use the LVM utility instead; see How to create a volume
group in Linux with LVM.
For more detailed information about volume groups, refer to the Red Hat Enterprise Linux
Logical Volume Manager Administration Guide.
LVM-based storage pools require a full disk partition. If you activate a new partition or device
using virsh commands, the partition will be formatted and all data will be erased. If you are using
a host’s existing volume group, as in these procedures, nothing will be erased.
Prerequisites
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.6, “LVM-based storage pool
parameters”.
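For example, a sketch of defining an LVM-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as guest_images_lvm logical --source-dev /dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm
Pool guest_images_lvm defined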
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
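For example, a sketch of the command and output (the pool name is illustrative):
# virsh pool-info guest_images_lvm
Name:           guest_images_lvm
State:          running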
Persistent: yes
Autostart: yes
Capacity: 458.39 GB
Allocation: 197.91 MB
Available: 458.20 GB
Prerequisites
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.7, “NFS-based storage pool
parameters”.
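For example, a sketch of defining an NFS-based pool, using parameter values matching the XML example in that section:
# virsh pool-define-as --name nfspool --type netfs --source-host file_server --source-path /home/net_mount --source-format nfs --target /var/lib/libvirt/images/nfspool
Pool nfspool defined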
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
11.3.8. Creating SCSI-based storage pools with vHBA devices using the CLI
If you want to have a storage pool on a Small Computer System Interface (SCSI) device, your host must
be able to connect to the SCSI device using a virtual host bus adapter (vHBA). You can then use the
virsh utility to create SCSI-based storage pools.
Prerequisites
Before creating a SCSI-based storage pool with vHBA devices, create a vHBA. For more
information, see Creating vHBAs.
Procedure
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Section 11.4.8, “Parameters for SCSI-based
storage pools with vHBA devices”.
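For example, a sketch of defining the pool, using parameter values matching the XML examples in that section:
# virsh pool-define-as vhbapool_host3 scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/by-path
Pool vhbapool_host3 defined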
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
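A sketch, assuming the configuration is saved in ~/guest_images.xml:
# virsh pool-define ~/guest_images.xml
Pool defined from guest_images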
Parameters
The following table provides a list of required parameters for the XML file for a directory-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /guest_images directory:
<pool type='dir'>
<name>dirpool</name>
<target>
<path>/guest_images</path>
</target>
</pool>
Additional resources
For more information on creating directory-based storage pools, see Section 11.3.1, “Creating directory-
based storage pools using the CLI”.
When you want to create or modify a disk-based storage pool using an XML configuration file, you must
include certain required parameters. See the following table for more information about these
parameters.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a disk-based storage pool.
Description: The path specifying the target device. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a disk-based storage pool:
<pool type='disk'>
<name>phy_disk</name>
<source>
<device path='/dev/sdb'/>
<format type='gpt'/>
</source>
<target>
<path>/dev</path>
</target>
</pool>
Additional resources
For more information on creating disk-based storage pools, see Section 11.3.2, “Creating disk-based
storage pools using the CLI”.
When you want to create or modify a filesystem-based storage pool using an XML configuration file, you
must include certain required parameters. See the following table for more information about these
parameters.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a filesystem-based
storage pool.
Description: The file system type, for example ext4.
XML:
<format type=fs_type />
</source>

Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>path-to-pool</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /dev/sdc1 partition:
<pool type='fs'>
<name>guest_images_fs</name>
<source>
<device path='/dev/sdc1'/>
<format type='auto'/>
</source>
<target>
<path>/guest_images</path>
</target>
</pool>
Additional resources
For more information on creating filesystem-based storage pools, see Section 11.3.3, “Creating
filesystem-based storage pools using the CLI”.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a GlusterFS-based
storage pool.
Description: The path on the Gluster server used for the storage pool.
XML:
<dir path=gluster-path />
</source>
Example
The following is an example of an XML file for a storage pool based on the Gluster file system at
111.222.111.222:
<pool type='gluster'>
<name>Gluster_pool</name>
<source>
<host name='111.222.111.222'/>
<dir path='/'/>
<name>gluster-vol1</name>
</source>
</pool>
Additional resources
For more information on creating GlusterFS-based storage pools, see Section 11.3.4, “Creating
GlusterFS-based storage pools using the CLI”.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an iSCSI-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>/dev/disk/by-path</path>
</target>
NOTE
The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-
sources-as iscsi command.
Example
The following is an example of an XML file for a storage pool based on the specified iSCSI device:
<pool type='iscsi'>
<name>iSCSI_pool</name>
<source>
<host name='server1.example.com'/>
<device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
Additional resources
For more information on creating iSCSI-based storage pools, see Section 11.3.5, “Creating iSCSI-
based storage pools using the CLI”.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an LVM-based storage
pool.
NOTE
If the logical volume group is made of multiple disk partitions, there may be multiple
source devices listed. For example:
<source>
<device path='/dev/sda1'/>
<device path='/dev/sdb3'/>
<device path='/dev/sdc2'/>
...
</source>
Example
The following is an example of an XML file for a storage pool based on the specified LVM:
<pool type='logical'>
<name>guest_images_lvm</name>
<source>
<device path='/dev/sdc'/>
<name>libvirt_lvm</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/libvirt_lvm</path>
</target>
</pool>
Additional resources
For more information on creating LVM-based storage pools, see Section 11.3.6, “Creating LVM-based
storage pools using the CLI”.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an NFS-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /home/net_mount directory
of the file_server NFS server:
<pool type='netfs'>
<name>nfspool</name>
<source>
<host name='file_server'/>
<format type='nfs'/>
<dir path='/home/net_mount'/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool</path>
</target>
</pool>
Additional resources
For more information on creating NFS-based storage pools, see Section 11.3.7, “Creating NFS-based
storage pools using the CLI”.
To create or modify an XML configuration file for a SCSI-based storage pool that uses a virtual host
bus adapter (vHBA) device, you must include certain required parameters in the XML configuration file.
See the following table for more information about the required parameters.
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a SCSI-based storage
pool with vHBA.
Table 11.8. Parameters for SCSI-based storage pools with vHBA devices
Description: The target path. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
IMPORTANT
When the <path> field is /dev/, libvirt generates a unique short device path for the
volume device path. For example, /dev/sdc. Otherwise, the physical host path is used. For
example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. The
unique short device path allows the same volume to be listed in multiple virtual machines
(VMs) by multiple storage pools. If the physical host path is used by multiple VMs,
duplicate device type warnings may occur.
NOTE
The parent attribute can be used in the <adapter> field to identify the physical HBA
parent from which the NPIV LUNs by varying paths can be used. This field, scsi_hostN, is
combined with the vports and max_vports attributes to complete the parent
identification. The parent, parent_wwnn, parent_wwpn, or parent_fabric_wwn
attributes provide varying degrees of assurance that after the host reboots the same
HBA is used.
If no parent is specified, libvirt uses the first scsi_hostN adapter that supports
NPIV.
If only the parent is specified, problems can arise if additional SCSI host adapters
are added to the configuration.
If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric
is selected, regardless of the scsi_hostN used.
Examples
The following are examples of XML files for SCSI-based storage pools with vHBA.
A storage pool that is the only storage pool on the HBA:
<pool type='scsi'>
<name>vhbapool_host3</name>
<source>
<adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
A storage pool that is one of several storage pools that use a single vHBA and uses the parent
attribute to identify the SCSI host device:
<pool type='scsi'>
<name>vhbapool_host3</name>
<source>
<adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1'
wwpn='5001a4ace3ee047d'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
Additional resources
For more information on creating SCSI-based storage pools with vHBA, see Section 11.3.8, “Creating
SCSI-based storage pools with vHBA devices using the CLI”.
Prerequisites
If you do not have an existing storage pool, create one. For more information, see
Section 11.3, “Creating and assigning storage pools for virtual machines using the CLI”
Procedure
1. Create a storage volume using the virsh vol-create-as command. For example, to create a 20
GB qcow2 volume based on the guest-images-fs storage pool:
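A sketch of the command, with the volume name matching the next step:
# virsh vol-create-as --pool guest-images-fs --name vm-disk1 --capacity 20GB --format qcow2
Vol vm-disk1 created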
Important: Specific storage pool types do not support the virsh vol-create-as command and
instead require specific processes to create storage volumes:
2. Create an XML file, and add the following lines in it. This file will be used to add the storage
volume as a disk to a VM.
This example specifies a virtual disk that uses the vm-disk1 volume, created in the previous
step, and sets the volume to be set up as disk hdk on an ide bus. Modify the respective
parameters as appropriate for your environment.
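A sketch of the XML, reflecting the volume, target, and bus named above:
<disk type='volume' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source pool='guest-images-fs' volume='vm-disk1'/>
  <target dev='hdk' bus='ide'/>
</disk>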
Important: With specific storage pool types, you must use different XML formats to describe a
storage volume disk.
3. Use the XML file to assign the storage volume as a disk to a VM. For example, to assign a disk
defined in ~/vm-disk1.xml to the testguest1 VM:
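A sketch of the command; the --config flag makes the change persistent:
# virsh attach-device --config testguest1 ~/vm-disk1.xml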
Verification
In the guest operating system of the VM, confirm that the disk image has become available as
an unformatted and unallocated disk.
Procedure
1. List the defined storage pools using the virsh pool-list command.
2. Stop the storage pool you want to delete using the virsh pool-destroy command.
3. Optional: For some types of storage pools, you can remove the directory where the storage
pool resides using the virsh pool-delete command. Note that to do so, the directory must be
empty.
4. Delete the definition of the storage pool using the virsh pool-undefine command.
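A sketch of the whole sequence, assuming a pool named Downloads (the name is illustrative):
# virsh pool-destroy Downloads
Pool Downloads destroyed

# virsh pool-delete Downloads
Pool Downloads deleted

# virsh pool-undefine Downloads
Pool Downloads has been undefined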
Verification
Prerequisites
Any virtual machine that uses the storage volume you want to delete is shut down.
Procedure
1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.
.bash_logout /home/VirtualMachines/.bash_logout
.bash_profile /home/VirtualMachines/.bash_profile
.bashrc /home/VirtualMachines/.bashrc
.git-prompt.sh /home/VirtualMachines/.git-prompt.sh
.gitconfig /home/VirtualMachines/.gitconfig
vm-disk1 /home/VirtualMachines/vm-disk1
2. Optional: Use the virsh vol-wipe command to wipe a storage volume. For example, to wipe a
storage volume named vm-disk1 associated with the storage pool RHEL-SP:
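A sketch of the command:
# virsh vol-wipe --pool RHEL-SP vm-disk1
Vol vm-disk1 wiped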
3. Use the virsh vol-delete command to delete a storage volume. For example, to delete a
storage volume named vm-disk1 associated with the storage pool RHEL-SP:
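A sketch of the command:
# virsh vol-delete --pool RHEL-SP vm-disk1
Vol vm-disk1 deleted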
Verification
Use the virsh vol-list command again to verify that the storage volume was deleted.
Attach disks to a VM .
Prerequisites
Procedure
Size - The current allocation and the total capacity of the storage pool.
2. Click the row of the storage whose information you want to see.
The row expands to reveal the Overview pane with detailed information about the selected
storage pool.
Target path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Persistent - Indicates whether or not the storage pool has a persistent configuration.
Autostart - Indicates whether or not the storage pool starts automatically when the system
boots up.
3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 6.2.5, “Viewing virtual machine disk information in the web
console”.
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
A virtual machine (VM) requires a file, directory, or storage device that can be used to create storage
volumes to store the VM image or act as additional storage. You can create storage pools from local or
network-based resources that you can then use to create the storage volumes.
To create storage pools using the RHEL web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
Type - The type of the storage pool. This can be a file-system directory, a network file
system, an iSCSI target, a physical disk drive, or an LVM volume group.
Target Path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Startup - Whether or not the storage pool starts when the host boots.
4. Click Create. The storage pool is created, the Create Storage Pool dialog closes, and the new
storage pool appears in the list of storage pools.
Additional resources
For more information about storage pools, see Understanding storage pools .
For instructions on viewing information about storage pools using the web console, see Viewing
storage pool information using the web console.
IMPORTANT
Unless explicitly specified, deleting a storage pool does not simultaneously delete the
storage volumes inside that pool.
To delete a storage pool using the RHEL web console, see the following procedure.
NOTE
If you want to temporarily deactivate a storage pool instead of deleting it, see
Deactivating storage pools using the web console.
Prerequisites
If you want to delete a storage volume along with the pool, you must first detach the disk from
the VM.
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool you want to delete.
The row expands to reveal the Overview pane with basic information about the selected storage
pool and controls for deactivating or deleting the storage pool.
3. Click Delete.
A confirmation dialog appears.
4. Optional: To delete the storage volumes inside the pool, select the check box in the dialog.
5. Click Delete.
The storage pool is deleted. If you had selected the checkbox in the previous step, the
associated storage volumes are deleted as well.
Additional resources
For more information about storage pools, see Understanding storage pools .
For instructions on viewing information about storage pools using the web console, see viewing
storage pool information using the web console.
When you deactivate a storage pool, no new volumes can be created in that pool. However, any virtual
machines (VMs) that have volumes in that pool will continue to run. This is useful for a number of
reasons; for example, you can limit the number of volumes that can be created in a pool to increase
system performance.
To deactivate a storage pool using the RHEL web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool you want to deactivate.
The row expands to reveal the Overview pane with basic information about the selected storage
pool and controls for deactivating and deleting the storage pool.
3. Click Deactivate.
The storage pool is deactivated.
Additional resources
For more information about storage pools, see Understanding storage pools .
For instructions on viewing information about storage pools using the web console, see Viewing
storage pool information using the web console.
To create a functioning virtual machine (VM) you require a local storage device assigned to the VM that
can store the VM image and VM-related data. You can create a storage volume in a storage pool and
assign it to a VM as a storage disk.
To create storage volumes using the web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool from which you want to create a storage
volume.
The row expands to reveal the Overview pane with basic information about the selected storage
pool.
3. Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volume tab appears with basic information about existing storage volumes, if any.
Format - The format of the storage volume. The supported types are qcow2 and raw.
6. Click Create.
The storage volume is created, the Create Storage Volume dialog closes, and the new storage
volume appears in the list of storage volumes.
Additional resources
For more information about storage volumes, see Understanding storage volumes .
For information about adding disks to VMs using the web console, see Adding new disks to
virtual machines using the web console.
To remove storage volumes using the RHEL web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool from which you want to remove a storage
volume.
The row expands to reveal the Overview pane with basic information about the selected storage
pool.
3. Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volume tab appears with basic information about existing storage volumes, if any.
Additional resources
For more information about storage volumes, see Understanding storage volumes .
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose disk information you want to see.
A new page opens with detailed information about the VM.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the
access to Writeable and shared.
Additional resources
For instructions on viewing information about all of the VMs to which the web console session is
connected, see Section 6.2.1, “Viewing a virtualization overview in the web console” .
For instructions on viewing information about the storage pools to which the web console
session is connected, see Section 6.2.2, “Viewing storage pool information using the web
console”.
For instructions on viewing basic information about a selected VM to which the web console
session is connected, see Section 6.2.3, “Viewing basic virtual machine information in the web
console”.
For instructions on viewing resource usage for a selected VM to which the web console session
is connected, see Section 6.2.4, “Viewing virtual machine resource usage in the web console” .
For instructions on viewing virtual network interface information about a selected VM to which
the web console session is connected, see Section 6.2.6, “Viewing and editing virtual network
interface information in the web console”.
11.7.8. Adding new disks to virtual machines using the web console
You can add new disks to virtual machines (VMs) by creating a new storage volume and attaching it to a
VM using the RHEL 8 web console.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM for which you want to create and attach a new
disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Pool - Select the storage pool from which the virtual disk will be created.
Name - Enter a name for the virtual disk that will be created.
Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created.
Format - Select the format for the virtual disk that will be created. The supported types are
qcow2 and raw.
Persistence - If checked, the virtual disk is persistent. If not checked, the virtual disk is
transient.
NOTE
6. Click Add.
The virtual disk is created and connected to the VM.
Additional resources
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 11.7.7, “Viewing virtual machine disk information in the web
console”.
For information on attaching existing disks to VMs, see Section 11.7.9, “Attaching existing disks
to virtual machines using the web console”.
For information on detaching disks from VMs, see Section 11.7.10, “Detaching disks from virtual
machines using the web console”.
11.7.9. Attaching existing disks to virtual machines using the web console
Using the web console, you can attach existing storage volumes as disks to a virtual machine (VM).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM for which you want to create and attach a new
disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Pool - Select the storage pool from which the virtual disk will be attached.
Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk
transient.
6. Click Add.
The selected virtual disk is attached to the VM.
Additional resources
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 11.7.7, “Viewing virtual machine disk information in the web
console”.
For information on creating new disks and attaching them to VMs, see Section 11.7.8, “Adding
new disks to virtual machines using the web console”.
For information on detaching disks from VMs, see Section 11.7.10, “Detaching disks from virtual
machines using the web console”.
11.7.10. Detaching disks from virtual machines using the web console
Using the web console, you can detach disks from virtual machines (VMs).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM from which you want to detach a disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
3. Click the Remove button next to the disk you want to detach from the VM. A Remove Disk
confirmation dialog box appears.
Additional resources
For instructions on viewing disk information about a selected VM to which the web console
session is connected, see Section 11.7.7, “Viewing virtual machine disk information in the web
console”.
For information on creating new disks and attaching them to VMs, see Section 11.7.8, “Adding
new disks to virtual machines using the web console”.
For information on attaching existing disks to VMs, see Section 11.7.9, “Attaching existing disks
to virtual machines using the web console”.
The following provides instructions for securing iSCSI-based storage pools with libvirt secrets.
NOTE
This procedure is required if a user_ID and password were defined when creating the
iSCSI target.
Prerequisites
Ensure that you have created an iSCSI-based storage pool. For more information, see
Section 11.3.5, “Creating iSCSI-based storage pools using the CLI”
Procedure
1. Create a libvirt secret file with a challenge-handshake authentication protocol (CHAP) user
name. For example:
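A sketch of such a file, for example named secret.xml. The iscsirhel7secret usage name is the one referenced later in this procedure:

<secret ephemeral='no' private='yes'>
    <description>Passphrase for the iSCSI example.com server</description>
    <usage type='iscsi'>
        <target>iscsirhel7secret</target>
    </usage>
</secret>

2. Define the libvirt secret with the virsh secret-define command:

# virsh secret-define secret.xml

3. Verify the UUID of the defined secret using the virsh secret-list command: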
# virsh secret-list
UUID Usage
-------------------------------------------------------------------
2d7891af-20be-4e5e-af83-190e8a922360 iscsi iscsirhel7secret
4. Assign a secret to the UUID in the output of the previous step using the virsh secret-set-value
command. This ensures that the CHAP username and password are in a libvirt-controlled secret
list. For example:
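# MYSECRET=`printf %s "password123" | base64`
# virsh secret-set-value 2d7891af-20be-4e5e-af83-190e8a922360 $MYSECRET

Here, password123 is a placeholder for the CHAP password defined on the iSCSI target.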
5. Add an authentication entry in the storage pool’s XML file using the virsh edit command, and
add an <auth> element, specifying authentication type, username, and secret usage.
For example:
<pool type='iscsi'>
<name>iscsirhel7pool</name>
<source>
<host name='192.168.122.1'/>
<device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
<auth type='chap' username='redhat'>
<secret usage='iscsirhel7secret'/>
</auth>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
NOTE
The <auth> sub-element exists in different locations within the virtual machine’s
<pool> and <disk> XML elements. For a <pool>, <auth> is specified within the
<source> element, as this describes where to find the pool sources, since
authentication is a property of some pool sources (iSCSI and RBD). For a <disk>,
which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is
a property of the disk. In addition, the <auth> sub-element for a disk differs from
that of a storage pool.
<auth username='redhat'>
<secret type='iscsi' usage='iscsirhel7secret'/>
</auth>
6. To activate the changes, activate the storage pool. If the pool has already been started, stop and
restart the storage pool:
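# virsh pool-destroy iscsirhel7pool
# virsh pool-start iscsirhel7pool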
Procedure
1. Locate the HBAs on your host system, using the virsh nodedev-list --cap vports command.
The following example shows a host that has two HBAs that support vHBA:
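# virsh nodedev-list --cap vports
scsi_host3
scsi_host4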
2. View the HBA’s details, using the virsh nodedev-dumpxml HBA_device command.
The output from the command lists the <name>, <wwnn>, and <wwpn> fields, which are used
to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For
example:
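# virsh nodedev-dumpxml scsi_host3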
<device>
<name>scsi_host3</name>
<path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path>
<parent>pci_0000_10_00_0</parent>
<capability type='scsi_host'>
<host>3</host>
<unique_id>0</unique_id>
<capability type='fc_host'>
<wwnn>20000000c9848140</wwnn>
<wwpn>10000000c9848140</wwpn>
<fabric_wwn>2002000573de9a81</fabric_wwn>
</capability>
<capability type='vport_ops'>
<max_vports>127</max_vports>
<vports>0</vports>
</capability>
</capability>
</device>
In this example, the <max_vports> value shows there are a total of 127 virtual ports available for
use in the HBA configuration. The <vports> value shows the number of virtual ports currently
being used. These values update after creating a vHBA.
3. Create an XML file similar to one of the following for the vHBA host. In these examples, the file
is named vhba_host3.xml.
This example uses scsi_host3 to describe the parent vHBA.
<device>
<parent>scsi_host3</parent>
<capability type='scsi_host'>
<capability type='fc_host'>
</capability>
</capability>
</device>
<device>
<name>vhba</name>
<parent wwnn='20000000c9848140' wwpn='10000000c9848140'/>
<capability type='scsi_host'>
<capability type='fc_host'>
</capability>
</capability>
</device>
NOTE
The WWNN and WWPN values must match those in the HBA details seen in the
previous step.
The <parent> field specifies the HBA device to associate with this vHBA device. The details in
the <device> tag are used in the next step to create a new vHBA device for the host. For more
information on the nodedev XML format, see the libvirt upstream pages.
NOTE
The virsh command does not provide a way to define the parent_wwnn,
parent_wwpn, or parent_fabric_wwn attributes.
4. Create a vHBA based on the XML file created in the previous step using the virsh nodedev-create command.
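# virsh nodedev-create vhba_host3.xml
Node device scsi_host5 created from vhba_host3.xml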
Verification
Verify the new vHBA’s details (scsi_host5) using the virsh nodedev-dumpxml command:
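For example (the WWNN and WWPN of the new vHBA shown here are auto-generated examples):

# virsh nodedev-dumpxml scsi_host5
<device>
  <name>scsi_host5</name>
  <parent>scsi_host3</parent>
  <capability type='scsi_host'>
    <host>5</host>
    <capability type='fc_host'>
      <wwnn>5001a4a93526d0a1</wwnn>
      <wwpn>5001a4ace3ee047d</wwpn>
      <fabric_wwn>2002000573de9a81</fabric_wwn>
    </capability>
  </capability>
</device>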
Additional resources
For information about creating SCSI-based storage pools with vHBA devices, see Section 11.3.8,
“Creating SCSI-based storage pools with vHBA devices using the CLI”.
You can detach the GPU from the host and pass full control of the GPU directly to the VM.
You can create multiple mediated devices from a physical GPU, and assign these devices as
virtual GPUs (vGPUs) to multiple guests. This is currently only supported on selected NVIDIA
GPUs, and only one mediated device can be assigned to a single guest.
NOTE
If you are looking for information about assigning a virtual GPU, see Managing NVIDIA
vGPU devices.
Prerequisites
NOTE
Procedure
b. Prevent the host’s graphics driver from using the GPU. To do so, use the GPU’s PCI ID with
the pci-stub driver.
For example, the following command prevents the driver from binding to the GPU with the
10de:11fa vendor-device ID:
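# grubby --args="pci-stub.ids=10de:11fa" --update-kernel DEFAULT

Afterwards, reboot the host for the change to take effect.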
2. Optional: If certain GPU functions, such as audio, cannot be passed through to the VM due to
support limitations, you can modify the driver bindings of the endpoints within an IOMMU group
to pass through only the necessary GPU functions.
a. Convert the GPU settings to XML and note the PCI address of the endpoints that you want
to prevent from attaching to the host drivers.
To do so, convert the GPU’s PCI bus address to a libvirt-compatible format by adding the
pci_ prefix to the address, and converting the delimiters to underscores.
For example, the following command displays the XML configuration of the GPU attached
at the 0000:02:00.0 bus address.
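# virsh nodedev-dumpxml pci_0000_02_00_0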
<device>
<name>pci_0000_02_00_0</name>
<path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path>
<parent>pci_0000_00_03_0</parent>
<driver>
<name>pci-stub</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>2</bus>
<slot>0</slot>
<function>0</function>
a. Create an XML configuration file for the GPU by using the PCI bus address.
For example, you can create the following XML file, GPU-Assign.xml, by using parameters
from the GPU’s bus address.
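<hostdev mode='subsystem' type='pci' managed='yes'>
 <source>
  <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
 </source>
</hostdev>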
NOTE
Verification
The device appears under the <devices> section in VM’s XML configuration. For more
information, see Sample virtual machine XML configuration.
IMPORTANT
Assigning a physical GPU to VMs, with or without using mediated devices, makes it
impossible for the host to use the GPU.
Prerequisites
Your GPU supports vGPU mediated devices. For an up-to-date list of NVIDIA GPUs that
support creating vGPUs, see the NVIDIA GPU Software Documentation.
If you do not know which GPU your host is using, install the lshw package and use the lshw -
C display command. The following example shows the system is using an NVIDIA Tesla P4
GPU, compatible with vGPU.
# lshw -C display
*-display
description: 3D controller
product: GP104GL [Tesla P4]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress cap_list
configuration: driver=vfio-pci latency=0
resources: irq:16 memory:f6000000-f6ffffff memory:e0000000-efffffff
memory:f0000000-f1ffffff
Procedure
1. Download the NVIDIA vGPU drivers and install them on your system. For instructions, see the
NVIDIA documentation.
2. Prevent the nouveau kernel driver from loading the GPU. For example, add the following lines
to a modprobe configuration file on the host, such as /etc/modprobe.d/nvidia.conf (the file name
is an example):
blacklist nouveau
options nouveau modeset=0
3. Regenerate the initial ramdisk for the current kernel, then reboot.
# dracut --force
# reboot
4. Check that the kernel has loaded the nvidia_vgpu_vfio module and that the nvidia-vgpu-
mgr.service service is running.
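# lsmod | grep nvidia_vgpu_vfio
nvidia_vgpu_vfio 45011 0
# systemctl status nvidia-vgpu-mgr.service
nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-11-16 10:31:07 CET; 5min ago
5. Generate a device UUID for the vGPU, for example by using the uuidgen utility: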
# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba58692a
6. Create a mediated device from the GPU hardware that you detected in the prerequisites, and
assign the generated UUID to the device.
The following example shows how to create a mediated device of the nvidia-63 vGPU type on
an NVIDIA Tesla P4 card that runs on the 0000:01:00.0 PCI bus:
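A possible form of the commands, using the mdevctl utility and the UUID generated in the previous step:

# mdevctl start -u 30820a6f-b1a5-4503-91ca-0c10ba58692a -p 0000:01:00.0 --type nvidia-63
# mdevctl define --auto --uuid 30820a6f-b1a5-4503-91ca-0c10ba58692a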
NOTE
For the vGPU type values for specific GPU devices, see the Virtual GPU software
documentation.
8. Attach the mediated device to the VM with which you want to share the vGPU resources. To do
so, add the following lines, along with the previously generated UUID, to the <devices/> section
of the VM’s XML configuration.
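<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>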
9. For full functionality of the vGPU mediated devices to be available on the assigned VMs, set up
NVIDIA vGPU guest software licensing on the VMs. For further information and instructions, see
the NVIDIA Virtual GPU Software License Server User Guide.
Verification
List the active mediated devices on your host. If the output displays a defined device with the
UUID used in the procedure, NVIDIA vGPU has been configured correctly. For example:
# mdevctl list
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63
30820a6f-b1a5-4503-91ca-0c10ba58692a 0000:01:00.0 nvidia-63 (defined)
Additional resources
For more information on using the mdevctl utility, use man mdevctl.
Prerequisites
The VM from which you want to remove the device is shut down.
Procedure
1. Obtain the UUID of the mediated device that you want to remove. To do so, use the mdevctl
list command:
# mdevctl list
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63 (defined)
30820a6f-b1a5-4503-91ca-0c10ba58692a 0000:01:00.0 nvidia-63 (defined)
2. Stop the running instance of the mediated vGPU device. To do so, use the mdevctl stop
command with the UUID of the device. For example, to stop the 30820a6f-b1a5-4503-91ca-
0c10ba58692a device:
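# mdevctl stop -u 30820a6f-b1a5-4503-91ca-0c10ba58692a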
3. Remove the device from the XML configuration of the VM. To do so, use the virsh edit utility to
edit the XML configuration of the VM, and remove the mdev’s configuration segment. The
segment will look similar to the following:
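<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>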
Note that stopping and detaching the mediated device does not delete it, but rather keeps it as
defined. As such, you can restart and attach the device to a different VM.
Verification
If you only stopped and detached the device, list the active mediated devices and the defined
mediated devices.
# mdevctl list
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63 (defined)
# mdevctl list --defined
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63 auto (active)
30820a6f-b1a5-4503-91ca-0c10ba58692a 0000:01:00.0 nvidia-63 manual
If the first command does not display the device but the second command does, the procedure
was successful.
If you also deleted the device, the second command should not display the device.
# mdevctl list
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63 (defined)
# mdevctl list --defined
85006552-1b4b-45ef-ad62-de05be9171df 0000:01:00.0 nvidia-63 auto (active)
Additional resources
For more information on using the mdevctl utility, use man mdevctl.
Prerequisites
Procedure
To see the available vGPU types on your host, use the mdevctl types command.
For example, the following shows the information for a system that uses a physical Tesla T4
card under the 0000:41:00.0 PCI bus:
# mdevctl types
0000:41:00.0
nvidia-222
Available instances: 0
Device API: vfio-pci
Name: GRID T4-1B
Description: num_heads=4, frl_config=45, framebuffer=1024M,
max_resolution=5120x2880, max_instance=16
nvidia-223
Available instances: 0
Device API: vfio-pci
Name: GRID T4-2B
Description: num_heads=4, frl_config=45, framebuffer=2048M,
max_resolution=5120x2880, max_instance=8
nvidia-224
Available instances: 0
Device API: vfio-pci
Name: GRID T4-2B4
Description: num_heads=4, frl_config=45, framebuffer=2048M,
max_resolution=5120x2880, max_instance=8
nvidia-225
Available instances: 0
Device API: vfio-pci
Name: GRID T4-1A
Description: num_heads=1, frl_config=60, framebuffer=1024M,
max_resolution=1280x1024, max_instance=16
[...]
To see the active vGPU devices on your host, including their types, UUIDs, and PCI buses of
parent devices, use the mdevctl list command:
# mdevctl list
85006552-1b4b-45ef-ad62-de05be9171df 0000:41:00.0 nvidia-223
83c32df7-d52e-4ec1-9668-1f3c7e4df107 0000:41:00.0 nvidia-223 (defined)
Additional resources
For more information on using the mdevctl utility, use man mdevctl.
HP-RGS - Note that it is currently not possible to use HP-RGS with RHEL 8 VMs.
Mechdyne TGX - Note that it is currently not possible to use Mechdyne TGX with Windows
Server 2016 VMs.
NICE DCV - When using this desktop streaming service, use fixed resolution settings. Using
dynamic resolution in some cases results in a black screen. In addition, it is currently not possible
to use NICE DCV with RHEL 8 VMs.
CHAPTER 13. CONFIGURING VIRTUAL MACHINE NETWORK CONNECTIONS
You can enable the VMs on your host to be discovered and connected to by locations outside
the host, as if the VMs were on the same network as the host.
You can partially or completely isolate a VM from inbound network traffic to increase its security
and minimize the risk of any problems with the VM impacting the host.
The following sections explain the various types of VM network configuration and provide instructions
for setting up selected VM network configurations.
The following figure shows a virtual network switch connecting two VMs to the network:
From the perspective of a guest operating system, a virtual network connection is the same as a physical
network connection. Host machines view virtual network switches as network interfaces. When the
libvirtd service is first installed and started, it creates virbr0, the default network interface for VMs.
To view information about this interface, use the ip utility on the host.
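For example (the MAC address shown is an example):

$ ip addr show virbr0
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:0d:84:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0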
By default, all VMs on a single host are connected to the same NAT-type virtual network, named
default, which uses the virbr0 interface. For details, see Section 13.1.2, “Virtual networking default
configuration”.
For basic outbound-only network access from VMs, no additional network setup is usually needed,
because the default network is installed along with the libvirt package, and is automatically started when
the libvirtd service is started.
If a different VM network functionality is needed, you can create additional virtual networks and network
interfaces and configure your VMs to use them. In addition to the default NAT, these networks and
interfaces can be configured to use one of the following modes:
Routed mode
Bridged mode
Isolated mode
Open mode
VMs on the network are visible to the host and other VMs on the host, but the network traffic is
affected by the firewalls in the guest operating system’s network stack and by the libvirt
network filtering rules attached to the guest interface.
VMs on the network can connect to locations outside the host but are not visible to them.
Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.
Add network interfaces to virtual machines , and disconnect or delete the interfaces.
13.2.1. Viewing and editing virtual network interface information in the web console
Using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected
virtual machine (VM):
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Type - The type of network interface for the VM. The types include virtual network, bridge
to LAN, and direct attachment.
NOTE
Source - The source of the network interface. This is dependent on the network type.
3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings
dialog opens.
NOTE
Changes to the virtual network interface settings take effect only after restarting
the VM.
Additionally, the MAC address can only be modified when the VM is shut off.
13.2.2. Adding and connecting virtual network interfaces in the web console
Using the RHEL 8 web console, you can create a virtual network interface and connect a virtual machine
(VM) to it.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
3. Click Plug in the row of the virtual network interface you want to connect.
The selected virtual network interface connects to the VM.
13.2.3. Disconnecting and removing virtual network interfaces in the web console
Using the RHEL 8 web console, you can disconnect the virtual network interfaces connected to a
selected virtual machine (VM).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
3. Click Unplug in the row of the virtual network interface you want to disconnect.
The selected virtual network interface disconnects from the VM.
If you require a VM to appear on the same external network as the hypervisor, you must use bridged
mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical
network device. To use the command-line interface for this, follow the instructions below.
Prerequisites
The IP configuration of the hypervisor. This varies depending on the network connection of the
host. As an example, this procedure uses a scenario where the host is connected to the network
using an ethernet cable, and the host's physical NIC MAC address is assigned to a static IP on a
DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, use the ip addr utility:
# ip addr
[...]
enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.148/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s25
Procedure
1. Create and set up a bridge connection for the physical interface on the host. For instructions,
see Configuring a network bridge.
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of
the physical ethernet interface to the bridge interface.
2. Modify the VM’s network to use the created bridged interface. For example, the following sets
testguest to use bridge0.
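# virt-xml testguest --edit --network bridge=bridge0
Domain 'testguest' defined successfully.
3. Start the VM:
# virsh start testguest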
4. In the guest operating system, adjust the IP and DHCP settings of the system’s network
interface as if the VM was another physical system in the same network as the hypervisor.
The specific steps for this will differ depending on the guest OS used by the VM. For example, if
the guest OS is RHEL 8, see Configuring an Ethernet connection.
Verification
1. Ensure the newly created bridge is running and contains both the host’s physical interface and
the interface of the VM.
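For example:

# ip link show master bridge0
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] master bridge0 state UP mode DEFAULT [...]
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] master bridge0 state UNKNOWN mode DEFAULT [...]
2. Verify that the VM is visible and reachable from outside the host: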
a. In the guest operating system, obtain the network ID of the system. For example, if it is a
Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
b. From an external system connected to the local network, connect to the VM using the
obtained ID.
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2019
root~#
Additional resources
For instructions on creating an externally visible VM using the web console, see Section 13.4.1,
“Configuring externally visible virtual machines using the web console”.
For additional information on bridged mode, see Section 13.5.3, “Virtual networking in bridged
mode”.
In certain situations, such as when using a client-to-site VPN while the VM is hosted on the
client, using bridged mode to make your VMs available to external locations is not possible.
To work around this problem, you can set a destination NAT for the VM. For details, see the
Configuring and managing networking document.
13.3.2. Isolating virtual machines from each other using the command-line interface
To prevent a virtual machine (VM) from communicating with other VMs on your host, for example to
avoid data sharing or to increase system security, you can completely isolate the VM from host-side
network traffic.
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual
bridge on the host. This ensures that the VM can use the host’s NIC for connecting to outside networks,
as well as to other VMs on the host. This is a generally secure connection, but in some cases, connectivity
to the other VMs may be a security or data privacy hazard. In such situations, you can isolate the VM by
using direct macvtap connection in private mode instead of the default network.
In private mode, the VM is visible to external systems and can receive a public IP on the host’s subnet,
but the VM and the host cannot access each other, and the VM is also not visible to other VMs on the
host.
For instructions to set up macvtap private mode on your VM using the CLI, see below.
Prerequisites
The name of the host interface that you want to use for the macvtap connection. The interface
you must select will vary depending on your use case and the network configuration on your
host. As an example, this procedure uses the host’s physical ethernet interface.
To obtain the name of the targeted interface:
$ ip addr
[...]
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel
state DOWN group default qlen 1000
link/ether 54:e1:ad:42:70:45 brd ff:ff:ff:ff:ff:ff
[...]
Procedure
Use the selected interface to set up private macvtap on the selected VM. The following
example configures macvtap in private mode on the enp0s31f6 interface for the VM named
panic-room.
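# virt-xml panic-room --edit --network type=direct,source=enp0s31f6,source_mode=private
Domain 'panic-room' defined successfully.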
Verification
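1. Start the VM:
# virsh start panic-room
2. List the interface statistics for the VM:
# virsh domstats panic-room --interface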
net.0.name=macvtap0
net.0.rx.bytes=0
net.0.rx.pkts=0
net.0.rx.errs=0
net.0.rx.drop=0
net.0.tx.bytes=0
net.0.tx.pkts=0
net.0.tx.errs=0
net.0.tx.drop=0
If the command displays similar output, the VM has been isolated successfully.
Additional resources
For instructions on isolating a VM using the web console, see Section 13.4.2, “Isolating virtual
machines from each other using the web console”.
For additional information about macvtap private mode, see Section 13.5.6, “Direct attachment
of the virtual network device”.
For additional security measures that you can set on a VM, see Chapter 15, Securing virtual
machines.
13.4.1. Configuring externally visible virtual machines using the web console
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual
bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for
connecting to outside networks, but the VM is not reachable from external systems.
If you require a VM to appear on the same external network as the hypervisor, you must use bridged
mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical
network device. To use the RHEL 8 web console for this, follow the instructions below.
Prerequisites
The IP configuration of the hypervisor. This varies depending on the network connection of the
host. As an example, this procedure uses a scenario where the host is connected to the network
using an ethernet cable, and the host's physical NIC MAC address is assigned to a static IP on a
DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, go to the Networking tab in the web
console, and see the Interfaces section.
Procedure
1. Create and set up a bridge connection for the physical interface on the host. For instructions,
see Configuring network bridges in the web console.
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of
the physical ethernet interface to the bridge interface.
2. Modify the VM’s network to use the bridged interface. In the Network Interfaces tab of the VM:
c. Click Add.
d. Optional: Click Unplug for all the other interfaces connected to the VM.
4. In the guest operating system, adjust the IP and DHCP settings of the system’s network
interface as if the VM was another physical system in the same network as the hypervisor.
The specific steps for this will differ depending on the guest OS used by the VM. For example, if
the guest OS is RHEL 8, see Configuring an Ethernet connection.
Verification
1. In the Networking tab of the host’s web console, click the row with the newly created bridge to
ensure it is running and contains both the host’s physical interface and the interface of the VM.
a. In the guest operating system, obtain the network ID of the system. For example, if it is a
Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
b. From an external system connected to the local network, connect to the VM using the
obtained ID.
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2019
root~#
Additional resources
For instructions on creating an externally visible VM using the CLI, see Section 13.3.1,
“Configuring externally visible virtual machines using the command-line interface”.
For additional information on bridged mode, see Section 13.5.3, “Virtual networking in bridged
mode”.
In certain situations, such as when using a client-to-site VPN while the VM is hosted on the
client, using bridged mode to make your VMs available to external locations is not possible.
To work around this problem, you can set a destination NAT for the VM. For details, see the
Configuring and managing networking document.
13.4.2. Isolating virtual machines from each other using the web console
To prevent a virtual machine (VM) from communicating with other VMs on your host, for example to
avoid data sharing or to increase system security, you can completely isolate the VM from host-side
network traffic.
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual
bridge on the host. This ensures that the VM can use the host’s NIC for connecting to outside networks,
as well as to other VMs on the host. This is a generally secure connection, but in some cases, connectivity
to the other VMs may be a security or data privacy hazard. In such situations, you can isolate the VM by
using direct macvtap connection in private mode instead of the default network.
In private mode, the VM is visible to external systems and can receive a public IP on the host’s subnet,
but the VM and the host cannot access each other, and the VM is also not visible to other VMs on the
host.
For instructions to set up macvtap private mode on your VM using the web console, see below.
Prerequisites
Procedure
1. In the Virtual Machines pane, click the row with the virtual machine you want to isolate.
A pane with the basic information about the VM opens.
3. Click Edit.
The Virtual Machine Interface Settings dialog opens.
Verification
2. In the Terminal pane of the web console, list the interface statistics for the VM. For example, to
view the network interface traffic for the panic-room VM:
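# virsh domstats panic-room --interface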
If the command displays similar output, the VM has been isolated successfully.
Additional resources
For instructions on isolating a VM using the command-line, see Section 13.3.2, “Isolating virtual
machines from each other using the command-line interface”.
For additional information about macvtap private mode, see Section 13.5.6, “Direct attachment
of the virtual network device”.
For additional security measures that you can set on a VM, see Chapter 15, Securing virtual
machines.
WARNING
Virtual network switches use NAT configured by firewall rules. Editing these rules
while the switch is running is not recommended, because incorrect rules may result
in the switch being unable to communicate.
Common topologies that use routed mode include DMZs and virtual server hosting.
DMZ
You can create a network where one or more nodes are placed in a controlled sub-network for
security reasons. Such a sub-network is known as a demilitarized zone (DMZ).
Host machines in a DMZ typically provide services to WAN (external) host machines as well as LAN
(internal) host machines. Since this requires them to be accessible from multiple locations, and
considering that these locations are controlled and operated in different ways based on their security
and trust level, routed mode is the best configuration for this environment.
In bridged mode, the VM appears within the same subnet as the host machine. All other physical
machines on the same physical network can detect the VM and access it.
Note that if network bonding is used together with bridged mode, only the following bonding
modes work reliably with VMs:
mode 1
mode 2
mode 4
In contrast, using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that media-
independent interface (MII) monitoring should be used to monitor bonding modes, as Address
Resolution Protocol (ARP) monitoring does not work correctly.
For more information on bonding modes, refer to the Red Hat Knowledgebase.
Common scenarios
The most common use cases for bridged mode include:
Deploying VMs in an existing network alongside host machines, making the difference between
virtual and physical machines invisible to the end user.
Deploying VMs without making any changes to existing physical network configuration settings.
Deploying VMs that must be easily accessible to an existing physical network.
Placing VMs on a physical network where they must access DHCP services.
Connecting VMs to an existing network where virtual LANs (VLANs) are used.
Additional resources
For instructions on configuring your VMs to use bridged mode, see Configuring externally visible
virtual machines using the command-line interface or Configuring externally visible virtual
machines using the web console.
In this mode, all packets are sent to the external switch and will only be delivered to a target VM on the
same host machine if they are sent through an external router or gateway and these send them back to
the host. Private mode can be used to prevent the individual VMs on a single host from communicating
with each other.
Additional resources
For instructions on configuring your VMs to use macvtap in private mode, see Isolating virtual
machines from each other using the command-line interface or Isolating virtual machines from
each other using the web console.
WARNING
These procedures are provided only as an example. Ensure that you have sufficient
backups before proceeding.
Prerequisites
dnsmasq
Cobbler server
Procedure
7. Edit the <ip> element to include the appropriate address, network mask, DHCP address range,
and boot file, where BOOT_FILENAME is the name of the boot image file.
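For example (the addresses and ranges below are examples matching the default network):

<ip address='192.168.122.1' netmask='255.255.255.0'>
   <tftp root='/var/lib/tftpboot'/>
   <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <bootp file='BOOT_FILENAME'/>
   </dhcp>
</ip>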
Verification
# virsh net-list
Name State Autostart Persistent
---------------------------------------------------
default active no no
Additional resources
For more information about configuring TFTP and DHCP on a PXE server, see Preparing to
install from the network using PXE.
Prerequisites
A PXE boot server is set up on the virtual network as described in Section 13.6.1, “Setting up a
PXE boot server on a virtual network”.
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE source available
on the default virtual network, into a new 10 GB qcow2 image file:
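# virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10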
Alternatively, you can manually edit the XML configuration file of an existing VM:
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='network'/>
<boot dev='hd'/>
</os>
ii. Ensure the guest network is configured to use your virtual network:
<interface type='network'>
<mac address='52:54:00:66:79:14'/>
<source network='default'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Verification
Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from
a boot image available on the PXE server.
Prerequisites
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE source available
on the breth0 bridged network, into a new 10 GB qcow2 image file:
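# virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10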
Alternatively, you can manually edit the XML configuration file of an existing VM:
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='network'/>
<boot dev='hd'/>
</os>
<interface type='bridge'>
<mac address='52:54:00:5a:ad:cb'/>
<source bridge='breth0'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Verification
Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from
a boot image available on the PXE server.
Additional resources
For more information about bridged networking, see Configuring a network bridge.
Specific network interface cards can be attached to VMs as SR-IOV devices, which increases
their performance. For details, see Section 10.9, “Managing SR-IOV devices” .
Prerequisites
A directory that you want to share with your VMs. If you do not want to share any of your existing
directories, create a new one, for example named shared-files.
# mkdir shared-files
The host is visible and reachable over a network for the VM. This is generally the case if the VM
is connected using the NAT and bridge type of virtual networks. However, for the macvtap
connection, you must first set up the macvlan feature on the host. To do so:
1. Create a network device file in the host’s /etc/systemd/network/ directory, for example
called vm-macvlan.netdev.
# vim /etc/systemd/network/vm-macvlan.netdev
2. Edit the network device file to have the following content. You can replace vm-macvlan
with the name you chose for your network device.
[NetDev]
Name=vm-macvlan
Kind=macvlan
[MACVLAN]
Mode=bridge
3. Create a network configuration file for your macvlan network device, for example vm-
macvlan.network.
# vim /etc/systemd/network/vm-macvlan.network
4. Edit the network configuration file to have the following content. You can replace vm-
macvlan with the name you chose for your network device.
[Match]
Name=_vm-macvlan_
[Network]
IPForward=yes
Address=192.168.250.33/24
Gateway=192.168.250.1
DNS=192.168.250.1
5. Create a network configuration file for your physical network interface. For example, if your
interface is enp4s0:
# vim /etc/systemd/network/enp4s0.network
If you are unsure what interface name to use, you can use the ifconfig command on your
host to obtain the list of active network interfaces.
6. Edit the physical network configuration file to make the physical network a part of the
macvlan interface, in this case vm-macvlan:
[Match]
Name=enp4s0
[Network]
MACVLAN=vm-macvlan
Optional: For improved security, ensure your VMs are compatible with NFS version 4 or later.
Procedure
1. On the host, export a directory with the files you want to share as a network file system (NFS).
a. Obtain the IP address of each virtual machine you want to share files with. The following
example obtains the IPs of testguest1 and testguest2.
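A possible form of the commands, using the virsh domifaddr utility (the addresses shown are examples):

# virsh domifaddr testguest1
Name       MAC address          Protocol     Address
----------------------------------------------------------------
vnet0      52:54:00:32:b4:f2    ipv4         192.168.124.220/24

# virsh domifaddr testguest2
Name       MAC address          Protocol     Address
----------------------------------------------------------------
vnet1      52:54:00:91:ab:c3    ipv4         192.168.124.221/24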
b. Edit the /etc/exports file on the host and add a line that includes the directory you want to
share, IPs of VMs you want to share with, and sharing options.
For example, the following shares the /usr/local/shared-files directory on the host with
testguest1 and testguest2, and enables the VMs to edit the content of the directory:
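/usr/local/shared-files/ 192.168.124.220(rw,sync) 192.168.124.221(rw,sync)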
c. Start and enable the NFS server on the host:
# systemctl enable --now nfs-server
d. Apply the updated export configuration:
# exportfs -a
e. Obtain the IP address of the host system. This will be used for mounting the shared
directory on the VMs later.
# ip addr
[...]
5: virbr0: [BROADCAST,MULTICAST,UP,LOWER_UP] mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
[...]
Note that the relevant network is the one used by the VMs you want to share files with for
connecting to the host. Usually, this is virbr0.
2. On the guest OS of a VM specified in the /etc/exports file, mount the exported file system.
a. Create a directory you want to use as a mount point for the shared file system, for example
/mnt/host-share:
# mkdir /mnt/host-share
b. Mount the directory exported by the host on the mount point. This example mounts the
/usr/local/shared-files directory exported by the 192.168.124.1 host on /mnt/host-share in
the guest:
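# mount 192.168.124.1:/usr/local/shared-files /mnt/host-share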
Verification
To verify the mount has succeeded, access and explore the shared directory on the mount
point:
# cd /mnt/host-share
# ls
shared-file1 shared-file2 shared-file3
Additional resources
For efficient file sharing between your host system and the Windows VMs it is connected to, you can
prepare a Samba server that your VMs can access.
Prerequisites
The samba packages are installed on your host. If they are not:
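# yum install samba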
The host is visible and reachable over a network for the VM. This is generally the case if the VM
is connected using the NAT and bridge type of virtual networks. However, for the macvtap
connection, you must first set up the macvlan feature on the host. To do so:
1. Create a network device file, for example called vm-macvlan.netdev in the host’s
/etc/systemd/network/ directory.
# vim /etc/systemd/network/vm-macvlan.netdev
2. Edit the network device file to have the following content. You can replace vm-macvlan
with the name you chose for your network device.
[NetDev]
Name=vm-macvlan
Kind=macvlan
[MACVLAN]
Mode=bridge
3. Create a network configuration file for your macvlan network device, for example vm-
macvlan.network.
# vim /etc/systemd/network/vm-macvlan.network
4. Edit the network configuration file to have the following content. You can replace vm-
macvlan with the name you chose for your network device.
[Match]
Name=_vm-macvlan_
[Network]
IPForward=yes
Address=192.168.250.33/24
Gateway=192.168.250.1
DNS=192.168.250.1
5. Create a network configuration file for your physical network interface. For example, if your
interface is enp4s0:
# vim /etc/systemd/network/enp4s0.network
If you are unsure what interface to use, you can use the ifconfig command on your host to
obtain the list of active network interfaces.
6. Edit the physical network configuration file to make the physical network a part of the
macvlan interface, in this case vm-macvlan:
[Match]
Name=enp4s0
[Network]
MACVLAN=vm-macvlan
Procedure
1. On the host, create a Samba share and make it accessible for external systems.
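For example, add a share definition similar to the following sketch to the /etc/samba/smb.conf file on the host. The share name, path, and allowed subnet are examples:

[VM-share]
comment = VM-share
path = /samba/VM-share
guest ok = yes
read only = no
hosts allow = 192.168.122.0/255.255.255.0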
Note that the hosts allow line restricts the accessibility of the share only to hosts on
the VM network. If you want the share to be accessible by anyone, remove the line.
# mkdir -p /samba/VM-share
f. Allow the VM-share directory to be accessible and modifiable for the VMs.
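For example:

# chmod -R 0755 /samba/VM-share/
# chown -R nobody:nobody /samba/VM-share/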
2. On the Windows guest operating system, attach the Samba share as a network location.
c. In the Add Network Location wizard that opens, select "Choose a custom network location"
and click Next.
d. In the "Internet or network address" field, type host-IP/VM-share, where host-IP is the IP
address of the host. Usually, the host IP is the default gateway of the VM. Afterwards, click
Next.
e. When the wizard asks if you want to rename the shared directory, keep the default name.
This ensures the consistency of file sharing configuration across the VM and the guest. Click
Next.
f. If accessing the network location was successful, you can now click Finish and open the
shared directory.
Additional resources
CHAPTER 15. SECURING VIRTUAL MACHINES
This document outlines the mechanics of securing VMs on a RHEL 8 host and provides a list of methods
to increase the security of your VMs.
Because the hypervisor uses the host kernel to manage VMs, services running on the VM’s operating
system are frequently used for injecting malicious code into the host system. However, you can protect
your system against such security threats by using a number of security features on your host and your
guest systems.
These features, such as SELinux or QEMU sandboxing, provide various measures that make it more
difficult for malicious code to attack the hypervisor and transfer between your host and your VMs.
Many of the features that RHEL 8 provides for VM security are always active and do not have to be
enabled or configured. For details, see Section 15.5, “Automatic features for virtual machine security”.
In addition, you can adhere to a variety of best practices to minimize the vulnerability of your VMs and
your hypervisor. For more information, see Section 15.2, “Best practices for securing virtual machines”.
Secure the virtual machine as if it was a physical machine. The specific methods available to
enhance security depend on the guest OS.
If your VM is running RHEL 8, see Securing Red Hat Enterprise Linux 8 for detailed instructions
on improving the security of your guest system.
When managing VMs remotely, use cryptographic utilities such as SSH and network protocols
such as SSL for connecting to the VMs.
Ensure SELinux is in Enforcing mode on your host:
# getenforce
Enforcing
If SELinux is disabled or in Permissive mode, see the Using SELinux document for instructions
on activating Enforcing mode.
NOTE
SELinux Enforcing mode also enables the sVirt RHEL 8 feature. This is a set of
specialized SELinux booleans for virtualization, which can be manually adjusted
for fine-grained VM security management.
SecureBoot can only be applied when installing a Linux VM that uses OVMF firmware. For
instructions, see Section 15.3, “Creating a SecureBoot virtual machine”.
Additional resources
For detailed information on modifying your virtualization booleans, see Section 15.6,
“Virtualization booleans”.
Prerequisites
An operating system (OS) installation source is available locally or on a network. This can be one
of the following formats:
WARNING
Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
1. Use the virt-install command to create a VM as detailed in Section 2.2.1, “Creating virtual
machines using the command-line interface”. For the --boot option, use the
uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd value. This uses the
OVMF_VARS.secboot.fd and OVMF_CODE.secboot.fd files as templates for the VM’s non-
volatile RAM (NVRAM) settings, which enables the SecureBoot feature.
For example:
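The VM name, sizes, and installation source path below are examples:

# virt-install --name rhel8sb --memory 4096 --vcpus 4 --os-variant rhel8.0 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk boot_order=2,size=10 --disk boot_order=1,device=cdrom,bus=scsi,path=/images/RHEL-8.0-installation.iso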
Verification
1. After the guest OS is installed, access the VM’s command line by opening the terminal in the
graphical guest console or connecting to the guest OS using SSH.
2. To confirm that SecureBoot has been enabled on the VM, use the mokutil --sb-state command:
# mokutil --sb-state
SecureBoot enabled
Additional resources
Installing RHEL 8
Procedure
1. Optional: Ensure your system’s polkit control policies related to libvirt are set up according to
your preferences.
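i. Create a new .rules file in the /etc/polkit-1/rules.d/ directory. The file name below is an
example:
# touch /etc/polkit-1/rules.d/100-virt.rules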
ii. Add your custom policies to this file, and save it.
For further information and examples of libvirt control policies, see the libvirt upstream
documentation.
Verification
As a user whose VM actions you intended to limit, perform one of the restricted actions.
For example, if unprivileged users are restricted from viewing VMs created in the system
session:
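$ virsh -c qemu:///system list --all
Id   Name   State
------------------------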
If this command does not list any VMs even though one or more VMs exist on your system,
polkit successfully restricts the action for unprivileged users.
Troubleshooting
Currently, configuring libvirt to use polkit makes it impossible to connect to VMs using the
RHEL 8 web console, due to an incompatibility with the libvirt-dbus service.
If you require fine-grained access control of VMs in the web console, Red Hat recommends
creating a custom D-Bus policy. For instructions, see How to configure fine-grained control of
Virtual Machines in Cockpit in the Red Hat Knowledgebase.
NOTE
To list all virtualization-related booleans and their statuses, use the getsebool -a | grep virt command:
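$ getsebool -a | grep virt
[...]
virt_sandbox_use_netlink --> off
virt_sandbox_use_sys_admin --> off
virt_transition_userdomain --> off
virt_use_comm --> off
virt_use_execmem --> off
virt_use_fusefs --> off
[...]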
To enable a specific boolean, use the setsebool -P boolean_name on command as root. To disable a
boolean, use setsebool -P boolean_name off.
The following table lists virtualization-related booleans available in RHEL 8 and what they do when
enabled:
IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing
a VM’s state and memory contents. As a result, even if the host is compromised, it cannot be used as a
vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent
untrusted hosts from obtaining sensitive information from the VM.
The following procedure describes how to convert an existing VM on an IBM Z host into a secured VM.
Prerequisites
The Secure Execution feature is enabled for your system. To verify, use:
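# grep facilities /proc/cpuinfo | grep 158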
If this command displays any output, your CPU is compatible with Secure Execution.
The kernel of the host supports Secure Execution. To confirm, use:
# ls /sys/firmware | grep uv
If the command generates any output, your kernel supports Secure Execution.
The host CPU model contains the unpack facility. To confirm, use:
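# virsh domcapabilities | grep unpack
<feature policy='require' name='unpack'/>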
If the command generates the above output, your CPU host model is compatible with Secure
Execution.
The CPU mode of the VM is set to host-model. To confirm this, use the following and replace
vm-name with the name of your VM.
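# virsh dumpxml vm-name | grep "<cpu mode='host-model'/>"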
If the command generates any output, the VM’s CPU mode is set correctly.
You have obtained and verified the IBM Z host key document. For instructions to do so, see
Verifying the host key document in IBM documentation.
Procedure
1. Add the prot_virt=1 kernel parameter to the boot configuration of the host.
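For example, by using the grubby utility:
# grubby --update-kernel=ALL --args="prot_virt=1"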
2. Create a parameter file for the VM you want to secure. For example:
# touch ~/secure-parameters
3. In the /boot/loader/entries directory of the host, identify the boot loader entry with the latest
version:
# ls /boot/loader/entries -l
[...]
-rw-r--r--. 1 root root 281 Oct 9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf
4. Note the kernel options of the boot loader entry:
# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf | grep options
options root=/dev/mapper/rhel-root crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap
5. Add the content of the options line and swiotlb=262144 to the created parameters file.
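For example:
# echo "root=/dev/mapper/rhel-root crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144" > ~/secure-parameters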
6. Create the secure image using the genprotimg utility. The secure image contains the kernel
parameters, initial RAM disk, and boot image. For example, where host-key-document.crt is a
placeholder for your verified host key document:
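# genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k host-key-document.crt -o /boot/secure-image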
7. In the guest operating system of the VM, update the VM’s boot menu to boot from the secure
image. In addition, remove the lines starting with initrd and options, as they are not needed.
For example, in a RHEL 8.3 VM, the boot menu can be edited in the /boot/loader/entries/
directory:
# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf
title Red Hat Enterprise Linux 8.3
version 4.18.0-240.el8.s390x
linux /boot/secure-image
[...]
8. Enable virtio devices to use shared buffers. To do so, use virsh edit to modify the XML
configuration of the VM, and add iommu='on' to the <driver> line of all devices that have one.
For example:
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<driver name='vhost' iommu='on'/>
</interface>
If a device configuration does not contain any <driver> line, add <driver iommu='on'/> instead.
9. Disable memory ballooning on the VM, as the feature is not compatible with Secure Execution.
To do so, add the following line to the VM’s XML configuration.
<memballoon model='none'/>
10. In the guest operating system, apply the updated boot menu:
# zipl -V
11. Securely remove the original unprotected files:
# shred /boot/vmlinuz-4.18.0-240.el8.s390x
# shred /boot/initramfs-4.18.0-240.el8.s390x.img
# shred secure-parameters
The original boot image, the initial RAM image, and the kernel parameter file are unprotected,
and if they are not removed, VMs with Secure Execution enabled can still be vulnerable to
hacking attempts or sensitive data mining.
Verification
On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured
VM. The configuration must include the <driver iommu='on'/> and <memballoon
model='none'/> elements.
Additional resources
For additional instructions on modifying the boot configuration of the host, see Configuring
kernel command-line parameters.
Prerequisites
The cryptographic coprocessor is compatible with device assignment. To confirm this, ensure
that the type of your coprocessor is listed as CEX4 or later.
# lszcrypt -V
The vfio_ap kernel module is loaded. If it is not, load it manually:
# modprobe vfio_ap
Procedure
1. On the host, reassign your crypto device to the vfio-ap drivers. The following example assigns
two crypto devices with bitmask IDs (0x05, 0x0004) and (0x05, 0x00ab) to vfio-ap.
For information on identifying the bitmask ID values, see Preparing pass-through devices for
cryptographic adapter resources in the KVM Virtual Server Management document from IBM.
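# echo -0x05 > /sys/bus/ap/apmask
# echo -0x0004, -0x00ab > /sys/bus/ap/aqmask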
2. Verify that the crypto devices have been reassigned:
# lszcrypt -V
If the DRIVER values of the domain queues changed to vfio_ap, the reassignment succeeded.
3. Generate a UUID for a new mediated device:
# uuidgen
669d9b23-fe1b-4ecb-be08-a2fabca99b71
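A sketch of the intermediate steps, using the sysfs interface of the vfio_ap driver (the paths assume the UUID generated above):
4. Create the vfio_ap mediated device:
# echo 669d9b23-fe1b-4ecb-be08-a2fabca99b71 > /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/create
5. Assign the adapter and domains of the crypto devices to the mediated device:
# echo 0x05 > /sys/devices/vfio_ap/matrix/devices/669d9b23-fe1b-4ecb-be08-a2fabca99b71/assign_adapter
# echo 0x0004 > /sys/devices/vfio_ap/matrix/devices/669d9b23-fe1b-4ecb-be08-a2fabca99b71/assign_domain
# echo 0x00ab > /sys/devices/vfio_ap/matrix/devices/669d9b23-fe1b-4ecb-be08-a2fabca99b71/assign_domain
6. Verify that the queues were assigned to the mediated device: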
# cat /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-
passthrough/devices/669d9b23-fe1b-4ecb-be08-a2fabca99b71/matrix
05.0004
05.00ab
If the output contains the numerical values of queues that you have previously assigned to vfio-
ap, the process was successful.
7. Use the virsh edit command to open the XML configuration of the VM where you want to use
the crypto devices.
8. Add the following lines to the <devices> section in the XML configuration, and save it.
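A plausible set of lines, using the standard libvirt vfio-ap hostdev syntax and the UUID generated above:
<hostdev mode='subsystem' type='mdev' model='vfio-ap'>
<source>
<address uuid='669d9b23-fe1b-4ecb-be08-a2fabca99b71'/>
</source>
</hostdev>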
Verification
Start the VM and, after the guest operating system (OS) boots, ensure that it detects the assigned crypto devices.
# lszcrypt -V
The output of this command in the guest OS will be identical to that on a host logical partition
with the same cryptographic coprocessor devices available.
Prerequisites
Make sure you have installed the latest WHQL certified VirtIO drivers.
Procedure
1. Enable TPM 2.0 by adding the following parameters to the <devices> section in the VM’s XML
configuration.
<devices>
[...]
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
[...]
</devices>
2. Install Windows in UEFI mode. For more information on how to do so, see Creating a
SecureBoot virtual machine.
3. Install the VirtIO drivers on the Windows VM. For more information on how to do so, see
Installing virtio drivers on a Windows guest .
4. In UEFI, enable Secure Boot. For more information on how to do so, see Secure Boot.
Verification
Ensure that the Device Security page on your Windows machine displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for standard hardware security.
Prerequisites
Ensure that standard hardware security is enabled. For more information, see Enabling standard
hardware security on Windows virtual machines.
Ensure that KVM nesting is enabled. For more information, see Creating nested virtual
machines.
# -cpu Skylake-Client-v3,hv_stimer,hv_synic,hv_relaxed,hv_reenlightenment,hv_spinlocks=0xfff,hv_vpindex,hv_vapic,hv_time,hv_frequencies,hv_runtime,+kvm_pv_unhalt,+vmx
Procedure
1. In the Windows VM, open the Core isolation settings: Settings > Update & Security > Windows Security > Device Security > Core isolation details.
2. Enable Memory integrity, and restart the VM.
NOTE
For other methods of enabling HVCI, see the relevant Microsoft documentation.
Verification
Ensure that the Device Security page on your Windows VM displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for enhanced hardware security.
CHAPTER 16. OPTIMIZING VIRTUAL MACHINE PERFORMANCE
The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include:
Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler.
VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel.
Disk and network I/O settings of the host might have a significant performance impact on the VM.
Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware.
To reduce this impact, you can adjust a number of settings. For example:
The tuned service can automatically optimize the resource distribution and performance of your VMs.
Block I/O tuning can improve the performance of the VM's block devices, such as disks.
IMPORTANT
Tuning VM performance can have adverse effects on other virtualization functions. For
example, it can make migrating the modified VM more difficult.
For RHEL 8 virtual machines, use the virtual-guest profile. It is based on the generally
applicable throughput-performance profile, but also decreases the swappiness of virtual
memory.
For RHEL 8 virtualization hosts, use the virtual-host profile. This enables more aggressive
writeback of dirty memory pages, which benefits the host performance.
Prerequisites
The tuned service is installed and enabled.
Procedure
To enable a specific tuned profile:
# tuned-adm list
Available profiles:
- balanced - General non-specialized tuned profile
- desktop - Optimize for the desktop use-case
[...]
- virtual-guest - Optimize for running inside a virtual guest
- virtual-host - Optimize for running KVM guests
Current active profile: balanced
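For example, to activate the virtual-host profile on a virtualization host and verify the result, using the standard tuned-adm syntax:
# tuned-adm profile virtual-host
# tuned-adm active
Current active profile: virtual-host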
Additional resources
For more information about tuned and tuned profiles, see Monitoring and managing system
status and performance.
To perform these actions, you can use the web console or the command-line interface.
16.3.1. Adding and removing virtual machine memory using the web console
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you
can use the web console to adjust the amount of memory allocated to the VM.
Prerequisites
The guest OS is running the memory balloon drivers. To verify this is the case:
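For example, one way to check is the following, where testguest is an illustrative VM name:
# virsh dumpxml testguest | grep memballoon
<memballoon model='virtio'>
</memballoon>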
If this command displays any output and the model is not set to none, the memballoon
device is present.
In Windows guests, the drivers are installed as a part of the virtio-win driver package.
For instructions, see Section 17.2.1, “Installing KVM paravirtualized drivers for Windows
virtual machines”.
In Linux guests, the drivers are generally included by default and activate when the
memballoon device is present.
Procedure
1. Optional: Obtain the information about the maximum memory and currently used memory for a
VM. This will serve as a baseline for your changes, and also for verification.
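For example, using virsh dominfo, where testguest is an illustrative VM name:
# virsh dominfo testguest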
2. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Maximum allocation - Sets the maximum amount of host memory that the VM can use for
its processes. You can specify the maximum memory when creating the VM or increase it
later. You can specify memory as multiples of MiB or GiB.
Adjusting maximum memory allocation is only possible on a shut-off VM.
Current allocation - Sets the actual amount of memory allocated to the VM. This value can
be less than the Maximum allocation but cannot exceed it. You can adjust the value to
regulate the memory available to the VM for its processes. You can specify memory as
multiples of MiB or GiB.
If you do not specify this value, the default allocation is the Maximum allocation value.
5. Click Save.
The memory allocation of the VM is adjusted.
Additional resources
For instructions for adjusting VM memory setting using the command-line interface, see
Section 16.3.2, “Adding and removing virtual machine memory using the command-line
interface”.
To optimize how the VM uses the allocated memory, you can modify your vCPU setting. For
more information, see Section 16.5, “Optimizing virtual machine CPU performance” .
16.3.2. Adding and removing virtual machine memory using the command-line
interface
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you
can use the CLI to adjust the amount of memory allocated to the VM.
Prerequisites
The guest OS is running the memory balloon drivers. To verify this is the case:
If this command displays any output and the model is not set to none, the memballoon
device is present.
In Windows guests, the drivers are installed as a part of the virtio-win driver package.
For instructions, see Section 17.2.1, “Installing KVM paravirtualized drivers for Windows
virtual machines”.
In Linux guests, the drivers are generally included by default and activate when the
memballoon device is present.
Procedure
1. Optional: Obtain the information about the maximum memory and currently used memory for a
VM. This will serve as a baseline for your changes, and also for verification.
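For example, assuming a VM named testguest:
# virsh dominfo testguest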
2. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance
potential of the VM, and reducing the value lowers the performance footprint the VM has on
your host. Note that this change can only be performed on a shut-off VM, so adjusting a running
VM requires a reboot to take effect.
For example, to change the maximum memory that the testguest VM can use to 4096 MiB:
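One way to do this is the virsh setmaxmem command; the scaled 4096M notation assumes a libvirt version that accepts size suffixes:
# virsh setmaxmem testguest 4096M --config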
1. Optional: You can also adjust the memory currently used by the VM, up to the maximum
allocation. This regulates the memory load that the VM has on the host until the next reboot,
without changing the maximum VM allocation.
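For example, to set the memory currently used by testguest to 2048 MiB, the standard virsh setmem command can be used:
# virsh setmem testguest 2048M --current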
Verification
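1. Confirm that the memory configuration of the VM has been updated, for example:
# virsh dominfo testguest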
2. Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics
of the VM to evaluate how effectively it regulates its memory use.
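The statistics below could be obtained, for example, with the standard virsh domstats command:
# virsh domstats --balloon testguest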
Domain: 'testguest'
balloon.current=365624
balloon.maximum=4194304
balloon.swap_in=0
balloon.swap_out=0
balloon.major_fault=306
balloon.minor_fault=156117
balloon.unused=3834448
balloon.available=4035008
balloon.usable=3746340
balloon.last-update=1587971682
balloon.disk_caches=75444
balloon.hugetlb_pgalloc=0
balloon.hugetlb_pgfail=0
balloon.rss=1005456
Additional resources
For instructions for adjusting VM memory setting using the web console, see Section 16.3.1,
“Adding and removing virtual machine memory using the web console”.
To optimize how the VM uses the allocated memory, you can modify your vCPU setting. For
more information, see Section 16.5, “Optimizing virtual machine CPU performance” .
Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it
with more host resources. Similarly, reducing a device’s weight makes it consume less host resources.
NOTE
Each device’s weight value must be within the 100 to 1000 range. Alternatively, the value
can be 0, which removes that device from per-device listings.
Procedure
To display and set a VM’s block I/O parameters:
<domain>
[...]
<blkiotune>
<weight>800</weight>
<device>
<path>/dev/sda</path>
<weight>1000</weight>
</device>
<device>
<path>/dev/sdb</path>
<weight>500</weight>
</device>
</blkiotune>
[...]
</domain>
For example, the following changes the weight of the /dev/sda device in the liftrul VM to 500.
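A plausible command for this, using the standard virsh blkiotune --device-weights syntax:
# virsh blkiotune liftrul --device-weights /dev/sda,500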
To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to
VMs to the host machine.
Procedure
1. Use the virsh domblklist command to list the names of all the disk devices on a specified VM.
2. Find the host block device where the virtual disk that you want to throttle is mounted.
For example, if you want to throttle the sdb virtual disk from the previous step, the following
output shows that the disk is mounted on the /dev/nvme0n1p3 partition.
$ lsblk
3. Set I/O limits for the block device using the virsh blkiotune command.
The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O
operations per second and to 50 MB per second read and write throughput.
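For example, the throttling described above could be set with a command along these lines, where 50 MB is expressed as 52428800 bytes and the per-device flags are standard virsh blkiotune options:
# virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-read-bytes-sec /dev/nvme0n1p3,52428800 --device-write-bytes-sec /dev/nvme0n1p3,52428800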
Additional information
Disk I/O throttling can be useful in various situations, for example when VMs belonging to
different customers are running on the same host, or when quality of service guarantees are
given for different VMs. Disk I/O throttling can also be used to simulate slower disks.
I/O throttling can be applied independently to each block device attached to a VM and
supports limits on throughput and I/O operations.
Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in
VMs. For more information on unsupported features when using RHEL 8 as a VM host, see
Section 20.3, “Unsupported features in RHEL 8 virtualization” .
Procedure
To enable multi-queue virtio-scsi support for a specific VM, add the following to the VM’s XML
configuration, where N is the total number of vCPU queues:
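A configuration sketch, using the standard libvirt virtio-scsi controller syntax:
<controller type='scsi' model='virtio-scsi'>
<driver queues='N' />
</controller>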
1. Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web
console.
2. Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the
testguest1 VM to use the CPU model of the host:
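A likely command for this, assuming the standard virt-xml syntax:
# virt-xml testguest1 --edit --cpu host-model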
4. If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA
for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory
processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a
more streamlined access to the system memory allocated to the VM, which can improve the
vCPU processing effectiveness.
For details, see Configuring NUMA in a virtual machine and Sample vCPU performance tuning
scenario.
16.5.1. Adding and removing virtual CPUs using the command-line interface
To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual
CPUs (vCPUs) assigned to the VM.
When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging.
However, note that vCPU hot unplug is not supported in RHEL 8, and Red Hat highly discourages its use.
Prerequisites
Optional: View the current state of the vCPUs in the targeted VM. For example, to display the
number of vCPUs on the testguest VM:
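For example, a virsh vcpucount output consistent with the description below could look as follows:
# virsh vcpucount testguest
maximum      config         4
maximum      live           2
current      config         2
current      live           1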
This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot
plugged to it to increase the VM’s performance. However, after reboot, the number of vCPUs
testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs.
Procedure
1. Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the
VM’s next boot.
For example, to increase the maximum vCPU count for the testguest VM to 8:
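A likely command for this, using the standard virsh setvcpus syntax:
# virsh setvcpus testguest 8 --maximum --config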
Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor,
and other factors.
2. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the
previous step. For example:
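For example, to attach 2 more vCPUs to the running testguest VM:
# virsh setvcpus testguest 4 --live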
This increases the VM’s performance and host load footprint of testguest until the VM’s
next boot.
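Alternatively, to permanently decrease the number of vCPUs attached to testguest to 1:
# virsh setvcpus testguest 1 --config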
This decreases the VM’s performance and host load footprint of testguest after the VM’s
next boot. However, if needed, additional vCPUs can be hot plugged to the VM to
temporarily increase its performance.
Verification
Confirm that the current state of vCPU for the VM reflects your changes.
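For example:
# virsh vcpucount testguest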
Additional resources
For information on adding and removing vCPUs using the web console, see Section 16.5.2,
“Managing virtual CPUs using the web console”.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
vCPU Maximum - The maximum number of virtual CPUs that can be configured for the
VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the
VM.
Cores per socket - The number of cores for each socket to expose to the VM.
Threads per core - The number of threads for each core to expose to the VM.
Note that the Sockets, Cores per socket, and Threads per core options adjust the CPU
topology of the VM. This may be beneficial for vCPU performance and may impact the
functionality of certain software in the guest OS. If a different setting is not required by your
deployment, keep the default values.
2. Click Apply.
The virtual CPUs for the VM are configured.
NOTE
Changes to virtual CPU settings only take effect after the VM is restarted.
Additional resources
For information on managing your vCPUs using the command-line interface, see Section 16.5.1,
“Adding and removing virtual CPUs using the command-line interface”.
Prerequisites
The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh
nodeinfo command and see the NUMA cell(s) line:
# virsh nodeinfo
CPU model: x86_64
CPU(s): 48
CPU frequency: 1200 MHz
CPU socket(s): 1
Core(s) per socket: 12
Thread(s) per core: 2
NUMA cell(s): 2
Memory size: 67012964 KiB
Procedure
For ease of use, you can set up a VM’s NUMA configuration using automated utilities and services.
However, manual NUMA setup is more likely to yield a significant performance improvement.
Automatic methods
Set the VM’s NUMA policy to Preferred. For example, to do so for the testguest5 VM:
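The following virt-xml commands are a plausible way to do this; the placement and mode sub-options are assumptions based on standard virt-xml syntax:
# virt-xml testguest5 --edit --vcpus placement=auto
# virt-xml testguest5 --edit --numatune mode=preferred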
Use the numad command to automatically align the VM CPU with memory resources.
# numad
Manual methods
1. Pin specific vCPU threads to a specific host CPU or range of CPUs. This is also possible on non-
NUMA hosts and VMs, and is recommended as a safe method of vCPU performance
improvement.
For example, the following commands pin vCPU threads 0 to 5 of the testguest6 VM to host
CPUs 1, 3, 5, 7, 9, and 11, respectively:
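For example, using the standard virsh vcpupin syntax:
# virsh vcpupin testguest6 0 1
# virsh vcpupin testguest6 1 3
# virsh vcpupin testguest6 2 5
# virsh vcpupin testguest6 3 7
# virsh vcpupin testguest6 4 9
# virsh vcpupin testguest6 5 11
Afterwards, running virsh vcpupin testguest6 without further arguments displays a listing similar to the following: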
0 1
1 3
2 5
3 7
4 9
5 11
2. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified
VM to a specific host CPU or range of CPUs. For example, the following commands pin the
QEMU process thread of testguest6 to CPUs 13 and 15, and verify this was successful:
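A plausible pair of commands for this, using the standard virsh emulatorpin syntax:
# virsh emulatorpin testguest6 13,15
# virsh emulatorpin testguest6
emulator: CPU Affinity
----------------------------------
       *: 13,15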
3. Finally, you can also specify which host NUMA nodes will be assigned specifically to a certain
VM. This can improve the host memory usage by the VM’s vCPU. For example, the following
commands set testguest6 to use host NUMA nodes 3 to 5, and verify this was successful:
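For example, using the standard virsh numatune syntax:
# virsh numatune testguest6 --nodeset 3-5
# virsh numatune testguest6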
Additional resources
Note that for best performance results, it is recommended to use all of the manual tuning
methods listed above. For an example of such a configuration, see Section 16.5.4, “Sample
vCPU performance tuning scenario”.
To see the current NUMA configuration of your system, you can use the numastat utility. For
details on using numastat, see Section 16.7, “Virtual machine performance monitoring tools” .
NUMA tuning is currently not possible to perform on IBM Z hosts. For further information, see
Section 4.2, “How virtualization on IBM Z differs from AMD64 and Intel 64” .
Starting scenario
2 NUMA nodes
The output of virsh nodeinfo of such a machine would look similar to:
# virsh nodeinfo
You intend to modify an existing VM to have 8 vCPUs, which means that it will not fit in a single
NUMA node.
Therefore, you should distribute 4 vCPUs on each NUMA node and make the vCPU topology
resemble the host topology as closely as possible. This means that vCPUs that run as sibling
threads of a given physical CPU should be pinned to host threads on the same core. For details,
see the Solution below:
Solution
1. Obtain the information about the topology of the host:
# virsh capabilities
The output should include a section that looks similar to the following:
<topology>
<cells num="2">
<cell id="0">
<memory unit="KiB">15624346</memory>
<pages unit="KiB" size="4">3906086</pages>
<pages unit="KiB" size="2048">0</pages>
<pages unit="KiB" size="1048576">0</pages>
<distances>
<sibling id="0" value="10" />
<sibling id="1" value="21" />
</distances>
<cpus num="6">
<cpu id="0" socket_id="0" core_id="0" siblings="0,3" />
<cpu id="1" socket_id="0" core_id="1" siblings="1,4" />
<cpu id="2" socket_id="0" core_id="2" siblings="2,5" />
<cpu id="3" socket_id="0" core_id="0" siblings="0,3" />
<cpu id="4" socket_id="0" core_id="1" siblings="1,4" />
<cpu id="5" socket_id="0" core_id="2" siblings="2,5" />
</cpus>
</cell>
<cell id="1">
<memory unit="KiB">15624346</memory>
<pages unit="KiB" size="4">3906086</pages>
<pages unit="KiB" size="2048">0</pages>
<pages unit="KiB" size="1048576">0</pages>
<distances>
<sibling id="0" value="21" />
<sibling id="1" value="10" />
</distances>
<cpus num="6">
<cpu id="6" socket_id="1" core_id="3" siblings="6,9" />
2. Optional: Test the performance of the VM using the applicable tools and utilities.
3. Set up and mount 1 GiB huge pages on the host:
a. Add the following line to the kernel command line of the host:
default_hugepagesz=1G hugepagesz=1G
b. Create a systemd unit file on the host, for example /usr/lib/systemd/system/hugetlb-gigantic-pages.service, with the following content:
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/systemd/hugetlb-reserve-pages.sh
[Install]
WantedBy=sysinit.target
c. Create the /etc/systemd/hugetlb-reserve-pages.sh script on the host with the following content:
#!/bin/sh
nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
echo "ERROR: $nodes_path does not exist"
exit 1
fi
reserve_pages()
{
echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}
reserve_pages 4 node1
reserve_pages 4 node2
This reserves four 1GiB huge pages from node1 and four 1GiB huge pages from node2.
d. Make the script executable:
# chmod +x /etc/systemd/hugetlb-reserve-pages.sh
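e. Enable the reservation to run on boot, assuming the unit file name suggested in step b:
# systemctl enable hugetlb-gigantic-pages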
4. Use the virsh edit command to edit the XML configuration of the VM you wish to optimize, in
this example super-VM:
a. Set the VM to use 8 static vCPUs. Use the <vcpu/> element to do this.
b. Pin each of the vCPU threads to the corresponding host CPU threads that it mirrors in the
topology. To do so, use the <vcpupin/> elements in the <cputune> section.
Note that, as shown by the virsh capabilities utility above, host CPU threads are not
ordered sequentially in their respective cores. In addition, the vCPU threads should be
pinned to the highest available set of host cores on the same NUMA node. For a table
illustration, see the Additional Resources section below.
The XML configuration for steps a. and b. can look similar to:
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='4'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='7'/>
<vcpupin vcpu='5' cpuset='10'/>
<vcpupin vcpu='6' cpuset='8'/>
<vcpupin vcpu='7' cpuset='11'/>
<emulatorpin cpuset='6,9'/>
</cputune>
c. Set the VM to use 1 GiB huge pages:
<memoryBacking>
<hugepages>
<page size='1' unit='GiB'/>
</hugepages>
</memoryBacking>
d. Configure the VM’s NUMA nodes to use memory from the corresponding NUMA nodes on
the host. To do so, use the <memnode/> elements in the <numatune/> section:
<numatune>
<memory mode="preferred" nodeset="1"/>
<memnode cellid="0" mode="strict" nodeset="0"/>
<memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>
e. Ensure the CPU mode is set to host-passthrough, and that the CPU uses cache in
passthrough mode:
<cpu mode="host-passthrough">
<topology sockets="2" cores="2" threads="2"/>
<cache mode="passthrough"/>
Verification
1. Confirm that the resulting XML configuration of the VM includes a section similar to the
following:
[...]
<memoryBacking>
<hugepages>
<page size='1' unit='GiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='4'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='7'/>
<vcpupin vcpu='5' cpuset='10'/>
<vcpupin vcpu='6' cpuset='8'/>
<vcpupin vcpu='7' cpuset='11'/>
<emulatorpin cpuset='6,9'/>
</cputune>
<numatune>
<memory mode="preferred" nodeset="1"/>
<memnode cellid="0" mode="strict" nodeset="0"/>
<memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>
<cpu mode="host-passthrough">
<topology sockets="2" cores="2" threads="2"/>
<cache mode="passthrough"/>
<numa>
<cell id="0" cpus="0-3" memory="2" unit="GiB">
<distances>
<sibling id="0" value="10"/>
<sibling id="1" value="21"/>
</distances>
</cell>
<cell id="1" cpus="4-7" memory="2" unit="GiB">
<distances>
<sibling id="0" value="21"/>
<sibling id="1" value="10"/>
</distances>
</cell>
</numa>
</cpu>
</domain>
2. Optional: Test the performance of the VM using the applicable tools and utilities to evaluate
the impact of the VM’s optimization.
Additional resources
The following tables illustrate the connections between the vCPUs and the host CPUs they
should be pinned to:
Host topology:
CPU threads       0  3 | 1  4 | 2  5 | 6  9 | 7 10 | 8 11
Cores               0  |   1  |   2  |   3  |   4  |   5
Sockets                   0          |          1
NUMA nodes                0          |          1

VM topology:
vCPU threads      0  1 | 2  3 | 4  5 | 6  7
Cores               0  |   1  |   2  |   3
Sockets               0       |      1
NUMA nodes            0       |      1

Combined host and VM topology:
vCPU threads           | 0  1 | 2  3 |      | 4  5 | 6  7
Host CPU threads  0  3 | 1  4 | 2  5 | 6  9 | 7 10 | 8 11
Cores               0  |   1  |   2  |   3  |   4  |   5
Sockets                   0          |          1
NUMA nodes                0          |          1
In this scenario, there are 2 NUMA nodes and 8 vCPUs. Therefore, 4 vCPU threads should be
pinned to each node.
In addition, Red Hat recommends leaving at least a single CPU thread available on each node
for host system operations.
Because in this example, each NUMA node houses 3 cores, each with 2 host CPU threads, the
set for node 0 translates as follows:
Depending on your requirements, you can either deactivate KSM for a single session or persistently.
Procedure
To deactivate KSM for a single session, use the systemctl utility to stop ksm and ksmtuned
services.
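For example, ksm and ksmtuned being the standard service names on RHEL 8 hosts:
# systemctl stop ksm
# systemctl stop ksmtuned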
To deactivate KSM persistently, use the systemctl utility to disable ksm and ksmtuned
services.
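For example:
# systemctl disable ksm
# systemctl disable ksmtuned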
NOTE
Memory pages shared between VMs before deactivating KSM will remain shared. To stop
sharing, delete all the PageKSM pages in the system using the following command:
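The kernel's standard KSM sysfs interface provides this:
# echo 2 > /sys/kernel/mm/ksm/run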
After anonymous pages replace the KSM pages, the khugepaged kernel service will
rebuild transparent hugepages on the VM’s physical memory.
Procedure
Use any of the following methods and observe if it has a beneficial effect on your VM network
performance:
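Enable the vhost_net kernel module. First check whether it is already loaded, for example:
# lsmod | grep vhost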
If the output of this command is blank, enable the vhost_net kernel module:
# modprobe vhost_net
Set up multi-queue virtio-net. To do so, modify the VM's XML configuration as follows, where N is the number of queues:
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<driver name='vhost' queues='N'/>
</interface>
SR-IOV
If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more
information, see Section 10.9, “Managing SR-IOV devices” .
Additional resources
For additional information on virtual network connection types and tips for usage, see
Section 13.1, “Understanding virtual networking” .
On your RHEL 8 host, as root, use the top utility or the system monitor application, and look for
qemu and virt in the output. This shows how much host system resources your VMs are
consuming.
If the monitoring tool displays that any of the qemu or virt processes consume a large
portion of the host CPU or memory capacity, use the perf utility to investigate. For details,
see below.
On the guest operating system, use performance utilities and applications available on the
system to evaluate which processes consume the most system resources.
perf kvm
You can use the perf utility to collect and analyze virtualization-specific statistics about the
performance of your RHEL 8 host. To do so:
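1. Ensure that the perf package is installed on the host, for example:
# yum install perf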
2. Use one of the perf kvm stat commands to display perf statistics for your virtualization host:
For real-time monitoring of your hypervisor, use the perf kvm stat live command.
To log the perf data of your hypervisor over a period of time, activate the logging using the
perf kvm stat record command. After the command is canceled or interrupted, the data is
saved in the perf.data.guest file, which can be analyzed using the perf kvm stat report
command.
3. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the
PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high
occurrence of this event suggests that the host CPUs are not handling the running vCPUs well.
In such a scenario, consider shutting down some of your active VMs, removing vCPUs from
these VMs, or tuning the performance of the vCPUs.
VM-EXIT          Samples  Samples%  Time%  Min Time  Max Time   Avg time
[...]
EXCEPTION_NMI         27     0.00%  0.00%    0.69us    1.34us     0.98us ( +-  3.50% )
EPT_MISCONFIG          5     0.00%  0.00%    5.15us   10.85us     7.88us ( +- 11.67% )
Other event types that can signal problems in the output of perf kvm stat include:
For more information on using perf to monitor virtualization performance, see the perf-kvm man page.
numastat
To see the current NUMA configuration of your system, you can use the numastat utility, which is
provided by installing the numactl package.
The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This
is not optimal for vCPU performance, and warrants adjusting:
# numastat -c qemu-kvm
In contrast, the following shows memory being provided to each VM by a single node, which is
significantly more efficient.
# numastat -c qemu-kvm
CHAPTER 17. INSTALLING AND MANAGING WINDOWS VIRTUAL MACHINES
For this purpose, the following sections provide information on installing and optimizing Windows VMs
on the host, as well as installing and configuring drivers in these VMs.
To create the VM and to install the Windows guest OS, use the virt-install command or the RHEL 8 web
console.
Prerequisites
A Windows OS installation source, available locally or on a network, which can be one of the following:
An ISO image of the Windows installation medium
A disk image of an existing Windows VM installation
Procedure
1. Create the VM. For instructions, see Section 2.2, “Creating virtual machines”.
If using the virt-install utility to create the VM, add the following options to the command:
The storage medium with the KVM virtio drivers. For example:
--disk path=/usr/share/virtio-win/virtio-win.iso,device=disk,bus=virtio
The Windows version you will install. For example, for Windows 10:
--os-variant win10
For a list of available Windows versions and the appropriate option, use the following
command:
# osinfo-query os
If using the web console to create the VM, specify your version of Windows in the
Operating System field of the Create New Virtual Machine window. After the VM is
created and the guest OS is installed, attach the storage medium with virtio drivers to the
VM using the Disks interface. For instructions, see Section 11.7.9, “Attaching existing disks
to virtual machines using the web console”.
2. Install the Windows OS in the VM. For details, see the relevant Microsoft installation documentation.
3. Configure KVM virtio drivers in the Windows guest OS. For details, see Section 17.2.1, “Installing
KVM paravirtualized drivers for Windows virtual machines”.
Additional resources
For information on further optimizing Windows VMs, see Section 17.2, “Optimizing Windows
virtual machines”.
Therefore, Red Hat recommends optimizing your Windows VMs by doing any combination of the
following:
Using paravirtualized drivers. For more information, see Section 17.2.1, “Installing KVM
paravirtualized drivers for Windows virtual machines”.
Enabling Hyper-V enlightenments. For more information, see Section 17.2.2, “Enabling Hyper-V
enlightenments”.
Configuring NetKVM driver parameters. For more information, see Section 17.2.3, “Configuring
NetKVM driver parameters”.
To do so:
1. Prepare the install media on the host machine. For more information, see Section 17.2.1.2,
“Preparing virtio driver installation media on a host machine”.
2. Attach the install media to an existing Windows VM, or attach it when creating a new Windows
VM.
3. Install the virtio drivers on the Windows guest OS. For more information, see Section 17.2.1.3,
“Installing virtio drivers on a Windows guest”.
Paravirtualized drivers enhance the performance of virtual machines (VMs) by decreasing I/O latency
and increasing throughput to almost bare-metal levels. Red Hat recommends that you use
paravirtualized drivers for VMs that run I/O-heavy tasks and applications.
virtio drivers are KVM’s paravirtualized device drivers, available for Windows VMs running on KVM hosts.
These drivers are provided by the virtio-win package, which includes drivers for:
Block (storage) devices
Network interface controllers
Video controllers
NOTE
For additional information about emulated, virtio, and assigned devices, refer to
Chapter 10, Managing virtual devices.
Using KVM virtio drivers, the following Microsoft Windows versions are expected to run similarly to
physical systems:
Windows Server versions: See Certified guest operating systems for Red Hat Enterprise Linux
with KVM in the Red Hat Knowledgebase.
To install KVM virtio drivers on a Windows virtual machine (VM), you must first prepare the installation
media for the virtio driver on the host machine. To do so, install the virtio-win package on the host
machine and use the .iso file it provides as storage for the VM.
Prerequisites
Procedure
b. Select the Product Variant relevant for your system architecture. For example, for Intel 64
and AMD64, select Red Hat Enterprise Linux for x86_64.
2. Install the virtio-win package from the download directory. For example:
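For example, assuming the package was downloaded to the Downloads directory; the exact file name varies by version:
# yum install ~/Downloads/virtio-win-1.9.9-3.el8.noarch.rpm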
If the installation succeeds, the virtio-win driver files are prepared in the /usr/share/virtio-win/
directory. These include ISO files and a drivers directory with the driver files in directories, one
for each architecture and supported Windows version.
# ls /usr/share/virtio-win/
drivers/ guest-agent/ virtio-win-1.9.9.iso virtio-win.iso
3. Attach the virtio-win.iso file to the Windows VM. To do so, do one of the following:
Additional resources
When virtio-win.iso is attached to the Windows VM, you can proceed to installing the virtio
driver on the Windows guest operating system. For instructions, see Section 17.2.1.3, “Installing
virtio drivers on a Windows guest”.
To install KVM virtio drivers on a Windows guest operating system (OS), you must add a storage device
that contains the drivers - either when creating the virtual machine (VM) or afterwards - and install the
drivers in the Windows guest OS.
Prerequisites
An installation medium with the KVM virtio drivers must be attached to the VM. For instructions
on preparing the medium, see Section 17.2.1.2, “Preparing virtio driver installation media on a
host machine”.
Procedure
4. Based on the architecture of the VM’s vCPU, run one of the installers on the medium.
5. In the Virtio-win-guest-tools setup wizard that opens, follow the displayed instructions until
you reach the Custom Setup step.
6. In the Custom Setup window, select the device drivers you want to install. The recommended
driver set is selected automatically, and the descriptions of the drivers are displayed on the right
of the list.
Verification
Additional resources
You can use the Microsoft Windows Installer (MSI) command-line interface (CLI) instead of the
graphical interface to install the drivers. For more information about MSI, see the Microsoft
documentation.
If you install the NetKVM driver, you may also need to configure the Windows guest’s networking
parameters. For instructions, see Section 17.2.3, “Configuring NetKVM driver parameters” .
The following sections provide information about the supported Hyper-V enlightenments and how to
enable them.
Hyper-V enlightenments provide better performance in a Windows virtual machine (VM) running in a
RHEL 8 host. For instructions on how to enable them, see the following.
Procedure
1. Use the virsh edit command to open the XML configuration of the VM. For example:
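For example, where the VM name windows-vm is illustrative:
# virsh edit windows-vm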
2. Add the following <hyperv> sub-section to the <features> section of the XML:
<features>
[...]
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on' />
<synic state='on'/>
<stimer state='on'/>
<frequencies state='on'/>
</hyperv>
[...]
</features>
3. Change the clock section of the XML configuration as follows:
<clock offset='localtime'>
<timer name='hypervclock' present='yes'/>
</clock>
Verification
Use the virsh dumpxml command to display the XML configuration of the running VM. If it
includes the following segments, the Hyper-V enlightenments are enabled on the VM.
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on' />
<synic state='on'/>
<stimer state='on'/>
<frequencies state='on'/>
</hyperv>
<clock offset='localtime'>
<timer name='hypervclock' present='yes'/>
</clock>
You can configure certain Hyper-V features to optimize Windows VMs. The following table provides
information about these configurable Hyper-V features and their values.
spinlocks
Used by Hyper-V to indicate to the virtual machine's operating system the number of times a spinlock acquisition should be attempted before indicating an excessive spin situation to Hyper-V.
time
MSR-based Hyper-V clock source (HV_X64_MSR_TIME_REF_COUNT, 0x40000020)
vendor_id
Id value - a string of up to 12 characters
IMPORTANT
Modifying the driver’s parameters causes Windows to reload that driver. This interrupts
existing network activity.
Prerequisites
Procedure
b. Under the list of network adapters, double-click Red Hat VirtIO Ethernet Adapter. The
Properties window for the device opens.
a. Click the parameter you want to modify. Options for that parameter are displayed.
NOTE
The following table provides information on the configurable NetKVM driver initial parameters.
Parameter        Description
Parameter Description
Valid values are: 16, 32, 64, 128, 256, 512, and 1024.
Valid values are: 16, 32, 64, 128, 256, 512, and 1024.
WARNING
Certain processes might not work as expected if you change their configuration.
Procedure
You can optimize your Windows VMs by performing any combination of the following:
Remove unused devices, such as USBs or CD-ROMs, and disable the ports.
Disable automatic Windows Update. For more information on how to do so, see Configure
Group Policy Settings for Automatic Updates or Configure Windows Update for Business.
Note that Windows Update is essential for installing the latest updates and hotfixes from Microsoft.
As such, Red Hat does not recommend disabling Windows Update.
Disable background services, such as SuperFetch and Windows Search. For more information
about stopping services, see Disabling system services or Stop-Service.
Review and disable unnecessary scheduled tasks, such as scheduled disk defragmentation. For
more information on how to do so, see Disable Scheduled Tasks.
Reduce periodic activity of server applications. You can do so by editing the respective timers.
For more information, see Multimedia Timers.
Disable the antivirus software. Note that disabling the antivirus might compromise the security
of the VM.
Prerequisites
The samba packages are installed on your host. If they are not:
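# yum install samba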
The host is visible and reachable over a network for the VM. This is generally the case if the VM
is connected using the NAT and bridge type of virtual networks. However, for the macvtap
connection, you must first set up the macvlan feature on the host. To do so:
1. Create a network device file, for example called vm-macvlan.netdev in the host’s
/etc/systemd/network/ directory.
# vim /etc/systemd/network/vm-macvlan.netdev
2. Edit the network device file to have the following content. You can replace vm-macvlan
with the name you chose for your network device.
[NetDev]
Name=vm-macvlan
Kind=macvlan
[MACVLAN]
Mode=bridge
3. Create a network configuration file for your macvlan network device, for example vm-
macvlan.network.
# vim /etc/systemd/network/vm-macvlan.network
4. Edit the network configuration file to have the following content. You can replace vm-
macvlan with the name you chose for your network device.
[Match]
Name=vm-macvlan
[Network]
IPForward=yes
Address=192.168.250.33/24
Gateway=192.168.250.1
DNS=192.168.250.1
5. Create a network configuration file for your physical network interface. For example, if your
interface is enp4s0:
# vim /etc/systemd/network/enp4s0.network
If you are unsure what interface to use, you can use the ifconfig command on your host to
obtain the list of active network interfaces.
6. Edit the physical network configuration file to make the physical network a part of the
macvlan interface, in this case vm-macvlan:
[Match]
Name=enp4s0
[Network]
MACVLAN=vm-macvlan
Procedure
1. On the host, create a Samba share and make it accessible for external systems.
Note that the hosts allow line restricts the accessibility of the share only to hosts on
the VM network. If you want the share to be accessible by anyone, remove the line.
# mkdir -p /samba/VM-share
f. Allow the VM-share directory to be accessible and modifiable for the VMs.
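For example, with commands along these lines, where the ownership and SELinux context values are illustrative:
# chmod -R 0755 /samba/VM-share/
# chown -R nobody:nobody /samba/VM-share/
# chcon -t samba_share_t /samba/VM-share/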
2. On the Windows guest operating system, attach the Samba share as a network location.
c. In the Add Network Location wizard that opens, select "Choose a custom network location"
and click Next.
d. In the "Internet or network address" field, type host-IP/VM-share, where host-IP is the IP
address of the host. Usually, the host IP is the default gateway of the VM. Afterwards, click
Next.
e. When the wizard asks if you want to rename the shared directory, keep the default name.
This ensures the consistency of file sharing configuration across the host and the guest. Click
Next.
f. If accessing the network location was successful, you can now click Finish and open the
shared directory.
Additional resources
Prerequisites
Make sure you have installed the latest WHQL certified VirtIO drivers.
Procedure
1. Enable TPM 2.0 by adding the following parameters to the <devices> section in the VM’s XML
configuration.
<devices>
[...]
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
[...]
</devices>
2. Install Windows in UEFI mode. For more information on how to do so, see Creating a
SecureBoot virtual machine.
3. Install the VirtIO drivers on the Windows VM. For more information on how to do so, see
Installing virtio drivers on a Windows guest .
4. In UEFI, enable Secure Boot. For more information on how to do so, see Secure Boot.
Verification
Ensure that the Device Security page on your Windows machine displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for standard hardware security.
Prerequisites
Ensure that standard hardware security is enabled. For more information, see Enabling standard
hardware security on Windows virtual machines.
Ensure that KVM nesting is enabled. For more information, see Creating nested virtual
machines.
# -cpu Skylake-Client-v3,hv_stimer,hv_synic,hv_relaxed,hv_reenlightenment,hv_spinlocks=0xfff,hv_vpindex,hv_vapic,hv_time,hv_frequencies,hv_runtime,+kvm_pv_unhalt,+vmx
Procedure
1. In the Windows VM, open the Core isolation settings: Settings > Update & Security > Windows Security > Device Security > Core isolation details.
2. Enable Memory integrity, and restart the VM.
NOTE
For other methods of enabling HVCI, see the relevant Microsoft documentation.
Verification
Ensure that the Device Security page on your Windows VM displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for enhanced hardware security.
CHAPTER 18. CREATING NESTED VIRTUAL MACHINES
In other words, an L0 host can run L1 virtual machines (VMs), and each of these L1 VMs can host its
own L2 VMs. Note that, in such cases, both L0 and L1 hosts must be RHEL 8 systems, whereas the L2
guest can be any supported RHEL or Windows system.
WARNING
Red Hat currently provides nested virtualization only as a Technology Preview, and
is therefore unsupported.
In addition, Red Hat does not recommend using nested virtualization in production user environments,
due to various limitations in functionality. Instead, nested virtualization is primarily intended for
development and testing scenarios, such as:
It is also possible to create nested VMs on multiple architectures, such as Intel, AMD, IBM POWER9, and
IBM Z. Note that, irrespective of the architecture used, nesting is a Technology Preview, and therefore
not supported by Red Hat.
WARNING
Red Hat currently provides nested virtualization only as a Technology Preview, and
is therefore unsupported.
Prerequisites
The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo
command on the L0 hypervisor. If the output of the command includes the vmx and ept flags,
creating L2 VMs is possible. This is generally the case on Intel Xeon v3 cores and later.
Verify that the nested virtualization feature is enabled on the L0 host:
# cat /sys/module/kvm_intel/parameters/nested
If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps,
and continue with the Procedure section.
If the command returns 0 or N but your system supports nested virtualization, use the
following steps to enable the feature.
# modprobe -r kvm_intel
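Then reload the module with nesting enabled:
# modprobe kvm_intel nested=1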
iii. The nesting feature is now enabled, but only until the next reboot of the L0 host. To
enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file:
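options kvm_intel nested=1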
Procedure
a. Open the XML configuration of the VM. The following example opens the configuration of
the Intel-L1 VM:
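# virsh edit Intel-L1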
b. Configure the VM to use host-passthrough CPU mode, by adding the following line to the configuration:
<cpu mode='host-passthrough'/>
If the VM’s XML configuration file already contains a <cpu> element, rewrite it.
2. Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the
L1 VM.
WARNING
Red Hat currently provides nested virtualization only as a Technology Preview, and
is therefore unsupported.
Prerequisites
The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo
command on the L0 hypervisor. If the output of the command includes the svm and npt flags,
creating L2 VMs is possible. This is generally the case on AMD EPYC cores and later.
Verify that the nested virtualization feature is enabled on the L0 host:
# cat /sys/module/kvm_amd/parameters/nested
If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps,
and continue with the Procedure section.
If the command returns 0 or N, use the following steps to enable the feature.
# modprobe -r kvm_amd
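Then reload the module with nesting enabled:
# modprobe kvm_amd nested=1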
iv. The nesting feature is now enabled, but only until the next reboot of the L0 host. To
enable it permanently, add the following to the /etc/modprobe.d/kvm.conf file:
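options kvm_amd nested=1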
Procedure
a. Open the XML configuration of the VM. The following example opens the configuration of
the AMD-L1 VM:
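# virsh edit AMD-L1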
b. Configure the VM to use host-passthrough CPU mode, by adding the following line to the configuration:
<cpu mode='host-passthrough'/>
If you require the VM to use a specific CPU instead of host-passthrough, add a <feature
policy='require' name='svm'/> line to the CPU configuration. For example:
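A configuration sketch along these lines, where the EPYC-IBPB model is illustrative and any CPU model with svm support works:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>EPYC-IBPB</model>
<feature policy='require' name='svm'/>
</cpu>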
2. Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the
L1 VM.
WARNING
Red Hat currently provides nested virtualization only as a Technology Preview, and
is therefore unsupported.
Prerequisites
The hypervisor CPU must support nested virtualization. To verify this is the case, use the cat
/proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the sie
flag, creating L2 VMs is possible.
Verify that the nested virtualization feature is enabled on the L0 host:
# cat /sys/module/kvm/parameters/nested
If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps,
and continue with the Procedure section.
If the command returns 0 or N, use the following steps to enable the feature.
# modprobe -r kvm
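Then reload the module with nesting enabled:
# modprobe kvm nested=1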
iv. The nesting feature is now enabled, but only until the next reboot of the L0 host. To
enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file:
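options kvm nested=1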
Procedure
Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the
L1 VM.
WARNING
Red Hat currently provides nested virtualization only as a Technology Preview, and
is therefore unsupported.
Prerequisites
An L0 RHEL 8 host is running an L1 VM. The L1 VM is using RHEL 8 as the guest operating
system.
Verify that the nested virtualization feature is enabled on the L0 host:
# cat /sys/module/kvm_hv/parameters/nested
If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps,
and continue with the Procedure section.
If the command returns 0 or N, use the following steps to enable the feature:
# modprobe -r kvm_hv
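Then reload the module with nesting enabled:
# modprobe kvm_hv nested=1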
iv. The nesting feature is now enabled, but only until the next reboot of the L0 host. To
enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file:
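options kvm_hv nested=1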
Procedure
1. To ensure that the L1 VM can create L2 VMs, add the cap-nested-hv parameter to the machine
type of the L1 VM. To do so, use the virsh edit command to modify the L1 VM’s XML
configuration, and add the following line to the <features> section:
<nested-hv state='on'/>
2. Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the
L1 VM.
To significantly improve the performance of L2 VMs, Red Hat recommends adding the
cap-nested-hv parameter to the XML configurations of L2 VMs as well. For instructions, see the
previous step.
Additional information
Note that using IBM POWER8 as the architecture for the L2 VM currently does not work.
WARNING
Red Hat currently does not support nested virtualization, and only provides nesting
as a Technology Preview.
Supported architectures
The L0 host must be an Intel, AMD, IBM POWER9, or IBM Z system. Nested virtualization
currently does not work on other architectures.
To create nested VMs, you must use the following guest operating systems (OSs):
On the L1 VMs - RHEL 7.8 and later, or RHEL 8.2 and later
NOTE
This support does not apply to using virtualization offerings based on RHEL 7
and RHEL 8 in L1 VMs. These include:
OpenShift Virtualization
In addition, on IBM POWER9, nested virtualization currently only works under the following
circumstances:
Hypervisor limitations
Currently, Red Hat supports nesting only on RHEL-KVM. When RHEL is used as the L0
hypervisor, you can use RHEL 8 or Windows for WSL 2 as the L1 hypervisor.
Feature limitations
Use of L2 VMs as hypervisors and creating L3 guests has not been properly tested and is not
expected to work.
Migrating VMs currently does not work on AMD systems if nested virtualization has been
enabled on the L0 host.
On an IBM Z system, huge-page backing storage and nested virtualization cannot be used at
the same time.
Some features available on the L0 host may be unavailable for the L1 hypervisor.
For example, on IBM POWER9 hardware, the External Interrupt Virtualization Engine (XIVE)
does not work. However, L1 VMs can use the emulated XIVE interrupt controller to launch L2
VMs.
CHAPTER 19. DIAGNOSING VIRTUAL MACHINE PROBLEMS
The following sections provide detailed information about generating logs and diagnosing some
common VM problems, as well as about reporting these problems.
The following sections explain what debug logs are, how you can set them to be persistent, enable them
during runtime, and attach them when reporting problems.
Debug logging is not enabled by default and has to be enabled when libvirt starts. You can enable
logging for a single session or persistently. You can also enable logging when a libvirtd daemon session
is already running by modifying the daemon run-time settings.
Attaching the libvirt debug logs is also useful when requesting support with a VM problem.
Procedure
1. Open the /etc/libvirt/libvirtd.conf file, and set the log_level variable to the requested level of detail:
1 logs all messages generated by libvirt.
2 logs all non-debugging information.
3 logs all warning and error messages. This is the default value.
4 logs only error messages.
Log all error and warning messages from the remote, util.json, and rpc layers
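For example, the following libvirtd.conf settings would implement this and direct the output to a file; log_filters and log_outputs are the standard libvirt configuration variables, and the file path is illustrative:
log_filters="3:remote 3:util.json 3:rpc"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"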
This is useful when restarting libvirtd is not possible because restarting fixes the problem, or because
there is another process, such as migration or backup, running at the same time. Modifying runtime
settings is also useful if you want to try a command without editing the configuration files or restarting
the daemon.
Prerequisites
Procedure
NOTE
It is recommended that you back up the active set of filters so that you can
restore them after generating the logs. If you do not restore the filters, the
messages will continue to be logged which may affect system performance.
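1. Back up the active set of log filters, where the backup file name is illustrative:
# virt-admin daemon-log-filters >> virt-filters-backup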
2. Use the virt-admin utility to enable debugging and set the filters according to your
requirements.
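For example, the following invocation sets filters matching the explanation below; the filter string is illustrative:
# virt-admin daemon-log-filters "3:remote 3:util.json 3:rpc"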
In this example, the value 3 logs all warning and error messages (the default logging level), and the
filters log all error and warning messages from the remote, util.json, and rpc layers.
3. Use the virt-admin utility to save the logs to a specific file or directory.
For example, the following command saves the log output to the libvirt.log file in the
/var/log/libvirt/ directory.
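A likely command for this, using the standard virt-admin output syntax:
# virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirt.log"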
4. Optional: You can also remove the filters to generate a log file that contains all VM-related
information. However, it is not recommended since this file may contain a large amount of
redundant information produced by libvirt’s modules.
# virt-admin daemon-log-filters
Logging filters:
5. Optional: Restore the filters to their original state using the backup file.
Perform the second step with the saved values to restore the filters.
Procedure
Based on the encountered problems, attach the following logs along with your report:
For problems with the libvirt service, attach the /var/log/libvirt/libvirtd.log file from the
host.
For problems with a specific VM, attach its respective log file.
For example, for the testguest1 VM, attach the testguest1.log file, which can be found at
/var/log/libvirt/qemu/testguest1.log.
Additional resources
For more information about attaching log files, see How to provide files to Red Hat Support?
This section provides a brief introduction to core dumping and explains how you can dump a VM core to
a specific file.
In such cases, you can use the virsh dump utility to save (or dump) the core of a VM to a file before you
reboot the VM. The core dump file contains a raw physical memory image of the VM which contains
detailed information about the VM. This information can be used to diagnose VM problems, either
manually, or by using a tool such as the crash utility.
Additional resources
For information about using the crash utility, see the crash man page and the crash utility home
page.
Prerequisites
Make sure you have sufficient disk space to save the file. Note that the space occupied by the
VM depends on the amount of RAM allocated to the VM.
Procedure
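For example, to dump the core of a VM to a file, where the VM name and file path are illustrative:
# virsh dump testguest1 /tmp/testguest1.core --memory-only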
IMPORTANT
The crash utility no longer supports the default file format of the virsh dump command.
To analyze a core dump file using crash, you must create the file using the --memory-
only option.
Additionally, you must use the --memory-only option when creating a core dump file to
attach to a Red Hat Support Case.
Troubleshooting
If the virsh dump command fails with a System is deadlocked on memory error, ensure you are
assigning sufficient memory for the core dump file. To do so, use the following crashkernel option
value. Alternatively, do not use crashkernel at all, which assigns core dump memory automatically.
crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Additional resources
For a complete list of arguments that the virsh dump command accepts, see the virsh man page or run virsh dump --help.
Prerequisites
Make sure you know the PID of the processes that you want to backtrace.
You can find the PID by using the pgrep command followed by the name of the process. For
example:
# pgrep libvirt
22014
22025
Procedure
Use the gstack utility followed by the PID of the process you wish to backtrace.
For example, the following command backtraces the libvirt process with the PID 22014.
# gstack 22014
Thread 3 (Thread 0x7f33edaf7700 (LWP 22017)):
#0 0x00007f33f81aef21 in poll () from /lib64/libc.so.6
Additional resources
For more information about using gstack, see the gstack man page.
Additional resources for reporting virtual machine problems and providing logs
To request additional help and support, you can:
Raise a service request using the redhat-support-tool command line option, the Red Hat Portal
UI, or several different methods using FTP.
Upload the SOS Report and the log files when you submit a service request.
This ensures that the Red Hat support engineer has all the necessary diagnostic information for
reference.
For more information about SOS reports, see What is an SOS Report and how to create one
in Red Hat Enterprise Linux?
For information about attaching log files, see How to provide files to Red Hat Support?
CHAPTER 20. FEATURE SUPPORT AND LIMITATIONS IN RHEL 8 VIRTUALIZATION
Features listed in Section 20.2, “Recommended features in RHEL 8 virtualization” have been tested and
certified by Red Hat to work with the KVM hypervisor on a RHEL 8 system. Therefore, they are fully
supported and recommended for use in virtualization in RHEL 8.
Features listed in Section 20.3, “Unsupported features in RHEL 8 virtualization” may work, but are not
supported and not intended for use in RHEL 8. Therefore, Red Hat strongly recommends not using
these features in RHEL 8 with KVM.
Section 20.4, “Resource allocation limits in RHEL 8 virtualization” lists the maximum amount of specific
resources supported on a KVM guest in RHEL 8. Guests that exceed these limits are not supported by
Red Hat.
In addition, unless stated otherwise, all features and solutions described in the RHEL 8 virtualization
documentation are supported. However, some of these have not been completely tested and
therefore may not be fully optimized.
IMPORTANT
Many of these limitations do not apply to other virtualization solutions provided by Red
Hat, such as Red Hat Virtualization (RHV), OpenShift Virtualization, or Red Hat
OpenStack Platform (RHOSP).
RHEL 8 supports KVM virtualization hosts only on the following architectures:
AMD64 and Intel 64
IBM Z
IBM POWER8
IBM POWER9
Any other hardware architectures are not supported for using RHEL 8 as a KVM virtualization host, and
Red Hat highly discourages doing so. Notably, this includes the 64-bit ARM architecture (ARM 64).
NOTE
RHEL 8 documentation primarily describes AMD64 and Intel 64 features and usage. For
information about the specifics of using RHEL 8 virtualization on different architectures, see
Section 3.2, “How virtualization on IBM POWER differs from AMD64 and Intel 64” and
Section 4.2, “How virtualization on IBM Z differs from AMD64 and Intel 64”.
Note that by default, your guest OS does not use the same subscription as your host.
Therefore, you must activate a separate license or subscription for the guest OS to work properly.
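For example, on a RHEL guest OS, the subscription can be activated with the subscription-manager utility. A sketch, assuming you have valid Red Hat account credentials and an available subscription:
# subscription-manager register --auto-attach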
Machine types
To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally,
the VM must use an appropriate machine type.
When you create a VM by using the command line, the virt-install utility provides multiple methods of
setting the machine type:
When you use the --os-variant option, virt-install automatically selects the machine type
recommended for your host CPU and supported by the guest OS.
If you do not use --os-variant or require a different machine type, use the --machine option to
specify the machine type explicitly.
If you specify a --machine value that is unsupported or not compatible with your host, virt-
install fails and displays an error message.
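For example, the following invocation is a minimal sketch that relies on --os-variant to select an appropriate machine type; the VM name, resource sizes, OS variant, and ISO path are all illustrative:
# virt-install --name testguest1 --memory 2048 --vcpus 2 \
  --disk size=10 --os-variant rhel8.4 \
  --cdrom /home/user/rhel8.iso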
The recommended machine types for KVM virtual machines on supported architectures, and the
corresponding values for the --machine option, are as follows. Y stands for the latest minor version of
RHEL 8.
On Intel 64 and AMD64 (x86_64): pc-q35-rhel8.Y.0
On IBM POWER 8 and IBM POWER 9 (ppc64le): pseries-rhel8.Y.0
On IBM Z (s390x): s390-ccw-virtio-rhel8.Y.0
To obtain a complete list of machine types supported on your host, use the following command:
# /usr/libexec/qemu-kvm -M help
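To check which machine type an existing VM uses, you can inspect its XML configuration. For example, assuming a VM named testguest1:
# virsh dumpxml testguest1 | grep machine=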
Additional resources
For information about unsupported guest OS types and features in RHEL 8 virtualization, see
Section 20.3, “Unsupported features in RHEL 8 virtualization”.
For information about the maximum supported amounts of resources that can be allocated to a
VM, see Section 20.4, “Resource allocation limits in RHEL 8 virtualization”.
For information about supported machine types when migrating VMs, see Section 9.7,
“Supported hosts for virtual machine migration”.
IMPORTANT
Many of these limitations may not apply to other virtualization solutions provided by Red
Hat, such as Red Hat Virtualization (RHV), OpenShift Virtualization, or Red Hat
OpenStack Platform (RHOSP).
Features supported by RHV 4.2 and later, or RHOSP 13 and later, are described as such in
the following paragraphs.
Notably, Red Hat does not support using systems with the 64-bit ARM architecture (ARM 64) for KVM
virtualization on RHEL 8.
Guest operating systems
Notably, the macOS guest OS is not supported for KVM virtual machines on a RHEL 8 host.
For a list of guest OSs supported on RHEL hosts, see Certified guest operating systems for Red Hat
Enterprise Linux with KVM.
For a list of guest OSs supported by other virtualization solutions provided by Red Hat, see Certified
Guest Operating Systems in Red Hat OpenStack Platform and Red Hat Virtualization.
For a list of guest OSs supported specifically by RHV, see Supported guest operating systems in RHV.
Red Hat does not support creating KVM virtual machines in any type of container that includes the
elements of the RHEL 8 hypervisor (such as the QEMU emulator or the libvirt package).
To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.
vCPU hot unplug
Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not
supported in RHEL 8.
Note that vCPU hot unplugs are supported in RHV. For details, see Hot plugging VCPUs.
Memory hot unplug
Removing memory devices attached to a running VM, also referred to as a memory hot unplug, is not
supported in RHEL 8.
Note that memory hot unplugs are supported in RHV, but only on guest VMs running RHEL with specific
guest configurations. For details, see Hot Unplugging Virtual Memory.
QEMU-side I/O throttling
Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on a
virtual disk, also known as QEMU-side I/O throttling, is not supported in RHEL 8.
To set up I/O throttling in RHEL 8, use virsh blkiotune. This is also known as libvirt-side I/O throttling.
For instructions, see Section 16.4.2, “Disk I/O throttling in virtual machines”.
Note that QEMU-side I/O throttling is supported in RHV. For details, see Storage quality of service.
QEMU-side I/O throttling is also supported in RHOSP. For details, see Setting Resource Limitation on
Disk and the Use Quality-of-Service Specifications section in the RHOSP Storage Guide.
Storage live migration
Migrating a disk image of a running VM between hosts is not supported in RHEL 8.
Note that storage live migration is supported in RHV. For details, see Overview of Live Storage
Migration.
Storage live migration is also supported in RHOSP, but with some limitations. For details, see Migrate a
Volume.
Live snapshots
Creating or loading a snapshot of a running VM, also referred to as a live snapshot, is not supported in
RHEL 8.
In addition, note that non-live VM snapshots are deprecated in RHEL 8. Therefore, creating or loading a
snapshot of a shut-down VM is supported, but Red Hat recommends not using it.
Note that live snapshots are supported in RHV. For details, see Live snapshots in Red Hat Virtualization.
Live snapshots are also supported in RHOSP. For details, see Importing virtual machines into the
overcloud.
vhost-user
RHEL 8 does not support the implementation of a user-space vhost interface.
Note that vhost-user is supported in RHOSP, but only for virtio-net interfaces. For details, see virtio-
net implementation and vhost user ports.
S3 and S4 system power states
Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not
supported in RHEL 8.
Note that the S3 and S4 states are currently also not supported in RHV and RHOSP.
S3-PR on a multipathed vDisk
SCSI3 persistent reservation (S3-PR) on a multipathed vDisk is not supported in RHEL 8. As a
consequence, Windows Cluster is not supported in RHEL 8.
Note that S3-PR on a multipathed vDisk is supported in RHV. Therefore, if you require Windows Cluster
support, Red Hat recommends using RHV as your virtualization solution. For details, see Cluster support
on RHV guests.
virtio-crypto
The drivers for the virtio-crypto device are available in the RHEL 8.0 kernel, and the device can thus be
enabled on a KVM hypervisor under certain circumstances. However, using the virtio-crypto device in
RHEL 8 is not supported and its use is therefore highly discouraged.
Note that virtio-crypto devices are also not supported in RHV or RHOSP.
Incremental live backup
Configuring a VM backup that only includes the changes of the VM since the last backup, also known as
incremental live backup, is not supported in RHEL 8, and Red Hat highly discourages using it.
Note that incremental live backup is provided as a Technology Preview in RHV 4.4 and later.
net_failover
Using the net_failover driver to set up an automated network device failover mechanism is not
supported in RHEL 8.
Note that net_failover is currently also not supported in RHV and RHOSP.
Multi-FD migration
Migrating VMs using multiple file descriptors (FDs), also known as multi-FD migration, is not supported
in RHEL 8.
Note that multi-FD migrations are currently also not supported in RHV or RHOSP.
virtiofs
Sharing files between the host and its VMs using the virtiofs file system is unsupported in RHEL 8.
Note that using virtiofs is currently also not supported in RHV or RHOSP.
NVMe devices
Attaching Non-volatile Memory express (NVMe) devices to VMs hosted in RHEL 8 is not supported.
Note that attaching NVMe devices to VMs is currently also not supported in RHV or RHOSP.
TCG
QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This
mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.
TCG-based guests can be recognized by examining their XML configuration, for example by using the
virsh dumpxml command. The configuration file of a TCG guest contains the following line:
<domain type='qemu'>
By contrast, the configuration file of a KVM guest contains the following line:
<domain type='kvm'>
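For example, the domain type of a VM can be checked directly, assuming a VM named testguest1:
# virsh dumpxml testguest1 | grep "<domain type="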
Additional resources
For information about supported guest OS types and recommended features in RHEL 8
virtualization, see Section 20.2, “Recommended features in RHEL 8 virtualization”.
For information about the maximum supported amounts of resources that can be allocated to a
VM, see Section 20.4, “Resource allocation limits in RHEL 8 virtualization”.
IMPORTANT
Many of these limitations do not apply to other virtualization solutions provided by Red
Hat, such as Red Hat Virtualization (RHV), OpenShift Virtualization, or Red Hat
OpenStack Platform (RHOSP).
Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some
buses do not make all 256 device addresses available for the user; for example, the root bus has several
built-in devices occupying slots.
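For illustration, a PCI bridge can be added to a VM by including a controller element such as the following in the VM's XML configuration. This is a sketch; the pci-bridge model applies to i440FX-based (pc) machine types, and libvirt typically assigns the index and bus addresses automatically:
<controller type='pci' model='pci-bridge'/>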
Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat
Virtualization and Red Hat OpenStack Platform. For more information, see Section 20.3, “Unsupported
features in RHEL 8 virtualization”.
Additional resources
For a complete list of unsupported features of virtual machines in RHEL 8, see Section 20.3,
“Unsupported features in RHEL 8 virtualization”.
For details on the specifics for virtualization on the IBM Z architecture, see Section 4.2, “How
virtualization on IBM Z differs from AMD64 and Intel 64”.
For details on the specifics for virtualization on the IBM POWER architecture, see Section 3.2,
“How virtualization on IBM POWER differs from AMD64 and Intel 64”.