KVM Virtualization
Linux on Z and LinuxONE
IBM
SC34-2752-04
This edition applies to the Linux on z Systems Development stream, libvirt version, and QEMU release as available
at the time of writing, and to all subsequent releases and modifications until otherwise indicated in new editions.
© Copyright IBM Corporation 2015, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
About this document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
How this document is organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Conventions and assumptions used in this publication . . . . . . . . . . . . . . . . . . . . . x
Where to get more information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Other publications for Linux on z Systems and LinuxONE . . . . . . . . . . . . . . . . . . . . xi
Chapter 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Virtual server management tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Virtualization components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Part 3. Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Chapter 13. Creating, modifying, and deleting persistent virtual server definitions. . . 109
Defining a virtual server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Modifying a virtual server definition . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Undefining a virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
<backend> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
<boot> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
| <bridge> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
<cipher> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
<cmdline> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
<console> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
<controller> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
| <cpu> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
<cputune> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
<device> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
<devices> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
| <dhcp> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
<disk> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
<domain> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
<driver> as child element of <controller> . . . . . . . . . . . . . . . . . . . . . . . . . 209
<driver> as child element of <disk> . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
<emulator> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
| <feature> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
<format> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
| <forward> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
<geometry> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
<host> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
<hostdev> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
<initrd> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
<interface> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
<iothreads> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
| <ip> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
<kernel> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
<key> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
<keywrap> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
<log> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
<mac>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
<memballoon> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
<memory> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
<memtune> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
| <model> as a child element of <cpu> . . . . . . . . . . . . . . . . . . . . . . . . . . 232
<model> as a child element of <interface> . . . . . . . . . . . . . . . . . . . . . . . . 233
<name> as a child element of <domain> . . . . . . . . . . . . . . . . . . . . . . . . . 234
| <name> as a child element of <network> . . . . . . . . . . . . . . . . . . . . . . . . . 235
| <network> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
<on_crash> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
<on_reboot>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
<os> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
<path> as child element of <pool><target> . . . . . . . . . . . . . . . . . . . . . . . . 240
<path> as child element of <volume><target> . . . . . . . . . . . . . . . . . . . . . . . 241
<pool> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
<readonly> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
<rng> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
<shareable> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
<shares> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
<soft_limit> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
<source> as child element of <disk>. . . . . . . . . . . . . . . . . . . . . . . . . . . 249
<source> as child element of <hostdev> . . . . . . . . . . . . . . . . . . . . . . . . . 251
<source> as child element of <interface> . . . . . . . . . . . . . . . . . . . . . . . . . 252
<source> as child element of <pool> . . . . . . . . . . . . . . . . . . . . . . . . . . 253
<target> as child element of <console> . . . . . . . . . . . . . . . . . . . . . . . . . . 254
<target> as child element of <disk> . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
<target> as child element of <pool> . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
<target> as child element of <volume> . . . . . . . . . . . . . . . . . . . . . . . . . . 257
<type> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
<vcpu> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
setvcpus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
suspend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
undefine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
vcpucount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
vol-create. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
vol-delete. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
vol-dumpxml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
vol-info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
vol-key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
vol-list. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
vol-name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
vol-path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
vol-pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Chapter 31. Hypervisor information for the virtual server user . . . . . . . . . . . 353
Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
For KVM host setup information, see the host administration documentation of
your distribution.
For a description of Linux on KVM and tasks that are performed by the KVM
virtual server user, see Device Drivers, Features, and Commands for Linux as a KVM
Guest, SC34-2754.
This document describes a selection of helpful libvirt XML elements and virsh
commands that can be used to perform the documented administration tasks for a
KVM host on IBM Z hardware. The described subset is not complete.
You can find the latest version of the complete references on libvirt.org at:
v libvirt.org/format.html
v libvirt.org/sources/virshcmdref
Part two contains chapters that describe how to change the current setup of IBM Z
devices on the KVM host in order to provide them as virtual devices for a KVM
virtual server.
Part three contains chapters about the configuration of a KVM virtual server and
the specification of the IBM Z hardware on which the virtual resources are based.
Part four contains chapters about the lifecycle management and operation of a
KVM virtual server.
Part five contains chapters that describe how to display information that helps to
diagnose and solve problems associated with the operation of a KVM virtual
server.
Authority
Most of the tasks described in this document require a user with root authority.
Throughout this document, it is assumed that you have root authority.
Persistent configuration
Device and interface setups as described in this document do not persist across
host reboots. For information about persistently setting up devices and interfaces,
see the administration documentation of your host distribution.
Terminology
Highlighting
QEMU
QEMU is the user space process that implements the virtual server hardware on
the host.
Other publications
v Open vSwitch: openvswitch.org
v SCSI Architecture Model (SAM): t10.org
For versions of documents that have been adapted to a particular distribution, see
one of the following web pages:
www.ibm.com/developerworks/linux/linux390/documentation_red_hat.html
www.ibm.com/developerworks/linux/linux390/documentation_suse.html
www.ibm.com/developerworks/linux/linux390/documentation_ubuntu.html
As KVM virtual server administrator, you prepare devices for the use of virtual
servers, configure virtual servers, and manage the operation of virtual servers.
The KVM host is the Linux instance that runs the KVM virtual servers and
manages their resources. In the libvirt documentation, a host is also called a node.
Figure 2. KVM host with a virtual server including a guest operating system
a. Define the virtual server to libvirt: from your domain configuration-XML, the define operation creates a libvirt-internal configuration under the specified name.
b. Now you can manage the operation of the virtual server. This consists of:
v Life cycle management:
The life cycle comprises the states undefined, defined (shut off), running, and paused; the operations define, undefine, start, terminate, suspend, and resume switch between these states.
Virtualization components
Virtual server management as described in this document is based on the
following virtualization components.
Linux kernel including the kvm kernel module (KVM)
Provides the core virtualization infrastructure to run multiple virtual
servers on a Linux host.
QEMU
User space component that implements virtual servers on the host using
KVM functionality.
libvirt Provides a toolkit for virtual server management:
v The XML format is used to configure virtual servers.
v The virsh command-line interface is used to operate virtual servers and
devices.
Figure 5 on page 7 shows the virtual server management tasks using the XML
format and the virsh command-line interface.
Figure 5. Virtual server administrator tasks using XML format and the virsh command-line interface
Chapter 2. Virtual block devices
DASDs, FC-attached SCSI disks, image files and logical volumes are virtualized as
virtio block devices.
Related publications
v Device Drivers, Features, and Commands, SC33-8411
v How to use FC-attached SCSI devices with Linux on z Systems, SC33-8413
On the host, you manage various types of disk devices and their configuration
topology. For production systems, DASDs and FC-attached SCSI disks are typically
set up with multipathing to boost availability through path redundancy.
From the virtual server point of view, these are virtual block devices that are
attached through one virtual channel path. It makes no difference whether a virtual
block device is implemented as a DASD, a SCSI disk, or an image file on the host.
QEMU uses the current libvirt-internal configuration to assign the virtual devices
of a virtual server to the underlying host devices.
To provide DASDs and FC-attached SCSI disks as virtual block devices for a
virtual server:
1. Set up the DASDs and FC-attached SCSI disks.
Prepare multipathing, because virtual block devices cannot be multipathed on
the virtual server.
It is also important that you provide unique device nodes that are persistent
across host reboots. Unique device nodes ensure that your configuration
remains valid after a host reboot. In addition, device nodes that are unique for
a disk device on different hosts allow the live migration of a virtual server to a
different host, or the migration of a disk to a different storage server or storage
controller.
See Chapter 5, “Preparing DASDs,” on page 27 and Chapter 6, “Preparing SCSI
disks,” on page 29.
2. Configure the DASDs and FC-attached SCSI disks as virtual block devices.
You configure devices that are to be defined with the virtual server in its
domain configuration-XML file. You can also define devices in a separate
device configuration-XML file. Such devices can be attached to an already
defined virtual server.
See Chapter 10, “Configuring devices,” on page 75 and “Configuring virtual
block devices” on page 78.
Figure 6 on page 10 shows how multipathed DASD and SCSI disks are configured
as virtual block devices.
Figure 6. Multipathed DASD and SCSI disks configured as virtual block devices
There are multiple ways to identify a disk device on the host or on the virtual
server.
Device bus-ID and device number of an FCP device
On the host, a SCSI device is connected to an FCP device, which has a
device bus-ID of the form:
0.m.dddd
Where m is the subchannel set-ID and dddd is the device number of the FCP device.
Example:
0.0.1700 device bus-ID of the FCP device.
Example:
0.0.e717 device bus-ID of the DASD.
e717 device number of the DASD.
Example:
IBM.75000000010671.5600.00
Where:
Example:
0.0.1a12 device bus-ID of the virtual device.
1a12 device number of the virtual device.
Example:
dasda on the host.
sda on the host.
vda on the virtual server.
Example:
/dev/sda for SCSI disks on the host.
/dev/dasda for DASDs on the host.
/dev/vda for virtual block devices on the virtual server.
Tip: Prepare a strategy for specifying device numbers for the virtio block
devices, which you provide for virtual servers. This strategy makes it easy
to identify the virtualized disk from the device bus-ID or device number of
the virtual block device.
Example:
/dev/mapper/36005076305ffc1ae00000000000021d5
/dev/mapper/36005076305ffc1ae00000000000021d5p1
where
p1 denotes the first partition of the device.
Tip: Use device mapper-created device nodes for SCSI disks and udev-created
device nodes for DASDs in your configuration-XML files to support a smooth live
migration of virtual servers to a different host.
Storage pools
Alternatively, you can configure storage pools, leaving the resource management of
step 1 to libvirt. A storage pool consists of a set of volumes, such as
v The image files of a host directory
v The image files residing on a disk or the partition of a disk
v The image files residing on a network file system
v The logical volumes of a volume group
A live virtual server migration is only possible for storage pools backed by image
files residing on a network file system.
Figure 8 shows a storage pool backed by the logical volumes of a volume group, where each logical volume is provided as a volume of the storage pool.
To provide the volumes of a storage pool as virtual block devices for a virtual
server:
1. Create the resources which back the storage pool.
5. Define and start the storage pool before defining the virtual server.
Manage the storage pool and its volumes by using the commands described in
Chapter 19, “Managing storage pools,” on page 151.
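For example, assuming the pool blk-pool0 is defined in a file blk-pool0.xml (hypothetical file name):
# virsh pool-define blk-pool0.xml
Pool blk-pool0 defined from blk-pool0.xml
# virsh pool-start blk-pool0
Pool blk-pool0 started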
To provide high reliability, be sure to set up redundant paths for SCSI tape or
medium changer devices on the host. A device configuration for a SCSI tape or
medium changer device provides one virtual SCSI device for each path. Figure 10
on page 18 shows one virtual SCSI device for sg<0>, and one for sg<1>, although
these devices represent different paths to the same device. The lin_tape device
driver models path redundancy on the virtual server. lin_tape reunites the virtual
SCSI devices that represent different paths to the same SCSI tape or medium
changer device.
Figure 10. Multipathed SCSI tapes and SCSI medium changer devices configured as virtual
SCSI devices
For a SCSI tape or medium changer device configuration, the following device
names are relevant:
Standard device name
Standard device names are of the form:
sg<x> for SCSI tape or medium changer devices on the host using
the SCSI generic device driver.
IBMtape<x> for SCSI tape devices on the virtual server using the
lin_tape device driver.
IBMchanger<x> for SCSI medium changer devices on the virtual server
using the lin_tape device driver.
SCSI device name
SCSI device names are of the form <SCSI-host-number>:0:<SCSI-ID>:<SCSI-LUN>, where:
<SCSI-host-number> is assigned to the FCP device in the order in which the FCP
device is detected.
<SCSI-ID> is the SCSI ID of the target port.
<SCSI-LUN> is assigned to the SCSI device by conversion from the
corresponding FCP LUN.
SCSI device names are freshly assigned when the host reboots, or when an
FCP device or a SCSI tape or medium changer device is set offline and
back online.
SCSI device names are also referred to as SCSI stack addresses.
Example: 0:0:1:7
Related publication
v Device Drivers, Features, and Commands, SC33-8411
In a typical virtual network device configuration, you will want to isolate the
virtual server communication paths from the communication paths of the host.
There are two ways to provide network isolation:
v You set up separate network devices for the virtual servers that are not used for
the host network traffic. This method is called full isolation. It allows the virtual
network device configuration using a direct MacVTap connection or a virtual
switch.
v If the virtual server network traffic shares network interfaces with the host, you
can provide isolation by configuring the virtual network device using a
MacVTap interface. Direct MacVTap connections guarantee the isolation of
virtual server and host communication paths.
MacVTap provides a high speed network interface to the virtual server. The
MacVTap network device driver virtualizes Ethernet devices and provides MAC
addresses for virtual network devices.
[Figure 12: a virtual network device on the KVM host, connected through a MacVTap interface to the bonded interface bond0, which is backed by redundant network hardware.]
When you configure a virtual Ethernet device, you associate it with a network
interface name on the host in the configuration-XML. In Figure 12, this is bond0.
libvirt then creates a MacVTap interface from your network configuration.
Use persistent network interface names to ensure that the configuration-XMLs are
still valid after a host reboot or after you unplug or plug in a network adapter.
Your product or distribution might provide a way to assign meaningful names to
your network interfaces. When you intend to migrate a virtual server, use network
interface names that are valid for the hosts that are part of the migration.
Virtual switches are implemented using Open vSwitch. Virtual switches can be
used to virtualize Ethernet devices. They provide means to configure path
redundancy and isolated communication between selected virtual servers.
Note: Libvirt also provides a default bridged network, called virbr0, which is not
covered in this document. See the libvirt networking documentation reference in
the related publications section for more details.
Related publications
v Device Drivers, Features, and Commands, SC33-8411
v Libvirt networking documentation at wiki.libvirt.org/page/Networking
Related tasks:
Chapter 8, “Preparing network devices,” on page 37
Consider these aspects when setting up network interfaces for the use of virtual
servers.
“Configuring virtual Ethernet devices” on page 98
Configure network interfaces, such as Ethernet interfaces, bonded interfaces,
virtual LANs, or virtual switches as virtual Ethernet devices for a virtual server.
v If the PAV or the HyperPAV feature is enabled on your storage system, it assigns
unique IDs to its DASDs and manages the alias devices.
The following publication describes how to configure, prepare, and work with
DASDs:
v Device Drivers, Features, and Commands, SC33-8411
Procedure
The following steps describe a DASD setup on the host that does not persist across
host reboots.
For a persistent setup, see your host administration documentation (see also
“Persistent configuration” on page x).
1. Set the DASD base device and its alias devices online.
2. Obtain the device node of the DASD.
3. You need to format the DASD, because the virtual server cannot format DASDs
by itself.
You can use the CDL or the LDL format.
4. Do not create partitions on behalf of the virtual server.
Establish a process to let the virtual server user know which virtual block
devices are backed by DASDs, because these devices have to be partitioned
using the Linux command fdasd for the CDL format. The inadvertent use of the
fdisk command to partition the device could lead to data corruption.
Example
1. Set the DASD online using the Linux command chccwdev and the device bus-ID
of the DASD.
For example, for device 0.0.7500, issue:
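# chccwdev -e 0.0.7500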
2. To obtain the DASD name from the device bus-ID, you can use the Linux
command lsdasd:
# lsdasd
Bus-ID Status Name Device Type BlkSz Size Blocks
==============================================================================
0.0.7500 active dasde 94:0 ECKD 4096 7043MB 1803060
...
3. Format the DASD using the Linux command dasdfmt and the device name.
# dasdfmt -b 4096 /dev/disk/by-path/ccw-0.0.7500 -p
4. Establish a procedure to let the virtual server user know which virtual devices
are backed by DASDs.
What to do next
The following publications describe in detail how to configure, prepare, and work
with FC-attached SCSI disks:
v Fibre Channel Protocol for Linux and z/VM on IBM System z®, SG24-7266
v How to use FC-attached SCSI devices with Linux on z Systems, SC33-8413
v Device Drivers, Features, and Commands, SC33-8411
Procedure
The following steps describe a SCSI disk setup on the host that does not persist
across host reboots.
For a persistent setup, see your host administration documentation (see also
“Persistent configuration” on page x).
1. Linux senses the available FCP devices.
You can use the lscss command to display the available FCP devices.
The -t option can be used to restrict the output to a particular device type. FCP
devices are listed as 1732/03 devices with control unit type 1731/03.
2. Set the FCP device online.
You can use the chccwdev command to set an FCP device online or offline.
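For example, to set the FCP device 0.0.1700 (the device used in the example below) online:
# chccwdev -e 0.0.1700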
3. Configure the SCSI disks on the host.
For details about this step, refer to your host administration documentation and
Device Drivers, Features, and Commands, SC33-8411.
Example
For one example path, you provide the device bus-ID of the FCP device, the target
WWPN, and the FCP LUN of the SCSI disk:
/sys/bus/ccw/drivers/zfcp/0.0.1700/0x500507630513c1ae/0x402340bc00000000
This path provides the information: device bus-ID 0.0.1700, target WWPN
0x500507630513c1ae, and FCP LUN 0x402340bc00000000.
4. Figure out the device mapper-created device node of the SCSI disk.
a. You can use the lszfcp command to display the SCSI device name of a
SCSI disk:
# lszfcp -D -b 0.0.1700 -p 0x500507630513c1ae -l 0x402340bc00000000
0.0.1700/0x500507630513c1ae/0x402340bc00000000 2:0:17:1086079011
b. The lsscsi -i command displays the multipathed SCSI disk related to the
SCSI device name:
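The output might look like the following (illustrative values):
[2:0:17:1086079011] disk IBM 2107900 3.44 /dev/sda 36005076305ffc1ae00000000000023bc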
The device mapper-created device node that you can use to uniquely
reference the multipathed SCSI disk 36005076305ffc1ae00000000000023bc is:
/dev/mapper/36005076305ffc1ae00000000000023bc
What to do next
/sys/bus/ccw/drivers/zfcp/<device_bus_id>/<wwpn>/<fcp_lun>
The virtual server user can install and use the IBM lin_tape package on the virtual
server for actions such as the mounting and unmounting of tape cartridges into the
affected tape drive. The use of the lin_tape device driver is documented in the IBM
Tape Device Drivers Installation and User's Guide, GC27-2130.
The following publications describe in detail how to configure, prepare, and work
with FC-attached SCSI devices:
v Fibre Channel Protocol for Linux and z/VM on IBM System z, SG24-7266
v How to use FC-attached SCSI devices with Linux on z Systems, SC33-8413
v Device Drivers, Features, and Commands, SC33-8411
Note: In the libvirt documentation, the term “LUN” is often referenced as “unit”.
Procedure
The following steps describe a SCSI tape or medium changer setup on the host
that does not persist across host reboots.
For a persistent setup, see your host administration documentation (see also
“Persistent configuration” on page x).
1. Linux senses the available FCP devices.
You can use the lscss command to display the available FCP devices. The -t
option can be used to restrict the output to a particular device type. FCP
devices are listed as 1732/03 devices with control unit type 1731/03.
2. Set the FCP device to which your SCSI device is attached online.
You can use the chccwdev command to set an FCP device online or offline.
3. Register the SCSI tape or medium changer device on the host.
For details about this step, refer to your host administration documentation and
Device Drivers, Features, and Commands, SC33-8411.
If your LUN is not automatically detected, you might add the LUN of the SCSI
tape or medium changer device to the file system by issuing:
# echo <fcp_lun> > /sys/bus/ccw/devices/<device_bus_id>/<wwpn>/unit_add
This step registers the SCSI tape or medium changer device in the Linux SCSI
stack and creates a sysfs entry for it in the SCSI branch.
The lszfcp -D command displays the SCSI device name of the SCSI tape or the
SCSI medium changer, in the form:
<scsi_host_number>:0:<scsi_ID>:<scsi_lun>
Example
For one example path, you provide the device bus-ID of the FCP device, the target
WWPN, and the FCP LUN of the SCSI tape or medium changer device:
/sys/bus/ccw/drivers/zfcp/0.0.1cc8/0x5005076044840242/0x0000000000000000
This path provides the information: device bus-ID 0.0.1cc8, target WWPN
0x5005076044840242, and FCP LUN 0x0000000000000000.
4. Obtain the SCSI host number, the SCSI ID, and the SCSI LUN of the registered
SCSI tape device:
# lszfcp -D -b 0.0.1cc8 -p 0x5005076044840242 -l 0x0000000000000000
0.0.1cc8/0x5005076044840242/0x0000000000000000 1:0:2:0
where 1 is the SCSI host number, 2 is the SCSI ID, and the trailing 0 is the SCSI LUN.
For information about how to set up network devices on the host, see Device
Drivers, Features, and Commands, SC33-8411.
Procedure
1. Create network interfaces as described in “Creating a network interface” on
page 38.
2. Prepare the configuration-specific setup.
a. To configure a MacVTap interface, perform the steps described in
“Preparing a network interface for a direct MacVTap connection” on page
40.
b. To configure a virtual switch, perform the steps described in “Preparing a
virtual switch” on page 43.
Virtual switches provide means to configure highly available or isolated
connections. Nevertheless, you can also set up a bonded interface or a virtual
LAN interface.
What to do next
You need to know the IP address of the network device and its network interface
name.
The following steps describe a network interface setup on the host that does not
persist across host reboots.
For a persistent setup, see your host administration documentation (see also
“Persistent configuration” on page x).
Procedure
1. Determine the available network devices as defined in the IOCDS.
You can use the znetconf -u command to list the unconfigured network
devices and to determine their device bus-IDs.
# znetconf -u
2. Configure the network devices in layer 2 mode and set them online.
To provide a good network performance, set the buffer count value to 128.
For a non-persistent configuration, use the znetconf -a command with the
layer2 sysfs attribute set to 1 and the buffer_count attribute set to 128:
# znetconf -a <device-bus-ID> -o layer2=1 -o buffer_count=128
You can use the znetconf -c command to list the configured network interfaces
and to display their interface names:
# znetconf -c
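3. Activate the network interface.
For example, assign an IP address and set the interface up (address and interface name are placeholders):
# ip addr add <IP-address>/<subnet-mask-bits> dev <network-interface-name>
# ip link set <network-interface-name> up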
Issue the first command only if the interface has not already been activated and
subsequently deactivated.
4. For best performance, increase the transmit queue length of the network
device (txqueuelen) to the recommended value of 2500:
# ip link set <network-interface-name> qlen 2500
In the following example, you determine that OSA-Express CCW group devices
with, for example, device bus-IDs 0.0.8050, 0.0.8051, and 0.0.8052 are to be used,
and you set up the network interface.
1. Determine the available network devices.
# znetconf -u
Scanning for network devices...
Device IDs Type Card Type CHPID Drv.
------------------------------------------------------------
...
0.0.8050,0.0.8051,0.0.8052 1731/01 OSA (QDIO) 90 qeth
...
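2. Configure the network device in layer 2 mode with a buffer count of 128, and set it online:
# znetconf -a 0.0.8050 -o layer2=1 -o buffer_count=128
3. List the configured network interfaces to display the interface name: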
# znetconf -c
Device IDs Type Card Type CHPID Drv. Name State
-----------------------------------------------------------------------------------
...
0.0.8050,0.0.8051,0.0.8052 1731/01 OSD_1000 A0 qeth enccw0.0.8050 online
...
What to do next
Prepare the configuration-specific setup as described in:
v “Preparing a network interface for a direct MacVTap connection” on page 40
v or “Preparing a virtual switch” on page 43
libvirt will automatically create a MacVTap interface when you configure a direct
connection.
Make sure that the MacVTap kernel modules are loaded, for example by using the
lsmod | grep macvtap command.
Procedure
1. Create a bonded interface to provide high availability.
See “Preparing a bonded interface.”
2. Optional: Create a virtual LAN (VLAN) interface.
VLAN interfaces provide an isolated communication between the virtual
servers that are connected to it.
Use the ip link add command to create a VLAN on a network interface and to
specify a VLAN ID:
# ip link add link <base-network-if-name> name <vlan-network-if-name>
type vlan id <VLAN-ID>
Example:
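To create a VLAN interface with the (hypothetical) VLAN ID 623 on the bonded interface bond1:
# ip link add link bond1 name bond1.623 type vlan id 623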
Ensure that the channel bonding module is loaded, for example using the
following commands:
# modprobe bonding
# lsmod | grep bonding
bonding 156908 0
The following steps describe a bonded interface setup on the host that does not
persist across host reboots.
Procedure
1. Define the bonded interface.
If you configure the bonded interface in a configuration-XML that is intended
for a migration, choose an interface name policy which you also provide on the
destination host.
2. Set the bonding parameters for the desired bonding mode.
Dedicate OSA devices planned for 802.3ad mode to a target LPAR. For more
information, see Open Systems Adapter-Express Customer's Guide and Reference,
SA22-7935-17.
3. Configure slave devices.
4. Activate the interface.
Example
This example shows how to set up bonded interface bond1. In your distribution,
bond0 might be automatically created and registered. In this case, omit step 1 to
make use of bond0.
1. Add a new master bonded interface:
# echo "+bond1" > /sys/class/net/bonding_masters
# ip link show bond1
8: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN mode DEFAULT
link/ether 9a:80:45:ba:50:90 brd ff:ff:ff:ff:ff:ff
2. Set the bonding parameters for the desired bonding mode. To set the mode to
active-backup:
# echo "active-backup" > /sys/class/net/bond1/bonding/mode
# echo "100" > /sys/class/net/bond1/bonding/miimon
# echo "active" > /sys/class/net/bond1/bonding/fail_over_mac
Related tasks:
“Configuring a MacVTap interface” on page 98
Configure network interfaces, such as Ethernet interfaces, bonded interfaces,
virtual LANs, through a direct MacVTap interface.
If an OSA network device is not an active bridge port, use the znetconf
command with the -o option to enable the bridge port role:
# znetconf -a <device-bus-ID> -o layer2=1 -o bridge_role=primary
For more information about active bridge ports, see Device Drivers, Features, and
Commands, SC33-8411
v Security-Enhanced Linux (SELinux) is enabled.
v An Open vSwitch package is installed and running. The systemctl status
openvswitch command displays the Open vSwitch status:
# systemctl status openvswitch
ovsdb-server is not running
ovs-vswitchd is not running
Procedure
1. Create a virtual switch.
Use the ovs-vsctl add-br command to create a virtual switch.
# ovs-vsctl add-br <vswitch>
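Subsequent steps typically add an uplink port that connects the virtual switch to the physical network; for example, with a bonded interface vsbond0 (name assumed):
# ovs-vsctl add-port vswitch0 vsbond0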
Example
[Figure: the virtual switch vswitch0 on the KVM host, with the bonded uplink interface vsbond0 connected to redundant network hardware.]
Verify that the OSA network devices are configured as bridge ports:
# cat /sys/devices/qeth/0.0.1108/bridge_state
active
# cat /sys/devices/qeth/0.0.a112/bridge_state
active
Related tasks:
“Configuring a virtual switch” on page 100
Configure virtual switches as virtual Ethernet devices.
Procedure
1. Create a domain configuration-XML file.
See “Domain configuration-XML” on page 51.
2. Specify a name for the virtual server.
Use the name element to specify a unique name according to your naming
conventions.
3. Configure system resources, such as virtual CPUs or virtual memory.
a. Configure a boot process.
See “Configuring the boot process” on page 53.
b. Configure virtual CPUs.
See “Configuring virtual CPUs” on page 61.
c. Configure memory.
See “Configuring virtual memory” on page 65.
d. Optional: Configure the collection of QEMU core dumps.
See “Configuring the collection of QEMU core dumps” on page 67.
4. In the domain configuration-XML file, enter the virtual server device
configuration.
a. Optional: Configure the user space.
If you do not configure the user space, libvirt configures an existing user
space automatically.
See “Configuring the user space” on page 68.
b. Configure persistent devices.
See “Configuring devices with the virtual server” on page 69.
c. Configure the console device.
See “Configuring the console” on page 70.
d. Optional: Configure a watchdog device.
See “Configuring a watchdog device” on page 71.
e. Optional: Disable the generation of cryptographic wrapping keys and the
use of protected key management operations on the virtual server.
See “Disabling protected key encryption” on page 72.
f. Optional: Libvirt automatically generates a default memory balloon device
for the virtual server.
To suppress this automatic behavior, see “Suppressing the automatic configuration of
a default memory balloon device” on page 74.
5. Save the domain configuration-XML file according to your virtual server
administration policy.
Define the virtual server to libvirt based on the created domain configuration-XML
file as described in “Defining a virtual server” on page 110.
Root element
domain
Specify kvm as the domain type.
devices
Configures the devices that are persistent across virtual server reboots.
<domain type="kvm">
<name>vserv1</name>
<memory unit="GiB">4</memory>
<vcpu>2</vcpu>
<cputune>
<shares>2048</shares>
</cputune>
<os>
<type arch="s390x" machine="s390-ccw-virtio">hvm</type>
</os>
<iothreads>1</iothreads>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>preserve</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000020d3"/>
<target dev="vda" bus="virtio"/>
<boot order="1"/>
</disk>
<interface type="direct">
<source dev="bond0" mode="bridge"/>
<model type="virtio"/>
</interface>
<console type="pty">
<target type="sclp"/>
</console>
<memballoon model="none"/>
</devices>
</domain>
Related reference:
Chapter 28, “Selected libvirt XML elements,” on page 189
These libvirt XML elements might be useful for you. You find the complete libvirt
XML reference at libvirt.org.
Prepare a DASD or a SCSI disk, which contains a root file system with a bootable
kernel as described in Chapter 5, “Preparing DASDs,” on page 27 or Chapter 6,
“Preparing SCSI disks,” on page 29.
Procedure
1. Configure the DASD or SCSI disk containing the root file system as a persistent
device.
See “Configuring devices with the virtual server” on page 69 and “Configuring
a DASD or SCSI disk” on page 78.
2. By default, the guest is booted from the first specified disk device in the
current libvirt-internal configuration. To avoid possible errors, explicitly specify
the boot device with the boot element in the disk device definition (see
“<boot>” on page 196).
| The guest is booted from the disk with the lowest specified boot order value. If
the specified device has a boot menu configuration, you can use the loadparm
attribute of the boot element to specify a particular menu entry to be booted.
Example
The following domain configuration-XML configures V1, which is booted from the
| virtual block device 0xe714 on the virtual subchannel set 0x1:
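A minimal sketch of such a configuration (the source device node is illustrative):
<domain type="kvm">
<name>V1</name>
...
<devices>
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000020d3"/>
<target dev="vda" bus="virtio"/>
<boot order="1"/>
<address type="ccw" cssid="0xfe" ssid="0x1" devno="0xe714"/>
</disk>
...
</devices>
</domain>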
Procedure
1. Configure a virtual SCSI-attached CD/DVD drive as a persistent device, which
contains the ISO image as virtual DVD.
See “Configuring a virtual SCSI-attached CD/DVD drive” on page 95.
The guest is booted from the disk with the lowest specified boot order value.
Example
1. Specify the ISO image.
Configure the ISO image as a virtual DVD:
<devices>
...
<controller type="scsi" model="virtio-scsi" index="4"/>
<disk type="file" device="cdrom">
<driver name="qemu" type="raw" io="native" cache="none"/>
<source file="/root/SLE12SP1ServerDVDs390xGMCDVD1.iso"/>
<target dev="sda" bus="scsi"/>
<address type="drive" controller="4" bus="0" target="0" unit="0"/>
<readonly/>
<boot order="1"/>
</disk>
...
</devices>
When you start the virtual server, it will be booted from this ISO image:
# virsh start vserv1 --console
Domain vserv1 started
Initializing cgroup subsys cpuacct
Linux version 3.12.49-11-default (geeko@buildhost) (gcc version 4.8.5
(SUSE Linux) ) #1 SMP Wed Nov 11 20:52:43 UTC 2015 (8d714a0)
setup.289988: Linux is running under KVM in 64-bit mode
Zone ranges:
DMA [mem 0x00000000-0x7fffffff]
Normal empty
...
Procedure
1. Specify the initial ramdisk, the kernel image file, and the kernel parameters.
a. Specify the fully qualified path to the initial ramdisk in the initrd
element, which is a child of the os element (see “<initrd>” on page 219).
b. Specify the fully qualified path to the kernel image file in the kernel
element, which is a child of the os element (see “<kernel>” on page 223).
c. Specify the kernel parameters in the cmdline element, which is a child of
the os element (see “<cmdline>” on page 199).
2. Configure all disks that are needed for the boot process as persistent devices.
If you are booting from the kernel image file as an initial installation, make
sure to provide a disk for the guest installation.
Example
1. Specify the kernel image file in the os element:
<os>
...
<initrd>initial-ramdisk</initrd>
<kernel>kernel-image</kernel>
<cmdline>command-line-parameters</cmdline>
</os>
| A network boot server and a connection from your KVM host to that server must
| be in place.
| Procedure
| 1. Configure an interface to a virtual network, to an Open vSwitch, or for a direct
| MacVTap connection (see “<interface>” on page 220).
| Example
| <domain name="vs003n">
| ...
| <interface type="network">
| <source network="boot-net"/>
| <model type="virtio"/>
| <boot order="1"/>
| <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xb001"/>
| </interface>
| ...
| </domain>
| In the example, the first boot device in the boot order of the KVM virtual server
| vs003n is the CCW network device with bus ID 0.0.b001.
Example:
<domain>
...
<os>
...
</os>
...
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<!-- IPL device -->
<controller type="scsi" model="virtio-scsi" index="4"/>
<disk type="file" device="cdrom">
<driver name="qemu" type="raw" io="native" cache="none"/>
<source file="/root/SLE12SP1ServerDVDs390xGMCDVD1.iso"/>
<target dev="sda" bus="scsi"/>
<address type="drive" controller="4" bus="0" target="0" unit="0"/>
<readonly/>
<boot order="1"/>
</disk>
<!-- guest installation device -->
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none"
io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
<target dev="vda" bus="virtio"/>
</disk>
<console type="pty">
<target type="sclp"/>
</console>
</devices>
</domain>
b. If you intend to boot from a kernel image file and an initial ramdisk, the
domain configuration-XML file should contain:
v The fully qualified path and filename of the kernel image.
v The fully qualified path and filename of the initial ramdisk.
v The kernel command-line parameters.
Example:
<domain>
...
<os>
...
<!-- Boot kernel - remove 3 lines -->
<!-- after a successful initial installation -->
<initrd>initial-ramdisk</initrd>
<kernel>kernel-image</kernel>
<cmdline>command-line-parameters</cmdline>
...
</os>
...
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<!-- guest installation device -->
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none"
io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
<target dev="vda" bus="virtio"/>
</disk>
<console type="pty">
<target type="sclp"/>
</console>
</devices>
</domain>
Example:
<domain>
...
<os>
...
</os>
...
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<!-- IPL device -->
<controller type="scsi" model="virtio-scsi" index="4"/>
<disk type="file" device="cdrom">
<driver name="qemu" type="raw" io="native" cache="none"/>
<source file="/root/SLE12SP1ServerDVDs390xGMCDVD1.iso"/>
...
</disk>
<console type="pty">
<target type="sclp"/>
</console>
</devices>
</domain>
b. In case you installed the guest by using the kernel image and the initial
ramdisk, remove the initrd, kernel, and cmdline elements from the os
element, and specify the boot order in the definition of the guest IPL disk.
Example:
<domain>
...
<os>
...
</os>
...
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<!-- guest IPL disk -->
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none"
io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
<target dev="vda" bus="virtio"/>
<boot order="1"/>
</disk>
<console type="pty">
<target type="sclp"/>
</console>
</devices>
</domain>
6. From now on, you can start the virtual server using this domain
configuration-XML. The virtual server boots the installed guest from the IPL
disk.
Procedure
1. You can configure the number of virtual CPUs that are available for the defined
virtual server by using the vcpu element (see “<vcpu>” on page 259).
If you do not specify the vcpu element, the maximum number of virtual CPUs
available for a virtual server is 1.
Note: It is not useful to configure more virtual CPUs than available host CPUs.
2. To configure the actual number of virtual CPUs that are available for the virtual
server when it is started, specify the current attribute. The value of the current
attribute is limited by the maximum number of available virtual CPUs.
If you do not specify the current attribute, the maximum number of virtual
CPUs is available at startup.
Example
This example configures 5 virtual CPUs, which are all available at startup:
<domain type="kvm">
...
<vcpu>5</vcpu>
...
</domain>
This example configures a maximum of 5 available virtual CPUs for the virtual
server. When the virtual server is started, only 2 virtual CPUs are available. You
can modify the number of virtual CPUs that are available for the running virtual
server using the virsh setvcpus command (see “Modifying the number of virtual
CPUs” on page 138).
<domain type="kvm">
...
<vcpu current="2">5</vcpu>
...
</domain>
For more information about the CPU weight, see “CPU weight” on page 160.
Procedure
You specify the CPU weight by using the shares element (see “<shares>” on page
247).
Example
<domain>
...
<cputune>
<shares>2048</shares>
</cputune>
...
</domain>
| You can use a generic specification that resolves to a basic set of CPU features on
| any hardware model. Use an explicit configuration if you must satisfy special
| requirements, for example:
| v Disable a CPU feature that causes problems for a particular application.
| v Keep the option for a live migration to an earlier hardware model that does not
| support all CPU features of the current hardware (see “IBM Z hardware model”
| on page 128).
| v Keep the option for a live migration to a KVM host with an earlier QEMU
| version that does not support all CPU features of the current version.
Procedure
| v To configure the basic set of CPU features that is provided by the hardware,
| specify:
| cpu mode attribute: host-model
| (see “<cpu>” on page 202)
| v To define a CPU model with a specific set of hardware features, specify:
| 1. Declare that a specific CPU model is to be configured.
| cpu mode attribute: custom
| cpu match attribute: exact
| 2. Specify an existing CPU model with the <model> element as a child of the
| <cpu> element.
| model element: <cpu_model>
| (see “<model> as a child element of <cpu>” on page 232)
| Where <cpu_model> is one of the models listed in the <domainCapabilities>
| XML. Issue virsh domcapabilities to display the contents of the XML file.
| Eligible values are specified with <model> tags that have the attribute
| useable="yes".
| Example
| v To use all available QEMU supported CPU features of any mainframe model:
| <cpu mode="host-model"/>
| v To require the QEMU supported CPU features of a z14 mainframe, but without
| the iep feature:
| <cpu mode="custom">
| <model>z14</model>
| <feature policy="disable" name="iep"/>
| </cpu>
| As for other parts of the domain configuration-XML, the CPU model specification
| is expanded in the internal XML representation of a defined and of a started
| virtual server.
Procedure
Use the memory element which is a child of the domain element (see “<memory>”
on page 229).
Example
<domain type="kvm">
<name>vserv1</name>
<memory unit="MB">512</memory>
...
</domain>
The memory that is configured for the virtual server when it starts up is 512 MB.
For more information about memory tuning, see Chapter 22, “Memory
management,” on page 163.
Procedure
Use the memtune element to group memory tuning elements.
Specify a soft limit by using the soft_limit element (see “<soft_limit>” on page
248).
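Example
A minimal sketch of such a configuration:
<domain type="kvm">
<name>vserv1</name>
<memory unit="MB">512</memory>
<memtune>
<soft_limit unit="MB">256</soft_limit>
</memtune>
...
</domain>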
The memory configured for virtual server vserv1 is 512 MB. In case the host is
under memory pressure, it might limit the physical host memory usage of vserv1
to 256 MB.
Procedure
To exclude the memory of a virtual server from a QEMU core dump, set the dumpCore attribute of the memory element to the value off:
Example
<domain type="kvm">
<name>vserv1</name>
<memory unit="MB" dumpCore="off">512</memory>
...
</domain>
Procedure
The optional emulator element contains the path and file name of the user space
process (see “<emulator>” on page 212).
The emulator element is a child of the devices element. If you do not specify it,
libvirt automatically inserts the user space configuration to the libvirt-internal
configuration when you define it.
Example:
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
...
</devices>
Procedure
1. Optional: To improve the performance of I/O operations on DASDs and SCSI
disks, specify the number of I/O threads to be supplied for the virtual server.
For more information about I/O threads, see “I/O threads” on page 167.
Example:
<domain>
...
<iothreads>1</iothreads>
...
</domain>
2. Specify a configuration-XML for each device.
Chapter 10, “Configuring devices,” on page 75 describes how to specify a
configuration-XML for a device.
3. For each device to be defined with the virtual server, place the
configuration-XML as a child element of the devices element in the domain
configuration-XML file.
Example
<domain type="kvm">
<iothreads>1</iothreads>
...
<devices>
...
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000020d3"/>
<target dev="vda" bus="virtio"/>
</disk>
...
</devices>
</domain>
Procedure
1. You configure the host representation of the console by using the console type
attribute (see “<console>” on page 200).
To configure a pty console, enter the “pty” value.
2. You configure the virtual server representation of the console by using the
target type attribute (see “<target> as child element of <console>” on page 254).
To configure a service-call logical processor (SCLP) console interface, enter the
“sclp” value.
You can also configure a virtio console by entering the target type attribute
value “virtio”.
3. Optional: Specify a log file which collects the console output in addition to the
display in the console window.
Use the log element to specify the log file (see “<log>” on page 226).
Optionally, you can specify whether or not the log file will be overwritten in
case of a virtual server restart. By default, the log file is overwritten.
Example
This example configures a pty console. The console output is collected in the file
/var/log/libvirt/qemu/vserv-cons0.log. A virtual server restart overwrites the
log file.
<devices>
...
<console type="pty">
<target type="sclp" port="0"/>
<log file="/var/log/libvirt/qemu/vserv-cons0.log" append="off"/>
</console>
</devices>
Related tasks:
“Connecting to the console of a virtual server” on page 149
Open a console when you start a virtual server, or connect to the console of a
running virtual server.
When the guest loads the watchdog module, it provides the new device node
/dev/watchdog for the watchdog device. The watchdog timer is started when the
watchdog device is opened by the guest watchdog application. The watchdog
application confirms a healthy system state by writing to /dev/watchdog at regular
intervals. If nothing is written to the device node for a specified time, the
watchdog timer elapses, and QEMU assumes that the guest is in an error state.
QEMU then triggers a predefined action against the guest. For example, the virtual
server might be terminated and rebooted, or a dump might be initiated.
Procedure
Use the watchdog element as child of the devices element to configure a watchdog
device (see “<watchdog>” on page 263).
Example
<devices>
...
<watchdog model="diag288" action="inject-nmi"/>
...
</devices>
The unique wrapping keys are associated with the lifetime of a virtual server. Each
time the virtual server is started, its wrapping keys are regenerated. There are two
wrapping keys: one for DEA or TDEA keys, and one for AES keys.
If you disable the generation of wrapping keys for DEA/TDEA or for AES, you
also disable the access to the respective protected key management operations on
the virtual server.
Procedure
You configure the generation of wrapping keys by using the keywrap element (see
“<keywrap>” on page 225).
Its child element cipher (see “<cipher>” on page 198) enables or disables the
generation of a wrapping key and the use of the respective protected key
management operations. The state attribute of the cipher element takes one of
these values:
on Default; enables the wrapping key generation.
off Disables the wrapping key generation.
Example
This example disables the generation of an AES wrapping key. The DEA/TDEA
wrapping key is generated by default.
<keywrap>
<cipher name="aes" state="off"/>
</keywrap>
Procedure
To avoid the automatic creation of a default memory balloon device, specify the memballoon element with the model attribute set to none:
Example
<devices>
...
<memballoon model="none"/>
...
</devices>
The virtual channel subsystem provides only one virtual channel path that is
shared by all CCW devices. The virtual server sees the virtual channel
subsystem-ID 0x00. When you define a device for a virtual server, you use the
reserved channel subsystem-ID 0xfe.
The virtual control unit model is used to reflect the device type.
Procedure
1. Configure the device as described in:
v “Configuring virtual block devices” on page 78
v “Configuring virtual SCSI devices” on page 88
v “Configuring virtual Ethernet devices” on page 98
Device configuration-XML
Devices that are configured with separate device configuration-XML files can be
attached to an already defined virtual server.
<interface type="direct">
<source dev="bond0" mode="bridge"/>
<model type="virtio"/>
</interface>
Related reference:
Chapter 28, “Selected libvirt XML elements,” on page 189
These libvirt XML elements might be useful for you. You find the complete libvirt
XML reference at libvirt.org.
If the virtual server uses Logical Volume Manager (LVM), be sure to exclude these
devices from the host LVM configuration. Otherwise, the host LVM might interpret
the LVM metadata on the disk as its own and cause data corruption. For more
information, see “Logical volume management” on page 167.
You specify DASDs or SCSI disks by a device node. If you want to identify the
device on the host as it appears to the virtual server, specify a device number for
the virtual block device.
Procedure
1. Configure the device.
a. Configure the device as virtio block device.
Example:
<domain>
...
<iothreads>2</iothreads>
...
<devices>
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="2"/>
...
</disk>
</devices>
....
</domain>
Note: You should be aware that the selection of the specified device node
determines whether or not you will be able to:
v Perform a live migration of the virtual server accessing the device.
v Migrate the storage to another storage server or another storage controller.
For DASDs:
Use udev-created device nodes.
All udev-created device nodes support live migration. By-uuid device
nodes also support storage migration, because they are
hardware-independent.
For SCSI disks:
Use device mapper-created device nodes.
Device mapper-created device nodes are unique and always specify the
same device, irrespective of the host which runs the virtual server.
Be aware that setting up multipathing on the host without
passing the device mapper-created device nodes to the virtual server
leads to the loss of all multipath advantages regarding high availability
and performance.
3. Identify the device on the virtual server.
a. Specify a unique logical device name.
b. Optional: Specify a device number. The device bus-ID on the host is of the form:
fe.n.dddd
where n is the subchannel set-ID and dddd is the device number. The
channel subsystem-ID 0xfe is reserved for the virtual channel subsystem.
The virtual server sees the channel subsystem-ID 0x0 instead.
Tip: Do not mix device specifications with and without device numbers.
Example: KVM host device bus-ID fe.0.1a12 is seen by the virtual server
as device bus-ID 0.0.1a12.
If you do not specify a device number, a device bus-ID is automatically
generated by using the first available device bus-ID starting with
subchannel set-ID 0x0 and device number 0x0000.
Assign device numbers depending on your policy, such as:
v Assigning identical device numbers on the virtual server and on the host
enables the virtual server user to identify the real device.
v Assigning identical device numbers on the virtual servers allows you to
create identical virtual servers.
Related concepts:
Chapter 2, “Virtual block devices,” on page 9
DASDs, FC-attached SCSI disks, image files and logical volumes are virtualized as
virtio block devices.
This example follows the policy to assign the host device number to the virtual
server.
The virtual server sees the standard device nodes, which are of the form
/dev/vd<x>, where <x> represents one or more letters. The mapping between a
name and a certain device is not persistent across guest reboots. To see the current
mapping between the standard device nodes and the udev-created by-path device
nodes, enter:
[root@guest:] # ls /dev/disk/by-path -l
total 0
lrwxrwxrwx 1 root root 9 May 15 15:20 ccw-0.0.7500 -> ../../vda
lrwxrwxrwx 1 root root 10 May 15 15:20 ccw-0.0.7600 -> ../../vdb
The virtual server always sees the control unit type 3832. The control unit model
indicates the device type, where 02 is a block device:
[root@guest:] # lscss
Device Subchan. DevType CU Type Use PIM PAM POM CHPIDs
----------------------------------------------------------------------
0.0.7500 0.0.0000 0000/00 3832/02 yes 80 80 ff 00000000 00000000
0.0.7600 0.0.0001 0000/00 3832/02 yes 80 80 ff 00000000 00000000
The virtual server sees the standard device nodes, which are of the form
/dev/vd<x>, where <x> represents one or more letters. The mapping between a
name and a certain device is not persistent across guest reboots. To see the current
mapping between the standard device nodes and the udev-created by-path device
nodes, enter:
[root@guest:] # ls /dev/disk/by-path -l
total 0
lrwxrwxrwx 1 root root 9 May 15 15:20 ccw-0.0.1a10 -> ../../vda
lrwxrwxrwx 1 root root 10 May 15 15:20 ccw-0.0.1a12 -> ../../vdb
The virtual server always sees the control unit type 3832. The control unit model
indicates the device type, where 02 is a block device:
Make sure that the image file exists, is initialized, and is accessible to the
virtual server. You can provide raw image files or qcow2 image files. qcow2
image files occupy only the amount of storage that is actually in use.
Use the QEMU command qemu-img create to create a qcow2 image file. See
“Examples for the use of the qemu-img command” on page 351 for examples.
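For instance, a minimal sketch that creates a 10 GB qcow2 image file (the path and size are illustrative):

# qemu-img create -f qcow2 /var/lib/libvirt/images/disk1.qcow2 10G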
Procedure
1. Configure the image file.
a. Configure the image file as virtual disk.
where n is the subchannel set-ID and dddd is the device number. The
channel subsystem-ID 0xfe is reserved for the virtual channel subsystem.
The virtual server sees the channel subsystem-ID 0x0 instead.
Example: KVM host device bus-ID fe.0.0009 is seen by the virtual server
as device bus-ID 0.0.0009.
If you do not specify a device number, a device bus-ID is automatically
generated by using the first available device bus-ID starting with
subchannel set-ID 0x0 and device number 0x0000.
Example
Procedure
1. Configure the volume.
a. Configure the volume as virtual disk.
where n is the subchannel set-ID and dddd is the device number. The
channel subsystem-ID 0xfe is reserved for the virtual channel subsystem.
The virtual server sees the channel subsystem-ID 0x0 instead.
Example: KVM host device bus-ID fe.0.0009 is seen by the virtual server
as device bus-ID 0.0.0009.
If you do not specify a device number, a device bus-ID is automatically
generated by using the first available device bus-ID starting with
subchannel set-ID 0x0 and device number 0x0000.
Example
This example configures logical volume blk-pool0-vol0 from the LVM pool
blk-pool0 as a virtual block device.
<disk type="volume" device="disk">
  <driver name="qemu" type="raw" io="native" cache="none"/>
  <source pool="blk-pool0" volume="blk-pool0-vol0"/>
  <target dev="vdb" bus="virtio"/>
  <address type="ccw" cssid="0xfe" ssid="0x0" devno="0x0009"/>
</disk>
Related tasks:
“Configuring the boot process” on page 53
Specify the device that contains a root file system, or a prepared kernel image file.
Procedure
1. Configure a virtual HBA.
See “Configuring a virtual HBA”
2. Configure a SCSI tape or medium changer device, or a virtual SCSI-attached
CD/DVD drive, that is attached to the virtual HBA.
See one of the following:
v “Configuring a SCSI tape or medium changer device” on page 90
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Example
Procedure
1. Use the controller element, which is a child of the devices element (see
“<controller>” on page 201).
Example:
<devices>
  <controller type="scsi" model="virtio-scsi" index="0"/>
</devices>
2. Optional: To improve performance, specify an I/O thread dedicated to perform
the I/O operations on the device.
Use the driver element, which is a child of the controller element (see
“<driver> as child element of <controller>” on page 209):
Example:
<devices>
  <controller type="scsi" model="virtio-scsi" index="0">
    <address type="ccw" cssid="0xfe" ssid="0" devno="0x1111"/>
  </controller>
</devices>
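The driver element described in step 2 is not shown in the example above. A minimal sketch, assuming that an I/O thread with ID 1 is defined in the iothreads element of the domain:
<devices>
  <controller type="scsi" model="virtio-scsi" index="0">
    <driver iothread="1"/>
  </controller>
</devices>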
Example
If you do not configure an address for an HBA, libvirt creates an address for you.
You can retrieve this address with the virsh dumpxml command.
1. Domain configuration-XML file:
<domain type="kvm">
  ...
  <devices>
    <controller type="scsi" model="virtio-scsi" index="0"/>
    ...
  </devices>
</domain>
Make sure that, as described in Chapter 7, “Preparing SCSI tape and medium
changer devices,” on page 33:
v The SCSI tape or medium changer device is set up.
v You provide the SCSI device name of the SCSI tape or medium changer device.
SCSI device names are freshly assigned after a host reboot or when a device is set
offline and back online. This means that you have to verify an FC-attached SCSI
tape or medium changer device configuration after one of these events. This
limitation is also important if you plan a live migration.
Tip: Configure both FC-attached SCSI tape and medium changer devices in
separate device configuration-XML files. Attach these devices only when necessary,
and detach them before you migrate the virtual server, or set one of the devices in
the configuration path offline.
Procedure
1. Configure the SCSI tape or medium changer device using the hostdev element
(see “<hostdev>” on page 218).
2. Specify the SCSI tape or medium changer device on the host as child of the
source element.
Tip: Choose a value between 0 and 255, because these values are
identically mapped to the SCSI LUN on the virtual server.
Example
Obtain the SCSI host number, the SCSI ID, and the SCSI LUN of the FC-attached
SCSI tape or medium changer device:
# lszfcp -D
0.0.1cc8/0x5005076044840242/0x0000000000000000 3:0:8:0
The corresponding source element specifies SCSI host number 3, SCSI ID 8,
and SCSI LUN 0:
<source>
  <adapter name="scsi_host3"/>
  <address bus="0" target="8" unit="0"/>
</source>
Assign a SCSI device name to the virtual SCSI device on the virtual server. The
controller attribute of the address element refers to the index attribute of the
controller element.
v Domain configuration-XML file:
<domain type="kvm">
  <name>VM1</name>
  ...
  <devices>
    ...
    <controller type="scsi" model="virtio-scsi" index="0">
      <address type="ccw" cssid="0xfe" ssid="0" devno="0x0002"/>
    </controller>
    ...
  </devices>
</domain>
On the virtual server, the SCSI tape will be displayed like this:
[root@guest:] # lsscsi
[0:0:1:1] tape IBM 03592E07 35CD
Procedure
1. Create a domain configuration-XML file with one configured virtual HBA for
each host device. This configuration groups all virtual SCSI devices that
represent the same host device in an own virtual HBA.
<domain type="kvm">
  <name>VM1</name>
  ...
  <devices>
    ...
    <controller type="scsi" model="virtio-scsi" index="0">
      <address type="ccw" cssid="0xfe" ssid="0" devno="0x0002"/>
    </controller>
    <controller type="scsi" model="virtio-scsi" index="1">
      <address type="ccw" cssid="0xfe" ssid="0" devno="0x0004"/>
    </controller>
    ...
  </devices>
</domain>
2. Create two separate device configuration-XML files for the SCSI tape device,
each connecting it to the virtual HBA 0.
a. The first file configures SCSI device name 0:0:0:0, which is the path of SCSI
LUN 0 via SCSI host 0.
b. The second file configures SCSI device name 1:0:0:0, which is the path via
SCSI host 1.
3. Create two separate device configuration-XML files for the SCSI medium
changer device, each connecting it to the virtual HBA 1.
a. The first file configures SCSI device name 0:0:0:1, which is the path of SCSI
LUN 1 via SCSI host 0.
b. The second file configures SCSI device name 1:0:0:1, which is the path via
SCSI host 1.
You can remove the configured ISO image and provide a different one during the
life cycle of the virtual server.
The virtual server can load it, and then reboot using the new ISO image.
Procedure
1. Configure the virtual DVD.
a. Configure the ISO image, which represents the virtual DVD, as a file of type
cdrom (see “<disk>” on page 207).
b. Specify the user space process that implements the virtual DVD (see
“<driver> as child element of <disk>” on page 210).
c. Specify the ISO image as virtual block device (see “<target> as child
element of <disk>” on page 255).
d. Specify the virtual DVD as read-only using the readonly element (see
“<readonly>” on page 244).
2. Identify the ISO image on the host.
Specify the fully qualified ISO image file name on the host (see “<source> as
child element of <disk>” on page 249). If the virtual SCSI-attached CD/DVD
drive is empty, omit this step.
Do not confuse the logical device name with its device name on the virtual
server.
b. Optional: Connect to a virtual HBA and specify a freely selectable SCSI
device name on the virtual server.
Tip: Choose a value between 0 and 255, because these values are
identically mapped to the SCSI LUN on the virtual server.
Example
<devices>
  ...
  <controller type="scsi" model="virtio-scsi" index="4"/>
  <disk type="file" device="cdrom">
    <driver name="qemu" type="raw" io="native" cache="none"/>
    <source file="/var/lib/libvirt/images/cd.iso"/>
    <target dev="sda" bus="scsi"/>
    <address type="drive" controller="4" bus="0" target="0" unit="0"/>
    <readonly/>
  </disk>
  ...
</devices>
Procedure
v To configure a MacVTap interface, follow the steps described in “Configuring a
MacVTap interface.”
v To configure a virtual switch, follow the steps described in “Configuring a
virtual switch” on page 100.
Procedure
You configure a network interface as direct MacVTap connection by using the
interface element (see “<interface>” on page 220).
Libvirt automatically creates a MacVTap interface when you define the network
device.
By default, the virtual server cannot change its assigned MAC address and, as a
result, cannot join multicast groups. To enable multicasting, you need to set the
interface trustGuestRxFilters attribute to yes. This has security implications,
because it allows the virtual server to change its MAC address and thus to receive
all frames delivered to this address.
1. Optional: Specify a freely selectable Media Access Control (MAC) address for
the virtual server's virtual NIC.
<interface type="direct">
  <source dev="bond0" mode="bridge"/>
  <model type="virtio"/>
</interface>
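A variant of this example that also performs step 1 could look as follows; a minimal sketch with a hypothetical MAC address (the 52:54:00 prefix is commonly used for KVM virtual NICs):
<interface type="direct">
  <mac address="52:54:00:12:34:56"/>
  <source dev="bond0" mode="bridge"/>
  <model type="virtio"/>
</interface>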
[Figure: On the KVM host, a MacVTap interface connects the virtual server's virtual Ethernet device to the bonded interface bond0, which aggregates the network devices enccw0.0.1108 and enccw0.0.a100 on the IBM Z hardware.]
<interface type="direct">
  <source dev="bond0.623" mode="bridge"/>
  <model type="virtio"/>
</interface>
Figure 16. Direct interface type which configures a virtual LAN interface
Procedure
You configure a virtual switch by using the interface element (see “<interface>” on
page 220).
1. Optional: Specify a freely selectable Media Access Control (MAC) address for
the virtual server's virtual NIC.
Example
<interface type="bridge">
  <source bridge="vswitch0"/>
  <virtualport type="openvswitch"/>
  <model type="virtio"/>
</interface>
After the creation and the start of the virtual server, the virtual switch
connection is visible on the host.
Procedure
Use the rng element to configure a random number generator (see “<rng>” on
page 245).
Use the backend element as child of the rng element to specify the device node of
the input character device (see “<backend>” on page 195).
Currently, /dev/random is the only valid device node.
Example
<devices>
...
<rng model="virtio">
  <backend model="random">/dev/random</backend>
</rng>
...
</devices>
Storage pool
Root element
pool
Selected child elements
name, source, target
Example
<pool type="dir">
  <name>myPool</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
<volume type="file">
  <name>federico.img</name>
  <key>/var/lib/libvirt/images/federico.img</key>
  <target>
    <path>/var/lib/libvirt/images/federico.img</path>
    <format type="qcow2"/>
  </target>
</volume>
Related reference:
“<pool>” on page 242
Is the root element of a storage pool configuration-XML.
“<volume>” on page 262
Is the root element of a volume configuration-XML.
| KVM hosts on IBM Z support networks with three types of Linux bridges. All
| types make a communication setup addressable as a network or bridge.
| v Bridge with network address translation (NAT)
| v Open vSwitch bridge
| v Bridge with IP routing
| Each bridge type has a different forwarding mode as specified with the <forward>
| element. Omitting the <forward> element results in a virtual network among the
| virtual servers, without a connection to a physical network.
| With network address translation, traffic of all virtual servers to the physical
| network is routed through the host's routing stack and uses the host's public IP
| address. This type of network supports outbound traffic only.
| Forwarding mode
| nat
| Example
| <network>
| <name>net0</name>
| <uuid>fec14861-35f0-4fd8-852b-5b70fdc112e3</uuid>
| <forward mode="nat">
| <nat>
| <port start="1024" end="65535"/>
| </nat>
| </forward>
| <bridge name="virbr0" stp="on" delay="0"/>
| <ip address="192.0.2.1" netmask="255.255.255.0">
| <dhcp>
| <range start="192.0.2.2" end="192.0.2.254"/>
| </dhcp>
| </ip>
| </network>
|
| With an Open vSwitch bridge, the switch implements a subnet. The <bridge>
| element must reference an already existing Open vSwitch (see “Preparing a virtual
| switch” on page 43).
| Forwarding mode
| bridge
| Example
|
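| A minimal sketch of an Open vSwitch network configuration-XML, assuming an
| existing Open vSwitch named vswitch0 (the network name net2 is illustrative):
| <network>
|   <name>net2</name>
|   <forward mode="bridge"/>
|   <bridge name="vswitch0"/>
|   <virtualport type="openvswitch"/>
| </network>
|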
| Bridges with IP routing link to a virtual IP subnet on the host. Traffic to and from
| virtual servers that are connected to that subnet are then handled by the IP
| protocol.
| Forwarding mode
| route
| Example
|
| <network>
| <name>net1</name>
| <uuid>34fc97f4-86c5-4d65-887a-cc8b33d2a260</uuid>
| <forward mode="route"/>
| <bridge name="iedn" stp="off" delay="0"/>
| <mac address="f6:2b:85:a9:bf:d9"/>
| <ip address="198.51.100.1" netmask="255.255.255.0">
| </ip>
| </network>
|
| Related reference:
| “<bridge>” on page 197
| Configures the bridge device that is used to set up the virtual network.
| “<dhcp>” on page 206
| Configures DHCP services for the virtual network.
| “<forward>” on page 215
| Configures the forwarding mode for the bridge that connects the virtual network
| to a physical LAN. Omitting this tag results in an isolated network that can
| connect guests.
| “<ip>” on page 222
| Configures IP addresses for the virtual network.
| “<name> as a child element of <network>” on page 235
| Assigns a short name to a virtual network.
| “<network>” on page 236
| Is the root element of a network configuration-XML.
Procedure
Define a virtual server to libvirt using the virsh define command (see “define” on
page 273):
# virsh define <domain-configuration-XML-filename>
<domain-configuration-XML-filename>
is the path and file name of the domain configuration-XML file.
Results
What to do next
Virtual servers that are defined but not yet started are listed with state “shut
off”.
2. Display the current libvirt-internal configuration as described in “Displaying
the current libvirt-internal configuration” on page 122.
3. Start the virtual server as described in “Starting a virtual server” on page 114.
4. Check your connection to the virtual server via the configured console as
described in “Connecting to the console of a virtual server” on page 149.
Related reference:
Chapter 27, “Virtual server life cycle,” on page 183
Display the state of a defined virtual server including the reason with the virsh
domstate --reason command.
Modify the libvirt-internal configuration of a virtual server by using the virsh edit
command (see “edit” on page 288):
# virsh edit <VS>
By default, the virsh edit command uses the vi editor. You can modify the editor
by setting the environment variables $VISUAL or $EDITOR.
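For example, to edit the configuration of virtual server vserv1 (a hypothetical name) with a different editor for a single invocation, you might issue:
# EDITOR=nano virsh edit vserv1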
Results
If your configuration does not contain necessary elements, they are inserted
automatically when you quit the editor. Also, the virsh edit command does not
let you save and quit a corrupted file.
The libvirt-internal configuration is modified and will be effective with the next
virtual server restart.
What to do next
Procedure
Delete the definition of a virtual server from libvirt by using the virsh undefine
command (see “undefine” on page 339):
# virsh undefine <VS>
If the virtual server is not displayed, see “Defining a virtual server” on page 110.
When you start a virtual server, usually, an Initial Program Load (IPL) is
performed, for example to boot the guest. But if there is a saved system image for
the virtual server, the guest is restored from this system image. It depends on the
command that terminated a virtual server whether the system image was saved or
not (see “Terminating a virtual server”).
The “saved shut off” state indicates the availability of a saved system image. To
display the state and the reason of a virtual server, enter the command:
# virsh domstate <VS> --reason
shut off (saved)
Refer to Chapter 27, “Virtual server life cycle,” on page 183 to see the effect of the
virsh start command depending on the virtual server state.
Procedure
Start a defined virtual server in “shut off” state using the virsh start command
(see “start” on page 336):
# virsh start <VS>
Using the --console option grants initial access to the virtual server console and
displays all messages that are issued to the console:
# virsh start <VS> --console
If there is a saved system image, you can prevent the virtual server from being
restored from this image by using the --force-boot option.
Refer to Chapter 27, “Virtual server life cycle,” on page 183 to see the effect of the
virsh commands to terminate a virtual server depending on its state.
v In most cases, you use the virsh shutdown command to properly terminate a
virtual server.
If the virtual server does not respond, it is not terminated. While the virtual
server is shutting down, it traverses the state “in shutdown” and finally enters
the “shutdown shut off” state.
# virsh shutdown <VS>
Example:
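A minimal sketch; to properly shut down virtual server vserv1 (a hypothetical name), you might issue:
# virsh shutdown vserv1
Domain vserv1 is being shutdown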
v Save the system image of a running or a paused virtual server and terminate it
thereafter with the virsh managedsave command.
# virsh managedsave <VS>
Example:
To save the system image of virtual server vserv2 and properly shut it down,
issue:
# virsh managedsave vserv2
Domain vserv2 state saved by libvirt
The system image of the virtual server is resumed at the time of the next start.
Then, the state of the virtual server is either running or paused, depending on
the last state of the virtual server and the managedsave command options.
Note: The managedsave operation will save the virtual server state in a file in
the host filesystem. This file has at least the size of the virtual server memory.
Make sure the host filesystem has enough space to hold the virtual server state.
v When a virtual server is not responding, you can terminate it immediately with
the virsh destroy command.
The virtual server enters the “destroyed shut off” state. This command might
cause a loss of data.
# virsh destroy <VS>
Example:
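A minimal sketch; to immediately terminate the unresponsive virtual server vserv2 (a hypothetical name), you might issue:
# virsh destroy vserv2
Domain vserv2 destroyed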
Refer to Chapter 27, “Virtual server life cycle,” on page 183 to see the effect of the
virsh suspend command depending on the virtual server state.
Procedure
Suspend a virtual server by using the virsh suspend command (see “suspend” on
page 338):
# virsh suspend <VS>
What to do next
To transfer the virtual server back to the running state, issue the virsh resume
command.
The virsh list command with the --state-paused option displays a list of paused
virtual servers.
Refer to Chapter 27, “Virtual server life cycle,” on page 183 to see the effect of the
virsh resume command depending on the virtual server state.
Resume a virtual server using the virsh resume command (see “resume” on page
331):
# virsh resume <VS>
If the virtual server is not displayed, see “Defining a virtual server” on page 110.
Procedure
v To view a list of all defined virtual servers, use the virsh list command with
the --all option (see “list” on page 295):
# virsh list --all
Example
 Id    Name      State
----------------------------------
 3     vserv1    paused
 8     vserv2    running
Procedure
You can display information about a defined virtual server using one of the
following commands:
v View information about the I/O threads of a virtual server with 8 virtual CPUs:
# virsh iothreadinfo vserv1
IOThread ID CPU Affinity
---------------------------------------------------
1 0-7
2 0-7
3 0-7
Procedure
vserv1.xml
<domain type="kvm">
  <name>vserv1</name>
  <memory unit="GiB">4</memory>
  <vcpu>2</vcpu>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <os>
    <type arch="s390x" machine="s390-ccw-virtio">hvm</type>
  </os>
  <iothreads>2</iothreads>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>preserve</on_crash>
  <devices>
    <disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
      <source dev="/dev/mapper/36005076305ffc1ae00000000000020d3"/>
      <target dev="vda" bus="virtio"/>
      <boot order="1"/>
    </disk>
    <interface type="direct">
      <source dev="bond0" mode="bridge"/>
      <model type="virtio"/>
    </interface>
    <console type="pty">
      <target type="sclp"/>
    </console>
    <memballoon model="none"/>
  </devices>
</domain>
dev1.xml
You can define and start the virtual server and then attach the configured device
with the commands:
# virsh define vserv1.xml
# virsh start vserv1 --console
# virsh attach-device vserv1 dev1.xml
The hypervisor release is defined by the installed QEMU release, by the hypervisor
product or by your distribution on the host.
The virtual server's machine type determines which hypervisor release runs the
virtual server on the host.
Be sure to configure the machine type with the alias value “s390-ccw-virtio” in the
domain configuration-XML unless you intend to migrate the virtual server to a
destination host with an earlier hypervisor release.
When you define a virtual server using the alias machine type, libvirt replaces the
alias machine type by the machine type which reflects the current hypervisor
release of the host running the virtual server. The libvirt-internal configuration
reflects the installed hypervisor release.
Example:
Domain configuration-XML using the alias machine type:
<type arch="s390x" machine="s390-ccw-virtio">hvm</type>
Libvirt-internal configuration for QEMU release 2.10:
<type arch="s390x" machine="s390-ccw-virtio-2.10">hvm</type>
Figure 17 shows that creating virtual servers from the same domain
configuration-XML, which uses the alias machine type s390-ccw-virtio, results in
different machine types on hosts with different hypervisor releases: on a KVM
source host running QEMU 2.7 the virtual server gets machine type
s390-ccw-virtio-2.7, while on a KVM destination host running QEMU 2.10 it gets
machine type s390-ccw-virtio-2.10.
Hypervisor release
A live virtual server migration preserves the machine type of the virtual server.
The libvirt-internal configuration is not changed, that is, the machine type still
reflects the hypervisor release of the source host. Newer hypervisor releases are
compatible with earlier versions.
However, if you try to migrate a virtual server to a destination host with an earlier
hypervisor release than the currently reflected machine type, you need to explicitly
specify this earlier machine type in the virtual server definition before the
migration.
Example:
1. Before the migration, the virtual server is running on the source host with a
hypervisor release based on QEMU 2.7. The virtual server's machine type is
s390-ccw-virtio-2.7.
2. After the migration, the virtual server is running on the destination host with a
hypervisor release based on QEMU 2.10. The virtual server's machine type is
still s390-ccw-virtio-2.7.
The virtual server runs on the earlier hypervisor release and does not exploit
the features of the current release.
As long as you do not change the machine type to the new release, a migration
of this virtual server back to its original source host will succeed.
| You can perform virtual server live migrations across IBM Z hardware of the same
| model and upgrade level. You can also migrate to a later hardware model, for
| example from an IBM z13 to an IBM z14 mainframe.
| Migration to a prior hardware model is possible only if the virtual server on the
| newer hardware is restricted to CPU features that are also available on the older
| destination hardware. By default, a virtual server uses the latest CPU model of the
| hardware. Use the <cpu> element in the domain XML to configure a specific
| backlevel CPU model (see “Configuring the CPU model” on page 63).
| After a live migration to a later hardware model, the virtual server keeps running
| with the CPU model of the original hardware. This behavior preserves the option
| for a live migration back to the original hardware. To use new CPU features on the
| destination hardware, stop the virtual server, modify the domain
| configuration-XML, and then restart the virtual server.
| If you cannot perform live guest migration, migrate by shutting down the virtual
| server and then starting it on the destination hardware (see “Definition of a virtual
| server on different hosts using the same configuration-XML” on page 126).
| Example
| The following figure illustrates the rules for a live migration. A virtual server on
| z13 hardware runs a guest operating system with the CPU features of zEC12 with
| upgrade level 2.
|
| <cpu ...>
|   <model>zEC12.2</model>
|   ...
| </cpu>
|
| [Figure: the virtual server, restricted to CPU model zEC12.2, can be live
| migrated between the z13 and a z14.]
System resources
Provide access to the same or equivalent system resources, such as memory and
CPUs, on both hosts.
Storage
Storage devices that are configured for the virtual server must be accessible from
the destination host.
DASDs:
v Make sure that DASDs are configured using udev-created device nodes.
v If the DASDs are configured using the device bus-ID (by-path device
node), make sure that you use identical device numbers in the IOCDS of
both hosts.
v Make sure that there is a migration process for setting both the base
devices and the alias devices online on the destination host.
SCSI disks:
v Make sure that SCSI disks are configured using device mapper-created
device nodes.
Image files residing on a network file system (NFS):
Note that, depending on the NFS configuration, the image files might be
accessible to other virtual servers.
Disk images residing on the host:
There are options to migrate image files that back virtual block devices
to the destination host. This process is called disk migration.
For each image file which is to be migrated:
v Make sure that the image file has write permission. That is, the virtual
block device which is backed by the image file is not configured as a
virtual DVD or by using the readonly element.
SCSI tapes or medium changer devices:
v When you migrate a virtual server that uses a configured virtual SCSI
device, be aware that the SCSI device name, which is used to specify the
source device, might change on the destination host.
Tip: Make sure that SCSI tapes or medium changer devices are
configured in separate device configuration-XML files. Detach them
before you perform a migration. After the migration, reconfigure the
devices before you reattach them.
Networking
To ensure that the virtual server's network access is not interrupted by the
migration:
v Make sure that the network administrator uses identical network interface
names for the access to identical networks on both hosts.
v Make sure that the OSA channels are not shared between the source and the
destination host.
Figure 18. Example of a device setup on the source and destination hosts that allows the
migration of the virtual server using these devices. On both hosts, the virtual block devices
vd<x0> to vd<x3> are backed by multipath devices and the DASDs <a> and <b>, and the
virtio-net virtual Ethernet device connects through a MacVTap interface to the bonded
interface bond0.
Concurrency
Maximum number of concurrent connections
If you connect to the destination host using ssh, increase the maximum
number of unauthenticated concurrent connections to perform more than
10 concurrent migrations.
1. On the destination host, modify the OpenSSH SSH daemon
configuration file /etc/ssh/sshd_config. The MaxStartups parameter
specifies the maximum number of concurrent connections that have not
yet been authenticated. The default is 10, which is specified as follows:
#MaxStartups 10:30:100
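For example, to allow up to 100 unauthenticated concurrent connections, you might uncomment and change the entry as follows (the values are illustrative; the format is start:rate:full):
MaxStartups 100:30:200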
Firewall configuration
Make sure that the firewall configuration of the involved systems allows access to
all required network resources.
Open the required migration port range in the firewall of the destination host. If
you modified the migration port range which is used by libvirt, open the
additional destination ports as well.
Example:
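A minimal sketch, assuming a firewalld-managed firewall and the default libvirt migration port range of 49152-49215 (verify the range that applies to your installation):
# firewall-cmd --permanent --add-port=49152-49215/tcp
# firewall-cmd --reload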
Deadlock prevention
Performance considerations
In most cases, live virtual server migration does not directly affect the host system
performance. However, it might have an impact if either the source system or the
destination system is heavily loaded or constrained in the areas of CPU utilization,
paging, or network bandwidth.
Live phase
While the virtual server is running, its memory pages are transferred to the
destination host. During the live phase, the virtual server might continue to
modify memory pages. These pages are called dirty pages, which must be
retransmitted.
QEMU continuously estimates the time it will need to complete the migration
during the stopped phase. If this estimated time is less than the specified
maximum downtime for the virtual server, the virtual server enters the stopped
phase of the migration.
If the virtual server changes memory pages faster than the host can transfer them
to the destination, the migration command option --auto-converge can be used to
throttle down the CPU time of the virtual server until the estimated downtime is
less than the specified maximum downtime. If you do not specify this option, it
might happen that the virtual server never enters the stopped phase because there
are too many dirty pages to migrate.
This mechanism works for average virtual server workloads. Workloads that are
very memory intensive might require the additional specification of the --timeout
option. This option suspends the virtual server after a specified amount of time
and avoids the situation where throttling down the CPU cannot catch up with the
memory activity and thus, in the worst case, the migration operation never stops.
Stopped phase
During the stopped phase, the virtual server is paused. The host uses this
downtime to transfer the rest of the dirty pages and the virtual server's system
image to the destination.
Procedure
1. Optional: You may specify a tolerable downtime for a virtual server during a
migration operation by using the virsh migrate-setmaxdowntime command (see
“migrate-setmaxdowntime” on page 304). The specified value is used to
estimate the point in time when to enter the stopped phase.
You can still issue this command during the process of a migration operation:
# virsh migrate-setmaxdowntime <VS> <milliseconds>
2. Optional: You might want to limit the bandwidth that is provided for a
migration.
To set or to modify the maximum bandwidth, use the virsh migrate-setspeed
command (see “migrate-setspeed” on page 305):
# virsh migrate-setspeed <VS> --bandwidth <mebibyte-per-second>
You can display the maximum bandwidth that is used during a migration with
the virsh migrate-getspeed command (see “migrate-getspeed” on page 303):
# virsh migrate-getspeed <VS>
3. To start a live migration of a virtual server, use the virsh migrate command
with the --live option (see “migrate” on page 300):
# virsh migrate --live <command-options> <VS> qemu+ssh://<destination-host>/system
When virsh connects to the destination host via SSH, you will be prompted for
a password. See libvirt.org/remote.html to avoid entering a password.
<command-options>
Are options of the virsh migrate command.
<destination-host>
Is the name of the destination host.
<mebibyte-per-second>
Is the migration bandwidth limit in MiB/s.
<milliseconds>
Is the number of milliseconds used to estimate the point in time when
the virtual server enters the stopped phase.
<VS> Is the name of the virtual server as specified in its domain
configuration-XML file.
a. Optional: The use of the --auto-converge and the --timeout options ensures
that the migration operation completes.
b. Optional: To avoid a loss of connectivity during a time-consuming
migration process, increase the virsh keepalive interval (see Chapter 29,
“Selected virsh commands,” on page 265):
# virsh --keepalive-interval <interval-in-seconds>
Defaults:
keepalive interval 5 seconds
keepalive count 6
Example:
# virsh --keepalive-interval 10 migrate --live --persistent --undefinesource \
--timeout 1200 --verbose vserv1 qemu+ssh://kvmhost/system
This example increases the keepalive interval of the connection to the host
to 10 seconds.
c. Optional: If the virtual server accesses virtual block devices that are backed
by an image file on the source host, these disks have to be migrated to the
destination host (disk migration).
Specify the option --copy-storage-all or --copy-storage-inc in
combination with the option --migrate-disks to copy image files that back
virtual block devices to the destination host.
Restriction:
v Disk migration is only possible for writable virtual disks.
One example of a read-only disk is a virtual DVD. If in doubt, check your
domain configuration-XML. If the disk device attribute of a disk element
is configured as cdrom, or contains a readonly element, then the disk
cannot be migrated.
Example:
This example copies the qcow2 image /var/libvirt/images/vdd.qcow2 to
the destination host, assuming that vdd is configured as follows:
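A minimal sketch of such a configuration, assuming the image file is attached as a writable virtio block device (the target device name vdd matches the example):
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" io="native" cache="none"/>
  <source file="/var/libvirt/images/vdd.qcow2"/>
  <target dev="vdd" bus="virtio"/>
</disk>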
Results
The virtual server is not destroyed on the source host until it has been completely
migrated to the destination host.
In the event of an error during migration, the resources on the destination host are
cleaned up and the virtual server continues to run on the source host.
Example
v This example starts a live migration of the virtual server vserv3 to the
destination host zhost. The virtual server will be transient on zhost, that is, after
vserv3 is stopped on zhost, its definition will be deleted. After a successful
migration, the virtual server will be destroyed on the source host, but still be
defined.
If the migration operation is not terminated within three hundred seconds, the
virtual server is suspended while the migration continues.
# virsh migrate --live --auto-converge --timeout 300 vserv3 qemu+ssh://zhost/system
v This example starts a live migration of vserv3 to the destination host zhost. After
a successful migration, vserv3 will be destroyed and undefined on the source
host. The virtual server definition will be persistent on the destination host.
If the migration operation is not terminated within three hundred seconds, the
virtual server is suspended while the migration continues.
# virsh migrate --live --auto-converge --timeout 300 --undefinesource --persistent \
vserv3 qemu+ssh://zhost/system
What to do next
v You can verify whether the migration completed successfully by looking for a
running status of the virtual server on the destination, for example by using the
virsh list command:
# virsh list
Id Name State
----------------------------------
10 kvm1 running
v You can cancel an ongoing migration operation by using the virsh domjobabort
command:
# virsh domjobabort <VS>
If the virtual server is not displayed, see “Defining a virtual server” on page 110.
The number of virtual CPUs that you can assign to a virtual server is limited by
the maximum number of available virtual CPUs. Both numbers are configured
with the vcpu element and can be modified during operation.
To display the number of virtual CPUs, use the virsh vcpucount command. For
example, issue:
# virsh vcpucount vserv1
maximum config 5
maximum live 5
current config 3
current live 3
where
maximum config
Specifies the maximum number of virtual CPUs that can be made available
for the virtual server after the next restart.
maximum live
Specifies the maximum number of virtual CPUs that can be made available
for the running or paused virtual server.
current config
Specifies the actual number of virtual CPUs which will be available for the
virtual server with the next restart.
current live
Specifies the actual number of virtual CPUs which are available for the
running or paused virtual server.
Procedure
Use the virsh setvcpus command to modify the number of virtual CPUs or the
maximum number of available virtual CPUs for a defined virtual server (see
“setvcpus” on page 334).
v Modify maximum config:
To modify the maximum number of available virtual CPUs with the next virtual
server restart, use the --maximum and the --config options:
# virsh setvcpus <VS> <max-number-of-CPUs> --maximum --config
This modification takes effect after the termination of the virtual server and a
subsequent restart. Please note that a virtual server reboot does not modify the
libvirt-internal configuration.
v Modify current config:
To increase or reduce the number of virtual CPUs with the next virtual server
restart, use the --config option:
# virsh setvcpus <VS> <number-of-CPUs> --config
The virtual CPUs are not removed until the next virtual server reboot. Until
then, the virtual server user might set the corresponding number of virtual
CPUs offline.
v Modify current live:
To increase the number of virtual CPUs of a running or paused virtual server,
use the --live option:
# virsh setvcpus <VS> <number-of-CPUs> --live
The virtual server user has to bring the additional virtual CPUs online.
<VS> Is the name of the virtual server as specified in its domain
configuration-XML file.
Example
v Change the maximum number of available virtual CPUs with the next virtual
server restart.
# virsh vcpucount vserv1
maximum config 5
maximum live 5
current config 4
current live 4
2. Because the CPUs are not removed until the next virtual server restart, the
virtual server user might set the corresponding virtual CPUs offline in the
meantime:
[root@guest:] # chcpu -d 2
CPU 2 disabled
[root@guest:] # chcpu -d 3
CPU 3 disabled
2. To set the additional CPU online, the virtual server user might enter:
[root@guest:] # chcpu -e 3
CPU 3 enabled
The available CPU time is shared between the running virtual servers. Each virtual
server receives the share that is configured with the shares element, or the default
value.
You can modify this share for a running virtual server or persistently across virtual
server restarts.
Procedure
v To modify the current CPU weight of a running virtual server, use the virsh
schedinfo command with the --live option (see “schedinfo” on page 332):
# virsh schedinfo <VS> --live cpu_shares=<number>
<number>
Specifies the CPU weight.
<VS> Is the name of the virtual server.
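For example, to double the default CPU weight of virtual server vserv1 (a hypothetical name) for the current session, you might issue:
# virsh schedinfo vserv1 --live cpu_shares=2048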
Example
v A virtual server with a CPU weight of 2048 receives twice as much run time as a
virtual server with a CPU weight of 1024.
Related tasks:
“Tuning virtual CPUs” on page 62
Regardless of the number of its virtual CPUs, the CPU weight determines the
shares of CPU time which is dedicated to a virtual server.
Procedure
Specify a soft limit for physical host memory usage with the virsh memtune
command (see “memtune” on page 299):
# virsh memtune <VS> --soft-limit <limit-in-KB>
<limit-in-KB>
Specifies the soft limit in kilobytes.
<VS> Is the name of the virtual server as defined in the domain
configuration-XML file.
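For example, to set a soft limit of 2 GiB (2097152 KB) for virtual server vserv1 (name and value are illustrative), you might issue:
# virsh memtune vserv1 --soft-limit 2097152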
Related concepts:
Chapter 22, “Memory management,” on page 163
The memory configured for a virtual server appears as physical memory to the
guest operating system but is realized as a Linux virtual address space.
Related tasks:
“Tuning virtual memory” on page 65
A configured soft limit allows the host to limit the physical host memory resources
used for the virtual server memory in case the host experiences high swapping
activity.
If the virtual server is not displayed, see “Defining a virtual server” on page 110.
Procedure
1. Optional: If you attach a virtual block device, and the current libvirt-internal
configuration does not provide an I/O thread for the device:
Add an I/O thread dedicated to the device by using the virsh iothreadadd
command (see “iothreadadd” on page 290):
# virsh iothreadadd <VS> <IOthread-ID>
<VS>
Is the name of the virtual server as defined in the domain
configuration-XML file.
<IOthread-ID>
Is the ID of the I/O thread to be added to the virtual server. Be sure that
the I/O thread ID matches the I/O thread ID in the device
configuration-XML.
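For example, to add I/O thread 3 to virtual server vserv1 before attaching a device whose configuration-XML specifies iothread="3" (names and IDs are illustrative), you might issue:
# virsh iothreadadd vserv1 3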
2. Attach the device using the virsh attach-device command (see “attach-device”
on page 268).
# virsh attach-device <VS> <device-configuration-XML-filename> <scope>
<device-configuration-XML-filename>
Is the name of the device configuration-XML file.
<VS>
Is the name of the virtual server as defined in the domain
configuration-XML file.
<scope>
Specifies the scope of the command:
--live
Hotplugs the device to a running virtual server. This configuration
change does not persist across stopping and starting the virtual server.
--config
Adds the device to the persistent virtual server configuration. The
device becomes available when the virtual server is next started. This
configuration change persists across stopping and starting the virtual
server.
--persistent
Adds the device to the persistent virtual server configuration and
hotplugs it if the virtual server is running. This configuration change
persists across stopping and starting the virtual server. This option is
equivalent to specifying both --live and --config.
Detaching a device
You can unplug devices from a running virtual server, remove devices from the
persistent virtual server configuration, or both.
You need a device configuration-XML file to detach a device from a virtual server.
If the device has previously been attached to the virtual server, use the device
configuration-XML file that was used to attach the device.
Procedure
1. Detach the device using the virsh detach-device command (see
“detach-device” on page 275):
# virsh detach-device <VS> <device-configuration-XML-filename> <scope>
<device-configuration-XML-filename>
Is the name of the device configuration-XML file.
<VS>
Is the name of the virtual server as defined in the domain
configuration-XML file.
<scope>
Specifies the scope of the command:
--live
Unplugs the device from a running virtual server. This configuration
change does not persist across stopping and starting the virtual server.
--config
Removes the device from the persistent virtual server configuration.
The device becomes unavailable when the virtual server is next started.
This configuration change persists across stopping and starting the
virtual server.
--persistent
Removes the device from the persistent virtual server configuration and
unplugs it if the virtual server is running. This configuration change
persists across stopping and starting the virtual server. This option is
equivalent to specifying both --live and --config.
<VS>
Is the name of the virtual server as defined in the domain
configuration-XML file.
<IOthread-ID>
Is the ID of the I/O thread to be deleted from the virtual server.
Make sure that the virtual DVD drive is configured as a virtual SCSI device (see
“Configuring a virtual SCSI-attached CD/DVD drive” on page 95).
The guest is able to mount and to unmount the file system residing on a virtual
DVD. You can remove the ISO image which represents the virtual DVD and
provide a different one during the lifetime of the virtual server. If you try to
remove an ISO image that is still in use by the guest, QEMU forces the guest to
release the file system.
Procedure
1. Optional: Remove the current ISO image by using the virsh change-media
command with the --eject option (see “change-media” on page 270):
# virsh change-media <VS> <logical-device-name> --eject
2. Provide a different ISO image by using the virsh change-media command with
the --insert option:
# virsh change-media <VS> <logical-device-name> --insert <iso-image>
If the current ISO image was not removed beforehand, it is replaced by
the new one.
<iso-image>
Is the fully qualified path to the ISO image on the host.
<logical-device-name>
Identifies the virtual SCSI-attached CD/DVD drive by its logical device
name, which was specified with the target dev attribute in the domain
configuration-XML file.
Example
After the guest has unmounted the file system on the virtual DVD, this example
removes the currently provided virtual DVD from the virtual DVD drive:
# virsh domblklist vserv1
Target Source
------------------------------------------------
vda /dev/storage1/vs1_disk1
sda /var/lib/libvirt/images/cd2.iso
If the virtual DVD is still in use by the guest, the change-media command with the
--eject option forces the guest to unmount the file system.
This example inserts a virtual DVD, which is represented by the ISO image, into a
virtual DVD drive:
# virsh change-media vserv1 sda --insert /var/lib/libvirt/images/cd2.iso
Successfully inserted media.
Procedure
Connect to a pty console of a running virtual server by using the virsh console
command (see “console” on page 272):
# virsh console <VS>
However, if you want to be sure that you do not miss any console message,
connect to the console when you start a virtual server by using the --console
option (see “start” on page 336):
# virsh start <VS> --console
What to do next
To leave the console, press Control and Right bracket (Ctrl+]) when using the US
keyboard layout.
Related tasks:
“Starting a virtual server” on page 114
Use the virsh start command to start a shut off virtual server.
“Configuring the console” on page 70
Configure the console by using the console element.
Once a storage pool is defined to libvirt, it can enter the states “inactive”, “active”,
or “destroyed”.
[Figure: storage pool state-transition diagram. The virsh pool-start command
activates a defined, inactive storage pool; pool-destroy returns an active pool to
the inactive state; pool-refresh updates the volume list of an active pool;
pool-delete deletes the volumes of a pool; pool-undefine removes the persistent
definition.]
There are also virsh commands to manage the volumes of a storage pool. Use the
commands described in “Volume management commands” on page 153.
Procedure
v Creating, modifying, and deleting a persistent storage pool definition:
Functionality                                                        Command
Create a persistent definition of a storage pool configuration      “pool-define” on page 318
Edit the libvirt-internal configuration of a defined storage pool   “pool-edit” on page 322
Delete the persistent libvirt definition of a storage pool          “pool-undefine” on page 328
Functionality                                                        Command
Enable or disable the automatic start of a storage pool when the
  libvirt daemon is started                                          “pool-autostart” on page 317
Start a defined inactive storage pool                                “pool-start” on page 327
Update the volume list of a storage pool                             “pool-refresh” on page 326
Shut down a storage pool; the pool can be restarted by using the
  virsh pool-start command                                           “pool-destroy” on page 320
Delete the volumes of a storage pool                                 “pool-delete” on page 319
Functionality                                                        Command
View a list of all defined storage pools                             “pool-list” on page 324
Display the current libvirt-internal configuration of a storage pool “pool-dumpxml” on page 321
Display information about a defined storage pool                     “pool-info” on page 323
Retrieve the name of a storage pool from its UUID                    “pool-name” on page 325
Retrieve the UUID of a storage pool from its name                    “pool-uuid” on page 329
Procedure
v Creating, modifying, and deleting volumes:
Functionality                                                        Command
Create a volume for a storage pool from a volume
  configuration-XML file                                             “vol-create” on page 341
Remove a volume from a storage pool                                  “vol-delete” on page 342
Functionality                                                        Command
Display a list of the volumes of a storage pool                      “vol-list” on page 346
Display the current libvirt-internal configuration of a
  storage volume                                                     “vol-dumpxml” on page 343
Display information about a defined volume                           “vol-info” on page 344
Display the key of a volume from its name or path                    “vol-key” on page 345
Display the name of a volume from its key or path                    “vol-name” on page 347
Display the path of a volume from its name or key                    “vol-path” on page 348
Display the storage pool name or UUID which hosts the volume         “vol-pool” on page 349
| Virtual networks that are defined to libvirt can be in one of the states “inactive” or
| “active”.
|
| Figure 20. Virtual network state-transition diagram. The virsh net-define
| command creates a persistent, inactive virtual network; net-start activates it;
| net-destroy returns it to the inactive state; net-undefine removes the
| persistent definition.
|
| Procedure
| v Creating, modifying, and deleting a persistent network definition:
| Task                                                                   Command
| Create a persistent definition of a virtual network configuration.    “net-define” on page 307
| Edit the libvirt-internal configuration of a defined virtual network. “net-edit” on page 310
Linux scheduling
Based on the hardware layout of the physical cores, the Linux scheduler maintains
hierarchically ordered scheduling domains.
Basic scheduling domains consist of those processes that are run on physically
adjacent cores, such as the cores on the same chip. Higher level scheduling
domains group physically adjacent scheduling domains, such as the chips on the
same book.
The Linux scheduler is a multi-queue scheduler, which means that for each of the
logical host CPUs, there is a run queue of processes waiting for this CPU. Each
virtual CPU waits for its execution in one of these run queues.
Moving a virtual CPU from one run queue to another is called a (CPU) migration.
Be sure not to confuse the term “CPU migration” with a “live migration”, which is
the migration of a virtual server from one host to another. The Linux scheduler
might decide to migrate a virtual CPU when the estimated wait time until the
virtual CPU will be executed is too long, the run queue where it is supposed to be
waiting is full, or another run queue is empty and needs to be filled up.
Migrating a virtual CPU within the same scheduling domain is less cost intensive
than to a different scheduling domain because of the caches being moved from one
core to another. The Linux scheduler has detailed information about the migration
costs between different scheduling domains or CPUs. Migration costs are an
important factor for the decision if the migration of a virtual CPU to another host
CPU is valuable.
[Figure: scheduling domains, run queues, and host CPUs. Each logical host CPU
has its own run queue; the migration costs between run queues within the same
scheduling domain are lower than between different scheduling domains.]
libvirt provides means to assign virtual CPUs to groups of host CPUs in order to
minimize migration costs. This process is called CPU pinning. CPU pinning forces
the Linux scheduler to migrate virtual CPUs only between those host CPUs of the
specified group. Likewise, the execution of the user space process or I/O threads
can be assigned to groups of host CPUs.
Attention: Do not use CPU pinning, because a successful CPU pinning depends
on a variety of factors which can change over time:
v CPU pinning can lead to the opposite effect of what was desired when the
circumstances for which it was designed change. This may occur, for example,
when the host reboots, the workload on the host changes, or the virtual servers
are modified.
v Deactivating operating CPUs and activating standby CPUs (CPU hotplug) on the
host may lead to a situation where host CPUs are no longer available for the
execution of virtual server threads after their reactivation.
CPU weight
The host CPU time which is available for the execution of the virtual CPUs
depends on the system utilization.
The available CPU time is divided up between the virtual servers running on the
host.
The Linux scheduler and the Linux kernel feature cgroups allocate the upper limit
of CPU time shares (or simply: CPU shares) which a virtual server is allowed to use
based on the CPU weight of all virtual servers running on the host.
You can configure the CPU weight of a virtual server, and you can modify it
during operation.
The CPU shares of a virtual server are calculated by forming the virtual server's
weight-fraction.
Virtual server   CPU weight   Weight-sum   Weight-fraction   CPU shares
A                1024         3072         1024/3072         1/3
B                2048         3072         2048/3072         2/3
The number of virtual CPUs does not affect the CPU shares of a virtual server.
Example:
The CPU shares are the same for both virtual servers:
Virtual server   CPU weight   Weight-sum   Weight-fraction   CPU shares
A                1024         2048         1024/2048         1/2
B                1024         2048         1024/2048         1/2
The CPU shares of each virtual server are spread across its virtual CPUs.
Virtual server memory has the same characteristics as virtual memory used by
other Linux processes. For example, it is protected from access by other virtual
servers or applications running on the host. It also allows for memory
overcommitment, that is, the amount of virtual memory for one or more virtual
servers may exceed the amount of physical memory available on the host.
Memory is organized in fixed size blocks called pages. Each virtual server memory
page must be backed by a physical page of the host. Since more virtual pages than
physical pages can exist, it is necessary that the content of currently unused virtual
pages can be temporarily stored on a storage volume (swap device) and retrieved
upon access by the guest. The activity of storing pages to and retrieving them from
the disk is called swapping.
[Figure: swapping. (1) An unused page is stored to the swap disk; (2) another
page is retrieved from the disk.]
Since disk storage access is significantly slower than memory access, swapping will
slow down the execution of a virtual server even though it happens transparently
for the guest. Careful planning of virtual server memory handling is therefore
essential for an optimal system performance.
Tip:
v Plan a virtual-to-real memory ratio of not more than 2:1.
v Configure the minimum amount of memory necessary for each virtual server.
A guest operating system can mark memory pages as unused or volatile with the
IBM Z Collaborative Memory Management Assist (CMMA) facility. This allows the
host to avoid unnecessary disk swapping because unused pages can simply be
discarded. Current Linux operating systems make use of CMMA. The subset of the
CMMA facility as used by Linux is enabled in KVM, therefore transparently
ensuring efficient physical host memory usage, while still allowing the virtual
server to use all of the defined virtual memory if needed.
Ballooning
KVM implements a virtual memory balloon device that serves the purpose of
controlling the physical host memory usage of a virtual server. With the balloon
device, the host can request that the guest gives up memory. This could be done to
re-balance the resource allocations between virtual servers to adapt to changing
resource needs.
Whether and to what extent the guest honors the request depends on factors that
are not controlled by the host, such as whether a balloon device driver is
installed in the guest, or whether there is enough memory that can be freed.
Unlike for CMMA, the memory given up by the balloon device is removed from
the virtual server and cannot be reclaimed by the guest. As this can cause adverse
effects and even lead to program or operating system failures due to low memory
conditions, it should only be used in well-understood situations. By default, you
should disable the balloon device by configuring <memballoon model="none"/>.
Memory tuning
Another way to control virtual server memory usage is by means of the Linux
cgroups memory controller. By specifying a soft limit, you can restrict the
amount of physical host memory that a virtual server uses while the host is under
high memory pressure, that is, experiencing high swapping activity. Again,
this would typically be done to re-balance resource allocations between virtual
servers.
Since the virtual server memory available to the guest is not modified, applying a
soft limit is transparent, except for the performance penalty caused by swapping. If
swapping becomes excessive, time-critical processes may be affected, causing
program failures. Therefore the soft limit should be applied carefully as well.
Figure: Applying the soft limit for Virtual Server 4 reduces the number of its
pages that are backed by physical host memory.
The virtual server memory soft limit can be controlled statically using the
soft_limit child element of the memtune element or dynamically using the virsh
memtune command.
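For example, assuming a virtual server named vserv1, the following command sets a
soft limit of 128 MiB for the running virtual server (the virsh memtune command
takes sizes in kibibytes by default):
# virsh memtune vserv1 --soft-limit 131072 --live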
Related tasks:
“Configuring virtual memory” on page 65
Configure the virtual server memory.
“Managing virtual memory” on page 143
Specify a soft limit for the amount of physical host memory used by a virtual
server.
I/O threads
I/O threads are dedicated to perform I/O operations on virtual block devices.
For good performance of I/O operations, provide one I/O thread for each virtual
block device. Plan no more than one or two I/O threads per host CPU, and no
more I/O threads than there are virtual block devices available for the virtual
server. Too many I/O threads reduce system performance by increasing the system
overhead.
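A minimal sketch of this pairing, based on the iothreads and driver elements
described in the XML reference (the device path and values are examples): the
virtual server gets two I/O threads, and the virtual block device is served by the
first of them:
<iothreads>2</iothreads>
...
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vda" bus="virtio"/>
</disk>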
When you attach a virtual block device to a virtual server, you can provide an I/O
thread for this device during operation and remove it after use. For more
information, see:
v “Attaching a device” on page 146
v “Detaching a device” on page 147
Figure: Path redundancy and data integrity: the virtual block devices (blk) of the
virtual server are backed by logical volumes of a volume group on the KVM host; the
volume group aggregates extents of physical volumes (DASDs <a> and <b> on the IBM Z
hardware).
An LVM device filter ensures that the host accesses the physical volumes only
through the intended device nodes, for example:
devices {
    filter = [ "a|/dev/mapper/36005076305ffc1ae00000000000021d5p1|",
               "a|/dev/mapper/36005076305ffc1ae00000000000021d7p1|",
               "a|/dev/disk/by-path/ccw-0.0.1607-part1|",
               "r|.*|" ]
}
You can verify that SCSI disks are referenced correctly by issuing the following
pvscan command:
# pvscan -vvv 2>&1 | fgrep '/dev/sd'
...
/dev/sda: Added to device cache
/dev/block/8:0: Aliased to /dev/sda in device cache
/dev/disk/by-path/ccw-0.0.50c0-zfcp-0x1234123412341234:\
0x0001000000000000: Aliased to /dev/sda in device cache
...
/dev/sda: Skipping (regex)
The output must contain the string “Skipping (regex)” for each SCSI disk standard
device name that is configured for the virtual server.
Monitor and display information that helps to diagnose and solve problems.
Log messages
The following logs are created:
libvirt log messages
By default, libvirt log messages are stored in the system journal. You can
specify a different location in the libvirt configuration file at
/etc/libvirt/libvirtd.conf. For more information, see
libvirt.org/logging.html.
QEMU log file of a virtual server
/var/log/libvirt/qemu/<VS>.log, where <VS> is the name of the virtual
server.
Console log file
If the log element is specified in the console configuration, its file
attribute indicates the console log file.
Example:
<log file="/var/log/libvirt/qemu/vserv-cons0.log" append="off"/>
Procedure
1. In the libvirt configuration file /etc/libvirt/libvirtd.conf, specify:
log_level = <n>
Where <n> is the logging level:
4 Displays errors.
3 Is the default logging level, which logs errors and warnings.
2 Provides more information than logging level 3.
1 Is the most verbose logging level.
2. Restart the libvirt daemon to enable the changes.
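For example, to log informational messages in addition to warnings and errors, and
to enable the change (assuming a systemd-based host):
log_level = 2
# systemctl restart libvirtd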
Procedure
Create a dump of the crashed virtual server using the virsh dump command with
the --memory-only option:
# virsh dump --memory-only <VS> <dumpfile>
<dumpfile>
Is the name of the dump file. If no fully qualified path to the dump file is
specified, it is written to the current working directory of the user who
issues the virsh dump command.
<VS> Is the name of the virtual server as specified in its domain
configuration-XML file.
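For example, assuming a crashed virtual server named vserv1:
# virsh dump --memory-only vserv1 vserv1.dump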
Results
What to do next
You can analyze the dump with the crash tool, for example:
# crash <dumpfile> <kernel-image-filename>
<kernel-image-filename>
Is the name of the kernel image file of the guest running on the dumped
virtual server.
If kdump is not enabled on the virtual server, the following procedure causes only
a restart of the virtual server.
For more information about kdump, see Using the Dump Tools, SC33-8412.
Results
The virtual server creates a dump and then restarts in kdump mode.
What to do next
To verify your action, you might want to see the dump on the virtual server:
1. Log in to the virtual server as root.
2. Use the makedumpfile command to create a dump file from the vmcore file:
[root@guest:] # makedumpfile -c <vmcore> <dumpfile>
If the command returns a list of supported events, such as the tracepoint event
kvm_s390_sie_enter, the tool is installed.
Procedure
You collect, record, and display performance metrics with the perf kvm stat
command.
v The record subcommand records performance metrics and stores them in the file
perf.data.guest.
– The perf tool records events until you terminate it by pressing Control and c
(Ctrl+c).
– To display the recorded data, use the report subcommand.
– It is recommended to save perf.data.guest before you collect new statistics,
because a new record may overwrite this file.
v The live subcommand displays the current statistics without saving them.
The perf tool displays events until you terminate it by pressing Control and c
(Ctrl+c).
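For example, to record events system-wide and to display a report of the collected
data afterwards (the option usage is a sketch; see the perf man page for details):
# perf kvm stat record -a
# perf kvm stat report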
The report lists the recorded VM exit events in a table with the columns VM-EXIT,
Samples, Samples%, Time%, Min Time, Max Time, and Avg time.
What to do next
For more information about the perf subcommand kvm stat, see the man page or
issue the full subcommand with the --help option:
With the collected statistics, you can observe the virtual server's behavior and
time consumption and then analyze the recorded events. This analysis can provide
hints about possible sources of error.
v You can find a description of the general instructions in the z/Architecture®
Principles of Operation, SA22-7832.
v For the listed DIAG events: required means that a function is not available
without the diagnose; optional means that the function is available but there
might be a performance impact.
You may also find other DIAG events on your list, but those are not supported
by KVM on IBM Z. A list of all Linux diagnoses is provided in Device Drivers,
Features, and Commands, SC33-8411.
Get an overview of the virtual server states and the elements and commands that
are specific to configure and operate a virtual server on IBM Z. The virtual server
user can retrieve information about the IBM Z hardware and the LPAR on which
the KVM host runs.
Figure 24 shows the life cycle of a defined virtual server: States, their reasons, and
state transitions which are caused by the virsh virtual server management
commands. The state transitions shown in this figure do not comprise command
options that you can use to further influence the state transition.
Figure 24. Life cycle of a defined virtual server: the states shut off, running,
paused, crashed, and in shutdown, with their reasons (such as unknown, saved,
destroyed, booted, migrated, user) and the transitions caused by virsh commands
such as start, shutdown, destroy, suspend, resume, managedsave, and migrate.
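You can display the current state of a virtual server and its reason with the
virsh domstate command, for example (assuming a virtual server named vserv1):
# virsh domstate --reason vserv1
shut off (shutdown)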
shut off
The virtual server is defined to libvirt and has not yet been started, or it was
terminated.
Reasons
unknown The virtual server is defined to the host.
saved The system image of the virtual server is saved in the file
/var/lib/libvirt/qemu/save/<VS>.save and can be restored.
The system image contains state information about the virtual server.
Depending on this state, the virtual server is started in the state running or
paused.
shutdown The virtual server was properly terminated. The virtual server's resources
were released.
destroyed The virtual server was immediately terminated. The virtual server's resources
were released.
Commands
Command              From state (reason)            To state (reason)
start                shut off (unknown)             running (booted)
start                shut off (saved from running)  running (restored)
start                shut off (saved from paused)   paused (migrating)
start                shut off (shutdown)            running (booted)
start                shut off (destroyed)           running (booted)
start --force-boot   shut off (unknown)             running (booted)
start --force-boot   shut off (saved from running)  running (booted)
start --force-boot   shut off (saved from paused)   paused (user)
start --force-boot   shut off (shutdown)            running (booted)
start --force-boot   shut off (destroyed)           running (booted)
start --paused       shut off (unknown)             paused (user)
start --paused       shut off (saved from running)  paused (migrating)
start --paused       shut off (saved from paused)   paused (migrating)
start --paused       shut off (shutdown)            paused (user)
start --paused       shut off (destroyed)           paused (user)
running
The virtual server was started.
Reasons
booted The virtual server was started from scratch.
migrated The virtual server was restarted on the destination host after the stopped
phase of a live migration.
restored The virtual server was started at the state indicated by the stored system
image.
unpaused The virtual server was resumed from the paused state.
Commands
Command                Transition state     To state (reason)
destroy                n/a                  shut off (destroyed)
managedsave            n/a                  shut off (saved from running)
managedsave --running  n/a                  shut off (saved from running)
managedsave --paused   n/a                  shut off (saved from paused)
migrate                paused (migrating)   running (migrated)
migrate --suspend      paused (migrating)   paused (user)
shutdown               in shutdown          shut off (shutdown)
suspend                n/a                  paused (user)
paused
The virtual server has been suspended.
Reasons
user The virtual server was suspended with the virsh suspend command.
migrating The virtual server's system image is saved and the virtual server is halted -
either because it is being migrated, or because it is started from a saved shut
off state.
Commands
Command                Transition state   To state (reason)
destroy                n/a                shut off (destroyed)
managedsave            n/a                shut off (saved from paused)
managedsave --running  n/a                shut off (saved from running)
managedsave --paused   n/a                shut off (saved from paused)
resume                 n/a                running (unpaused)
shutdown               in shutdown        shut off (shutdown)
crashed
The virtual server crashed and is not prepared for a reboot.
You can then terminate the virtual server and restart it.
For testing purposes, you can crash a virtual server with the virsh inject-nmi
command.
Commands
Command To state (reason)
destroy shut off (destroyed)
in shutdown
While the virtual server is shutting down, it traverses the “in shutdown” state.
<adapter> as child element of <source>
Identifies the FCP device (host bus adapter) through which a SCSI device is
attached.
Text content
None.
Selected attributes
name=scsi_host<n>
Specifies the name of the FCP device, where <n> is a nonnegative integer.
Usage
Parent elements
Child elements
None.
Example
<devices>
...
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
...
</devices>
<address> as child element of <controller>, <disk>, <interface>, and <memballoon>
Specifies the address of a virtual device on the virtual server.
Text content
None.
Selected attributes
type=ccw
Specifies a virtio CCW device, such as a block device or a network device.
You can specify the device bus-ID with the address attributes cssid, ssid,
and devno.
cssid Specifies the channel subsystem number of the virtual device. Must be
“0xfe”.
ssid Specifies the subchannel set of the virtual device. Valid values are between
“0x0” and “0x3”.
devno Specifies the device number of the virtio device. Must be a unique value
between “0x0000” and “0xffff”.
Usage
v “Configuring a DASD or SCSI disk” on page 78
v “Configuring an image file as storage device” on page 84
Parent elements
v “<controller>” on page 201
v “<disk>” on page 207
v “<interface>” on page 220
v “<memballoon>” on page 228
Child elements
None.
Example
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vda" bus="virtio"/>
<address type="ccw" cssid="0xfe" ssid="0x0" devno="0x1108"/>
</disk>
<address> as child element of <hostdev> or <disk>
Specifies the address of a device that is attached through a virtual SCSI
controller.
Text content
None.
Selected attributes
type=scsi
Specifies a SCSI device.
controller
Specifies the virtual controller of the virtual device. Enter the index
attribute value of the respective controller element.
bus Specifies the virtual SCSI bus of the virtual device.
target Specifies the virtual SCSI target of the virtual device. This value can be
between 0 and 255.
unit Specifies the unit number (LUN) of the virtual SCSI device.
Usage
v “Configuring a SCSI tape or medium changer device” on page 90
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Parent elements
v “<hostdev>” on page 218
v “<disk>” on page 207
Child elements
None.
Example
<devices>
...
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
...
<controller type="scsi" model="virtio-scsi" index="1"/>
<disk type="file" device="cdrom">
<driver name="qemu" type="raw" io="native" cache="none"/>
<source file="/var/lib/libvirt/images/cd.iso"/>
<target dev="vda" bus="scsi"/>
<address type="drive" controller="1" bus="0" target="0" unit="0"/>
<readonly/>
</disk>
...
</devices>
<address> as child element of <source>
Specifies the address of a device on the host.
Text content
None.
Selected attributes
bus=0 For a SCSI device the value is zero.
target Specifies the SCSI ID.
unit Specifies the SCSI LUN.
Usage
Parent elements
Child elements
None.
Example
<devices>
...
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
...
</devices>
<backend>
Specifies the character device which generates the random numbers.
Text content
Specifies the device node of the input character device. The default value and
currently the only valid value is /dev/random.
Selected attributes
model=random
Specifies the source model.
Usage
Parent elements
Child elements
None.
Example
<devices>
...
<rng model="virtio">
<backend model="random">/dev/random</backend>
</rng>
...
</devices>
<boot>
Indicates that the virtual block device is bootable.
Text content
None.
Selected attributes
order=number
Specifies the order in which a device is considered as boot device during
the boot sequence.
| loadparm=number
| For IPL devices with a boot menu configuration: Specifies the boot menu
| entry. If this parameter is omitted, the default entry is booted.
Usage
Parent elements
“<disk>” on page 207
Child elements
None.
| Example
| <disk type="block" device="disk">
| <driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
| <source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
| <target dev="vdb" bus="virtio"/>
| <address type="ccw" cssid="0xfe" ssid="0x1" devno="0xa30e"/>
| <boot order="1" loadparm="2"/>
| </disk>
| <bridge>
| Configures the bridge device that is used to set up the virtual network.
| Text content
| None.
| Selected attributes
| name Specifies a name for the bridge.
| Usage
| Parent elements
| Child elements
| None.
| Example
| <network>
| ...
| <bridge name="virbr2"/>
| ...
| </network>
<cipher>
Configures the generation of an AES or DEA/TDEA wrapping key and the use of
the respective protected key management operations on the virtual server.
Text content
None.
Selected attributes
name=aes | dea
Specifies the AES or DEA/TDEA wrapping key.
state=on | off
on Enables wrapping key generation.
The respective protected key management operations are available
on the virtual server.
off Disables wrapping key generation.
The respective protected key management operations are not
available on the virtual server.
Usage
Parent elements
Child elements
None.
Example
<domain type="kvm">
...
<keywrap>
<cipher name="aes" state="off"/>
</keywrap>
...
</domain>
<cmdline>
Specifies arguments to be passed to the kernel (or installer) at boot time.
Text content
Command line arguments using the same syntax as if they were specified in the
command line.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<os>
<type arch='s390x' machine='s390-virtio'>hvm</type>
<kernel>/boot/vmlinuz-3.1.0-7.fc16.s390x</kernel>
<initrd>/boot/initramfs-3.1.0-7.fc16.s390x.img</initrd>
<cmdline>printk.time=1</cmdline>
</os>
<console>
Configures the host representation of the virtual server console.
Text content
None.
Selected attributes
type=pty
Configures a console which is accessible via PTY.
Usage
Parent elements
Child elements
v “<log>” on page 226
v <protocol>
v “<target> as child element of <console>” on page 254
Example
<devices>
...
<console type="pty">
<target type="sclp" port="0"/>
<log file="/var/log/libvirt/qemu/vserv-cons0.log" append="off"/>
</console>
</devices>
<controller>
Specifies a device controller for a virtual server.
Text content
None.
Selected attributes
type=scsi | virtio-serial
Specifies the type of controller.
index This decimal integer specifies the controller index, which is referenced by
the attached host device.
To reference a controller, use the controller attribute of the address element
as child of the hostdev element.
scsi type-specific attributes:
model=virtio-scsi
Optional; specifies the model of the controller.
Usage
Parent elements
Child elements
v “<address> as child element of <controller>, <disk>, <interface>, and
<memballoon>” on page 192
v “<driver> as child element of <controller>” on page 209
Example
<devices>
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
</devices>
| <cpu>
| Specifies the features of the virtual CPUs of a virtual server.
| Text content
| None.
| Selected attributes
| match=exact
| The virtual CPU provided to the virtual server must match the
| specification. The virtual server can be started only if the specified CPU
| model is supported. This is the default.
| mode= custom | host-model
| custom
| The <cpu> element and its nested elements define the CPU to be
| presented to the virtual server. This mode ensures that a persistent
| virtual server uses the same CPU model on any KVM host. The
| virtual server can be started only if the KVM host supports the
| specified CPU model. This is the default.
| host-model
| The CPU definition is derived from the KVM host CPU.
| Usage
| Parent elements
| Child elements
| v “<model> as a child element of <cpu>” on page 232
| v “<feature>” on page 213
| Example
| This example sets the CPU model to the default for z14:
| <cpu mode="custom">
| <model>z14</model>
| </cpu>
<cputune>
Groups CPU tuning parameters.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
The use of the emulator_period, emulator_quota, period, and quota elements might
affect the runtime behavior of the virtual server and interfere with the use of the
shares element. Use the shares element for CPU tuning unless there is a specific
need for the use of one of those elements.
Example
<domain>
...
<cputune>
<shares>2048</shares>
</cputune>
...
</domain>
<device>
Specifies the device that stores a network file system or a logical volume backing a
storage pool.
Text content
None.
Selected attributes
path Specifies the path of the device that backs the storage pool.
Usage
Parent elements
Child elements
None.
Example
<pool type="netfs">
<name>nfspool01</name>
<source>
<format type="nfs"/>
<host name="sandbox.example.com"/>
<device path="/srv/nfsexport"/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool01</path>
</target>
</pool>
<devices>
Specifies the virtual network and block devices of the virtual server.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v “<console>” on page 200
v “<controller>” on page 201
v “<disk>” on page 207
v “<emulator>” on page 212
v “<hostdev>” on page 218
v “<interface>” on page 220
v “<memballoon>” on page 228
v “<watchdog>” on page 263
Example
<devices>
<interface type="direct">
<source dev="enccw0.0.1108" mode="bridge"/>
<model type="virtio"/>
</interface>
</devices>
| <dhcp>
| Configures DHCP services for the virtual network.
| Text content
| None.
| Selected attributes
| None.
| Usage
| Parent elements
| Child elements
| <range>
| Example
| <network>
| ...
| <ip address="192.0.2.1" netmask="255.255.255.0">
| <dhcp>
| <range start="192.0.2.2" end="192.0.2.254"/>
| </dhcp>
| </ip>
| </network>
<disk>
Specifies a virtual block device, such as a SCSI device, or an image file.
Text content
None.
Selected attributes
type=block | file
Specifies the underlying disk source.
device=disk | cdrom
Optional; indicates how the virtual block device is to be presented to the
virtual server.
Usage
v Chapter 10, “Configuring devices,” on page 75
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Parent elements
Child elements
v “<address> as child element of <controller>, <disk>, <interface>, and
<memballoon>” on page 192
v <blockio>
v “<boot>” on page 196
v “<driver> as child element of <disk>” on page 210
v “<geometry>” on page 216
v “<readonly>” on page 244
v “<shareable>” on page 246
v “<source> as child element of <disk>” on page 249
v “<target> as child element of <disk>” on page 255
Example
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vdb" bus="virtio"/>
<address type="ccw" cssid="0xfe" ssid="0x0" devno="0x0009"/>
</disk>
<domain>
Is the root element of a domain configuration-XML.
Text content
None.
Selected attributes
type=kvm
Specifies the virtual server type.
Usage
Parent elements
None.
Child elements
v <clock>
v “<console>” on page 200
v “<controller>” on page 201
v “<cputune>” on page 203
v <currentMemory>
v “<devices>” on page 205
v “<iothreads>” on page 221
v <memory>
v <name>
v “<on_crash>” on page 237
v <on_poweroff>
v <on_reboot>
v <os>
v <uuid>
v “<vcpu>” on page 259
<driver> as child element of <controller>
Configures driver-specific settings of a virtual controller, such as the I/O
thread assignment.
Text content
None.
Selected attributes
iothread=<IOthread-ID>
Assigns a certain I/O thread to the user space process. Use this attribute to
ensure best performance.
<IOthread-ID> is a value between 1 and the number of I/O threads which
is specified by the iothreads element.
Usage
Parent elements
“<controller>” on page 201
Child elements
None.
Example
<domain>
...
<iothreads>2</iothreads>
...
<devices>
<controller type="scsi" model="virtio-scsi" index="0">
<driver iothread="2"/>
..
</controller>
</devices>
....
</domain>
<driver> as child element of <disk>
Configures driver-specific settings of a virtual block device.
Text content
None.
Selected attributes
name=qemu
Name of the user space process. Use “qemu”.
type=raw | qcow2
Use subtype “raw”, except for qcow2 image files, which require the
“qcow2” subtype.
iothread=<IOthread-ID>
Assigns a certain I/O thread to the user space process. Use this attribute to
ensure best performance.
<IOthread-ID> is a value between 1 and the number of I/O threads which
is specified by the iothreads element.
cache=none
Optional; controls the cache mechanism.
error_policy=report | stop | ignore | enospace
Optional; the error_policy attribute controls how the host will behave if a
disk read or write error occurs.
rerror_policy=report | stop | ignore
Optional; controls the behavior for read errors only. If no rerror_policy is
given, error_policy is used for both read and write errors. If rerror_policy
is given, it overrides the error_policy for read errors. Also, note that
“enospace” is not a valid policy for read errors. Therefore, if error_policy is
set to “enospace” and no rerror_policy is given, the read error policy is left
at its default (“report”).
io=threads | native
Optional; controls specific policies on I/O. For a better performance,
specify “native”.
ioeventfd=on | off
Optional; allows users to set domain I/O asynchronous handling for the
disk device. The default is left to the discretion of the host. Enabling this
attribute allows QEMU to run the virtual server while a separate thread
handles I/O. Typically virtual servers experiencing high system CPU
utilization during I/O will benefit from this. On the other hand, on an
overloaded host it could increase virtual server I/O latency. Note: Only
very experienced users should attempt to use this option!
event_idx=on | off
Optional; controls some aspects of device event processing. If it is on, it
will reduce the number of interrupts and exits for the virtual server. The
default is determined by QEMU; usually if the feature is supported, the
default is “on”. If the situation occurs where this behavior is suboptimal,
this attribute provides a way to force the feature “off”. Note: Only
experienced users should attempt to use this option!
Usage
v “Configuring a DASD or SCSI disk” on page 78
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Parent elements
Child elements
None.
Example
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vdb" bus="virtio"/>
<address type="ccw" cssid="0xfe" ssid="0x0" devno="0xd501"/>
</disk>
<emulator>
Specifies the user space process.
Text content
Fully qualified path and file name of the user space process.
Selected attributes
None.
Usage
v “Configuring the user space” on page 68
v “Displaying the current libvirt-internal configuration” on page 122
Parent elements
Child elements
None.
Example
<emulator>/usr/bin/qemu-system-s390x</emulator>
| <feature>
| Adds or removes a CPU feature in a CPU model specification.
| Text content
| None.
| Selected attributes
| name Specifies one of the features as shown in the output of the
| qemu-system-s390x -cpu help command.
| policy= require | disable
| require
| Adds the feature to the CPU definition specified with the name
| attribute.
| disable
| Removes the feature from the CPU definition specified with the
| name attribute.
| Usage
| “Configuring the CPU model” on page 63
| Parent elements
| Child elements
| None.
| Example
| This example uses the default for CPU model z14 as a starting point, but removes
| the iep feature:
| <cpu mode="custom">
| <model>z14</model>
| <feature name="iep" policy="disable"/>
| </cpu>
|
| <format>
| Specifies the image file format backing the storage pool volume.
| Text content
| None.
| Selected attributes
| type=raw | qcow2
| Usage
| Parent elements
| Child elements
| None.
| Example
| <volume type="file">
| <name>federico.img</name>
| <key>/var/lib/libvirt/images/federico.img</key>
| <target>
| <path>/var/lib/libvirt/images/federico.img</path>
| <format type="qcow2"/>
| </target>
| </volume>
|
| <forward>
| Configures the forwarding mode for the bridge that connects the virtual network
| to a physical LAN. Omitting this tag results in an isolated network that can
| connect guests.
| Text content
| None.
| Selected attributes
| mode=nat | bridge | route
| Specifies the method of forwarding for the LAN connection.
| nat Configures a bridge with network address translation (NAT). All
| guest traffic to the physical network is routed through the host's
| routing stack and uses the host's public IP address. This mode
| supports only outbound traffic.
| bridge Configures a bridge based on an already configured Open
| vSwitch (see “Preparing a virtual switch” on page 43).
| route Configures a bridge with IP routing. Bridges with IP routing link
| to a virtual IP subnet on the host. Traffic to and from virtual
| servers that are connected to that subnet are then handled by the
| IP protocol.
| Usage
| Chapter 12, “Configuring virtual networks,” on page 105
| Parent elements
| “<network>” on page 236
| Child elements
| <nat>
| Example
| <network>
| <name>net0</name>
| <uuid>fec14861-35f0-4fd8-852b-5b70fdc112e3</uuid>
| <forward mode="nat">
| <nat>
| <port start="1024" end="65535"/>
| </nat>
| </forward>
| <bridge name="virbr0" stp="on" delay="0"/>
| <ip address="192.0.2.1" netmask="255.255.255.0">
| <dhcp>
| <range start="192.0.2.2" end="192.0.2.254"/>
| </dhcp>
| </ip>
| </network>
<geometry>
Overrides the geometry settings of DASDs or FC-attached SCSI disks.
Text content
None.
Selected attributes
cyls Specifies the number of cylinders.
heads Specifies the number of heads.
secs Specifies the number of sectors per track.
Usage
Parent elements
Child elements
None.
Example
<geometry cyls="16383" heads="16" secs="64" trans="lba"/>
<host>
Specifies the host that stores the network file system backing a storage pool.
Text content
None.
Selected attributes
name Specifies the host name of the file server.
Usage
Parent elements
Child elements
None.
Example
<pool type="netfs">
<name>nfspool01</name>
<source>
<format type="nfs"/>
<host name="sandbox.example.com"/>
<device path="/srv/nfsexport"/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool01</path>
</target>
</pool>
<hostdev>
Passes host-attached devices to a virtual server.
Ensure that the device that is passed through to the virtual server is not in use by
the host.
Text content
None.
Selected attributes
mode=subsystem
Specifies the pass-through mode.
type=scsi
Specifies the type of device that is assigned to a virtual server.
rawio=no | yes
Indicates whether the device needs raw I/O capability. If any device in a
device configuration-XML file is specified in raw I/O mode, this capability
is enabled for all such devices of the virtual server.
sgio=filtered | unfiltered
Indicates whether the kernel will filter unprivileged SG_IO commands for
the device.
Usage
“Configuring a SCSI tape or medium changer device” on page 90
Parent elements
“<devices>” on page 205
Child elements
v “<address> as child element of <hostdev> or <disk>” on page 193
v “<readonly>” on page 244
v “<shareable>” on page 246
v “<source> as child element of <hostdev>” on page 251
Example
<devices>
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
</devices>
<initrd>
Specifies the fully qualified path of the ramdisk image in the host operating
system.
Text content
Fully qualified path and file name of the ramdisk image.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<os>
<type arch='s390x' machine='s390-virtio'>hvm</type>
<kernel>/boot/vmlinuz-3.1.0-7.fc16.s390x</kernel>
<initrd>/boot/initramfs-3.1.0-7.fc16.s390x.img</initrd>
<cmdline>printk.time=1</cmdline>
</os>
<interface>
Specifies a virtual Ethernet device for a virtual server.
Text content
None.
Selected attributes
| type = direct | bridge | network
Specifies the type of connection:
direct Creates a MacVTap interface.
bridge Attaches to a bridge, as for example implemented by a virtual
switch.
| network
| Attaches to a virtual network as configured with a network
| configuration-XML.
trustGuestRxFilters = no | yes
Only valid if type = “direct”.
Set this attribute to “yes” to allow the virtual server to change its MAC
address. As a consequence, the virtual server can join multicast groups.
The ability to join multicast groups is a prerequisite for the IPv6 Neighbor
Discovery Protocol (NDP).
Setting trustGuestRxFilters to “yes” has security implications, because it
allows the virtual server to change its MAC address and thus to receive all
frames delivered to this address.
Usage
Parent elements
Child elements
v “<address> as child element of <controller>, <disk>, <interface>, and
<memballoon>” on page 192
v “<mac>” on page 227
v “<model> as a child element of <interface>” on page 233
v “<source> as child element of <interface>” on page 252
v “<virtualport> as a child element of <interface>” on page 260
Example
<interface type="direct">
<source dev="bond0" mode="bridge"/>
<model type="virtio"/>
</interface>
<iothreads>
Assigns threads that are dedicated to I/O operations on virtual block devices to a
virtual server.
The use of I/O threads improves the performance of I/O operations of the virtual
server. If this element is not specified, no I/O threads are provided.
Text content
Natural number specifying the number of threads.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<iothreads>3</iothreads>
| <ip>
| Configures IP addresses for the virtual network.
| Text content
| None.
| Selected attributes
| address
| Sets the IP address for the bridge device. The value must be a valid IPv4
| address.
| netmask
| Specifies a subnet mask for the virtual network.
| Usage
| Parent elements
| Child elements
| Example
| <network>
| ...
| <ip address="192.0.2.1" netmask="255.255.255.0">
| <dhcp>
| <range start="192.0.2.2" end="192.0.2.254"/>
| </dhcp>
| </ip>
| </network>
<kernel>
Specifies the kernel image file.
Text content
Fully qualified path and file name of the kernel image file.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<kernel>/boot/vmlinuz-3.9.3-60.x.20130605-s390xrhel</kernel>
<key>
Specifies the image file backing the volume.
Text content
Fully qualified path and file name of the image file backing the volume.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<volume type="file">
<name>federico.img</name>
<key>/var/lib/libvirt/images/federico.img</key>
<target>
<path>/var/lib/libvirt/images/federico.img</path>
<format type="qcow2"/>
</target>
</volume>
<keywrap>
Groups the configuration of the AES and DEA/TDEA wrapping key generation.
Text content
None.
Selected attributes
None.
Usage
“Disabling protected key encryption” on page 72
Parent elements
“<domain>” on page 208
Child elements
“<cipher>” on page 198
Example
<domain type="kvm">
...
<keywrap>
<cipher name="aes" state="off"/>
</keywrap>
...
</domain>
<log>
Specifies a log file which is associated with the virtual server console output.
Text content
None.
Selected attributes
file Specifies the fully qualified path and filename of the log file.
append=off | on
Specifies whether the information in the file is preserved (append=“on”) or
overwritten (append=“off”) on a virtual server restart.
Usage
Parent elements
Child elements
None.
Example
<devices>
...
<console type="pty">
<target type="sclp"/>
<log file="/var/log/libvirt/qemu/vserv-cons0.log" append="off"/>
</console>
</devices>
<mac>
Specifies the MAC address of a network interface of the virtual server.
Text content
None.
Selected attributes
address
Specifies the MAC address of the interface.
Usage
Parent elements
Child elements
None.
Example
<interface type='direct'>
<mac address='02:10:10:f9:80:00'/>
<model type='virtio'/>
</interface>
<memballoon>
Specifies memory balloon devices.
Text content
None.
Selected attributes
model=none
Suppresses the automatic creation of a default memory balloon device.
Usage
Parent elements
Child elements
None.
Example
<memballoon model="none"/>
<memory>
Specifies the amount of memory allocated for a virtual server at boot time and
configures the collection of QEMU core dumps.
Text content
Natural number specifying the amount of memory. The unit is specified with the
unit attribute.
Selected attributes
dumpCore=on | off
Specifies whether the memory of a virtual server is included in a generated
core dump.
on Specifies that the virtual server memory is included.
off Specifies that the virtual server memory is excluded.
unit=b | KB | k | KiB | MB | M | MiB | GB | G | GiB | TB | T | TiB
Specifies the units of memory used:
b bytes
KB kilobytes (1,000 bytes)
k or KiB
kibibytes (1024 bytes), the default
MB megabytes (1,000,000 bytes)
M or MiB
mebibytes (1,048,576 bytes)
GB gigabytes (1,000,000,000 bytes)
G or GiB
gibibytes (1,073,741,824 bytes)
TB terabytes (1,000,000,000,000 bytes)
T or TiB
tebibytes (1,099,511,627,776 bytes)
Usage
v “Configuring virtual memory” on page 65
v “Configuring the collection of QEMU core dumps” on page 67
Parent elements
Child elements
None.
Example
This example configures 524,288 KB of virtual memory:
<memory unit="KB">524288</memory>
<memtune>
Groups memory tuning parameters.
Text content
None.
Selected attributes
None.
Usage
v Chapter 22, “Memory management,” on page 163
v “Tuning virtual memory” on page 65
Parent elements
Child elements
Example
This example specifies a soft limit of 128 mebibytes for the physical host memory
use of the virtual server:
<memtune>
<soft_limit unit="M">128</soft_limit>
</memtune>
| <model> as a child element of <cpu>
| Specifies the CPU model.
| Text content
| Name of a CPU model, for example z14.
| Example: This example identifies the value z14 as an eligible CPU model.
| <model usable="yes">z14</model>
| Selected attributes
| None.
| Usage
| Parent elements
| Child elements
| None.
| Example
| This example sets the CPU model to the default for z14:
| <cpu mode="custom">
| <model>z14</model>
| </cpu>
<model> as a child element of <interface>
Specifies the interface model type.
Text content
None.
Selected attributes
type=virtio
Specifies the interface model type virtio.
Usage
v “Configuring a MacVTap interface” on page 98
v “Configuring a virtual switch” on page 100
Parent elements
Child elements
None.
Example
<model type="virtio"/>
<name>
Specifies a name for the virtual server, storage pool, or volume.
Text content
Unique name of the virtual server, storage pool, or volume.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<domain type="kvm">
<name>Virtual_server_25</name>
<uuid>12345678abcd12341234abcdefabcdef</uuid>
....
</domain>
| <name> as a child element of <network>
| Specifies a name for the virtual network.
| Text content
| Alphanumeric name for the virtual network. The name must be unique for the
| scope of the KVM host.
| Selected attributes
| None.
| Usage
| Parent elements
| Child elements
| None.
| Example
| <network>
| <name>net0</name>
| ...
| </network>
| <network>
| Is the root element of a network configuration-XML.
| Text content
| None.
| Selected attributes
| None.
| Usage
| Parent elements
| None.
| Child elements
| v “<bridge>” on page 197
| v “<forward>” on page 215
| v “<ip>” on page 222
| v “<name> as a child element of <network>” on page 235
| v <uuid>
| v “<virtualport> as a child element of <network>” on page 261
| Example
| <network>
| <name>ovs</name>
| <forward mode="bridge"/>
| <bridge name="ovs-br0"/>
| <virtualport type="openvswitch"/>
| </network>
<on_crash>
Configures the behavior of the virtual server in the crashed state.
Text content
preserve
Preserves the crashed state.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<on_crash>preserve</on_crash>
<on_reboot>
Configures the behavior of the virtual server when it is rebooted.
Text content
restart Terminates the virtual server using the shutdown command and then boots
the guest using the previous libvirt-internal configuration without
modifying it.
destroy
Terminates the virtual server using the destroy command and then boots
the guest using the previous libvirt-internal configuration without
modifying it.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
<on_reboot>restart</on_reboot>
<os>
Groups the operating system parameters.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v “<type>” on page 258
v “<kernel>” on page 223
v “<initrd>” on page 219
v “<cmdline>” on page 199
Example
<os>
<type arch="s390x" machine="s390-ccw-virtio">hvm</type>
<initrd>/boot/initramfs-3.9.3-60.x.20130605-s390xrhel.img</initrd>
<kernel>/boot/vmlinuz-3.9.3-60.x.20130605-s390xrhel</kernel>
<cmdline>rd.md=0 rd.lvm=0 LANG=en_US.UTF-8
KEYTABLE=us SYSFONT=latarcyrheb-sun16 rd.luks=0
root=/dev/disk/by-path/ccw-0.0.e714-part1
rd.dm=0 selinux=0 CMMA=on
crashkernel=128M plymouth.enable=0
</cmdline>
</os>
<path> as child element of <pool><target>
Specifies the location of the storage pool on the host.
Text content
Fully qualified path of the storage pool on the host file system.
Selected attributes
None.
Usage
Parent elements
Child elements
None.
Example
This example specifies an FC-attached SCSI disk backing a storage pool of type file
system:
<pool type="fs">
<name>fspool01</name>
<source>
<device path="/dev/s356001/fspool"/>
</source>
<target>
<path>/var/lib/libvirt/images/fspool01</path>
</target>
</pool>
<path> as child element of <volume><target>
Specifies the location of the volume on the host.
Text content
Fully qualified path and file name of the image file that backs the volume.
Selected attributes
None.
Usage
Parent elements
“<target> as child element of <volume>” on page 257
Child elements
None.
Example
<volume type="file">
<name>federico.img</name>
<key>/var/lib/libvirt/images/federico.img</key>
<target>
<path>/var/lib/libvirt/images/federico.img</path>
<format type="qcow2"/>
</target>
</volume>
<pool>
Is the root element of a storage pool configuration-XML.
Text content
None.
Selected attributes
type=dir | fs | netfs | logical
where
dir Specifies a directory. All image files located in this directory are
volumes of the storage pool.
fs Specifies a file system. The file system may be located on a DASD
or SCSI disk or on a disk partition. libvirt will mount the file
system and make all image files contained in the file system
available as volumes of the storage pool.
netfs Specifies a network file system, such as NFS or CIFS. libvirt will
mount the file system and make all image files contained in the file
system available as volumes of the storage pool.
logical
Specifies a volume group. Each logical volume of this volume
group will be available as volume of the storage pool.
Usage
Parent elements
None.
Child elements
v <name>
v “<source> as child element of <pool>” on page 253
v “<target> as child element of <pool>” on page 256
Example
v This example configures a storage pool backed by a directory as shown in
Figure 7 on page 14:
<pool type="dir">
<name>directoryPool</name>
<target>
<path>/var/lib/libvirt/images</path>
</target>
</pool>
v This example configures a storage pool backed by a file system:
<pool type="fs">
<name>fspool01</name>
<source>
<device path="/dev/s356001/fspool"/>
</source>
<target>
<path>/var/lib/libvirt/images/fspool01</path>
</target>
</pool>
v This example configures a storage pool backed by a network file system:
<pool type="netfs">
<name>nfspool01</name>
<source>
<format type="nfs"/>
<host name="sandbox.example.com"/>
<device path="/srv/nfsexport"/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool01</path>
</target>
</pool>
v This example configures a storage pool backed by a volume group as shown in
Figure 8 on page 14:
<pool type="logical">
<name>lvPool01</name>
<source>
<name>lvpool01</name>
<format type="lvm2"/>
</source>
</pool>
<readonly>
Indicates that a device is readonly.
Text content
None.
Selected attributes
None.
Usage
Parent elements
v “<disk>” on page 207
v “<hostdev>” on page 218
Child elements
None.
Example
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vdb" bus="virtio"/>
<readonly/>
</disk>
<rng>
Specifies a random number generator.
Text content
None.
Selected attributes
model=virtio
Specifies the random number generator device type.
Usage
Parent elements
Child elements
“<backend>” on page 195
Example
<devices>
...
<rng model="virtio">
<backend model="random">/dev/random</backend>
</rng>
...
</devices>
<shareable>
Indicates that a device can be shared between various virtual servers.
Text content
None.
Selected attributes
None.
Parent elements
v “<disk>” on page 207
v “<hostdev>” on page 218
Child elements
None.
Example
<devices>
<controller type="scsi" model="virtio-scsi" index="0"/>
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
<shareable/>
</hostdev>
</devices>
<shares>
Specifies the initial CPU weight.
The CPU shares of a virtual server are calculated from the CPU weight of all
virtual servers running on the host. For example, a virtual server that is configured
with value 2048 gets twice as much CPU time as a virtual server that is configured
with value 1024.
Text content
Natural number specifying the CPU weight.
Selected attributes
None.
Usage
v “Tuning virtual CPUs” on page 62
v “CPU weight” on page 160
Parent elements
Child elements
None.
Example
<cputune>
<shares>2048</shares>
</cputune>
<soft_limit>
Specifies a soft limit for the physical host memory requirements of the virtual
server memory.
Text content
Natural number specifying the amount of memory. The unit is specified with the
unit attribute.
Selected attributes
unit=b | KB | k | KiB | MB | M | MiB | GB | G | GiB | TB | T | TiB
Specifies the units of memory used:
b bytes
KB kilobytes (1,000 bytes)
k or KiB
kibibytes (1024 bytes), the default
MB megabytes (1,000,000 bytes)
M or MiB
mebibytes (1,048,576 bytes)
GB gigabytes (1,000,000,000 bytes)
G or GiB
gibibytes (1,073,741,824 bytes)
TB terabytes (1,000,000,000,000 bytes)
T or TiB
tebibytes (1,099,511,627,776 bytes)
Usage
v Chapter 22, “Memory management,” on page 163
v “Configuring virtual memory” on page 65
Parent elements
Child elements
None.
Example
This example configures a memory soft limit of 128 mebibytes:
<memtune>
<soft_limit unit="M">128</soft_limit>
</memtune>
<source> as child element of <disk>
Specifies the host device node, file, or volume that backs a virtual block device.
Text content
None.
Selected attributes
dev Must be specified for disk type=“block”. Specifies a host device node of
the block device.
file Must be specified for disk type=“file”. Specifies the fully qualified host file
name.
pool Must be specified for disk type=“volume”. Specifies the name of the
defined pool.
volume
Must be specified for disk type=“volume”. Specifies the name of the
defined volume, which must be part of the specified pool.
startupPolicy=mandatory | requisite | optional
For a disk of type file that represents a CD or diskette, you can define a
policy for handling the disk if the source file is not accessible:
mandatory
fail if missing for any reason
requisite
fail if missing on boot up, drop if missing on migrate/restore/
revert
optional
drop if missing at any start attempt
Usage
v “Configuring a DASD or SCSI disk” on page 78
v “Configuring an image file as storage device” on page 84
v “Configuring a volume as storage device” on page 86
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Parent elements
See also:
v “<source> as child element of <interface>” on page 252
Child elements
<seclabel>
Examples
v This example configures a SCSI disk as virtual block device:
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d5"/>
<target dev="vda" bus="virtio"/>
</disk>
<source> as child element of <hostdev>
Specifies the host device that is passed through to the virtual server.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v “<address> as child element of <source>” on page 194
v “<adapter> as child element of <source>” on page 191
Example
<devices>
...
<hostdev mode="subsystem" type="scsi">
<source>
<adapter name="scsi_host0"/>
<address bus="0" target="0" unit="0"/>
</source>
<address type="scsi" controller="0" bus="0" target="0" unit="0"/>
</hostdev>
...
</devices>
<source> as child element of <interface>
Specifies the host network interface that the virtual Ethernet device uses.
Text content
None.
Selected attributes
dev Specifies the network interface.
mode=bridge | vepa
Optional and mutually exclusive with network; indicates whether packets
are delivered to the target device or to the external bridge.
bridge If packets have a destination on the host from which they
originated, they are delivered directly to the target. For direct
delivery, both origin and destination devices need to be in bridge
mode. If either the origin or destination is in vepa mode, a
VEPA-capable bridge is required.
vepa All packets are sent to the external bridge. If packets have a
destination on the host from which they originated, the
VEPA-capable bridge will return the packets to the host.
| network
| Optional and mutually exclusive with mode; specifies the name of a virtual
| network.
Usage
Parent elements
Child elements
None.
Example
<interface type="direct">
<source dev="bond0" mode="bridge"/>
<model type="virtio"/>
</interface>
<source> as child element of <pool>
Specifies the source of the storage pool.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v <host>
v <device>
Example
<pool type="netfs">
<name>nfspool01</name>
<source>
<format type="nfs"/>
<host name="sandbox.example.com"/>
<device path="/srv/nfsexport"/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool01</path>
</target>
</pool>
<target> as child element of <console>
Specifies the console type of the virtual server.
Text content
None.
Selected attributes
type=virtio | sclp
Must be specified for the console.
virtio Specifies a virtio console.
sclp Specifies an SCLP console.
Usage
Parent elements
See also:
v “<target> as child element of <disk>” on page 255
Child elements
None.
Example
<console type="pty">
<target type="sclp"/>
</console>
<target> as child element of <disk>
Specifies how the virtual block device is presented to the virtual server.
Text content
None.
Selected attributes
dev Unique name for the device of the form vd<x>, where <x> can be one or
more letters.
If no address element is specified, the order in which device bus-IDs are
assigned to virtio block devices is determined by the order of the target
dev attributes.
bus=virtio
Specifies the device type on the virtual server. Specify “virtio”.
Usage
v “Configuring a DASD or SCSI disk” on page 78
v “Configuring an image file as storage device” on page 84
v “Configuring a virtual SCSI-attached CD/DVD drive” on page 95
Parent elements
Child elements
None.
Example
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" iothread="1"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
<target dev="vdb" bus="virtio"/>
<address type="ccw" cssid="0xfe" ssid="0x0" devno="0xa30e"/>
</disk>
<target> as child element of <pool>
Specifies the location of the storage pool in the host file system.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v “<path> as child element of <pool><target>” on page 240
v <permissions>
Example
<pool type="dir">
<name>directoryPool</name>
<target>
<path>/var/lib/libvirt/images</path>
</target>
</pool>
<target> as child element of <volume>
Specifies the location of the volume in the host file system.
Text content
None.
Selected attributes
None.
Usage
Parent elements
Child elements
v “<path> as child element of <volume><target>” on page 241
v “<format>” on page 214
Example
<volume type="file">
<name>federico.img</name>
<key>/var/lib/libvirt/images/federico.img</key>
<target>
<path>/var/lib/libvirt/images/federico.img</path>
<format type="qcow2"/>
</target>
</volume>
<type>
Specifies the machine type.
Text content
hvm Indicates that the operating system needs full virtualization.
Selected attributes
arch=s390x
Specifies the system architecture.
machine=s390-ccw-virtio | <machine-type>
Specifies the machine type. If you specify the alias machine type
“s390-ccw-virtio”, libvirt replaces this value by the current machine type,
which depends on the installed QEMU release on the host or on the
hypervisor release. Use this value unless you intend to migrate to a host
with an earlier hypervisor release.
If you intend to migrate the virtual server to a destination host with earlier
hypervisor release than the source host, specify the machine type reflecting
this earlier release.
To display the available machine types, enter:
# qemu-kvm --machine help
Usage
v “Domain configuration-XML” on page 51
v “Definition of a virtual server on different hosts using the same
configuration-XML” on page 126
Parent elements
Child elements
None.
Example
<type arch="s390x" machine="s390-ccw-virtio">hvm</type>
<vcpu>
Specifies the number of virtual CPUs for a virtual server.
Text content
Natural number specifying the maximum number of virtual CPUs of the virtual
server.
Selected attributes
current
Optional; specifies the number of virtual CPUs available at startup.
The value of the current attribute is limited by the maximum number of
available virtual CPUs. If you do not specify the current attribute, the
maximum number of virtual CPUs is available at startup.
Usage
Parent elements
Child elements
None.
Example
<domain type="kvm">
<name>vserv1</name>
<memory>524288</memory>
<vcpu current="2">5</vcpu>
....
</domain>
<virtualport> as a child element of <interface>
Specifies the type of the virtual switch to which the interface connects.
Text content
None.
Selected attributes
type=openvswitch
Specifies the type of the virtual switch.
Usage
v “Configuring a virtual switch” on page 100
Parent elements
Child elements
None.
Example
<interface>
...
<virtualport type="openvswitch"/>
</interface>
| <virtualport> as a child element of <network>
| Identifies the bridge of the virtual network as an Open vSwitch.
| Text content
| None.
| Selected attributes
| type=openvswitch
| In combination with forwarding mode bridge and a <bridge> element,
| identifies the bridge as an Open vSwitch.
| Usage
| Parent elements
| Child elements
| None.
| Example
| <network>
| <name>ovs</name>
| <forward mode="bridge"/>
| <bridge name="ovs-br0"/>
| <virtualport type="openvswitch"/>
| </network>
<volume>
Is the root element of a volume configuration-XML.
Text content
None.
Selected attributes
type=file
Usage
Parent elements
None.
Child elements
v <name>
v <key>
v <source>
v <target>
Example
<volume type="file">
<name>federico.img</name>
<key>/var/lib/libvirt/images/federico.img</key>
<target>
<path>/var/lib/libvirt/images/federico.img</path>
<format type="qcow2"/>
</target>
</volume>
<watchdog>
Specifies a watchdog device, which provides a guest watchdog application with
access to a watchdog timer.
You can specify no more than one diag288 watchdog device. A watchdog device
can be configured only as a persistent device.
Text content
None.
Selected attributes
model=diag288
Specifies the diag288 watchdog device.
action=reset | poweroff | pause | dump | inject-nmi | none | shutdown
Optional; specifies an action that is automatically performed when the
watchdog timer expires:
reset Default; immediately terminates the virtual server and restarts it
afterwards.
poweroff
Immediately terminates the virtual server.
pause Suspends the virtual server.
dump Creates a virtual server dump on the host.
inject-nmi
Causes a restart interrupt for the virtual server, including a dump
on the virtual server if it is configured accordingly.
none Does not perform any action.
shutdown
Tries to properly shut down the virtual server.
Because this action is used when the virtual server is assumed not to
be responding, it is unlikely that the virtual server will respond to
the shutdown command. It is recommended not to use this action.
Usage
“Configuring a watchdog device” on page 71
Parent elements
Child elements
None.
Example
<devices>
...
<watchdog model="diag288" action="inject-nmi"/>
...
</devices>
Syntax
virsh <virsh-command> [<option>...]
Where:
<option>
Is a command option.
<VS> Is the name, the ID, or the UUID of the virtual server.
<virsh-command>
Is a virsh command.
For a complete list of the virsh commands, see libvirt.org/
virshcmdref.html.
<XML-filename>
Is the name of the XML file, which defines the device to be attached to the
running virtual server.
Selected options
--help Displays the virsh online help.
--keepalive-interval <interval-in-seconds>
Sets an interval for sending keepalive messages to the virtual server to
confirm the connection between the host and the virtual server. If the
virtual server does not answer for a number of times which is defined by
the --keepalive-count option, the host closes the connection. Setting the
interval to 0 disables this mechanism. The default is 5 seconds.
--keepalive-count <keepalive-count>
Sets the number of keepalive messages that can be sent without a response
from the virtual server before the host closes the connection. If the
keepalive interval is set to 0, this option has no effect. The default is 6.
--version
Displays the installed libvirt version.
Example
This example displays the virsh online help of the virsh migrate command:
# virsh help migrate
This example increases the keepalive interval of the connection to the host to 10
seconds during a live migration:
# virsh --keepalive-interval 10 migrate --live --persistent --undefinesource \
--timeout 1200 --verbose vserv1 qemu+ssh://kvmhost/system
attach-device
Attaches a device to a defined virtual server.
Syntax
attach-device [--domain] <VS> [--file] <XML-filename>
[--live | --config | --current | --persistent]
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
<XML-filename>
Is the name of the XML file, which defines the device to be attached to the
running virtual server.
Selected options
--config
Persistently attaches the device to the virtual server with the next restart.
--current
Depending on the virtual server state:
running, paused
Attaches the device to the virtual server until it is detached or the
virtual server is terminated.
shut off
Persistently attaches the device to the virtual server with the next
restart.
--domain
Specifies the virtual server.
--file Specifies the device configuration-XML file.
--live Attaches the device to the running virtual server until it is detached or the
virtual server is terminated.
--persistent
Depending on the virtual server state:
running, paused
Attaches the device to the virtual server.
The device remains persistently attached across restarts.
shut off
Persistently attaches the device to the virtual server with the next
restart.
Usage
Example
This example attaches the devices that are defined in device configuration-XML file
dev1.xml to the virtual server vserv1.
# virsh attach-device vserv1 dev1.xml
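The device configuration-XML file could, for example, define a virtio block device
(the device path and bus-ID are illustrative):
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native"/>
<source dev="/dev/mapper/36005076305ffc1ae00000000000021d7"/>
<target dev="vdb" bus="virtio"/>
<address type="ccw" cssid="0xfe" ssid="0x0" devno="0xa30e"/>
</disk>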
change-media
Removes a currently provided ISO image from a virtual SCSI-attached CD/DVD
drive, or provides a different ISO image.
Syntax
change-media [--domain] <VS> [--path] <logical-device-name>
[--update [<iso-image>] | --eject | --insert <iso-image>] [--force]
[--live | --config | --current]
Where:
<logical-device-name>
Identifies the virtual SCSI-attached CD/DVD drive as specified with the
target dev attribute in the domain configuration-XML file.
<iso-image>
Is the fully qualified path to the ISO image on the host.
<VS> Is the name, ID or UUID of the virtual server.
Selected options
--config
Persistently adds or removes the ISO image with the next virtual server
restart.
--current
Depending on the virtual server state:
running, paused
Adds or removes the ISO image until the virtual server is
terminated.
shut off
Persistently removes the ISO image from the virtual server or
provides a different one with the next restart.
--domain
Specifies the virtual server.
--eject Removes the currently provided ISO image from the virtual SCSI-attached
CD/DVD drive.
--force Forces the guest to release the file system residing on the virtual DVD,
even if it is currently in use.
--insert
Provides a different ISO image for the virtual server.
--live Removes an ISO image from the running virtual server or provides an ISO
image for a running virtual server until the virtual server is terminated.
--path Specifies the virtual SCSI-attached CD/DVD drive.
--update
If no ISO image is specified:
Removes the currently provided ISO image, just like the --eject
option.
If an ISO image is specified:
Provides the specified ISO image. If the current ISO image has not
been removed before, it is replaced by the new one.
Usage
Example
This command replaces the currently provided virtual DVD with a different one:
# virsh change-media vserv1 vdc --update /var/lib/libvirt/images/cd2.iso
Successfully inserted media.
console
Displays the console of a virtual server.
Syntax
►► console <VS> [<alternate-console-name>] [--safe] [--force] ►◄
Where:
<alternate-console-name>
Is the device alias name of an alternative console that is configured for the
virtual server.
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--force Disconnects any existing session in case the connection is disrupted.
--safe Only connects to the console if the host ensures exclusive access to the
console.
Usage
Example
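This example connects to the console of the virtual server vserv1:
# virsh console vserv1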
define
Creates a persistent virtual server definition.
Syntax
►► define <XML-filename> [--validate] ►◄
Where:
<XML-filename>
Is the name of the domain configuration-XML file.
Selected options
--validate
Validates the domain configuration-XML file against the XML schema.
Usage
v Chapter 1, “Overview,” on page 3
v “Defining a virtual server” on page 110
Example
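This example defines the virtual server that is configured in the domain
configuration-XML file vserv1.xml:
# virsh define vserv1.xml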
destroy
Immediately terminates a virtual server and releases any used resources.
Syntax
►► destroy [--domain] <VS> [--graceful] ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
--graceful
Tries to properly terminate the virtual server, and forcefully terminates it
only if it does not respond within a reasonable amount of time.
Usage
Example
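This example immediately terminates the virtual server vserv1 and releases its
resources:
# virsh destroy vserv1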
detach-device
Detaches a device from a defined virtual server.
Syntax
►► detach-device [--domain] <VS> [--file] <XML-filename> [--live | --config | --current | --persistent] ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
<XML-filename>
Is the name of the XML file, which defines the device to be detached from
the running virtual server.
Selected options
--config
Persistently detaches the device with the next restart.
--current
Depending on the virtual server state:
running, paused
Immediately detaches the device from the virtual server.
If the device was attached persistently, it will be reattached with
the next restart.
shut off
Persistently detaches the device from the virtual server with the
next restart.
--domain
Specifies the virtual server.
--file Specifies the device configuration-XML file.
--live Detaches the device from the running virtual server.
--persistent
Depending on the virtual server state:
running, paused
Immediately detaches the device from the virtual server.
The device remains persistently detached across restarts.
shut off
Persistently detaches the device from the virtual server with the
next restart.
Usage
Example
This example detaches the device that is defined in device configuration-XML file
vda.xml from virtual server vserv1.
# virsh detach-device vserv1 vda.xml
domblklist
Displays information about the virtual block devices of a virtual server.
Syntax
►► domblklist [--domain] <VS> [--inactive] [--details] ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--details
Displays additional details, such as the device type and value.
--domain
Specifies the virtual server.
--inactive
Lists the block devices that will be used with the next virtual server reboot.
Usage
Example
# virsh domblklist vserv1
Target Source
------------------------------------------------
vda /dev/disk/by-id/dm-uuid-mpath-36005076305ffc1ae00000000000023be
domblkstat
Displays status information about a virtual block device.
Syntax
►► domblkstat [--domain] <VS> <device-name> [--human] ►◄
Where:
<device-name>
Is the name of the virtual block device.
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
--human
Replaces abbreviations by written-out information.
Usage
Example
Obtain the device names of the block devices of virtual server vserv1:
# virsh domblklist vserv1
Target Source
------------------------------------------------
vda /dev/disk/by-id/dm-uuid-mpath-36005076305ffc1ae00000000000023be
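To display status information for one of these devices, specify its target name,
for example vda:
# virsh domblkstat vserv1 vda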
domcapabilities
| Prints an XML document that describes the hypervisor capabilities.
| Syntax
| ►► domcapabilities [--machine <machine>] ►◄
| Selected options
| <machine>
| Is a supported hypervisor version as listed by the qemu-kvm --machine
| help command. If this option is omitted, the XML document for the default
| hypervisor version is displayed.
| Usage
| Example
| # virsh domcapabilities --machine s390-ccw-virtio-2.8
| <domainCapabilities>
| <path>/usr/bin/qemu-system-s390x</path>
| <domain>kvm</domain>
| <machine>s390-ccw-virtio-2.8</machine>
| <arch>s390x</arch>
| <vcpu max='248'/>
| ...
| <cpu>
| <mode name='host-passthrough' supported='yes'/>
| <mode name='host-model' supported='yes'>
| <model fallback='forbid'>z13s-base</model>
| <feature policy='require' name='aefsi'/>
| <feature policy='require' name='msa5'/>
| <feature policy='require' name='msa4'/>
| ...
| </mode>
| <mode name='custom' supported='yes'>
| <model usable='yes'>z10EC-base</model>
| <model usable='yes'>z9EC-base</model>
| <model usable='yes'>z196.2-base</model>
| ...
| </mode>
| </cpu>
| ...
| </domainCapabilities>
domiflist
Displays network interface information for a running virtual server.
Syntax
►► domiflist [--domain] <VS> [--inactive] ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
--inactive
Lists the interfaces that will be used with the next virtual server reboot.
Usage
Example
# virsh domiflist vserv1
Interface Type Source Model MAC
-------------------------------------------------------
vnet2 network iedn virtio 02:17:12:03:ff:01
domifstat
Displays network interface statistics for a running virtual server.
Syntax
►► domifstat [--domain] <VS> <interface> ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
<interface>
Is the name of the network interface as specified as target dev attribute in
the configuration-XML file.
Selected options
--domain
Specifies the virtual server.
Usage
Example
# virsh domifstat vserv1 vnet0
vnet0 rx_bytes 7766280
vnet0 rx_packets 184904
vnet0 rx_errs 0
vnet0 rx_drop 0
vnet0 tx_bytes 5772
vnet0 tx_packets 130
vnet0 tx_errs 0
vnet0 tx_drop 0
dominfo
Displays information about a virtual server.
Syntax
►► dominfo [--domain] <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
Usage
Example
# virsh dominfo e20
Id: 55
Name: e20
UUID: 65d6cee0-ca0a-d0c1-efc7-faacb8631497
OS Type: hvm
State: running
CPU(s): 2
CPU time: 1.2s
Max memory: 4194304 KiB
Used memory: 4194304 KiB
Persistent: yes
Autostart: enable
Managed save: no
Security model: none
Security DOI: 0
domjobabort
Aborts the currently running virsh command related to the specified virtual server.
Syntax
►► domjobabort [--domain] <VS> ►◄
Where:
<VS> Is the name, ID or UUID of the virtual server.
Selected options
None.
Usage
Example
This example aborts a dump request for vserv1 that is currently running in
another session. Issue the domjobabort command from a second session:
# virsh domjobabort vserv1
In the session that issued the dump request, the command fails:
# virsh dump vserv1 vserv1.txt
error: Failed to core dump domain vserv1 to vserv1.txt
error: operation aborted: domain core dump job: canceled by client
domstate
Displays the state of a virtual server.
Syntax
►► domstate <VS> [--reason] ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--reason
Displays information about the reason why the virtual server entered the
current state.
Usage
Example
# virsh domstate vserv1
crashed
# virsh domstate vserv1 --reason
crashed (panicked)
dump
Creates a virtual server dump on the host.
Syntax
►► dump [--memory-only] <VS> <filename> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
<filename>
Is the name of the target dump file.
Selected options
--memory-only
Issues ELF dumps, which can be inspected by using the crash command.
Usage
Example
This example dumps the virtual server vserv1 to the file dumpfile.name.
# virsh dump --memory-only vserv1 dumpfile.name
dumpxml
Displays the current libvirt-internal configuration of a defined virtual server.
Syntax
►► dumpxml [--domain] <VS> [--inactive] [--security-info] [--update-cpu] [--migratable] ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
--migratable
Displays a version of the current libvirt-internal configuration that is
compatible with older libvirt releases.
--inactive
Displays the persistent configuration that will be used the next time the
virtual server is started, omitting any runtime changes.
--security-info
Includes security-sensitive information.
--update-cpu
Updates the virtual server according to the host CPU.
Usage
Example
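This example displays the current libvirt-internal configuration of the virtual
server vserv1:
# virsh dumpxml vserv1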
edit
Edits the libvirt-internal configuration of a virtual server.
Syntax
►► edit [--domain] <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
Usage
Example
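This example opens the libvirt-internal configuration of the virtual server
vserv1 in an editor:
# virsh edit vserv1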
inject-nmi
Causes a restart interrupt for a virtual server, including a dump on the virtual
server if it is configured accordingly.
Syntax
►► inject-nmi <VS> ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
None.
Usage
Example
This example causes a restart interrupt for the virtual server vserv1 including a
core dump.
# virsh inject-nmi vserv1
iothreadadd
Provides an additional I/O thread for a virtual server.
Syntax
►► iothreadadd [--domain] <VS> [--id] <IOthread-ID> [--config] [--live] [--current] ►◄
Where:
<IOthread-ID>
Is the ID of the I/O thread to be added to the virtual server. The I/O
thread ID must be beyond the range of available I/O threads.
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--config
Affects the virtual server the next time it is restarted.
--current
Affects the current virtual server.
--domain
Specifies the virtual server.
--id Specifies the ID of the I/O thread that will be added to the I/O threads of
the virtual server.
--live Affects the current virtual server only if it is running.
Usage
Example
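This example adds an I/O thread with ID 4 to the running virtual server vserv1:
# virsh iothreadadd vserv1 4 --live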
iothreaddel
Removes an I/O thread from a virtual server.
If the specified I/O thread is assigned to a virtual block device that belongs to the
current configuration of the virtual server, it is not removed.
Syntax
►► iothreaddel [--domain] <VS> [--id] <IOthread-ID> [--config] [--live] [--current] ►◄
Where:
<IOthread-ID>
Is the ID of the I/O thread to be deleted from the virtual server.
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--config
Affects the virtual server the next time it is restarted.
--current
Affects the current virtual server.
--domain
Specifies the virtual server.
--id Specifies the ID of the I/O thread that will be removed from the I/O
threads of the virtual server.
--live Affects the current virtual server only if it is running.
Usage
Example
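This example removes the I/O thread with ID 4 from the running virtual server
vserv1:
# virsh iothreaddel vserv1 4 --live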
iothreadinfo
Displays information about the I/O threads of a virtual server.
Syntax
►► iothreadinfo [--domain] <VS> [--config | --live | --current] ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--config
Affects the virtual server the next time it is restarted.
--current
Affects the current virtual server.
--domain
Specifies the virtual server.
--live Affects the current virtual server only if it is running.
Usage
Example
This example shows the iothreadinfo command for a virtual server with three I/O threads, each with an affinity to the eight host CPUs 0-7:
# virsh iothreadinfo vserv1
IOThread ID CPU Affinity
---------------------------------------------------
1 0-7
2 0-7
3 0-7
list
Browses defined virtual servers.
Syntax
►► list [--all | --inactive | --state-running | --state-paused | --state-shutoff | --state-other]
[--with-snapshot | --without-snapshot] [--transient | --persistent]
[--autostart | --no-autostart] [--with-managed-save | --without-managed-save]
[--name | --id | --uuid | --table] [--managed-save] [--title] ►◄
Selected options
--all Lists all defined virtual servers.
--autostart
Lists all defined virtual servers with autostart enabled.
--inactive
Lists all defined virtual servers that are not running.
--managed-save
Marks virtual servers with a saved system image. Only effective when
--table is specified.
--name
Lists only virtual server names.
--no-autostart
Lists only virtual servers with disabled autostart option.
--persistent
Lists persistent virtual servers.
--state-other
Lists virtual servers in state “shutting down”.
--state-paused
Lists virtual servers in state “paused”.
--state-running
Lists virtual servers in state “running”.
--state-shutoff
Lists virtual servers in state “shut off”.
--table Displays the listing as a table.
Usage
Example
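This example lists all defined virtual servers, including virtual servers that
are not running:
# virsh list --all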
managedsave
Saves the system image of a running or a paused virtual server and terminates it
thereafter. When the virtual server is started again, the saved system image is
resumed.
Per default, the virtual server is in the same state as it was when it was
terminated.
Use the dominfo command to see whether the system image of a shut off virtual
server was saved.
Syntax
►► managedsave [--domain] <VS> [--bypass-cache] [--running | --paused] [--verbose] ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--bypass-cache
Writes the virtual server data directly to the disk, bypassing the file system
cache. This can slow down the save operation, but ensures that the data
reaches the disk sooner, which benefits data integrity.
--running
When you restart the virtual server, it will be running.
--paused
When you restart the virtual server, it will be paused.
--verbose
Displays the progress of the save operation.
Usage
v “Terminating a virtual server” on page 114
v Chapter 27, “Virtual server life cycle,” on page 183
Example
# virsh managedsave vserv1 --running
Domain vserv1 state saved by libvirt
# virsh list
Id Name State
----------------------------------------------------
13 vserv1 running
# virsh list
Id Name State
----------------------------------------------------
13 vserv1 paused
memtune
Specifies a soft limit for the physical host memory requirements of the virtual
server memory.
Syntax
►► memtune [--domain] <VS> [--soft-limit <limit-in-KB>] ►◄
Where:
<limit-in-KB>
Is the minimum physical host memory in kilobytes remaining available for
the virtual server memory in case the physical host memory resources are
reduced.
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--soft-limit
Specifies the minimum physical host memory remaining available for the
virtual server in case the memory resources are reduced.
Note: Do not use the options --hard-limit and --swap-hard-limit. Their use
might lead to a virtual server crash.
Usage
v Chapter 22, “Memory management,” on page 163
v “Managing virtual memory” on page 143
Example
This example allows the host to limit the physical host memory usage of vserv1
memory to 256 MB in case the host is under memory pressure:
# virsh memtune vserv1 --soft-limit 256000
This example displays the memory tuning parameters of vserv1. Be sure not to
modify the hard_limit and swap_hard_limit parameters.
# virsh memtune vserv1
hard_limit : unlimited
soft_limit : 256000
swap_hard_limit: unlimited
migrate
Migrates a virtual server to a different host.
Syntax
►► migrate [--offline] [--live] [--p2p [--tunnelled]] [--persistent]
[--undefinesource] [--suspend] [--change-protection] [--unsafe]
[--verbose] [--auto-converge] [--abort-on-error]
[--copy-storage-all | --copy-storage-inc]
[--migrate-disks <logical-device-name>[,<logical-device-name>...]]
[--domain] <VS> [--desturi] <destination-host>
[--migrateuri <migration-URI>] [--dname <destination-name>]
[--timeout <seconds>] [--xml <XML-filename>] ►◄
Where:
<destination-host>
The libvirt connection URI of the destination host.
Normal migration:
Specify the address of the destination host as seen from the virtual
server.
Peer-to-peer migration:
Specify the address of the destination host as seen from the source
host.
<destination-name>
Is the new name of the virtual server on the destination host.
<logical-device-name>
The logical device name of the virtual block device.
<migration-URI>
The host-specific URI of the destination host.
<VS> Is the name, ID, or UUID of the virtual server.
<XML-filename>
The domain configuration-XML for the source virtual server.
Selected options
--abort-on-error
Causes an abort on soft errors during migration.
--auto-converge
Forces auto convergence during live migration.
--change-protection
Prevents any configuration changes to the virtual server until the migration
ends.
--copy-storage-all
Copies the image files that back virtual block devices to the destination.
Make sure that an image file with the same path and filename exists on the
destination host before you issue the virsh migrate command. The
affected virtual block devices are specified by the --migrate-disks
option.
--copy-storage-inc
Incrementally copies non-read-only image files that back virtual block
devices to the destination. Make sure that an image file with the same path
and filename exists on the destination host before you issue the virsh
migrate command. The affected virtual block devices are specified by the
--migrate-disks option.
--dname
Specifies that the virtual server is renamed during migration (if supported).
--domain
Specifies the virtual server.
--live Specifies the migration of a running or a paused virtual server.
--migrate-disks
Copies the files that back the specified virtual block devices to the
destination host. Use the --copy-storage-all or the --copy-storage-inc
option in conjunction with this option. The affected files must be
writable. Note that virtual DVDs are read-only disks. If in doubt,
check your domain configuration-XML file: if the device attribute of a
disk element is configured as cdrom, or the disk contains a readonly
element, then the disk cannot be migrated.
--migrateuri
Specifies the host specific URI of the destination host.
If not specified, libvirt automatically processes the host specific URI from
the libvirt connection URI. In some cases, it is useful to specify a
destination network interface or port manually.
--offline
Specifies the migration of the virtual server in “shut off” state. A copy of
the libvirt-internal configuration of the virtual server on the source host is
defined on the destination host.
If you specify this option, specify the --persistent option, too.
--persistent
Persistently defines the virtual server on the destination host.
Usage
“Live virtual server migration” on page 127
Example
This example migrates the virtual server vserv1 to the host zhost.
# virsh migrate --auto-converge --timeout 300 vserv1 qemu+ssh://zhost/system
More information
libvirt.org/migration.html
migrate-getspeed
Displays the maximum migration bandwidth for a virtual server in MiB/s.
Syntax
►► migrate-getspeed [--domain] <VS> ►◄
Where:
<VS> Is the name, ID or UUID of the virtual server.
Selected options
None.
Usage
Example
# virsh migrate-getspeed vserv1
8796093022207
migrate-setmaxdowntime
Specifies a tolerable downtime for the virtual server during the migration, which
is used to estimate the point in time at which the virtual server is suspended.
Syntax
►► migrate-setmaxdowntime [--domain] <VS> [--downtime] <milliseconds> ►◄
Where:
<milliseconds>
Is the tolerable downtime of the virtual server during migration in
milliseconds.
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
None.
Usage
Example
This example specifies a tolerable downtime of 100 milliseconds for the virtual
server vserv1 in case it is migrated to another host.
# virsh migrate-setmaxdowntime vserv1 --downtime 100
migrate-setspeed
Sets the maximum migration bandwidth for a virtual server in MiB/s.
Syntax
►► migrate-setspeed [--domain] <VS> [--bandwidth] <mebibyte-per-second> ►◄
Where:
<mebibyte-per-second>
Is the migration bandwidth limit in MiB/s.
<VS> Is the name, ID or UUID of the virtual server.
Selected options
--bandwidth
Sets the bandwidth limit during a migration in MiB/s.
Usage
Example
# virsh migrate-setspeed vserv1 --bandwidth 100
# virsh migrate-getspeed vserv1
100
| net-autostart
| Enables or disables the automatic start of a virtual network when the libvirt
| daemon is started.
| Syntax
| ►► net-autostart [--network] {<network-name> | <network-UUID>} [--disable] ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| --disable
| Disables the automatic start of the virtual network when the libvirt
| daemon is started.
| Usage
| Example
| This example configures the automatic start of virtual network net0 when the
| libvirt daemon is started.
| # virsh net-autostart net0
| net-define
| Creates a persistent definition of a virtual network.
| Syntax
| ►► net-define [--file] <XML-filename> ►◄
| Where:
| <XML-filename>
| Is the name of the network configuration-XML file.
| Selected options
| --file Specifies the network configuration-XML file.
| Usage
| Example
| This example defines the virtual network that is configured by the net0.xml
| network configuration-XML file.
| # virsh net-define net0.xml
| net-destroy
| Deactivates an active virtual network.
| Syntax
| ►► net-destroy [--network] {<network-name> | <network-UUID>} ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| This example shuts down the virtual network with name net0.
| # virsh net-destroy net0
| net-dumpxml
| Displays the current configuration of a virtual network.
| Syntax
| ►► net-dumpxml [--network] {<network-name> | <network-UUID>} [--inactive] ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| --inactive
| Displays the network XML without the automatic expansions in the
| libvirt-internal representation.
| Usage
| Example
| # virsh net-dumpxml net0
| <network>
| <name>net0</name>
| <uuid>fec14861-35f0-4fd8-852b-5b70fdc112e3</uuid>
| <forward mode="nat">
| <nat>
| <port start="1024" end="65535"/>
| </nat>
| </forward>
| <bridge name="virbr0" stp="on" delay="0"/>
| <mac address="aa:25:9e:d9:55:13"/>
| <ip address="192.0.2.1" netmask="255.255.255.0">
| <dhcp>
| <range start="192.0.2.2" end="192.0.2.254"/>
| </dhcp>
| </ip>
| </network>
| net-edit
| Edits the configuration of a defined virtual network. When the update is saved,
| both the libvirt-internal configuration and the network configuration-XML file
| are updated.
| Syntax
| ►► net-edit [--network] {<network-name> | <network-UUID>} ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
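| This example opens the configuration of the virtual network net0 in an editor:
| # virsh net-edit net0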
| net-info
| Displays information about a defined virtual network.
| Syntax
| ►► net-info [--network] {<network-name> | <network-UUID>} ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| # virsh net-info net0
| Name: net0
| UUID: fec14861-35f0-4fd8-852b-5b70fdc112e3
| Active: yes
| Persistent: yes
| Autostart: yes
| Bridge: virbr0
| net-list
| Displays a list of defined virtual networks.
| Syntax
| ►► net-list [--inactive | --all] [--autostart | --no-autostart] [--table | --name | --uuid] ►◄
| Selected options
| --all Displays active and inactive virtual networks.
| --autostart
| Displays only virtual networks that start automatically when the libvirt
| daemon is started.
| --inactive
| Displays only inactive virtual networks.
| --name
| Lists the network names instead of displaying a table of virtual networks.
| --no-autostart
| Displays only virtual networks that do not start automatically when the
| libvirt daemon is started.
| --table Displays the virtual network information in table format.
| --uuid Lists the virtual network UUIDs instead of displaying a table of virtual
| networks.
| Usage
| Example
| # virsh net-list
| Name State Autostart Persistent
| ----------------------------------------------------------
| default active yes yes
| net0 active no yes
| net-name
| Displays the name of a virtual network that is specified with its UUID.
| Syntax
| ►► net-name [--network] <network-UUID> ►◄
| Where:
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| # virsh net-name fec14861-35f0-4fd8-852b-5b70fdc112e3
| net0
| net-start
| Activates a defined, inactive virtual network.
| Syntax
| ►► net-start [--network] {<network-name> | <network-UUID>} ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| This example starts the virtual network with the name net0.
| # virsh net-start net0
| net-undefine
| Deletes the persistent libvirt definition of a virtual network.
| Syntax
| ►► net-undefine [--network] {<network-name> | <network-UUID>} ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| <network-UUID>
| Is the UUID of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| This example removes the virtual network with name net0 from the libvirt
| definition.
| # virsh net-undefine net0
| net-uuid
| Displays the UUID of a virtual network that is specified with its name.
| Syntax
| ►► net-uuid [--network] <network-name> ►◄
| Where:
| <network-name>
| Is the name of the virtual network.
| Selected options
| --network
| Specifies the virtual network.
| Usage
| Example
| # virsh net-uuid net0
| fec14861-35f0-4fd8-852b-5b70fdc112e3
pool-autostart
Enables or disables the automatic start of a storage pool when the libvirt daemon
is started.
Syntax
►► pool-autostart [--pool] {<pool-name> | <pool-UUID>} [--disable] ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
--disable
Disables the automatic start of the storage pool when the libvirt daemon is
started.
Usage
Example
This example specifies the automatic start of storage pool pool1 when the libvirt
daemon is started.
# virsh pool-autostart pool1
pool-define
Creates a persistent definition of a storage pool configuration.
Syntax
►► pool-define [--file] <XML-filename> ►◄
Where:
<XML-filename>
Is the name of the storage pool configuration-XML file.
Selected options
--file Specifies the storage pool configuration-XML file.
Usage
Example
This example defines the storage pool that is configured by the storage pool
configuration-XML file named pool1.xml.
# virsh pool-define pool1.xml
pool-delete
Deletes the volumes of a storage pool.
Attention: This command is intended for expert users. Depending on the pool
type, the results range from no effect to loss of data. In particular, data is lost
when a zfs or LVM group pool is deleted.
Syntax
►► pool-delete [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Chapter 19, “Managing storage pools,” on page 151
Example
This example deletes the volumes of storage pool pool1.
# virsh pool-delete pool1
pool-destroy
Shuts down a storage pool.
Syntax
►► pool-destroy [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
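This example shuts down the storage pool pool1:
# virsh pool-destroy pool1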
pool-dumpxml
Displays the current libvirt-internal configuration of a storage pool.
Syntax
►► pool-dumpxml [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh pool-dumpxml default
<pool type="dir">
<name>default</name>
<uuid>09382b31-03ac-6726-45be-dfcaaf7b01cc</uuid>
<capacity unit="bytes">243524067328</capacity>
<allocation unit="bytes">109275693056</allocation>
<available unit="bytes">134248374272</available>
<source>
</source>
<target>
<path>/var/lib/libvirt/images</path>
<permissions>
<mode>0711</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
pool-edit
Edits the libvirt-internal configuration of a defined storage pool.
Syntax
►► pool-edit [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
This example edits the libvirt-internal configuration of the storage pool pool1.
# virsh pool-edit pool1
pool-info
Displays information about a defined storage pool.
Syntax
►► pool-info [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh pool-info pool1
pool-list
Displays a list of defined storage pools.
Syntax
►► pool-list [--inactive | --all] [--persistent | --transient]
[--autostart | --no-autostart] [--details]
[--type {dir | fs | netfs | logical}[,...]] ►◄
Selected options
--all Displays all defined storage pools.
--autostart
Displays all storage pools that start automatically when the libvirt daemon
is started.
--details
Displays pool persistence and capacity related information.
--inactive
Displays all inactive storage pools.
--no-autostart
Displays all storage pools that do not start automatically when the libvirt
daemon is started.
--persistent
Displays all persistent storage pools.
--transient
Displays all transient storage pools.
--type Displays all storage pools of the specified types.
Usage
Example
# virsh pool-list
pool-name
Displays the name of a storage pool specified by its UUID.
Syntax
►► pool-name [--pool] <pool-UUID> ►◄
Where:
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh pool-name bc403958-a355-4d3c-9d5d-872c16b205ca
pool-refresh
Updates the volume list of a storage pool.
Syntax
►► pool-refresh [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
This example updates the list of volumes contained in storage pool pool1.
# virsh pool-refresh pool1
pool-start
Starts a defined inactive storage pool.
Syntax
►► pool-start [--pool] {<pool-name> | <pool-UUID>} [--build [--overwrite | --no-overwrite]] ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
--build
Creates the directory or the file system or creates the label for a disk
(depending on the storage pool type) before starting the storage pool.
--overwrite
Only valid for storage pools of type dir, fs, or netfs.
Creates the directory, the file system or the label, overwriting existing ones.
--no-overwrite
Only valid for storage pools of type dir, fs, or netfs.
Creates the directory or the file system only if it does not exist. Returns an
error if it does exist.
Usage
Chapter 19, “Managing storage pools,” on page 151
Example
This example starts storage pool pool1.
# virsh pool-start pool1
pool-undefine
Deletes the persistent libvirt definition of a storage pool.
Syntax
►► pool-undefine [--pool] {<pool-name> | <pool-UUID>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
This example removes storage pool pool1 from the libvirt definition.
# virsh pool-undefine pool1
pool-uuid
Displays the UUID of a storage pool specified by its name.
Syntax
►► pool-uuid [--pool] <pool-name> ►◄
Where:
<pool-name>
Is the name of the storage pool.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh pool-uuid pool1
reboot
Reboots a guest using the current libvirt-internal configuration.
To make virtual server configuration changes effective, shut down the virtual
server and start it again instead of rebooting it.
Syntax
►► reboot <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Example
# virsh reboot vserv1
Domain vserv1 is being rebooted
resume
Resumes a virtual server from the paused to the running state.
Syntax
►► resume <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
None.
Usage
Example
# virsh list
Id Name State
----------------------------------------------------
13 vserv1 paused
# virsh resume vserv1
Domain vserv1 resumed
# virsh list
Id Name State
----------------------------------------------------
13 vserv1 running
schedinfo
Displays scheduling information about a virtual server, and can modify the portion
of CPU time that is assigned to it.
Syntax
►► schedinfo <VS> [--live | --config] [cpu_shares=<number>] ►◄
Where:
<number>
Specifies the CPU weight.
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--live Specifies the modification of the current CPU weight of the running virtual
server.
--config
Specifies the modification of the virtual server's CPU weight after the next
restart.
Usage
Examples
This example sets the CPU weight of the running virtual server vserv1 to 2048.
# virsh schedinfo vserv1 --live cpu_shares=2048
This example modifies the domain configuration-XML, which will be effective from
the next restart.
# virsh schedinfo vserv1 --config cpu_shares=2048
This example displays scheduling information about the virtual server vserv1.
# virsh schedinfo vserv1
Scheduler : posix
cpu_shares : 1024
vcpu_period : 100000
vcpu_quota : -1
emulator_period: 100000
emulator_quota : -1
shutdown
Properly shuts down a running virtual server.
Syntax
►► shutdown [--domain] <VS> ►◄
Where:
<VS> Is the name, the ID, or the UUID of the virtual server.
Selected options
--domain
Specifies the virtual server.
Usage
v Chapter 1, “Overview,” on page 3
v “Terminating a virtual server” on page 114
Example
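This example properly shuts down the virtual server vserv1:
# virsh shutdown vserv1
Domain vserv1 is being shutdown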
setvcpus
Changes the number of virtual CPUs of a virtual server.
Syntax
►► setvcpus [--domain] <VS> <count> [--maximum] [--live | --config | --current] ►◄
Where:
<count>
If the --maximum option is not specified:
Specifies the actual number of virtual CPUs which are made
available for the virtual server.
This value is limited by the maximum number of virtual CPUs.
This number is configured with the vcpu element and can be
modified during operation. If no number is specified, the
maximum number of virtual CPUs is 1.
If <count> is less than the actual number of available virtual CPUs,
specify the --config option to remove the appropriate number of
virtual CPUs with the next virtual server reboot. Until then, the
virtual server user might set the corresponding number of virtual
CPUs offline.
If the --maximum option is specified:
Specifies the maximum number of virtual CPUs which can be
made available after the next virtual server reboot.
Do not specify more virtual CPUs than available host CPUs.
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--config
Changes the number the next time the virtual server is started.
--current, --live
Changes the number of available virtual CPUs immediately.
--domain
Specifies the virtual server.
--maximum
Changes the maximum number of virtual CPUs that can be made available
after the next virtual server reboot.
Usage
Example
This example persistently adds a virtual CPU to the running virtual server
vserv1. The vcpucount command shows the initial CPU configuration:
# virsh vcpucount vserv1
maximum config 5
maximum live 5
current config 3
current live 3
# virsh setvcpus vserv1 4 --live --config
start
Starts a defined virtual server that is shut off or crashed.
Syntax
►► start [--domain] <VS> [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--autodestroy
Destroys the virtual server when virsh disconnects from libvirt.
--bypass-cache
Does not load the virtual server from the cache.
--console
Connects to a configured pty console.
--domain
Specifies the virtual server.
--force-boot
Any saved system image is discarded before booting.
--paused
Suspends the virtual server as soon as it is started.
Usage
v Chapter 1, “Overview,” on page 3
v “Starting a virtual server” on page 114
v “Connecting to the console of a virtual server” on page 149
Example
This example starts virtual server vserv1 with initial console access.
# virsh start vserv1 --console
Domain vserv1 started
suspend
Transfers a virtual server from the running to the paused state.
Syntax
►► suspend <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
None.
Usage
Example
# virsh suspend vserv1
Domain vserv1 suspended
# virsh list
Id Name State
----------------------------------------------------
13 vserv1 paused
undefine
Deletes a virtual server from libvirt.
Syntax
►► undefine <VS> ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
None.
Usage
v Chapter 1, “Overview,” on page 3
v “Undefining a virtual server” on page 111
Example
This example removes virtual server vserv1 from the libvirt definition.
# virsh undefine vserv1
vcpucount
Displays the number of virtual CPUs associated with a virtual server.
Syntax
►► vcpucount [--domain] <VS> [--maximum | --active] [--config | --live | --current] ►◄
Where:
<VS> Is the name, ID, or UUID of the virtual server.
Selected options
--active
Displays the number of virtual CPUs being used by the virtual server.
--config
Displays the number of virtual CPUs available to an inactive virtual server
the next time it is restarted.
--current
Displays the number of virtual CPUs for the current virtual server.
--domain
Specifies the virtual server.
--live Displays the number of CPUs for the active virtual server.
--maximum
Displays the maximum number of virtual CPUs that can be made available
for the virtual server.
Usage
Example
# virsh vcpucount vserv1
maximum config 5
maximum live 5
current config 3
current live 3
vol-create
Creates a volume for a storage pool from a volume configuration-XML file.
Syntax
►► vol-create [--pool] <pool-name> [--file] <volume-XML-filename> ►◄
Where:
<pool-name>
Is the name of the storage pool.
<volume-XML-filename>
Is the name of the volume configuration-XML file.
Selected options
None.
Usage
Example
# virsh vol-create pool1 vol1.xml
vol-delete
Removes a volume from a storage pool.
Syntax
►► vol-delete [--pool {<pool-name> | <pool-UUID>}] {<vol-name> | <vol-key> | <vol-path>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-key>
Is the key of the volume.
<vol-name>
Is the name of the volume.
<vol-path>
Is the path of the volume.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh vol-delete --pool pool1 vol1
vol-dumpxml
Displays the current libvirt-internal configuration of a storage volume.
Syntax
►► vol-dumpxml [--pool {<pool-name> | <pool-UUID>}] {<vol-name> | <vol-key> | <vol-path>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-key>
Is the key of the volume.
<vol-name>
Is the name of the volume.
<vol-path>
Is the path of the volume.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh vol-dumpxml --pool default federico.img
<volume type="file">
<name>federico.img</name>
<key>/var/lib/libvirt/images/federico.img</key>
<source>
</source>
<capacity unit="bytes">12582912000</capacity>
<allocation unit="bytes">2370707456</allocation>
<target>
<path>/var/lib/libvirt/images/federico.img</path>
<format type="qcow2"/>
<permissions>
<mode>0600</mode>
<owner>0</owner>
<group>0</group>
</permissions>
<timestamps>
<atime>1481535271.342162944</atime>
<mtime>1481292068.444109102</mtime>
<ctime>1481292068.916109091</ctime>
</timestamps>
</target>
</volume>
vol-info
Displays information about a defined volume.
Syntax
►► vol-info [--pool {<pool-name> | <pool-UUID>}] {<vol-name> | <vol-key> | <vol-path>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-key>
Is the key of the volume.
<vol-name>
Is the name of the volume.
<vol-path>
Is the path of the volume.
Selected options
--pool Specifies the storage pool.
Usage
Example
# virsh vol-info --pool pool1 vol1
vol-key
Displays the key of a volume from its name or path.
Syntax
►► vol-key [--pool {<pool-name> | <pool-UUID>}] {<vol-name> | <vol-path>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-name>
Is the name of the volume.
<vol-path>
Is the path of the volume.
Selected options
--pool Specifies the storage pool.
Usage
Example
This example displays the volume key of vol1 as a volume of storage pool pool1.
# virsh vol-key --pool pool1 vol1
/var/lib/libvirt/images/federico.img
vol-list
Displays a list of the volumes of a storage pool.
Syntax
►► vol-list [--pool] {<pool-name> | <pool-UUID>} [--details] ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
Selected options
--details
Displays volume type and capacity related information.
--pool Specifies the storage pool.
Usage
Example
# virsh vol-list --pool pool1
vol-name
Displays the name of a volume from its key or path.
Syntax
►► vol-name [--pool {<pool-name> | <pool-UUID>}] {<vol-key> | <vol-path>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-key>
Is the key of the volume.
<vol-path>
Is the path of the volume.
Selected options
--pool Specifies the storage pool.
Usage
Example
This example displays the volume name of vol1 as a volume of storage pool pool1.
# virsh vol-name --pool pool1 /var/lib/libvirt/images/federico.img
vol1
vol-path
Displays the path of a volume from its name or key.
Syntax
►► vol-path [--pool {<pool-name> | <pool-UUID>}] {<vol-name> | <vol-key>} ►◄
Where:
<pool-name>
Is the name of the storage pool.
<pool-UUID>
Is the UUID of the storage pool.
<vol-key>
Is the key of the volume.
<vol-name>
Is the name of the volume.
Selected options
None.
Usage
Example
This example displays the volume path of vol1 as a volume of storage pool pool1.
# virsh vol-path --pool pool1 vol1
/var/lib/libvirt/images/federico.img
vol-pool
Displays the name or the UUID of the storage pool containing a given volume.
Syntax
►► vol-pool [--uuid] {<vol-key> | <vol-name> | <vol-path>} ►◄
Where:
<vol-key>
Is the key of the volume.
<vol-name>
Is the name of the volume.
<vol-path>
Is the path of the volume.
Selected options
--uuid Returns the UUID of the storage pool instead of its name.
Usage
Example
# virsh vol-pool vol1
pool1
KVM guests use qclib and GCC inline assembly to run the emulated
instruction. For an example, see arch/s390/kvm/sthyi.c in the Linux source tree.
Header section
Length  Data Type  Offset (dec)  Name      Contents
1       Bitstring  0             INFHFLG1  Header Flag Byte 1
Format partition section
Length  Data Type                Offset (dec)  Name      Contents
1       Bitstring                0             INFPFLG1  Partition Flag Byte 1
                                                         0x80  Multithreading (MT) is enabled.
1       Bitstring                1             INFPFLG2  Partition Flag Byte 2, reserved for IBM use
1       Bitstring                2             INFPVAL1  Partition Validity Byte 1
                                                         0x80  INFPSCPS, INFPDCPS, INFPSIFL, and INFPDIFL contain valid counts.
                                                         0x40  INFPWBCP and INFPWBIF are valid.
                                                         0x20  INFPABCP and INFPABIF are valid.
                                                         0x10  A SYSIB 2.2.2 was obtained from STSI, and INFPPNUM and INFPPNAM are valid.
                                                         0x08  INFPLGNM, INFPLGCP, and INFPLGIF are valid.
1       Bitstring                3             INFPVAL2  Partition Validity Byte 2, reserved for IBM use
2       Unsigned Binary Integer  4             INFPPNUM  Logical partition number
2       Unsigned Binary Integer  6             INFPSCPS  Number of shared logical CPs configured for this partition. Count of cores when MT is enabled.
2       Unsigned Binary Integer  8             INFPDCPS  Number of dedicated logical CPs configured for this partition. Count of cores when MT is enabled.
2       Unsigned Binary Integer  10            INFPSIFL  Number of shared logical IFLs configured for this partition. Count of cores when MT is enabled.
2       Unsigned Binary Integer  12            INFPDIFL  Number of dedicated logical IFLs configured for this partition. Count of cores when MT is enabled.
2                                14                      Reserved for future IBM use
8       EBCDIC                   16            INFPPNAM  Logical partition name
4       Unsigned Binary Integer  24            INFPWBCP  Partition weight-based capped capacity for CPs, a scaled number where X'00010000' represents one core. Zero if not capped.
4       Unsigned Binary Integer  28            INFPABCP  Partition absolute capped capacity for CPs, a scaled number where X'00010000' represents one core. Zero if not capped.
Part 8. Appendixes
Documentation accessibility
See the IBM Human Ability and Accessibility Center for more information about
the commitment that IBM has to accessibility at
www.ibm.com/able
Notices
When you send information to IBM, you grant IBM a nonexclusive right to use or
distribute the information in any way it believes appropriate without incurring any
obligation to you.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
command (continued)
  start openvswitch 43
  status openvswitch 43
  virsh attach-device 146, 268
  virsh change-media 148, 270
  virsh console 149, 272
  virsh define 110, 273
  virsh destroy 114, 274
  virsh detach-device 147, 275
  virsh domblklist 120, 277
  virsh domblkstat 120, 278
  virsh domcapabilities 280
  virsh domiflist 120, 281
  virsh domifstat 120, 282
  virsh dominfo 120, 283
  virsh domjobabort 134, 284
  virsh domstate 120, 285
  virsh dump 175, 286
  virsh dumpxml 88, 122, 287
  virsh edit 110, 288
  virsh inject-nmi 289
  virsh iothreadadd 290
  virsh iothreaddel 292
  virsh iothreadinfo 294
  virsh list 111, 120, 145, 149, 295
  virsh managedsave 114, 297
  virsh memtune 299
  virsh migrate 134, 300
  virsh migrate-getspeed 134, 303
  virsh migrate-setmaxdowntime 134
  virsh migrate-setspeed 134, 305
  virsh net-autostart 306
  virsh net-define 307
  virsh net-destroy 308
  virsh net-dumpxml 309
  virsh net-edit 310
  virsh net-info 311
  virsh net-list 312
  virsh net-name 313
  virsh net-start 314
  virsh net-undefine 315
  virsh net-uuid 316
  virsh pool-autostart 317
  virsh pool-define 318
  virsh pool-delete 319
  virsh pool-destroy 320
  virsh pool-dumpxml 321
  virsh pool-edit 322
  virsh pool-info 323
  virsh pool-list 152, 324
  virsh pool-name 325
  virsh pool-refresh 326
  virsh pool-start 327
  virsh pool-undefine 328
  virsh pool-uuid 329
  virsh reboot 330
  virsh resume 116, 331
  virsh schedinfo 120, 141, 332
  virsh setvcpus 138, 334
  virsh shutdown 114, 333
  virsh start 114, 149, 336
  virsh suspend 116, 338
  virsh undefine 111, 339
  virsh vcpucount 120, 138, 340
  virsh vol-create 341
  virsh vol-delete 342
  virsh vol-dumpxml 343
  virsh vol-info 344
  virsh vol-key 345
  virsh vol-list 153, 346
  virsh vol-name 347
  virsh vol-path 348
  virsh vol-pool 349
  zipl 53, 55
  znetconf 37, 38, 43, 98
concurrency 132
concurrent connections 132
CONFIG_HAVE_PERF_EVENTS 177
CONFIG_PERF_EVENTS 177
CONFIG_TRACEPOINTS 177
configuration
  libvirt-internal 122
  of devices 4
  of virtual servers 4
configuration file
  libvirt 173
  of the OpenSSH SSH daemon configuration file 132
configuration topology 9, 14
configuration-XML file 9, 14, 75
  of a device 4, 9, 14, 76
  of a domain 4, 9, 14, 51
  of a storage pool 103
  of a volume 103
configuring
  an ISO image as IPL device 54
  bonded interfaces 98
  boot devices 49, 53
  boot process 53, 54, 55
  boot process, network 56
  consoles 70
  CPU model 63
  CPUs 49, 61
  DASDs 78
  devices 4, 75
  devices with virtual server 69
  Ethernet interfaces 98
  FC-attached SCSI tape devices 90
  I/O threads 69
  image files as storage devices 84
  logical volumes 167
  memory 49, 65
  multipath device mapper support 29
  network devices 49
  network interfaces 98
  operating systems 49
  physical volumes 167
  protected key encryption 72
  random number generators 102
  removable ISO images 95
  SCSI disks 78
  SCSI medium changer devices 88, 90
  SCSI tapes 88, 90
  storage devices 49
  storage pools 103
  user space 49, 68
  virtual CPUs 61
  virtual Host Bus Adapters 88
  virtual memory 65
  virtual networks 105
  virtual servers 4
  virtual switches 98, 100
  volumes as storage devices 86
  watchdog devices 71
device mapper-created device node 9, 14, 29, 78 drive value
device name 9, 14 of the address type attribute 95
logical 78, 84, 86 driver element 76, 78, 84, 86, 95, 209, 210
standard 9, 14 cache attribute 78, 84, 86, 95, 210
device node 9, 14, 78 default value 210
device mapper 9, 14 directsync value 210
device mapper-created 29, 78 none value 84, 86, 95, 210
standard 9, 14, 78 unsafe value 210
udev-created 9, 14, 78 writeback value 210
device number 9, 14 writethrough value 84, 86, 210
of a virtual block device 9, 14, 78, 84, 86 error_policy attribute 210
of an FCP device 9, 14 enospace value 210
device type 9, 14, 75 ignore value 210
device-mapper multipathing 29 report value 210
devices element 51, 205 stop value 210
devno attribute event_idx attribute 210
of the address element 78, 84, 86, 88, 192 off value 210
dhcp element 206 on value 210
DIAG event 177 io attribute 78, 84, 86, 95, 210
diagnose 177 native value 84, 86, 95, 210
direct connection 98 of the driver element 84, 86
direct MacVTap connection 40 threads value 210
direct value ioeventfd attribute 210
of the interface type attribute 98, 220 off value 210
directsync value on value 210
of the driver cache attribute 210 iothread attribute 78, 209
dirty pages 133 name attribute 78, 84, 86, 95, 210
disable value qemu value 95, 210
of the feature policy attribute 63, 213 rerror_policy attribute 210
disabling ignore value 210
protected key encryption 72 report value 210
disk 75 stop value 210
disk element 76, 78, 84, 86, 95, 207 type attribute 78, 84, 86, 95, 210
device attribute 78, 84, 86, 95, 207 qcow2 value 84, 86
cdrom value 95, 207 raw value 84, 86, 95, 210
disk value 78, 84, 86, 207 driver value
type attribute 78, 84, 86, 95, 207 of the error_policy attribute 210
block value 78, 207 of the rerror_policy attribute 210
file value 84, 86, 95, 207 dump 175
disk migration 129, 134 configurable 67
disk value dump file 175
of the disk device attribute 78, 84, 86, 207 dump location 175
displaying dump virsh command 175, 286
block devices 120 --memory-only option 175, 286
information about a virtual server 120 dumpCore attribute
network interfaces 120 of the memory element 67
performance metrics 177 dumping
scheduling information 120 virtual servers 175
states 120 dumpxml virsh command 88, 122, 287
the libvirt-internal configuration 122 DVD 75
domain 3, 4
domain configuration-XML 4, 9, 14
domain configuration-XML file 4, 49, 51, 75
child elements 51
E
edit virsh command 110, 288
root element 51
editing
domain element 208
libvirt-internal configurations 110
type attribute 51, 208
persistent virtual server definitions 110
domblklist virsh command 120, 277
emulator element 68, 212
domblkstat virsh command 120, 278
enabling
domcapabilities virsh command 280
bridge port roles 43
domiflist virsh command 120, 281
encryption 72
domifstat virsh command 120, 282
enospace value
dominfo virsh command 120, 283
of the driver error_policy attribute 210
domjobabort virsh command 134, 284
environment variable
domstate virsh command 120, 285
$EDITOR 110
--reason option 120, 285
$VISUAL 110
interface element (continued) libvirt XML attribute
type attribute (continued) adapter name 90
direct value 98, 220 address bus 90, 95, 193, 194
network value 220 address controller 90, 95, 193
interface name 37 address cssid 78, 84, 86, 88, 192
io attribute address devno 78, 84, 86, 88, 192
of the driver element 78, 95, 210 address ssid 78, 84, 86, 88, 192
IOCDS 27, 37, 38 address target 90, 95, 193, 194
ioeventfd attribute address type 78, 84, 86, 88, 90, 95, 192, 193
of the driver element 210 address unit 90, 95, 193, 194
iothread attribute backend model 195
of the driver element 78, 209 boot loadparm 196
iothreadadd virsh command 290 boot order 53, 196
--config option 290 bridge name 197
--current option 290 cipher name 72, 198
--id option 290 cipher state 72, 198
--live option 290 console type 70, 200
iothreaddel virsh command 292 controller index 88, 201
--config option 292 controller model 88, 201
--current option 292 controller ports 201
--id option 292 controller type 88, 201
--live option 292 controller vectors 201
iothreadinfo virsh command 294 cpu match 202
--config option 294 cpu mode 202
--current option 294 device path 204
--live option 294 disk device 78, 84, 86, 95, 207
iothreads element 51, 69, 78, 221 disk type 78, 84, 86, 95, 207
IP address 37 domain type 51, 208
ip command 37, 38 driver cache 78, 84, 86, 95, 210
ip element 222 driver error_policy 210
address attribute 222 driver event_idx 210
netmask attribute 222 driver io 78, 84, 86, 95, 210
IPL 53, 114 driver ioeventfd 210
ISO image 17, 58, 95, 148 driver iothread 78, 209
as IPL device 54 driver name 78, 84, 86, 95, 210
removable 95, 148 driver rerror_policy 210
driver type 78, 84, 86, 95, 210
feature name 213
K feature policy 213
format type 214
kdump 175
forward mode 215
keepalive interval
geometry cyls 216
of the virsh command 134
geometry heads 216
kernel element 51, 53, 55, 223
geometry secs 216
kernel image file 53, 55, 58
geometry trans 216
kernel parameters 53, 55
host name 217
specifying 55
hostdev mode 90, 218
key element 224
hostdev rawio 218
keywrap element 72, 225
hostdev sgio 218
KVM 6
hostdev type 90, 218
KVM guest x, 3, 4
interface trustGuestRxFilters 220
KVM host x
interface type 98, 100, 220
kvm kernel module 6
ip address 222
kvm value
ip netmask 222
of the domain type attribute 51, 208
log append 70, 226
KVM virtual server x, 3, 4
log file 70, 226
kvm_s390_sie_enter tracepoint event 177
mac address 98, 100
memballoon model 74, 228
memory dumpCore 67
L memory unit 65
layer 2 mode 38 model type 100
lba value pool type 242
of the geometry trans attribute 216 source bridge 100
libvirt 4, 6, 33 source dev 78, 98, 249
libvirt configuration file 173 source file 84, 86, 95, 249
libvirt daemon 113, 145 source mode 98
starting 113, 145 source pool 249
log messages 173
logging
   console output 70
logging level 173
logical device name 78, 84, 86
logical volume
   configuration 167
   management 167
Logical Volume Manager 78, 167
ls command 78
lscss command 33, 78
lsdasd command 27
lsqeth command 38, 43
lsscsi command 29
lsscss command 29
lszfcp command 29, 33
LUN 33
LVM 78, 167

M
MAC address 21, 43, 100
mac element 76, 98, 100, 227
   address attribute 98, 100, 227
machine attribute
   of the target element 258
   of the type element 51, 126
machine type
   alias value 126
machine type of the virtual server 132
MacVTap
   direct connection 40
   kernel modules 40
   network device driver 21
MacVTap interface 21, 98
   preparing 40
   setting up 40
makedumpfile command 175
managedsave state 114, 183
managedsave virsh command 114, 297
   --bypass-cache option 297
   --paused option 297
   --running option 297
   --verbose option 297
managing
   devices 145
   storage pools 152
   system resources 137
   virtual CPUs 138
   virtual memory 143
   virtual servers 113
   volumes 153
mandatory value
   of the source startupPolicy attribute 249
mapping a virtio block device to a host device 9, 14
master bonded interface 40
match attribute
   of the cpu element 63, 202
maximum downtime 133
maximum number of available virtual CPUs 138
MaxStartups parameter 132
memballoon element 74, 228
   model attribute 74, 228
   none value 74, 228
memory 65
   configuring 65
memory balloon device 74, 75
memory element 51, 65, 67, 229
   dumpCore attribute 67, 229
   unit attribute 65, 229
memory intensive workloads 133, 134
memtune element 65
memtune virsh command 299
   --soft-limit option 299
migrate virsh command 134, 300
   --auto-converge option 134
   --timeout option 134
migrate-getspeed virsh command 134, 303
migrate-setmaxdowntime virsh command 134
migrate-setspeed virsh command 134, 305
   --bandwidth option 134, 305
migrated reason
   of the running state 185
migrating
   CPUs 159
   image files 129
   running virtual servers 127, 134
   storage 9, 14
   virtual servers 125
migrating reason
   of the paused state 186
migration 159
   of a running virtual server to another host 78, 90
      See live migration
   of the storage server 9, 14, 78
   of virtual disks
      See disk migration
   preparing virtual servers for 9, 14, 29
   to a different hypervisor release 126, 127
   to a different IBM Z model 128
migration costs 159
migration port range 132
migration_port_max parameter 132
mode attribute 218
   of the cpu element 63, 202
   of the forward element 215
   of the hostdev element 90, 218
   of the source element 98, 252
   subsystem value 218
model attribute
   of the backend element 195
   of the controller element 88, 201
   of the memballoon element 74, 228
   of the rng element 245
   of the watchdog element 263
model element 63, 76, 98, 100, 232, 233
   type attribute 98, 100, 233
   virtio value 98, 100
model, CPU 63
modifying
   persistent virtual server definitions 110
   the CPU weight 141
   the number of virtual CPUs 138
   virtual server definitions 109
modprobe command 40
monitoring
   virtual servers 119
multipath command 29, 78
multipath device mapper support 9, 14, 29
   configuring 29
   preparing 29
multipathed SCSI device configuration
   example 93
perf tool 177
perf.data.guest 177
performance considerations 132
performance metrics
   collecting 177
   displaying 177
   recording 177
performing
   a live migration 134
physical volume
   configuration 167
   filter 167
policy attribute
   of the feature element 63, 213
pool attribute
   of the source element 249
pool element 242
   type attribute 242
pool-autostart virsh command 317
pool-define virsh command 318
pool-delete virsh command 319
pool-destroy virsh command 320
pool-dumpxml virsh command 321
pool-edit virsh command 322
pool-info virsh command 323
pool-list virsh command 152, 324
pool-name virsh command 325
pool-refresh virsh command 326
pool-start virsh command 327
pool-undefine virsh command 328
pool-uuid virsh command 329
port attribute
   of the target element 254
port range for migration 132
ports attribute
   of the controller element 201
preparing
   bonded interfaces 40
   DASDs 27
   devices for virtual servers 4
   host devices 27
   MacVTap interfaces 40
   multipath device mapper support 29
   network devices 37
   network interfaces 40
   physical volumes 167
   SCSI disks 29
   SCSI medium changer devices 33
   SCSI tapes 33
   virtual Ethernet devices 37
   virtual LAN interfaces 40
   virtual servers for migration 9, 14, 29
   virtual switches 37, 43
   VLAN interfaces 40
preserve text content
   of the on_crash element 51, 237
   of the on_reboot element 238
preserve value
   of the on_crash element 51
primary bridge port 43
property
   name 49
protected key 72
   management operations 72
providing
   I/O threads 69, 146
   ISO images 148
pty value
   of the console type attribute 70, 200
pvscan command 167

Q
qcow2 image file 84, 86
qcow2 value
   of the driver type attribute 84, 86
   of the format type attribute 214
QEMU 6, 9, 14, 33
QEMU command
   info 351
qemu-img create 84, 86
QEMU core dump
   configuring 67
qemu value
   of the driver name attribute 78, 84, 86, 95, 210
qemu-kvm command 126
qemu-system-s390x user space process 68
qethconf 43

R
ramdisk 53, 55
random number generator 75
   configuring 102
random value
   of the backend model attribute 195
raw image file 84, 86
raw value
   of the driver type attribute 78, 84, 86, 95, 210
   of the format type attribute 214
rawio attribute 218
   no value 218
   of the hostdev element 218
   yes value 218
readonly element 95, 244
reboot virsh command 330
record subcommand
   of perf kvm stat 177
recording
   performance metrics 177
redundant paths 9, 14, 17, 21
relocating
   virtual servers
      See live migration
relocation
   See live migration
removable ISO image
   configuring 95
   removing 148
   replacing 148
removing
   I/O threads 147
   ISO images 148
replacing
   ISO images 148
report subcommand
   of perf kvm stat 177
report value
   of the driver error_policy attribute 210
   of the driver rerror_policy attribute 210
require value
   of the feature policy attribute 63, 213
standard device name 9, 14
   of a SCSI medium changer 17
   of a SCSI tape 17
standard device node 9, 14, 78
standard interface name 37
start openvswitch command 43
start virsh command 114, 149, 336
   --console option 114, 149
   --force-boot option 114
starting
   libvirt daemon 113, 145
   virtual servers 4, 114
startupPolicy attribute
   of the source element 249
state 4, 183
   crashed 183, 187
   displaying 120
   managedsave 114, 183
   paused 4, 183, 186
   running 4, 183, 185
   shut off 4, 111, 183, 184
   shutting down 183
state attribute
   of the cipher element 72, 198
state-transition diagram 183
   simplified 4
status openvswitch command 43
STHYI instruction 353
stop value
   of the driver error_policy attribute 210
   of the driver rerror_policy attribute 210
stopped phase of a migration 133
storage controller 9, 14
   port 9, 14
storage keys 133
storage migration 78, 167
storage pool
   configuring 103
   managing 152
storage pool configuration-XML file 103
Store Hypervisor Information instruction 353
subchannel set-ID 9, 14, 78, 84, 86
subsystem value
   of the hostdev mode attribute 90, 218
suspend virsh command 116, 338
suspending
   virtual servers 4, 116
symmetric encryption 72
sysfs attribute
   bridge_state 43
system image 297
   saved 114
   saving 114
system journal 173
system resources
   configuring 49
   managing 137

T
tape 75
target attribute
   of the address element 90, 95, 193, 194
target element 70, 76, 78, 84, 86, 95, 254, 255, 256, 257
   address attribute 254
   bus attribute 78, 84, 86, 255
      scsi value 95
      virtio value 78, 84, 86, 255
   dev attribute 78, 84, 86, 95, 255
   port attribute 254
   type attribute 70, 254
      sclp value 70, 254
      virtio value 70, 254
TDEA 72
TDES 72
terminating
   virtual servers 4, 114
threads value
   of the driver io attribute 210
topology 9, 14
trans attribute
   of the geometry element 216
Triple DEA 72
Triple DES 72
trustGuestRxFilters attribute
   of the interface element 98, 220
tuning
   virtual CPUs 62
   virtual memory 65
type
   of the virtual channel path 75
type attribute 218
   of the address element 78, 84, 86, 88, 90, 95, 192, 193
   of the console element 70, 200
   of the controller element 88, 201
   of the disk element 78, 84, 86, 95, 207
   of the domain element 51, 208
   of the driver element 78, 84, 86, 95, 210
   of the format element 214
   of the hostdev element 90, 218
   of the interface element 98, 100, 220
   of the model element 98, 100, 233
   of the pool element 242
   of the target element 70, 254
   of the virtualport element 100, 260, 261
   of the volume element 262
   scsi value 193, 201, 218
   virtio-serial value 201
type element 51, 258
   arch attribute 258
   machine attribute 258

U
udev-created by-path device node 78
udev-created device node 9, 14, 78
UID 9, 14
undefine virsh command 111, 339
undefined virtual server 4
undefining
   virtual servers 4, 111
unfiltered value
   of the hostdev sgio attribute 218
unique ID 9, 14
unit 33
unit attribute
   of the address element 90, 95, 193, 194
   of the memory element 65
   of the soft_limit element 248
universally unique identifier 9, 14
unknown reason
   of the shut off state 184
virsh command option (continued)
   --persistent 275, 295
   --reason 120, 285
   --running 297
   --safe 272
   --security-info 287
   --state-other 295
   --state-paused 295
   --state-running 295
   --state-shutoff 295
   --table 295, 312
   --title 295
   --transient 295
   --update 270
   --update-cpu 287
   --uuid 295, 312
   --verbose 297
   --with-managed-save 295
   --with-snapshot 295
   --without-managed-save 295
   --without-snapshot 295
virsh command-line interface 6
virtio 4
   block device 9, 14, 78, 84, 86
   block device driver 4
   device driver 4
   network device driver 4
virtio value
   of the model type attribute 98, 100
   of the target bus attribute 78, 84, 86, 255
   of the target type attribute 70, 254
virtio-block device 75
virtio-net device 75
virtio-scsi device 75
virtio-scsi value
   of the controller model attribute 88, 201
virtio-serial value
   of the controller type attribute 201
virtual block device 9, 14, 78
   attaching 146
   detaching 147
   device configuration-XML 76
   hotplugging 146
   unplugging 147
virtual channel 78, 84, 86
virtual channel path 9, 14, 75
virtual channel path type 75
virtual channel subsystem 75
virtual channel subsystem-ID 75
virtual control unit model 75
virtual CPU 159
   configuring 61
   configuring the number 61
   Linux management of 159
   managing 138
   model 63
   modifying the weight 141
   tuning 62
virtual CPUs
   actual number 138
   current config 138
   maximum config 138
   maximum live 138
   maximum number 138
virtual DVD 17, 95, 148
virtual DVD drive 17, 148
virtual Ethernet device 21
   attaching 146
   detaching 147
   device configuration-XML 76
   hotplugging 146
   unplugging 147
virtual Ethernet interface
   preparing 37
virtual HBA 75
   attaching 146
   configuring 88
   detaching 147
   device configuration-XML 76
   hotplugging 146
   unplugging 147
virtual Host Bus Adapter
   configuring 88
   device configuration-XML 76
virtual LAN interface 21, 40
virtual machine relocation
   See live migration
virtual memory 65
   configuring 65
   managing 143
   tuning 65
virtual network
   configuring 105
   for booting 56
virtual SCSI device 17, 88
   attaching 146
   detaching 147
   device configuration-XML 76
   hotplugging 146
   unplugging 147
virtual SCSI-attached CD/DVD drive 95
virtual server x, 3, 4
   browsing 120
   configuration 9, 14
   configuring 4
   crashed 175
   defining 4, 110
   destroying 114
   devices 49
   displaying block devices 120
   displaying information 120
   displaying network interfaces 120
   displaying scheduling information 120
   displaying the state 120
   dumping 175
   kernel panic 175
   managing 113
   migrating 125, 127
   monitoring 119
   name 4, 49, 51
   persistent definition
      creating 109
      defining 109
      deleting 109
      editing 110
      modifying 109, 110
   preparing 4
   properties 49
   relocation
      See live migration
   resuming 4, 116
   shutting down 4, 114
   starting 4, 114
W
watchdog device
   configuring 71
watchdog device driver 177
watchdog element 263
   action attribute 263
   model attribute 263
watchdog timer 71
weight-fraction 160
workload
   memory intensive 133, 134
worldwide port name 9, 14
wrapping key 72
writeback value
   of the driver cache attribute 210
writethrough value
   of the driver cache attribute 84, 86, 210
WWPN 9, 14, 29
Readers' Comments — We'd Like to Hear from You

Linux on Z and LinuxONE
KVM Virtual Server Management
November 2017

We appreciate your comments about this publication. Please comment on specific errors or omissions, accuracy, organization, subject matter, or completeness of this book. Your comments should pertain only to the information in this manual or product and to the way in which the information is presented.

For technical questions and information about products and prices, please contact your IBM branch office, your IBM business partner, or your authorized remarketer.

When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any way it believes appropriate without incurring any obligation to you. IBM will use the personal information that you supply only to contact you about the issues that you state on this form.