EDU-EN-VSICM8-LEC-SE2

The document provides an overview of various storage protocols used in vSphere, including Direct-Attached Storage (DAS), VMFS, NFS, vSAN, Fibre Channel, and iSCSI. It highlights the advantages of shared storage for features like vMotion and HA, and explains the architecture and components of each storage type. Additionally, it discusses the importance of storage management practices such as zoning, LUN masking, and multipathing for optimal performance and reliability.


6-9 Storage Protocol Overview

* Direct-attached storage (DAS) supports vSphere vMotion only when combined with vSphere
Storage vMotion.
Direct-attached storage, as opposed to SAN storage, is where many administrators install ESXi.
Direct-attached storage is also ideal for small environments because of the cost savings associated
with purchasing and managing a SAN. The drawback is that you lose many of the features that
make virtualization a worthwhile investment, for example, the ability to balance workloads across
ESXi hosts. Direct-attached storage can also be used to store noncritical data:

 CD/DVD ISO images

 Decommissioned VMs

 VM templates

300 Module 6: Configuring and Managing Virtual Storage


In comparison, SAN storage LUNs are pooled and shared so that all ESXi hosts can access them.
Shared storage enables the following vSphere features:

 vSphere vMotion

 vSphere HA

 vSphere DRS

Using shared SAN storage also provides robust features in vSphere:

 Central repositories for VM files and templates

 Clustering of VMs across ESXi hosts

 Allocation of large amounts (terabytes) of storage to your ESXi hosts

ESXi supports booting from the SAN so that you can avoid maintaining additional direct-attached
storage or accommodate diskless hardware configurations, such as blade systems. If you set up
your host to boot from a SAN, the host's boot image is stored on one or more LUNs in the SAN
storage system. When the host starts, it boots from the LUN on the SAN rather than from its
direct-attached disk.
ESXi hosts can boot from software iSCSI, a supported independent hardware iSCSI adapter, or a
supported dependent hardware iSCSI adapter. For software iSCSI and dependent hardware iSCSI,
the network adapter must support the iSCSI Boot Firmware Table (iBFT) format, which is a
method of communicating parameters about the iSCSI boot device to an operating system.

Module 6: Configuring and Managing Virtual Storage 301


6-10 About VMFS

VMFS is a clustered file system where multiple ESXi hosts can read and write to the same storage
device simultaneously. The clustered file system provides unique, virtualization-based services:

 Migration of running VMs from one ESXi host to another without downtime

 Automatic restarting of a failed VM on a separate ESXi host

 Clustering of VMs across various physical servers

Using VMFS, IT organizations can simplify VM provisioning by efficiently storing the entire VM
state in a central location. Multiple ESXi hosts can access shared VM storage concurrently.
The size of a VMFS datastore can be increased dynamically, even while VMs residing on the
VMFS datastore are powered on and running. A VMFS datastore efficiently stores both large and
small files belonging to a VM. It supports virtual disk files up to a maximum size of 62 TB and
uses subblock addressing to make efficient use of storage for small files.
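
The datastore properties described here are also visible through the vSphere API. As a minimal
illustration (not part of the course material), the following pyVmomi sketch lists each VMFS
datastore with its capacity, free space, and VMFS version; the vCenter Server address and
credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with values from your environment.
si = SmartConnect(host="vcenter.example.com", user="[email protected]",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every datastore in the inventory and report the VMFS ones.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
gib = 1024 ** 3
for ds in view.view:
    if ds.summary.type == "VMFS":
        print(f"{ds.name}: capacity {ds.summary.capacity / gib:.1f} GiB, "
              f"free {ds.summary.freeSpace / gib:.1f} GiB, "
              f"VMFS {ds.info.vmfs.majorVersion}")

Disconnect(si)
```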

302 Module 6: Configuring and Managing Virtual Storage


VMFS provides block-level distributed locking to ensure that the same VM is not powered on by
multiple servers at the same time. If an ESXi host fails, the on-disk lock for each VM is released
and VMs can be restarted on other ESXi hosts.
On the slide, each ESXi host has two VMs running on it. The lines connecting the VMs to the VM
disks (VMDKs) are logical representations of the association and allocation of the larger VMFS
datastore. The VMFS datastore includes one or more LUNs. The VMs see the assigned storage
volume only as a SCSI target from within the guest operating system. The VM contents are only
files on the VMFS volume.
VMFS can be deployed on three kinds of SCSI-based storage devices:

 Direct-attached storage

 Fibre Channel storage

 iSCSI storage

A virtual disk stored on a VMFS datastore always appears to the VM as a mounted SCSI device.
The virtual disk hides the physical storage layer from the VM's operating system.
For the operating system in the VM, VMFS preserves the internal file system semantics. As a
result, the operating system running in the VM sees a native file system, not VMFS. These
semantics ensure correct behavior and data integrity for applications running on the VMs.

Module 6: Configuring and Managing Virtual Storage 303


6-11 About NFS

NAS is a specialized storage device that connects to a network and can provide file access services
to ESXi hosts.
NFS datastores are treated like VMFS datastores because they can hold VM files, templates, and
ISO images. In addition, like a VMFS datastore, an NFS volume allows the vSphere vMotion
migration of VMs whose files reside on an NFS datastore. The NFS client built in to ESXi uses
NFS protocol versions 3 and 4.1 to communicate with the NAS or NFS servers.
ESXi hosts do not use the Network Lock Manager protocol, which is a standard protocol that is
used to support the file locking of NFS-mounted files. VMware has its own locking protocol. NFS
3 locks are implemented by creating lock files on the NFS server. NFS 4.1 uses server-side file
locking.
Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use different
NFS versions to mount the same datastore on multiple hosts. Accessing the same virtual disks
from two incompatible clients might result in incorrect behavior and cause data corruption.

304 Module 6: Configuring and Managing Virtual Storage


6-12 About vSAN

When vSAN is enabled on a cluster, a single vSAN datastore is created. This datastore uses the
storage components of each host in the cluster.
vSAN can be configured as hybrid or all-flash storage.
In a hybrid storage architecture, vSAN pools server-attached HDDs and SSDs to create a
distributed shared datastore. This datastore abstracts the storage hardware to provide a software-
defined storage tier for VMs. Flash is used as a read cache/write buffer to accelerate performance,
and magnetic disks provide capacity and persistent data storage.
Alternatively, vSAN can be deployed as an all-flash storage architecture in which the cache-tier
flash devices are used only as a write cache and the capacity-tier SSDs provide capacity, data
persistence, and consistent, fast response times. In the all-flash architecture, the tiering of SSDs
results in a cost-effective implementation: a write-intensive, enterprise-grade SSD cache tier and a
read-intensive, lower-cost SSD capacity tier.

Module 6: Configuring and Managing Virtual Storage 305


6-13 About vSphere Virtual Volumes

vSphere Virtual Volumes virtualizes SAN and NAS devices by abstracting physical hardware
resources into logical pools of capacity.
vSphere Virtual Volumes provides the following benefits:

 Lower storage cost

 Reduced storage management overhead

 Greater scalability

 Better response to data access and analytical requirements

306 Module 6: Configuring and Managing Virtual Storage


6-14 About Raw Device Mapping

Raw device mapping (RDM) is a file stored in a VMFS volume that acts as a proxy for a raw
physical device.
Instead of storing VM data in a virtual disk file on a VMFS datastore, you can store the guest
operating system data directly on a raw LUN. This approach is useful if you run applications in
your VMs that must know the physical characteristics of the storage device. By mapping a raw
LUN, you can use existing SAN commands to manage storage for the disk.
Use RDM when a VM must interact with a real disk on the SAN. This condition occurs when you
make disk array snapshots or have a large amount of data that you do not want to move onto a
virtual disk as a part of a physical-to-virtual conversion.
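
For illustration only, the following pyVmomi sketch outlines how a physical-mode RDM might be
attached to an existing VM. The LUN device path, controller key, and unit number are hypothetical
placeholders, and some environments also expect a capacityInKB value on the virtual disk, so treat
this as a sketch of the reconfigure workflow rather than a definitive recipe.

```python
from pyVmomi import vim

# Assumes vm is a vim.VirtualMachine obtained from a connected pyVmomi session.
def add_physical_rdm(vm, lun_device_path):
    """Attach a raw LUN to a VM as a physical-mode RDM (sketch)."""
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_device_path,        # e.g. "/vmfs/devices/disks/naa.6006..." (placeholder)
        compatibilityMode="physicalMode",  # use "virtualMode" for a virtual-mode RDM
        fileName="",                       # the mapping file is created on the VM's datastore
    )
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=1000,                # key of the VM's first SCSI controller (placeholder)
        unitNumber=1,                      # a free slot on that controller (placeholder)
    )
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk,
    )
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```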

Module 6: Configuring and Managing Virtual Storage 307


6-15 Physical Storage Considerations

For information to help you plan for your storage needs, see vSphere Storage at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-
8AE88758-20C1-4873-99C7-181EF9ACFA70.html. Another good source of information is the
vSphere Storage page at https://storagehub.vmware.com/.

308 Module 6: Configuring and Managing Virtual Storage


6-16 Review of Learner Objectives

Module 6: Configuring and Managing Virtual Storage 309


6-17 Lesson 2: Fibre Channel Storage

310 Module 6: Configuring and Managing Virtual Storage


6-18 Learner Objectives

Module 6: Configuring and Managing Virtual Storage 311


6-19 About Fibre Channel

To connect to the Fibre Channel SAN, your host should be equipped with Fibre Channel host bus
adapters (HBAs).
Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches to route
storage traffic. If your host contains FCoE adapters, you can connect to your shared Fibre Channel
devices by using an Ethernet network.
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches
and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to
the host. You can access the LUNs and create datastores for your storage needs. These datastores
use the VMFS format.
Alternatively, you can access a storage array that supports vSphere Virtual Volumes and create
vSphere Virtual Volumes datastores on the array’s storage containers.

312 Module 6: Configuring and Managing Virtual Storage


6-20 Fibre Channel SAN Components

Each SAN server might host numerous applications that require dedicated storage for application
processing.

Module 6: Configuring and Managing Virtual Storage 313


The following components are involved:

 SAN switches: SAN switches connect various elements of the SAN. SAN switches might
connect hosts to storage arrays. Using SAN switches, you can set up path redundancy to
address any path failures from host server to switch, or from storage array to switch.

 Fabric: The SAN fabric is the network portion of the SAN. When one or more SAN switches
are connected, a fabric is created. The Fibre Channel (FC) protocol is used to communicate
over the entire network. A SAN can consist of multiple interconnected fabrics. Even a simple
SAN often consists of two fabrics for redundancy.

 Connections (HBAs and storage processors): Host servers and storage systems are connected
to the SAN fabric through ports in the fabric:

- A host connects to a fabric port through an HBA.


- Storage devices connect to the fabric ports through their storage processors.

314 Module 6: Configuring and Managing Virtual Storage


6-21 Fibre Channel Addressing and Access Control

A port connects a device to the SAN. Each node in the SAN (each host, storage device, and fabric
component, such as a router or switch) has one or more ports that connect it to the SAN. Ports can
be identified in the following ways:

 World Wide Port Name (WWPN): A globally unique identifier for a port that allows certain
applications to access the port. The Fibre Channel switches discover the WWPN of a device
or host and assign a port address to the device.

 Port_ID: Within the SAN, each port has a unique port ID that serves as the Fibre Channel address
for that port. The Fibre Channel switches assign the port ID when the device logs in to the
fabric. The port ID is valid only while the device is logged on.

You can use zoning and LUN masking to segregate SAN activity and restrict access to storage
devices.
You can protect access to storage in your vSphere environment by using zoning and LUN masking
with your SAN resources. For example, you might manage zones defined for testing
independently within the SAN so that they do not interfere with activity in the production zones.
Similarly, you might set up different zones for different departments.
When you set up zones, consider host groups that are set up on the SAN device.
Zoning and masking capabilities for each SAN switch and disk array, and the tools for managing
LUN masking, are vendor-specific.
See your SAN vendor’s documentation and vSphere Storage at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-
8AE88758-20C1-4873-99C7-181EF9ACFA70.html.

316 Module 6: Configuring and Managing Virtual Storage


6-22 Multipathing with Fibre Channel

A Fibre Channel path describes a route:

 From a specific HBA port in the host

 Through the switches in the fabric

 Into a specific storage port on the storage array

By default, ESXi hosts use only one path from a host to a given LUN at any one time. If the path
actively being used by the ESXi host fails, the server selects another available path.
The process of detecting a failed path and switching to another is called path failover. A path fails
if any of the components along the path (HBA, cable, switch port, or storage processor) fail.

Module 6: Configuring and Managing Virtual Storage 317


Distinguishing between active-active and active-passive disk arrays can be useful:

 An active-active disk array allows access to the LUNs simultaneously through the available
storage processors without significant performance degradation. All the paths are active at all
times (unless a path fails).

 In an active-passive disk array, one storage processor is actively servicing a given LUN. The
other storage processor acts as a backup for the LUN and might be actively servicing other
LUN I/O.

I/O can be sent only to an active processor. If the primary storage processor fails, one of the
secondary storage processors becomes active, either automatically or through administrative
intervention.

318 Module 6: Configuring and Managing Virtual Storage


6-23 FCoE Adapters

The Fibre Channel traffic is encapsulated into FCoE frames. These FCoE frames are converged
with other types of traffic on the Ethernet network.
When both Ethernet and Fibre Channel traffic are carried on the same Ethernet link, the physical
infrastructure is used more efficiently. FCoE also reduces the total number of network ports and
cables.

Module 6: Configuring and Managing Virtual Storage 319


6-24 Configuring Software FCoE: Creating VMkernel Ports

320 Module 6: Configuring and Managing Virtual Storage


6-25 Configuring Software FCoE: Activating Software FCoE
Adapters

You add the software FCoE adapter by selecting the host, clicking the Configure tab, selecting
Storage Adapters, and clicking Add Software Adapter.

Module 6: Configuring and Managing Virtual Storage 321


6-26 Review of Learner Objectives

322 Module 6: Configuring and Managing Virtual Storage


6-27 Lesson 3: iSCSI Storage

Module 6: Configuring and Managing Virtual Storage 323


6-28 Learner Objectives

324 Module 6: Configuring and Managing Virtual Storage


6-29 iSCSI Components

An iSCSI SAN consists of an iSCSI storage system, which contains one or more LUNs and one or
more storage processors. Communication between the host and the storage array occurs over a
TCP/IP network.
The ESXi host is configured with an iSCSI initiator. An initiator can be hardware-based, where
the initiator is an iSCSI HBA. Or the initiator can be software-based, known as the iSCSI software
initiator.
An initiator transmits SCSI commands over the IP network. A target receives SCSI commands
from the IP network. Your iSCSI network can include multiple initiators and targets. iSCSI is
SAN-oriented in the following ways:

 The initiator finds one or more targets.

 A target presents LUNs to the initiator.

 The initiator sends SCSI commands to a target.

Module 6: Configuring and Managing Virtual Storage 325


An initiator resides in the ESXi host. Targets reside in the storage arrays that are supported by the
ESXi host.
To restrict access to targets from hosts, iSCSI arrays can use various mechanisms, including IP
address, subnets, and authentication requirements.

326 Module 6: Configuring and Managing Virtual Storage


6-30 iSCSI Addressing

The main addressable, discoverable entity is an iSCSI node. An iSCSI node can be an initiator or a
target. An iSCSI node requires a name so that storage can be managed regardless of address.
The iSCSI name can use one of the following formats: the iSCSI qualified name (IQN) or the
extended unique identifier (EUI).
The IQN can be up to 255 characters long and consists of the following components:

 Prefix iqn

 Date code specifying the year and month in which the organization registered the domain or
subdomain name that is used as the naming authority string

 Organizational naming authority string, which consists of a valid, reversed domain or
subdomain name

 (Optional) Colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique

Module 6: Configuring and Managing Virtual Storage 327


EUI naming conventions are as follows:

 Prefix is eui.

 A 16-character name follows the prefix.

The name includes 24 bits for a company name that is assigned by the IEEE and 40 bits for a
unique ID, such as a serial number.
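
As a concrete illustration, the names below show both formats; the organization, date code, and
serial values are made up, and the short check is only a convenience for spotting obviously
malformed names, not part of any VMware tooling.

```python
# Hypothetical examples of the two iSCSI node name formats.
iqn_example = "iqn.1998-01.com.vmware:esxi01-543c3f7a"  # iqn.<yyyy-mm>.<reversed domain>:<unique string>
eui_example = "eui.0123456789ABCDEF"                    # eui. followed by 16 hex characters (64 bits)

def looks_like_iscsi_name(name: str) -> bool:
    """Rough sanity check of an iSCSI node name (format only)."""
    if name.startswith("iqn."):
        parts = name[4:].split(".", 1)
        return len(parts) == 2 and len(parts[0]) == 7 and parts[0][4] == "-"  # yyyy-mm date code
    if name.startswith("eui."):
        suffix = name[4:]
        return len(suffix) == 16 and all(c in "0123456789abcdefABCDEF" for c in suffix)
    return False

print(looks_like_iscsi_name(iqn_example), looks_like_iscsi_name(eui_example))  # True True
```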

328 Module 6: Configuring and Managing Virtual Storage


6-31 Storage Device Naming Conventions

On ESXi hosts, SCSI storage devices use various identifiers. Each identifier serves a specific
purpose. For example, the VMkernel requires an identifier, generated by the storage device, which
is guaranteed to be unique to each LUN. If the storage device cannot provide a unique identifier,
the VMkernel must generate a unique identifier to represent each LUN or disk.
The following SCSI storage device identifiers are available:

 Runtime name: The name of the first path to the device. The runtime name is a user-friendly
name that is created by the host after each reboot. It is not a reliable identifier for the disk
device because it is not persistent. The runtime name might change if you add HBAs to the
ESXi host. However, you can use this name when you use command-line utilities to interact
with storage that an ESXi host recognizes.

 iSCSI name: A worldwide unique name for identifying the node. iSCSI uses the IQN and
EUI. IQN uses the format iqn.yyyy-mm.naming-authority:unique name.

Storage device names appear in various panels in the vSphere Client.

Module 6: Configuring and Managing Virtual Storage 329


6-32 iSCSI Adapters

The iSCSI initiators transport SCSI requests and responses, encapsulated in the iSCSI protocol,
between the host and the iSCSI target. Your host supports two types of initiators: software iSCSI
and hardware iSCSI.
A software iSCSI initiator is VMware code built in to the VMkernel. Using the initiator, your host
can connect to the iSCSI storage device through standard network adapters. The software iSCSI
initiator handles iSCSI processing while communicating with the network adapter. With the
software iSCSI initiator, you can use iSCSI technology without purchasing specialized hardware.
A hardware iSCSI initiator is a specialized third-party adapter capable of accessing iSCSI storage
over TCP/IP. Hardware iSCSI initiators are divided into two categories: dependent hardware
iSCSI and independent hardware iSCSI.
A dependent hardware iSCSI initiator, also known as an iSCSI host bus adapter, is a standard
network adapter that includes the iSCSI offload function. To use this type of adapter, you must
configure networking for the iSCSI traffic and bind the adapter to an appropriate VMkernel iSCSI
port.

330 Module 6: Configuring and Managing Virtual Storage


An independent hardware iSCSI adapter handles all iSCSI and network processing and
management for your ESXi host. In this case, a VMkernel iSCSI port is not required.
For configuration information, see vSphere Storage at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-8AE88758-20C1-4873-99C7-
181EF9ACFA70.html.

Module 6: Configuring and Managing Virtual Storage 331


6-33 ESXi Network Configuration for IP Storage

Networking configuration for software iSCSI involves creating a VMkernel port on a virtual
switch to handle your iSCSI traffic.
Depending on the number of physical adapters that you want to use for the iSCSI traffic, the
networking setup can be different:

 If you have one physical network adapter, you need a VMkernel port on a virtual switch.

 If you have two or more physical network adapters for iSCSI, you can use these adapters for
host-based multipathing.

For performance and security, isolate your iSCSI network from other networks. Physically
separate the networks. If physically separating the networks is impossible, logically separate the
networks from one another on a single virtual switch by configuring a separate VLAN for each
network.

332 Module 6: Configuring and Managing Virtual Storage


6-34 Activating the Software iSCSI Adapter

You must activate your software iSCSI adapter so that your host can use it to access iSCSI
storage.
You can activate only one software iSCSI adapter.

NOTE

If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled, and
the network configuration is created at the first boot. If you disable the adapter, it is
reenabled each time you boot the host.
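
The same activation can be scripted through the vSphere API. The following pyVmomi sketch is
illustrative only; the vCenter Server and host names and the credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details -- replace with values from your environment.
si = SmartConnect(host="vcenter.example.com", user="[email protected]",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")  # placeholder host name

# Enable the software iSCSI initiator on the host.
host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(True)

# List the iSCSI adapters now present (the software adapter appears as vmhbaNN).
for hba in host.configManager.storageSystem.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba):
        print(hba.device, hba.iScsiName)
```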

Module 6: Configuring and Managing Virtual Storage 333


6-35 Discovering iSCSI Targets

The ESXi host supports the following iSCSI target-discovery methods:

 Static discovery: The initiator does not have to perform discovery. The initiator knows in
advance all the targets that it will contact. It uses their IP addresses and domain names to
communicate with them.

 Dynamic discovery or SendTargets discovery: Each time the initiator contacts a specified
iSCSI server, it sends the SendTargets request to the server. The server responds by supplying
a list of available targets to the initiator.

The names and IP addresses of these targets appear as static targets in the vSphere Client. You
can remove a static target that is added by dynamic discovery. If you remove the target, the
target might be returned to the list during the next rescan operation. The target might also be
returned to the list if the HBA is reset or the host is rebooted.
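
Dynamic discovery can also be driven programmatically. The pyVmomi sketch below assumes that
host refers to an ESXi host object from an already connected session and uses a placeholder iSCSI
server address; it adds one SendTargets address to the software iSCSI adapter and rescans.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem from a connected pyVmomi session.
def add_send_target(host, address, port=3260):
    """Add a dynamic (SendTargets) discovery address to the software iSCSI adapter."""
    storage = host.configManager.storageSystem
    sw_hba = next(h for h in storage.storageDeviceInfo.hostBusAdapter
                  if isinstance(h, vim.host.InternetScsiHba) and h.isSoftwareBased)
    target = vim.host.InternetScsiHba.SendTarget(address=address, port=port)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=sw_hba.device, targets=[target])
    storage.RescanHba(sw_hba.device)  # discover the targets and their LUNs
    storage.RescanVmfs()              # pick up any new VMFS datastores

# add_send_target(host, "10.10.20.50")  # placeholder iSCSI server address
```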

334 Module 6: Configuring and Managing Virtual Storage


6-36 iSCSI Security: CHAP

You can implement CHAP to provide authentication between iSCSI initiators and targets.
ESXi supports the following CHAP authentication methods:

 Unidirectional or one-way CHAP: The target authenticates the initiator, but the initiator does
not authenticate the target. You must specify the CHAP secret so that your initiators can
access the target.

 Bidirectional or mutual CHAP: With an extra level of security, the initiator can authenticate
the target. You must specify different target and initiator secrets.

CHAP uses a three-way handshake algorithm to verify the identity of your host and, if applicable,
of the iSCSI target when the host and target establish a connection. The verification is based on a
predefined private value, or CHAP secret, that the initiator and target share. ESXi implements
CHAP as defined in RFC 1994.

Module 6: Configuring and Managing Virtual Storage 335


ESXi supports CHAP authentication at the adapter level. All targets receive the same CHAP secret
from the iSCSI initiator. For both software iSCSI and dependent hardware iSCSI initiators, ESXi
also supports per-target CHAP authentication.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system and check
the CHAP authentication method that the system supports. If CHAP is enabled, you must enable it
for your initiators, verifying that the CHAP authentication credentials match the credentials on the
iSCSI storage.
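
The following pyVmomi sketch shows one way to require unidirectional CHAP at the adapter level.
The CHAP name and secret are placeholders, and the property names are taken from the
HostInternetScsiHba data objects, so verify them against the API reference for your vSphere version.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem from a connected pyVmomi session.
def require_chap(host, chap_name, chap_secret):
    """Require unidirectional CHAP on the host's software iSCSI adapter."""
    storage = host.configManager.storageSystem
    sw_hba = next(h for h in storage.storageDeviceInfo.hostBusAdapter
                  if isinstance(h, vim.host.InternetScsiHba) and h.isSoftwareBased)
    auth = vim.host.InternetScsiHba.AuthenticationProperties(
        chapAuthEnabled=True,
        chapName=chap_name,
        chapSecret=chap_secret,
        chapAuthenticationType="chapRequired",  # the target must authenticate the initiator
    )
    storage.UpdateInternetScsiAuthenticationProperties(
        iScsiHbaDevice=sw_hba.device, authenticationProperties=auth)

# require_chap(host, "iscsi-initiator-01", "a-long-chap-secret")  # placeholder values
```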
Using CHAP in your iSCSI SAN implementation is recommended, but consult with your storage
vendor to ensure that best practices are followed.
You can protect your data in additional ways. For example, you might protect your iSCSI SAN by
giving it a dedicated standard switch. You might also configure the iSCSI SAN on its own VLAN
to improve performance and security. Some inline network devices might be implemented to
provide encryption and further data protection.

336 Module 6: Configuring and Managing Virtual Storage


6-37 Multipathing with iSCSI Storage

When setting up your ESXi host for multipathing and failover, you can use multiple hardware
iSCSI adapters or multiple NICs. The choice depends on the type of iSCSI initiators on your host.
With software iSCSI and dependent hardware iSCSI, you can use multiple NICs that provide
failover for iSCSI connections between your host and iSCSI storage systems.
With independent hardware iSCSI, the host typically has two or more available hardware iSCSI
adapters, from which the storage system can be reached by using one or more switches.
Alternatively, the setup might include one adapter and two storage processors so that the adapter
can use a different path to reach the storage system.
After iSCSI multipathing is set up, each port on the ESXi system has its own IP address, but the
ports share the same iSCSI initiator IQN. When iSCSI multipathing is configured, the VMkernel
routing table is not consulted for identifying the outbound NIC to use. Instead, iSCSI multipathing
is managed using vSphere multipathing modules. Because of the latency that can be incurred,
routing iSCSI traffic is not recommended.

Module 6: Configuring and Managing Virtual Storage 337


6-38 Binding VMkernel Ports with the iSCSI Initiator

With software iSCSI and dependent hardware iSCSI, multipathing plug-ins do not have direct
access to physical NICs on your host. For this reason, you must first connect each physical NIC to
a separate VMkernel port. Then you use a port-binding technique to associate all VMkernel ports
with the iSCSI initiator.
For dependent hardware iSCSI, you must correctly install the physical network card, which should
appear on the host's Configure tab in the Virtual Switches view.
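
A hedged pyVmomi sketch of the port-binding step follows. The adapter name vmhba65 and the
VMkernel port names vmk1 and vmk2 are placeholders, and each VMkernel port is assumed to
already exist with a single active uplink.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem from a connected pyVmomi session.
def bind_iscsi_vmkernel_ports(host, hba_name, vmk_ports=("vmk1", "vmk2")):
    """Bind VMkernel ports to an iSCSI adapter for multipathing."""
    iscsi_mgr = host.configManager.iscsiManager
    bound = {b.vnicDevice for b in iscsi_mgr.QueryBoundVnics(iScsiHbaName=hba_name)}
    for vmk in vmk_ports:
        if vmk not in bound:
            iscsi_mgr.BindVnic(iScsiHbaName=hba_name, vnicDevice=vmk)

# bind_iscsi_vmkernel_ports(host, "vmhba65")  # placeholder software iSCSI adapter name
```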

338 Module 6: Configuring and Managing Virtual Storage


6-39 Lab 12: Accessing iSCSI Storage

Module 6: Configuring and Managing Virtual Storage 339


6-40 Review of Learner Objectives

340 Module 6: Configuring and Managing Virtual Storage


6-41 Lesson 4: VMFS Datastores

Module 6: Configuring and Managing Virtual Storage 341


6-42 Learner Objectives

342 Module 6: Configuring and Managing Virtual Storage


6-43 Creating a VMFS Datastore

Module 6: Configuring and Managing Virtual Storage 343


6-44 Browsing Datastore Contents

The Datastores pane lists all datastores currently configured for all managed ESXi hosts.
The example shows the contents of the VMFS datastore named Class-Datastore. The contents of
the datastore are folders that contain the files for virtual machines or templates.

344 Module 6: Configuring and Managing Virtual Storage


6-45 About VMFS Datastores

Module 6: Configuring and Managing Virtual Storage 345


6-46 Managing Overcommitted Datastores

Using thin-provisioned virtual disks for your VMs is a way to make the most of your datastore
capacity. But if your datastore is not sized properly, it can become overcommitted. A datastore
becomes overcommitted when the total provisioned capacity of its thin-provisioned virtual disks is
greater than the datastore's capacity.
When a datastore runs out of space, the vSphere Client prompts you to provide more space on the
underlying VMFS datastore, and all VM I/O is paused.
Monitor your datastore capacity by setting alarms that alert you when a datastore's thin-provisioned
disks approach full allocation or when a VM uses more than a specified amount of disk space.
Manage your datastore capacity by dynamically increasing the size of your datastore when
necessary. You can also use vSphere Storage vMotion to mitigate space use issues.
For example, with vSphere Storage vMotion, you can migrate a VM off a datastore. The migration
can be done by changing from virtual disks of thick format to thin format at the target datastore.

346 Module 6: Configuring and Managing Virtual Storage


6-47 Increasing the Size of VMFS Datastores

An example of the unique identifier of a volume is the NAA ID. You need this identifier to
determine which VMFS datastore must be increased.
You can dynamically increase the capacity of a VMFS datastore when it runs low on disk space.
You typically discover that disk space is insufficient when you create a VM or try to add more
disk space to an existing VM.
Use one of the following methods:

 Add an extent to the VMFS datastore: An extent is a partition on a LUN. You can add an
extent to any VMFS datastore. The datastore can stretch over multiple extents, up to 32.

 Expand the VMFS datastore: You expand the size of the VMFS datastore by expanding its
underlying extent first.
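
The expand operation can also be scripted. The pyVmomi sketch below assumes the backing LUN
has already been resized on the array and that host and ds are objects from an existing session; it
applies the first expansion option that vCenter Server computes.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem that mounts the datastore and ds is the
# vim.Datastore to grow, both from a connected pyVmomi session.
def expand_vmfs_datastore(host, ds):
    """Grow a VMFS datastore into free space behind its existing extent."""
    ds_system = host.configManager.datastoreSystem
    options = ds_system.QueryVmfsDatastoreExpandOptions(datastore=ds)
    if not options:
        raise RuntimeError("No expandable free space found for this datastore")
    return ds_system.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
```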

Module 6: Configuring and Managing Virtual Storage 347


6-48 Datastore Maintenance Mode

If you select the Let me migrate storage for all virtual machines and continue entering
maintenance mode after migration check box, all VMs and templates on the datastore are
automatically migrated to a datastore of your choice. The datastore enters maintenance mode
after all VMs and templates are moved off the datastore.
Datastore maintenance mode is a function of the vSphere Storage DRS feature, but you can use
maintenance mode without enabling vSphere Storage DRS. For more information on vSphere
Storage DRS, see vSphere Resource Management at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-98BD5A8A-260A-494F-BAAE-
74781F5C4B87.html.

348 Module 6: Configuring and Managing Virtual Storage


6-49 Deleting or Unmounting a VMFS Datastore

Unmounting a VMFS datastore preserves the files on the datastore but makes the datastore
inaccessible to the ESXi host.
Do not perform any configuration operations that might result in I/O to the datastore while the
unmounting is in progress.
You can delete any type of VMFS datastore, including copies that you mounted without
resignaturing. Although you can delete the datastore without unmounting, you should unmount the
datastore first. Deleting a VMFS datastore destroys the pointers to the files on the datastore, so the
files disappear from all hosts that have access to the datastore.

Module 6: Configuring and Managing Virtual Storage 349


Before you delete or unmount a VMFS datastore, power off all VMs whose disks reside on the
datastore. If you do not power off the VMs and you try to continue, an error message tells you that
the resource is busy. Before you unmount a VMFS datastore, use the vSphere Client to verify the
following conditions:

 No virtual machines reside on the datastore.

 The datastore is not part of a datastore cluster.

 The datastore is not managed by vSphere Storage DRS.

 vSphere Storage I/O Control is disabled.

 The datastore is not used for vSphere HA heartbeat.

To keep your data, back up the contents of your VMFS datastore before you delete the datastore.
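
For completeness, the pyVmomi sketch below shows the unmount and delete calls. It assumes that
ds is a datastore object from an existing session and that you have already verified the conditions
listed above; the prerequisite checks themselves are not shown.

```python
from pyVmomi import vim

# Assumes ds is a vim.Datastore from a connected pyVmomi session.
def unmount_from_all_hosts(ds):
    """Unmount a VMFS datastore from every host that currently mounts it."""
    vmfs_uuid = ds.info.vmfs.uuid
    for mount in ds.host:                 # DatastoreHostMount entries
        mount.key.configManager.storageSystem.UnmountVmfsVolume(vmfsUuid=vmfs_uuid)

def delete_datastore(ds):
    """Remove the datastore object, which destroys the pointers to its files."""
    host = ds.host[0].key
    host.configManager.datastoreSystem.RemoveDatastore(datastore=ds)
```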

350 Module 6: Configuring and Managing Virtual Storage


6-50 Multipathing Algorithms

The Pluggable Storage Architecture is a VMkernel layer responsible for managing multiple
storage paths and providing load balancing. An ESXi host can be attached to storage arrays with
either active-active or active-passive storage processor configurations.
VMware offers native load-balancing and failover mechanisms. VMware path selection policies
include the following examples:

 Round Robin

 Most Recently Used (MRU)

 Fixed

Third-party vendors can design their own load-balancing techniques and failover mechanisms for
particular storage array types to add support for new arrays. Third-party vendors do not need to
provide internal information or intellectual property about the array to VMware.

Module 6: Configuring and Managing Virtual Storage 351


6-51 Configuring Storage Load Balancing

Multiple paths from an ESXi host to a datastore are possible.


For multipathing with Fibre Channel or iSCSI, the following path selection policies are supported:

 Fixed: The host always uses the preferred path to the disk when that path is available. If the
host cannot access the disk through the preferred path, it tries the alternative paths. This
policy is the default policy for active-active storage devices.

 Most Recently Used: The host selects the first working path discovered at system boot time.
When the path becomes unavailable, the host selects an alternative path. The host does not
revert to the original path when that path becomes available. The Most Recently Used policy
does not use the preferred path setting. This policy is the default policy for active-passive
storage devices and is required for those devices.

 Round Robin: The host uses a path selection algorithm that rotates through all available paths.
In addition to path failover, the Round Robin multipathing policy supports load balancing
across the paths. Before using this policy, check with storage vendors to find out whether a
Round Robin configuration is supported on their storage.
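
Changing the path selection policy for a device can be scripted as well. The pyVmomi sketch below
is illustrative; host comes from an existing session, the canonical device name (an naa.* value) is a
placeholder, and you should confirm Round Robin support with the array vendor first.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem from a connected pyVmomi session.
def set_round_robin(host, canonical_name):
    """Set the path selection policy of one device (for example 'naa.6006...') to Round Robin."""
    storage = host.configManager.storageSystem
    # Map the device's canonical name to its ScsiLun key.
    scsi_lun_key = next(l.key for l in storage.storageDeviceInfo.scsiLun
                        if l.canonicalName == canonical_name)
    for lun in storage.storageDeviceInfo.multipathInfo.lun:
        if lun.lun == scsi_lun_key:
            policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
            storage.SetMultipathLunPolicy(lunId=lun.id, policy=policy)
```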

Module 6: Configuring and Managing Virtual Storage 353


6-52 Lab 13: Managing VMFS Datastores

354 Module 6: Configuring and Managing Virtual Storage


6-53 Review of Learner Objectives

Module 6: Configuring and Managing Virtual Storage 355


6-54 Lesson 5: NFS Datastores

356 Module 6: Configuring and Managing Virtual Storage


6-55 Learner Objectives

Module 6: Configuring and Managing Virtual Storage 357


6-56 NFS Components

The NFS server contains one or more directories that are shared with the ESXi host over a TCP/IP
network. An ESXi host accesses the NFS server through a VMkernel port that is defined on a
virtual switch.

358 Module 6: Configuring and Managing Virtual Storage


6-57 NFS 3 and NFS 4.1

Because of compatibility issues between the two NFS versions, you cannot access the same
datastore by using both protocols at the same time from different hosts. If a datastore is configured
as NFS 4.1, all hosts that access that datastore must mount the share as NFS 4.1. Data corruption
can occur if hosts access a datastore with the wrong NFS version.

Module 6: Configuring and Managing Virtual Storage 359


6-58 NFS Version Compatibility with Other vSphere
Technologies

NFS 4.1 provides the following enhancements:

 Native multipathing and session trunking: NFS 4.1 provides multipathing for servers that
support session trunking. When trunking is available, you can use multiple IP addresses to
access a single NFS volume. Client ID trunking is not supported.

 Kerberos authentication: NFS 4.1 introduces Kerberos authentication in addition to the
traditional AUTH_SYS method used by NFS 3.

 Improved built-in file locking.

 Enhanced error recovery using server-side tracking of open files and delegations.

 Many general efficiency improvements including session leases and less protocol overhead.

360 Module 6: Configuring and Managing Virtual Storage


The NFS 4.1 client offers the following new features:

 Stateful locks with share reservation using a mandatory locking semantic

 Protocol integration: a side-band (auxiliary) protocol is no longer required for locking and mounting

 Trunking (true NFS multipathing), where multiple paths (sessions) to the NAS array can be
created and load-distributed across those sessions

 Enhanced error recovery to mitigate server failure and loss of connectivity

Module 6: Configuring and Managing Virtual Storage 361


6-59 Configuring NFS Datastores

For each ESXi host that accesses an NFS datastore over the network, a VMkernel port must be
configured on a virtual switch. The name of this port can be anything that you want.
For performance and security reasons, isolate your NFS networks from the other networks, such as
your iSCSI network and your virtual machine networks.
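
Mounting the NFS datastore itself can be scripted per host. The pyVmomi sketch below is
illustrative; the NFS server address, export path, and datastore name are placeholders, and host is
assumed to come from an existing session.

```python
from pyVmomi import vim

# Assumes host is a vim.HostSystem from a connected pyVmomi session.
def mount_nfs_datastore(host, name, remote_host, remote_path, version="NFS"):
    """Mount an NFS 3 (type 'NFS') or NFS 4.1 (type 'NFS41') datastore on one host."""
    spec = vim.host.NasVolume.Specification(
        remoteHost=remote_host,
        remotePath=remote_path,
        localPath=name,          # the datastore name as seen by the host
        accessMode="readWrite",
        type=version,
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec=spec)

# mount_nfs_datastore(host, "NFS-Datastore-01", "nfs.example.com", "/exports/vsphere")
```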

362 Module 6: Configuring and Managing Virtual Storage


6-60 Configuring ESXi Host Authentication and NFS
Kerberos Credentials

You must take several configuration steps to prepare each ESXi host to use Kerberos
authentication.
Kerberos authentication requires that all nodes involved (the Active Directory server, the NFS
servers, and the ESXi hosts) be synchronized so that little to no time drift exists. Kerberos
authentication fails if any significant drift exists between the nodes.
To prepare your ESXi host to use Kerberos authentication, configure the NTP client settings to
reference a common NTP server (or the domain controller, if applicable).
When planning to use NFS Kerberos, consider the following points:

 NFS 3 and 4.1 use different authentication credentials, resulting in incompatible UID and GID
on files.

 Using different Active Directory users on different hosts that access the same NFS share can
cause the vSphere vMotion migration to fail.

Module 6: Configuring and Managing Virtual Storage 363


 NFS Kerberos configuration can be automated by using host profiles to reduce configuration
conflicts.

 Time must be synchronized between all participating components.

364 Module 6: Configuring and Managing Virtual Storage


6-61 Configuring the NFS Datastore to Use Kerberos

After performing the initial configuration steps, you can configure the datastore to use Kerberos
authentication.
The screenshot shows a choice of Kerberos authentication only (krb5) or authentication with data
integrity (krb5i). The difference is whether only the header or the header and the body of each
NFS operation is signed using a secure checksum.
For more information about how to configure the ESXi hosts for Kerberos authentication, see
vSphere Storage at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-8AE88758-20C1-4873-99C7-
181EF9ACFA70.html.

Module 6: Configuring and Managing Virtual Storage 365


6-62 Unmounting an NFS Datastore

366 Module 6: Configuring and Managing Virtual Storage


6-63 Multipathing and NFS Storage

Examples of a single point of failure in the NAS architecture include the NIC card in an ESXi
host, and the cable between the NIC card and the switch. To avoid single points of failure and to
create a highly available NAS architecture, configure the ESXi host with redundant NIC cards and
redundant physical switches.
The best approach is to install multiple NICs on an ESXi host and configure them in NIC teams.
NIC teams should be configured on separate external switches, with each NIC pair configured as a
team on the respective external switch.
In addition, you might apply a load-balancing algorithm, based on the link aggregation protocol
type supported on the external switch, such as 802.3ad or EtherChannel.
An even higher level of performance and high availability can be achieved with cross-stack,
EtherChannel-capable switches. With certain network switches, you can team ports across two or
more separate physical switches that are managed as one logical switch.

Module 6: Configuring and Managing Virtual Storage 367


NIC teaming across virtual switches provides additional resilience and some performance
optimization. Having more paths available to the ESXi host can improve performance by enabling
distributed load sharing.
Only one active path is available for the connection between the ESXi host and a single storage
target (LUN or mount point). Although alternative connections might be available for failover, the
bandwidth for a single datastore and the underlying storage is limited to what a single connection
can provide.
To use more available bandwidth, an ESXi host requires multiple connections from the ESXi host
to the storage targets. You might need to configure multiple datastores, each using separate
connections between the ESXi host and the storage.
The recommended configuration for NFS multipathing depends on whether the external switches
support cross-stack EtherChannel.
When the external switches support cross-stack EtherChannel:

 Configure one VMkernel port.

 Configure NIC teaming by using adapters attached to separate physical switches.

 Configure the NFS server with multiple IP addresses. The IP addresses can be on the same
subnet.

 To use multiple links, configure NIC teams with the IP hash load-balancing policy.

When the external switches do not support cross-stack EtherChannel:

 Configure two or more VMkernel ports on different virtual switches on different subnets.

 Configure NIC teaming with adapters attached to the same physical switch.

 Configure the NFS server with multiple IP addresses. The IP addresses can be on the same
subnet.

 To use multiple links, allow the VMkernel routing table to decide which link to send packets
on (requires multiple datastores).

368 Module 6: Configuring and Managing Virtual Storage


6-64 Enabling Multipathing for NFS 4.1

NFS 4.1 provides multipathing for servers that support session trunking. When trunking is
available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is
not supported.

Module 6: Configuring and Managing Virtual Storage 369


6-65 Lab 14: Accessing NFS Storage

370 Module 6: Configuring and Managing Virtual Storage


6-66 Review of Learner Objectives

Module 6: Configuring and Managing Virtual Storage 371


6-67 Lesson 6: vSAN Datastores

372 Module 6: Configuring and Managing Virtual Storage


6-68 Learner Objectives

Module 6: Configuring and Managing Virtual Storage 373


6-69 About vSAN Datastores

vSAN datastores help administrators use software-defined storage in the following ways:

 Storage policy per VM architecture: With multiple policies per datastore, each VM can have
different storage characteristics.

 vSphere and vCenter Server integration: vSAN capability is built in and requires no
appliance. You enable vSAN on a cluster in the same way that you enable vSphere HA or
vSphere DRS.

 Scale-out storage: Up to 64 ESXi hosts can be in a cluster. Scale out by populating new nodes
in the cluster.

 Built-in resiliency: The default vSAN storage policy establishes RAID 1 redundancy for all
VMs.
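
As an illustration outside the course material, the following pyVmomi sketch reports whether vSAN
is enabled on each cluster and summarizes any vSAN datastores; the vCenter Server address and
credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details -- replace with values from your environment.
si = SmartConnect(host="vcenter.example.com", user="[email protected]",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
gib = 1024 ** 3
for cluster in view.view:
    vsan_cfg = cluster.configurationEx.vsanConfigInfo
    print(f"{cluster.name}: vSAN enabled = {vsan_cfg.enabled}")
    for ds in cluster.datastore:
        if ds.summary.type == "vsan":
            print(f"  {ds.name}: {ds.summary.capacity / gib:.0f} GiB capacity, "
                  f"{ds.summary.freeSpace / gib:.0f} GiB free")
```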

374 Module 6: Configuring and Managing Virtual Storage


6-70 Disk Groups

vSAN uses the concept of disk groups to pool together cache devices and capacity devices as
single management constructs. A disk group is a pool of one cache device and one to seven
capacity devices.

Module 6: Configuring and Managing Virtual Storage 375


6-71 vSAN Hardware Requirements

vSAN requires several hardware components that hosts do not normally have:

 One Serial Attached SCSI (SAS), SATA solid-state drive (SSD), or PCIe flash device and one
to seven magnetic drives for each hybrid disk group.

 One SAS, SATA SSD, or PCIe flash device and one to seven flash disks with flash capacity
enabled for all-flash disk groups.

 Dedicated 1 Gbps network (10 Gbps is recommended) for hybrid disk groups.

 Dedicated 10 Gbps network for all-flash disk groups. 1 Gbps network speeds result in
detrimental congestion for an all-flash architecture and are unsupported.

 The vSAN network must be configured for IPv4 or IPv6 and support unicast.

376 Module 6: Configuring and Managing Virtual Storage


In addition, each host should have a minimum of 32 GB of memory to accommodate the
maximum of five disk groups per host and seven capacity devices per disk group.

Module 6: Configuring and Managing Virtual Storage 377


6-72 Viewing the vSAN Datastore Summary

378 Module 6: Configuring and Managing Virtual Storage


6-73 Objects in vSAN Datastores

A vSAN cluster stores and manages data as flexible data containers called objects. When you
provision a VM on a vSAN datastore, a set of objects is created:

 VM home namespace: Stores the virtual machine metadata (configuration files)

 VMDK: Virtual machine disk

 VM swap: Virtual machine swap file, which is created when the VM is powered on

 VM memory: Virtual machine's memory state when a VM is suspended or when a snapshot is
taken of a VM and its memory state is preserved

 Snapshot delta: Created when a virtual machine snapshot is taken

Module 6: Configuring and Managing Virtual Storage 379


6-74 VM Storage Policies

VM storage policies are a set of rules that you configure for VMs. Each storage policy reflects a
set of capabilities that meet the availability, performance, and storage requirements of the
application or service-level agreement for that VM.
You should create storage policies before deploying the VMs that require these storage policies.
You can apply and update storage policies after deployment.
A vSphere administrator who is responsible for deploying VMs can select policies that are created
based on storage capabilities.
Based on the policy that is selected for a VM, the required capabilities are pushed to the vSAN
datastore, and the VM's objects are created across ESXi hosts and disk groups to satisfy those
policies.

380 Module 6: Configuring and Managing Virtual Storage


6-75 Viewing VM Settings for vSAN Information

Module 6: Configuring and Managing Virtual Storage 381


6-76 Lab 15: Using a vSAN Datastore

382 Module 6: Configuring and Managing Virtual Storage


6-77 Review of Learner Objectives

Module 6: Configuring and Managing Virtual Storage 383


6-78 Virtual Beans: Storage

384 Module 6: Configuring and Managing Virtual Storage


6-79 Activity: Using vSAN Storage at Virtual Beans (1)

Module 6: Configuring and Managing Virtual Storage 385


6-80 Activity: Using vSAN Storage at Virtual Beans (2)

386 Module 6: Configuring and Managing Virtual Storage


6-81 Key Points

Module 6: Configuring and Managing Virtual Storage 387


388 Module 6: Configuring and Managing Virtual Storage
Module 7
Virtual Machine Management

Module 7: Virtual Machine Management 389


7-2 Importance

390 Module 7: Virtual Machine Management


7-3 Module Lessons

Module 7: Virtual Machine Management 391


7-4 Virtual Beans: VM Management

392 Module 7: Virtual Machine Management


7-5 Lesson 1: Creating Templates and Clones

Module 7: Virtual Machine Management 393


7-6 Learner Objectives

394 Module 7: Virtual Machine Management


7-7 About Templates

Deploying VMs from templates is much faster and less error-prone than provisioning physical
machines or creating each VM by using the New Virtual Machine wizard.
Templates coexist with VMs in the inventory. You can organize collections of VMs and templates
into arbitrary folders and apply permissions to VMs and templates. You can change a VM into a
template without having to make a full copy of the VM files or create a new object.
You can deploy a VM from a template. The deployed VM is added to the folder that you selected
when creating the template.

Module 7: Virtual Machine Management 395


7-8 Creating a Template: Clone VM to Template

The Clone to Template option offers you a choice of format for storing the VM's virtual disks:

 Same format as source

 Thin-provisioned format

 Thick-provisioned lazy-zeroed format

 Thick-provisioned eager-zeroed format
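
Cloning a VM to a template can be scripted through the vSphere API. The pyVmomi sketch below
keeps the same disk format as the source; the alternative formats listed above correspond to
settings on the relocate spec, which are omitted here. The source VM, destination folder, and
template name are assumed to come from an existing session.

```python
from pyVmomi import vim

# Assumes source_vm is a vim.VirtualMachine and folder is the vim.Folder that
# should hold the new template, both from a connected pyVmomi session.
def clone_to_template(source_vm, folder, template_name):
    """Clone a VM to a template, leaving the source VM untouched."""
    relocate = vim.vm.RelocateSpec()   # same host and datastore; same format as source
    spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=True)
    return source_vm.Clone(folder=folder, name=template_name, spec=spec)

# task = clone_to_template(source_vm, vm_folder, "web-server-template")  # placeholder names
```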

396 Module 7: Virtual Machine Management


7-9 Creating a Template: Convert VM to Template

The Convert to Template option does not offer a choice of format and leaves the VM’s disk file
intact.

Module 7: Virtual Machine Management 397


7-10 Creating a Template: Clone a Template

398 Module 7: Virtual Machine Management


7-11 Updating Templates

To update your template to include new patches or software, you do not need to create a new
template. Instead, you convert the template to a VM and power on the VM.
For added security, you might want to prevent users from accessing the VM while you update it.
To prevent access, either disconnect the VM from the network or place it on an isolated network.
Log in to the VM’s guest operating system and apply the patch or install the software. When you
finish, power off the VM and convert it to a template again.
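
The convert-update-convert cycle maps to two API calls, shown in the hedged pyVmomi sketch
below; the template, resource pool, and host objects are assumed to come from an existing session,
and the patching itself is not shown.

```python
from pyVmomi import vim

# Assumes template, resource_pool, and host come from a connected pyVmomi session.
def update_template(template, resource_pool, host):
    """Convert a template to a VM so that it can be patched, then convert it back."""
    template.MarkAsVirtualMachine(pool=resource_pool, host=host)
    # ... power on the VM, apply patches, power it off (not shown) ...
    template.MarkAsTemplate()
```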

Module 7: Virtual Machine Management 399


7-12 Deploying VMs from a Template

When you place ISO files in a content library, the ISO files are available only to VMs that are
registered on an ESXi host that can access the datastore where the content library is located. These
ISO files are not available to VMs on hosts that cannot see the datastore on which the content
library is located.

400 Module 7: Virtual Machine Management


7-13 Cloning Virtual Machines

To clone a VM, you must be connected to vCenter Server. You cannot clone VMs if you use
VMware Host Client to manage a host directly.
When you clone a VM that is powered on, services and applications are not automatically
quiesced when the VM is cloned.
When deciding whether to clone a VM or deploy a VM from a template, consider the following
points:

 VM templates use storage space, so you must plan your storage space requirements
accordingly.

 Deploying a VM from a template is quicker than cloning a running VM, especially when you
must deploy many VMs at a time.

 When you deploy many VMs from a template, all the VMs start with the same base image.
Cloning many VMs from a running VM might not create identical VMs, depending on the
activity happening within the VM when the VM is cloned.

Module 7: Virtual Machine Management 401


7-14 Guest Operating System Customization

Customizing the guest operating system prevents conflicts that might occur when you deploy a
VM and a clone with identical guest OS settings and run them simultaneously.

402 Module 7: Virtual Machine Management


7-15 About Customization Specifications

To manage customization specifications, select Policies and Profiles from the Menu drop-down
menu.
On the VM Customization Specifications pane, you can create specifications or manage existing
ones.

Module 7: Virtual Machine Management 403


7-16 Customizing the Guest Operating System

You can define the customization settings by using an existing customization specification during
cloning or deployment. You create the specification ahead of time. During cloning or deployment,
you can select the customization specification to apply to the new VM.
VMware Tools must be installed on the guest operating system that you want to customize.
The guest operating system must be installed on a disk attached to SCSI node 0:0 in the VM
configuration.
For more about guest operating system customization, see vSphere Virtual Machine
Administration at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
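
The pyVmomi sketch below illustrates applying a Linux customization during a clone operation
rather than using a saved specification; the domain, VM name, and DHCP networking are
placeholder choices, and Windows guests would use the Sysprep identity instead.

```python
from pyVmomi import vim

# Assumes source_vm and folder come from a connected pyVmomi session.
def clone_with_linux_customization(source_vm, folder, new_name):
    """Clone a VM and customize the Linux guest with a fixed host name and DHCP."""
    identity = vim.vm.customization.LinuxPrep(
        hostName=vim.vm.customization.FixedName(name=new_name),
        domain="example.com",                       # placeholder domain
    )
    nic = vim.vm.customization.AdapterMapping(
        adapter=vim.vm.customization.IPSettings(ip=vim.vm.customization.DhcpIpGenerator()))
    custom_spec = vim.vm.customization.Specification(
        identity=identity,
        globalIPSettings=vim.vm.customization.GlobalIPSettings(),
        nicSettingMap=[nic],
    )
    clone_spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(),
                                  customization=custom_spec, powerOn=True)
    return source_vm.Clone(folder=folder, name=new_name, spec=clone_spec)
```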

404 Module 7: Virtual Machine Management


7-17 About Instant Clones

Through instant cloning, the source (parent) VM does not lose its state because of the cloning
process. You can move to just-in-time provisioning, given the speed and state-persisting nature of
this operation.
During an instant clone operation, the source VM is stunned for a short time, less than 1 second.
While the source VM is stunned, a new writable delta disk is generated for each virtual disk, and a
checkpoint is taken and transferred to the destination VM.
The destination VM powers on by using the source’s checkpoint.
After the destination VM is fully powered on, the source VM resumes running.
Instant clone VMs are fully independent vCenter Server inventory objects. You can manage
instant clone VMs like regular VMs, without any restrictions.
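
A minimal sketch of the instant clone call follows, assuming the vSphere 6.7 or later
InstantClone_Task API and a running source VM from an existing pyVmomi session; the clone
name is a placeholder.

```python
from pyVmomi import vim

# Assumes source_vm is a running vim.VirtualMachine from a connected pyVmomi session.
def instant_clone(source_vm, new_name):
    """Create an instant clone on the same host and datastore as the source."""
    spec = vim.vm.InstantCloneSpec(
        name=new_name,
        location=vim.vm.RelocateSpec(),  # same host and datastore as the source VM
    )
    return source_vm.InstantClone_Task(spec=spec)

# task = instant_clone(source_vm, "worker-node-001")  # placeholder clone name
```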

Module 7: Virtual Machine Management 405


7-18 Use Cases for Instant Clones

Instant cloning is convenient for large-scale application deployments because it ensures memory
efficiency, and you can create many VMs on a single host.
To avoid network conflicts, you can customize the virtual hardware of the destination VM during
the instant cloning operation. For example, you can customize the MAC addresses of the virtual
NICs or the serial and parallel port configurations of the destination VM.
Starting with vSphere 7, you can customize the guest operating system for Linux VMs only. You
can customize networking settings such as IP address, DNS server, and the gateway. You can
change these settings without having to power off or restart the VM.

406 Module 7: Virtual Machine Management


7-19 Lab 16: Using VM Templates: Creating Templates and
Deploying VMs

Module 7: Virtual Machine Management 407


7-20 Review of Learner Objectives

408 Module 7: Virtual Machine Management


7-21 Lesson 2: Working with Content Libraries

Module 7: Virtual Machine Management 409


7-22 Learner Objectives

410 Module 7: Virtual Machine Management


7-23 About Content Libraries

Organizations might have multiple vCenter Server instances in data centers around the globe. On
these vCenter Server instances, organizations might have a collection of templates, ISO images,
and so on. The challenge is that all these items are independent of one another, with different
versions of these files and templates on various vCenter Server instances.
The content library is the solution to this challenge. IT can store OVF templates, ISO images, or
any other file types in a central location. The templates, images, and files can be published, and
other content libraries can subscribe to and download content. The content library keeps content
up to date by periodically synchronizing with the publisher, ensuring that the latest version is
available.

Module 7: Virtual Machine Management 411


7-24 Benefits of Content Libraries

Sharing content and ensuring that the content is kept up to date are major tasks.
For example, for a main vCenter Server instance, you create a central content library to store the
master copies of OVF templates, ISO images, and other file types. When you publish this content
library, other libraries, which might be located anywhere in the world, can subscribe and
download an exact copy of the data.
When an OVF template is added, modified, or deleted from the published catalog, the subscriber
synchronizes with the publisher, and the libraries are updated with the latest content.
Starting with vSphere 7, you can update a template while simultaneously deploying VMs from the
template. In addition, the content library keeps two copies of the VM template, the previous and
current versions. You can roll back the template to reverse changes made to the template.

412 Module 7: Virtual Machine Management


7-25 Types of Content Libraries

You can create a local library as the source for content that you want to save or share. You create
the local library on a single vCenter Server instance. You can then add or remove items to and
from the local library.
You can publish a local library, and this content library service endpoint can be accessed by other
vCenter Server instances in your virtual environment. When you publish a library, you can
configure the authentication method, which a subscribed library must use to authenticate to it.
You can create a subscribed library and populate its content by synchronizing it to a published
library. A subscribed library contains copies of the published library files or only the metadata of
the library items.
The published library can be on the same vCenter Server instance as the subscribed library, or the
subscribed library can reference a published library on a different vCenter Server instance.
You cannot add library items to a subscribed library. You can add items only to a local or
published library.

Module 7: Virtual Machine Management 413


After synchronization, both libraries contain the same items, or the subscribed library contains the
metadata for the items.



7-26 Adding VM Templates to a Content Library

VMs and vApps consist of several files, such as log files, disk files, memory files, and snapshot files, which together make up a single library item. You can create library items in a specific local library or remove items from a local library. You can also upload files to an item in a local library so that subscribed libraries can download the files to their NFS or SMB server, or to a datastore.



7-27 Deploying VMs from Templates in a Content Library



7-28 Lab 17: Using Content Libraries



7-29 Review of Learner Objectives



7-30 Lesson 3: Modifying Virtual Machines



7-31 Learner Objectives



7-32 Modifying Virtual Machine Settings

You might have to modify a VM’s configuration, for example, to add a network adapter or a
virtual disk. You can make all VM changes while the VM is powered off. Some VM hardware
changes can be made while the VM is powered on.
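As an illustration (not part of the course labs), the following minimal pyVmomi (Python) sketch hot-adds a VMXNET3 network adapter to a running VM. The vCenter Server name, credentials, VM name, and port group name are placeholder assumptions.

# Hot-add a VMXNET3 network adapter to a powered-on VM (sketch; host name,
# credentials, VM name, and port group name are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic.device = vim.vm.device.VirtualVmxnet3()
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName="VM Network")                       # standard switch port group
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, connected=True)

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic]))
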
vSphere 7.0 makes the following virtual devices available:

 Watchdog timer: Virtual device used to detect and recover from operating system problems. If
a failure occurs, the watchdog timer attempts to reset or power off the VM. This feature is
based on Microsoft specifications: Watchdog Resource Table (WDRT) and Watchdog Action
Table (WDAT).
The watchdog timer is useful with high-availability solutions such as Red Hat High Availability and Microsoft SQL Server failover clusters. This device is also useful on VMware Cloud and in hosted environments for implementing custom failover logic to reset or power off VMs.



 Precision Clock: Virtual device that presents the ESXi host's system time to the guest OS.
Precision Clock helps the guest operating system achieve clock accuracy in the 1 millisecond
range. The guest operating system uses Precision Clock time as reference time. Precision
Clock is not directly involved in guest OS time synchronization.
Precision Clock is useful when precise timekeeping is a requirement for the application, such
as for financial services applications. Precision Clock is also useful when precise time stamps
are required on events that track financial transactions.

 Virtual SGX: Virtual device that exposes Intel's SGX technology to VMs. Intel's SGX technology prevents unauthorized programs or processes from accessing certain regions in memory. Intel SGX meets the needs of the trusted computing industry.

Virtual SGX is useful for applications that must conceal proprietary algorithms and
encryption keys from unauthorized users. For example, cloud service providers cannot inspect
a client’s code and data in a virtual SGX-protected environment.



7-33 Hot-Pluggable Devices

Adding devices to or removing devices from a physical server requires that you physically interact with the server in the data center. When you use VMs, resources can be added dynamically without a disruption in service. You must shut down a VM to remove hardware, but you can reconfigure the VM without entering the data center.
You can add CPU and memory while the VM is powered on. These features, called CPU Hot Add and Memory Hot Plug, are supported only on guest operating systems that support hot-pluggable functionality. Both features are disabled by default. To use these hot-plug features, the following requirements must be satisfied (see the configuration sketch later in this section):

 You must install VMware Tools.

 The VM must use hardware version 11 or later.

 The guest operating system in the VM must support CPU and memory hot-plug features.

 The hot-plug features must be enabled in the CPU or Memory settings on the Virtual
Hardware tab.



If a VM is configured with virtual NUMA and CPU Hot Add is enabled, the VM starts without virtual NUMA and uses UMA (Uniform Memory Access) instead.
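The following minimal pyVmomi sketch (not part of the course labs) enables CPU Hot Add and Memory Hot Plug on a powered-off VM and then resizes CPU and memory while the VM runs. The vCenter Server name, credentials, VM name, and sizing values are placeholder assumptions.

# Enable CPU Hot Add and Memory Hot Plug, then hot-add resources (sketch;
# names, credentials, and sizes are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

# The VM must be powered off for this change.
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuHotAddEnabled=True,
                                          memoryHotAddEnabled=True))

# Later, while the VM is powered on, CPU and memory can be increased.
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192))
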



7-34 Dynamically Increasing Virtual Disk Size

When you increase the size of a virtual disk, the VM must not have snapshots attached.
After you increase the size of a virtual disk, you might need to increase the size of the file system
on this disk. Use the appropriate tool in the guest OS to enable the file system to use the newly
allocated disk space.
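As an illustration, the following pyVmomi sketch grows an existing virtual disk while the VM is running. The inventory names, the disk label, and the new size are placeholder assumptions, and the VM must not have snapshots.

# Grow an existing virtual disk to 100 GB (sketch; names are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 1")
disk.capacityInKB = 100 * 1024 * 1024            # new size: 100 GB

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
# Afterward, grow the partition and file system inside the guest OS.
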



7-35 Inflating Thin-Provisioned Disks

When you inflate a thin-provisioned disk, the inflated virtual disk occupies the entire datastore
space originally provisioned to it. Inflating a thin-provisioned disk converts a thin disk to a virtual
disk in thick-provisioned format.
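You can inflate a thin-provisioned disk from the datastore browser. The following pyVmomi sketch shows one way to script the same operation through the virtual disk manager; the datastore path and datacenter name are placeholder assumptions, and the VM that owns the disk is assumed to be powered off.

# Inflate a thin-provisioned VMDK to its full provisioned size (sketch;
# the datastore path and datacenter name are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
dc = next(d for d in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datacenter], True).view if d.name == "SA-Datacenter")

content.virtualDiskManager.InflateVirtualDisk_Task(
    name="[Shared-VMFS] Win10-01/Win10-01.vmdk", datacenter=dc)
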



7-36 VM Options: General Settings

Under General Options, you can view the location and name of the configuration file (with the
.vmx extension) and the location of the VM’s directory.
You can select the text for the configuration file and the working location to copy and paste them
into a document. However, only the display name and the guest operating system type can be
modified.
Changing the display name does not rename the VM files or the directory that the VM is stored in. When a VM is created, the filenames and the directory name associated with the VM are based on its display name, but changing the display name later does not modify them.



7-37 VM Options: VMware Tools Settings

When you use the VMware Tools controls to customize the power buttons on the VM, the VM
must be powered off.
You can select the Check and upgrade VMware Tools before each power on check box to
check for a newer version of VMware Tools. If a newer version is found, VMware Tools is
upgraded when the VM is power cycled.
When you select the Synchronize guest time with host check box, the guest operating system’s
clock synchronizes with the host.
For information about timekeeping best practices for the guest operating systems that you use, see VMware knowledge base articles 1318 at http://kb.vmware.com/kb/1318 and 1006427 at http://kb.vmware.com/kb/1006427.
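These VMware Tools options map to the VM's tools configuration and can also be set programmatically. The following pyVmomi sketch (placeholder names and credentials) selects the upgrade-at-power-cycle policy and enables guest time synchronization with the host.

# Configure VMware Tools options: upgrade at power cycle and time sync
# (sketch; names and credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

tools = vim.vm.ToolsConfigInfo(
    toolsUpgradePolicy="upgradeAtPowerCycle",    # check and upgrade at each power cycle
    syncTimeWithHost=True)                       # synchronize guest time with host
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(tools=tools))
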



7-38 VM Options: VM Boot Settings

When you build a VM and select a guest operating system, BIOS or EFI is selected automatically,
depending on the firmware supported by the operating system. Mac OS X Server guest operating
systems support only Extensible Firmware Interface (EFI). If the operating system supports BIOS
and EFI, you can change the boot option as needed. However, you must change the option before
installing the guest OS.
UEFI Secure Boot is a security standard that helps ensure that a machine boots using only software that is trusted by its manufacturer. In an OS that supports UEFI Secure Boot, each piece of boot software is signed, including the bootloader, the operating system kernel, and operating system drivers. If you enable Secure Boot for a VM, you can load only signed drivers into that VM.
With the Boot Delay value, you can set a delay between the time when a VM is powered on and the time when the guest OS starts to boot. A delayed boot can help stagger VM startups when several VMs are powered on.



You can also force the VM to enter the BIOS or EFI setup screen, for example, if you want to change the boot order so that the VM starts from a CD/DVD. The next time the VM powers on, it goes straight into the firmware setup. A forced entry into the firmware setup is much easier than powering on the VM, opening a console, and quickly trying to press the F2 key.
With the Failed Boot Recovery setting, you can configure the VM to retry booting after 10
seconds (the default) if the VM fails to find a boot device.
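These boot settings can also be applied through the VM's configuration. The following pyVmomi sketch (placeholder names; a minimal illustration rather than a recommended configuration) selects EFI firmware with Secure Boot, sets a 5-second boot delay, forces entry into the firmware setup at the next power on, and enables boot retry after 10 seconds.

# Set firmware and boot options (sketch; names are placeholders; firmware
# can be changed only before the guest OS is installed).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

spec = vim.vm.ConfigSpec(
    firmware="efi",
    bootOptions=vim.vm.BootOptions(
        efiSecureBootEnabled=True,
        bootDelay=5000,                  # milliseconds
        enterBIOSSetup=True,             # go straight to firmware setup next power on
        bootRetryEnabled=True,
        bootRetryDelay=10000))           # retry after 10 seconds if no boot device
vm.ReconfigVM_Task(spec=spec)
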



7-39 Removing VMs

When a VM is removed from the inventory, its files remain at the same storage location, and you can re-register the VM later by using the datastore browser.
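The following pyVmomi sketch illustrates the difference between removing a VM from the inventory and deleting it, and shows how to re-register the VM from its configuration file. The inventory names and the VMX path are placeholder assumptions.

# Remove a VM from the inventory (files stay on the datastore) and
# re-register it later (sketch; all names and the VMX path are placeholders).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView
vm = next(v for v in view(content.rootFolder, [vim.VirtualMachine], True).view
          if v.name == "Win10-01")
vm.UnregisterVM()      # remove from inventory; vm.Destroy_Task() would delete the files

# Re-register the VM from its configuration file.
dc = next(d for d in view(content.rootFolder, [vim.Datacenter], True).view
          if d.name == "SA-Datacenter")
cluster = next(c for c in view(dc.hostFolder, [vim.ClusterComputeResource], True).view)
dc.vmFolder.RegisterVM_Task("[Shared-VMFS] Win10-01/Win10-01.vmx",
                            name="Win10-01", asTemplate=False,
                            pool=cluster.resourcePool)
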



7-40 Lab 18: Modifying Virtual Machines



7-41 Review of Learner Objectives



7-42 Lesson 4: Migrating VMs with vSphere vMotion



7-43 Learner Objectives



7-44 About VM Migration

A deciding factor for using a particular migration technique is the purpose of performing the
migration. For example, you might need to stop a host for maintenance but keep the VMs running.
You use vSphere vMotion to migrate the VMs instead of performing a cold or suspended VM
migration. If you must move a VM’s files to another datastore to better balance the disk load or
transition to another storage array, you use vSphere Storage vMotion.
Some migration techniques, such as vSphere vMotion migration, have special hardware
requirements that must be met to function properly. Other techniques, such as a cold migration, do
not have special hardware requirements to function properly.
You can perform the different types of migration on either powered-off (cold) or powered-on (hot)
VMs.



7-45 About vSphere vMotion

Using vSphere vMotion, you can migrate running VMs from one ESXi host to another ESXi host
with no disruption or downtime. With vSphere vMotion, vSphere DRS can migrate running VMs
from one host to another to ensure that the VMs have the resources that they require.
With vSphere vMotion, the entire state of the VM is moved from one host to another, but the data
storage remains in the same datastore.
The state information includes the current memory content and all the information that defines and
identifies the VM. The memory content includes transaction data and whatever bits of the
operating system and applications are in memory. The definition and identification information
stored in the state includes all the data that maps to the VM hardware elements, such as the BIOS,
devices, CPU, and MAC addresses for the Ethernet cards.



7-46 Enabling vSphere vMotion



7-47 vSphere vMotion Migration Workflow

To play the animation, go to https://vmware.bravais.com/s/FbzaDb6owpSMKyKc940F.


A vSphere vMotion migration consists of the following steps:
1. A shadow VM is created on the destination host.
2. The VM’s memory state is copied over the vSphere vMotion network from the source host to the target host. Users continue to access the VM and, potentially, update pages in memory. A list of modified pages in memory is kept in a memory bitmap on the source host.
3. After the first pass of memory state copy completes, another pass of memory copy is
performed to copy any pages that changed during the last iteration. This iterative memory
copying continues until no changed pages remain.
4. After most of the VM’s memory is copied from the source host to the target host, the VM is
quiesced. No additional activity occurs on the VM. In the quiesce period, vSphere vMotion
transfers the VM device state and memory bitmap to the destination host.



5. Immediately after the VM is quiesced on the source host, the VM is initialized and starts
running on the target host. A Gratuitous Address Resolution Protocol (GARP) request notifies
the subnet that VM A’s MAC address is now on a new switch port.
6. Users access the VM on the target host instead of the source host.
7. The memory pages that the VM was using on the source host are marked as free.
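The following minimal pyVmomi sketch (not part of the course labs) starts such a migration by moving a running VM to another host while its files remain on shared storage. The vCenter Server, VM, and destination host names are placeholder assumptions.

# Hot-migrate a running VM to another ESXi host with vSphere vMotion
# (compute only; the files stay on the shared datastore). Names are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView
vm = next(v for v in view(content.rootFolder, [vim.VirtualMachine], True).view
          if v.name == "Win10-01")
dest = next(h for h in view(content.rootFolder, [vim.HostSystem], True).view
            if h.name == "sa-esxi-02.lab.local")

task = vm.MigrateVM_Task(host=dest,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)
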



7-48 VM Requirements for vSphere vMotion Migration

For the complete list of vSphere vMotion migration requirements, see vCenter Server and Host Management at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-3B5AF2B1-C534-4426-B97A-D14019A8010F.html.



7-49 Host Requirements for vSphere vMotion Migration (1)

You cannot migrate a VM from a host that is registered to vCenter Server with an IPv4 address to
a host that is registered with an IPv6 address.
Copying a swap file to a new location can result in slower migrations. If the destination host
cannot access the specified swap file location, it stores the swap file with the VM configuration
file.



7-50 Host Requirements for vSphere vMotion Migration (2)

Using 1 GbE network adapters for the vSphere vMotion network might result in migration failures if you migrate VMs with large vGPU profiles.



7-51 Checking vSphere vMotion Errors

If validation succeeds, you can continue in the wizard. If validation does not succeed, a list of vSphere vMotion errors and warnings is displayed in the Compatibility pane.
If only warnings are reported, you can still perform a vSphere vMotion migration. If errors are reported, you cannot continue. You must exit the wizard and fix all errors before retrying the migration.
If a failure occurs during the vSphere vMotion migration, the VM is not migrated and continues to
run on the source host.



7-52 Encrypted vSphere vMotion

Encrypted vSphere vMotion protects the confidentiality, integrity, and authenticity of data that is transferred with vSphere vMotion. Encrypted vSphere vMotion supports all variants of vSphere
vMotion, including migration across vCenter Server systems. Encrypted vSphere Storage vMotion
is not supported.
You cannot turn off encrypted vSphere vMotion for encrypted VMs.
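For VMs that are not encrypted, you can set the encrypted vMotion policy per VM (Disabled, Opportunistic, or Required). The following pyVmomi sketch (placeholder names) sets the policy to Required; the property name and its values come from the vSphere API and are worth verifying against your vSphere version.

# Set the per-VM encrypted vMotion policy (sketch; names are placeholders;
# "opportunistic" is the default for unencrypted VMs).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "Win10-01")

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(migrateEncryption="required"))
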



7-53 Cross vCenter Migrations



7-54 Cross vCenter Migration Requirements

You can perform cross vCenter migrations between vCenter Server instances of different versions.
For information on the supported versions, see VMware knowledge base article 2106952 at
http://kb.vmware.com/kb/2106952.



7-55 Network Checks for Cross vCenter Migrations



7-56 VMkernel Networking Layer and TCP/IP Stacks

Consider the following key points about TCP/IP stacks at the VMkernel level:

 Default TCP/IP stack: Provides networking support for the management traffic between
vCenter Server and ESXi hosts and for system traffic such as vSphere vMotion, IP storage,
and vSphere Fault Tolerance.

 vSphere vMotion TCP/IP stack: Supports the traffic for hot migrations of VMs.

 Provisioning TCP/IP stack: Supports the traffic for VM cold migration, cloning, and snapshot creation. You can use the provisioning TCP/IP stack to handle Network File Copy (NFC) traffic during long-distance vSphere vMotion migration. VMkernel adapters configured with the provisioning TCP/IP stack handle the traffic from cloning the virtual disks of the migrated VMs in long-distance vSphere vMotion.
By using the provisioning TCP/IP stack, you can isolate the traffic from the cloning
operations on a separate gateway. After you configure a VMkernel adapter with the



provisioning TCP/IP stack, all adapters on the default TCP/IP stack are disabled for the
provisioning traffic.

 Custom TCP/IP stacks: You can create a custom TCP/IP stack on a host to forward
networking traffic through a custom application. Open an SSH connection to the host and run
the vSphere CLI command:

esxcli network ip netstack add -N="stack_name"

Take appropriate security measures to prevent unauthorized access to the management and system
traffic in your vSphere environment. For example, isolate the vSphere vMotion traffic in a
separate network that includes only the ESXi hosts that participate in the migration. Isolate the
management traffic in a network that only network and security administrators can access.
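As an illustration, the following pyVmomi sketch creates a VMkernel adapter on the dedicated vSphere vMotion TCP/IP stack of a standard switch port group. The host name, port group, IP settings, and the stack key value "vmotion" are assumptions to verify against your environment.

# Create a VMkernel adapter on the vMotion TCP/IP stack (sketch; the host
# name, port group, IP values, and stack key are placeholders to verify).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = next(h for h in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view if h.name == "sa-esxi-01.lab.local")

nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="172.20.12.51",
                         subnetMask="255.255.255.0"),
    netStackInstanceKey="vmotion")       # place the adapter on the vMotion stack
host.configManager.networkSystem.AddVirtualNic("pg-vMotion", nic_spec)
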

