Memory Tiering over NVMe

Tech Preview
vSphere 8.0.3 Technical Guide

Broadcom Proprietary and Confidential. © 2024 Broadcom. All rights reserved.



Broadcom Limited
Web: www.broadcom.com
Corporate Headquarters: San Jose, CA
© 2024 by Broadcom. All rights reserved.

Revision History

Revision   Date           Change Description
1.0        June 7, 2024   Initial revision for evaluating vSphere 8.0.3.

Contents
Revision History
Contents
Introduction
Important Prerequisites
Workload Recommendations
VM Profile Type Recommendations
Guest OS and Workload Data Recommendations
Virtual Machine Memory Settings Recommendations
NVMe Device Recommendations
Cluster and Host Configuration Recommendations
Configuring the Tech Preview
Identifying NVMe devices to use as tiered memory (vSphere Client)
Configuring NVMe devices
Configuring NVMe devices (vSphere PowerCLI)
Configuring NVMe devices (vSphere ESXCLI)
Configuring Memory Tiering for each host
Configuring Memory Tiering (vSphere Client)
Configuring Memory Tiering (vSphere PowerCLI)
Configuring Memory Tiering (vSphere ESXCLI)
Checking Memory Tiering has been correctly configured on each host (vSphere Client)
Configuring Optional Features
Enabling the Use of Large Pages
Disabling the Use of Large Pages
Configuring the DRAM to NVMe Ratio
Configuring the DRAM to NVMe Ratio (vSphere Client)
Configuring the DRAM to NVMe Ratio (vSphere ESXCLI)
Configuring the DRAM to NVMe Ratio (vSphere PowerCLI)
Disabling the Tech Preview
Troubleshooting

Introduction
vSphere 8.0 Update 3 introduces Memory Tiering as a tech preview. This capability allows you to
use NVMe devices that you add locally to an ESXi host as tiered memory. Memory Tiering over
NVMe optimizes performance by intelligently directing VM memory allocations to either NVMe
devices or faster dynamic random access memory (DRAM) in the host, placing hot memory
pages in DRAM and cold pages on NVMe. This allows customers to increase their memory
footprint and workload capacity while reducing the overall total cost of ownership (TCO).

Important Prerequisites
For vSphere 8.0 Update 3, Memory Tiering is released as a tech preview so that customers can
evaluate the feature. The basic functionality of Memory Tiering is supported, with limitations on
more advanced functionality.

The tech preview supports using PCIe-based Flash NVMe devices as tiered memory. Memory
Tiering must be configured on each host in the cluster to benefit from using it.

Memory Tiering is recommended for use by customers who are running specific workload types
in test / lab environments and not for use in production environments.

Issues related to Memory Tiering, including any performance issues, can be reported to the
VMware by Broadcom support teams using the standard Support Request process.

This document only covers the recommendations and behavior when using Memory Tiering.

When using vSphere 8.0 Update 3 without Memory Tiering, please refer to the official release
notes for changes from previous vSphere releases.

Workload Recommendations
For the tech preview, it is recommended that only Virtual Desktop Infrastructure (VDI) or similar
workloads are run on the cluster and hosts. It is not recommended to run workloads that are
sensitive to memory latency and memory performance, such as database workloads, since
Memory Tiering has not yet been optimized for those workloads.

VM Profile Type Recommendations

Memory Tiering does not require any changes in how VMs are configured. In the tech preview,
all VMs powered on a host with Memory Tiering over NVMe enabled will use it by default.

Large Pages are disabled by default


For the tech preview, when Memory Tiering is enabled, VMs will be configured with Large Pages
disabled by default. This behavior is only for the tech preview since performance has only been
optimized for VMs configured to use 4K Pages (i.e. small pages).

VMs can be configured to use 2 MB Large Pages when Memory Tiering is enabled. This is not
recommended, but can be done for evaluation. It should be noted that performance has not
been optimized yet for these types of VMs. See the Configuring Optional Features section for
more information.

Configuring VMs to use any other Large Pages size is not supported and will result in the VM
failing to power on.

Unsupported VMs
For the tech preview, all VM profile types are supported except for the ones noted below.
Unsupported VM profile types will fail to power on when placed on a host configured to use
Memory Tiering.

The unsupported VM profile types are:

● VMs that require pinned memory or preallocated memory, including:
   ○ Latency Sensitive VMs
   ○ VMs configured with PCI Passthrough devices
   ○ VMs configured with Distributed Services Engine (DSE) Universal Pass Through
     (UPTv2)
   ○ VMs configured with vSphere Fault Tolerance (FT)
   ○ VMs configured to use Virtualized Quick Assist Technology (vQAT)

● VMs configured to use 1 GB Large Pages

● VMs configured to use Nested Virtualization, including:
   ○ VMs configured to use Virtualization Based Security (VBS)

● VMs configured to use Virtual Software Guard Extensions (SGX)
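
For reference, a minimal PowerCLI sketch that flags one unsupported profile type, Latency
Sensitive VMs, in an evaluation cluster. The cluster name "MemTier-Eval" is a hypothetical
example, and the property path is an assumption based on the public vSphere API
(VirtualMachineConfigInfo.latencySensitivity); verify it against your environment:

# Hypothetical cluster name; latency sensitivity is read through the VM's
# vSphere API view (ExtensionData).
Get-Cluster "MemTier-Eval" | Get-VM | Where-Object {
    $_.ExtensionData.Config.LatencySensitivity.Level -eq "high"
} | Select-Object Name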

Networking Adapters
Under certain circumstances, using the vmxnet3 networking adapter can result in slow
networking performance that may not resolve itself. It is recommended to use either the
e1000 or e1000e networking adapter when evaluating this tech preview.
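
A hedged PowerCLI sketch for swapping a VM's adapter type; the VM name "vdi-test-01" is a
hypothetical example, and the VM should be powered off before changing the adapter type:

# Replace any vmxnet3 adapters on the (powered-off) VM with e1000e.
Get-VM -Name "vdi-test-01" | Get-NetworkAdapter |
    Where-Object { $_.Type -eq "Vmxnet3" } |
    Set-NetworkAdapter -Type e1000e -Confirm:$false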

Distributed Resource Scheduler (DRS)


In a DRS managed cluster, DRS is currently unaware of the limitations of the tech preview and
might schedule migrations of unsupported VMs to hosts configured to use Memory Tiering.
That will lead to repeated vMotion failures since those hosts will not allow unsupported VM
profile types to be powered on.

For the tech preview, it is not recommended to use DRS for managing clusters containing hosts
configured to use Memory Tiering.
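
If DRS must remain enabled on the evaluation cluster, a minimal PowerCLI sketch to lower the
DRS automation level instead; the cluster name "MemTier-Eval" is a hypothetical example:

# Stop DRS from migrating VMs automatically (recommendations only).
Set-Cluster -Cluster "MemTier-Eval" -DrsAutomationLevel Manual -Confirm:$false
# Alternatively, disable DRS for the cluster entirely:
# Set-Cluster -Cluster "MemTier-Eval" -DrsEnabled:$false -Confirm:$false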

Guest OS and Workload Data Recommendations


Memory Tiering stores Guest OS and workload data on the configured NVMe device when the
NVMe device is used as tiered memory. In the tech preview, NVMe device encryption is not
supported. When evaluating the tech preview, Memory Tiering is not recommended for
applications with security requirements governing where Guest OS and workload in-memory
data is stored.

Virtual Machine Memory Settings Recommendations


For the tech preview, when configuring VMs with memory reservations, the maximum reservable
capacity for all VMs is the Total Reservation Capacity as displayed in the vSphere Client under
Memory Reservation Details (in the Cluster view, see Monitor, then Resource Allocation, then
Memory). This capacity only reflects the Total Reservation Capacity of DRAM and does not
include NVMe-based tiered memory.

In the tech preview, note that reserving memory is only supported using DRAM.

To see the benefits of Memory Tiering, create VMs such that the total configured VM memory,
together with other memory used on the host, exceeds the amount of DRAM available. NVMe
tiered memory starts being used once the DRAM capacity has been exhausted.

It is not necessary to fully populate all the DIMM slots in the host.
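
A minimal PowerCLI sketch for this sizing check, assuming a connection to the host as in the
configuration sections later in this document; the host address is a placeholder:

# Compare the host's physical memory with the summed configured memory of
# its VMs; NVMe tiered memory is used once DRAM capacity is exhausted.
$vmHost = Get-VMHost "<host ip address>"
$configuredGB = ($vmHost | Get-VM | Measure-Object -Property MemoryGB -Sum).Sum
"{0}: {1:N0} GB host memory, {2:N0} GB configured VM memory" -f `
    $vmHost.Name, $vmHost.MemoryTotalGB, $configuredGB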

NVMe Device Recommendations


For the tech preview, the following criteria for NVMe devices should be followed.

● An NVMe device must have performance and endurance characteristics comparable
to Enterprise-Class SLC SSD devices. The VMware Compatibility Guide for vSAN can
be consulted for compatible devices using the following filters:
   ○ Search For: SSD
   ○ Device Type: NVMe
   ○ Performance Class: Class F: 100,000-349,999 Writes Per Second specifications
   ○ Endurance Class: Class D >= 7300 TBW specifications
   ○ Equivalent specifications to: Dell Ent NVMe P5600 MU U.2 1.6TB

● Only 1 PCIe-based NVMe device per host should be configured as tiered memory. Only
PCIe-based Flash NVMe devices are supported.

● An NVMe device must be installed in the host in a drive bay or PCIe slot (i.e. locally,
directly). For the best performance, it is recommended to consult the host’s hardware
manuals to choose a drive bay or PCIe slot that is not shared with other PCIe devices.

● NVMe devices over fabric or Ethernet are not supported due to the longer latency times
to access the devices and the potential loss of fabric or network connectivity. USB
connected NVMe devices are not supported.

● An NVMe device is configured as tiered memory by using ESXCLI or PowerCLI to
create a Tier Partition on the device. The NVMe device is not permitted to have any
other existing partitions.

● The maximum partition size supported for the Tier Partition is 4 TB. Using an NVMe
device with a capacity greater than 4 TB will result in only 4 TB being available as
tiered memory. For those devices, the remaining capacity is often used by the device
for endurance purposes.

● An NVMe device to be configured as tiered memory cannot be shared with any other
uses, including VMFS or vSAN.

● An NVMe device to be configured as tiered memory should not be configured to use
RAID. This will ensure the best performance.

Cluster and Host Configuration Recommendations


For the tech preview, it is recommended that:

● The cluster and all hosts in the cluster are using vCenter 8.0 Update 3 and ESXi 8.0
Update 3. A cluster with hosts using different versions of vSphere or ESXi is not
supported.

● All hosts in the cluster should be configured to use Memory Tiering. A cluster with a mix
of hosts with Memory Tiering configured and hosts without Memory Tiering configured is
not supported.

● The cluster should contain no more than 4 hosts when first evaluating the tech preview.
This will aid in troubleshooting any issues with configuration or operation. Once the
cluster is operating normally, additional hosts can be added to the cluster.

● Ensure that only VMs with supported VM Profiles will be running on the Cluster.

● The Customer Experience Improvement Program (CEIP, i.e. telemetry) should be
enabled on the cluster and hosts, since this enables VMware to use telemetry data to
improve Memory Tiering in future releases. See this VMware Doc for more details.

● vSAN should not be configured for the cluster. For the tech preview, compatibility issues
may be seen when using both vSAN and Memory Tiering in the same cluster.

● If Intel Optane Persistent Memory® modules are installed in any hosts, configure the
BIOS on the hosts to disable the DIMM slots containing Intel Optane Persistent
Memory® modules or physically remove the Intel Optane Persistent Memory® modules
from the host.

○ Refer to the maintenance guides provided by the server vendor for the host.
○ Note that the recommended ratio of DRAM to Intel Optane Persistent
Memory® was 1:4.
○ Disabling Intel Optane Persistent Memory® could result in a lower DRAM
capacity.
○ It is recommended to adjust workloads for that or to add more DRAM to the
server when evaluating Memory Tiering.

● vMotion should not be configured to migrate VMs between a cluster with Memory Tiering
configured and a cluster without Memory Tiering configured.
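
As a quick consistency check for the first two recommendations, a hedged PowerCLI sketch;
the cluster name "MemTier-Eval" is a hypothetical example:

# Verify every host in the cluster reports the same ESXi version and build.
Get-Cluster "MemTier-Eval" | Get-VMHost |
    Select-Object Name, Version, Build, ConnectionState | Format-Table -AutoSize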

Configuring the Tech Preview


Before proceeding with this section, please review the Cluster and Host Configuration
Recommendations section in this document.

Enabling Memory Tiering for a cluster and hosts requires the following steps.

1. Identifying an NVMe device for each host to use as tiered memory.
2. Configuring an NVMe device on each host to be used as tiered memory.
3. Configuring each host to use Memory Tiering.
4. Rebooting each host for Memory Tiering to take effect.
5. Checking Memory Tiering has been correctly configured on each host.

These workflows can be done using the vSphere ESXCLI, vSphere PowerCLI scripts, or the
vSphere Client (limited functionality available).

These workflows only need to be done once for each host that will use an NVMe device as
tiered memory.

Identifying NVMe devices to use as tiered memory (vSphere Client)
Before proceeding with this section, please review the NVMe Device Recommendations section
in this document.

To identify NVMe devices to be used as tiered memory for each host, you will need to:

1. For each host, identify the exact NVMe device and its Location:
   a. Navigate to Configure > Storage > Storage Devices.
   b. Select the NVMe device to be used for Memory Tiering.
   c. Note its Model and Location information.
   d. Repeat the above steps for each host.

Configuring NVMe devices


Before proceeding with this section, please review the NVMe Device Recommendations section
in this document.

To configure specific NVMe devices as tiered memory for each host, you will need to:

1. Ensure the NVMe device is installed in the host (i.e. locally, directly)

2. As noted in the previous section, identify the location and model of the NVMe device that
will be configured as tiered memory.

3. Put the host into maintenance mode. It is important that no changes are made to
partitions on NVMe devices used as tiered memory while VMs are operational.

4. Ensure all data has been migrated off the NVMe device to avoid any data loss.

5. Delete all existing partitions on the NVMe device.

6. Use vSphere PowerCLI or ESXCLI on the host to create a single Tier Partition on the
NVMe device and verify the Tier Partition was created.

This step only needs to be done once for each host that will use an NVMe device as
tiered memory.

For the tech preview, this step can only be done either using the vSphere ESXCLI
commands or vSphere PowerCLI scripts. There is no support to do this step from the
vSphere Client. Future vSphere releases will improve this process as part of configuring
the NVMe devices from the vSphere Client.

Configuring NVMe devices (vSphere PowerCLI)


Using the vSphere PowerCLI Interface to configure specific NVMe devices to be used as tiered
memory for each host requires the following steps:

1. Connect to the host.

$hostname = "<host ip address>"

Connect-VIServer $hostname
$vmHost = Get-VMHost $hostname
$esxcli = Get-EsxCli -VMHost $vmHost -V2

2. Put the host into maintenance mode. It is important that no changes are made to
partitions on NVMe devices used as tiered memory while VMs are operational.

$arguments = $esxcli.system.maintenanceMode.set.CreateArgs()
$arguments.enable = "True"
$esxcli.system.maintenanceMode.set.invoke($arguments)

3. List the NVMe devices.

$esxcli.storage.core.adapter.device.list.invoke()


4. Choose the NVMe device to use as tiered memory.

Example:
$devicename = "/vmfs/devices/disks/t10.NVMe____Dell_Ent_NVMe_P5600_MU_U.2_1.6TB________00050283D0E4D25C"

5. List any existing partitions on the NVMe device using the partedUtil command-line
utility - see KB Article 1036609.

6. Delete any existing partitions on the NVMe device using partedUtil - see KB Article
1036609. This can also be performed from the vSphere Client - see the VMware Doc
Erase ESXi Storage Devices.

7. Create the tier partition on the NVMe device.

$arguments = $esxcli.system.tierdevice.create.CreateArgs()
$arguments.nvmedevice = $devicename
$esxcli.system.tierdevice.create.invoke($arguments)

8. Check if the NVMe device has a tier partition created.


$esxcli.system.tierdevice.list.invoke()

Configuring NVMe devices (vSphere ESXCLI)


Using the vSphere ESXCLI Interface to configure specific NVMe devices to be used as tiered
memory for each host requires the following steps:

1. Connect to the host - see Getting Started with ESXCLI 8.0.

2. Put the host into maintenance mode. It is important that no changes are made to
partitions on NVMe devices used as tiered memory while VMs are operational.

esxcli system maintenanceMode set --enable true

3. List the NVMe devices on the host.

esxcli storage core adapter device list

4. Choose the NVMe device to use as tiered memory and note the NVMe device path (i.e.
/vmfs/devices/disks/<nvme device name>).

5. List any existing partitions on the NVMe device using the partedUtil command-line
utility - see KB Article 1036609.

6. Delete each existing partition on the NVMe device using partedUtil - see KB Article
1036609. This can also be performed from the vSphere Client - see the VMware Doc
Erase ESXi Storage Devices.

7. Create the tier partition on the NVMe device.

esxcli system tierdevice create -d /vmfs/devices/disks/<nvme device name>

8. Check if the NVMe device has a tier partition created.

esxcli system tierdevice list


Configuring Memory Tiering for each host

Configuring Memory Tiering (vSphere Client)


Using the vSphere Client to configure Memory Tiering for each host requires the following steps:

1. Login to the vSphere Client for the cluster.

2. Configure the cluster to be managed by a single image and to use vSphere
Configuration Profiles. Refer to the VMware Doc Using vSphere Configuration Profiles
to Manage Host Configuration at a Cluster Level.

These other references may also be helpful:

● Configuration Management using vSphere Configuration Profiles
● Transition to Using vSphere Configuration Profiles
● Create a Cluster That Uses a Single Image by Importing an Image from a Host

3. Navigate to Configuration > Draft > Create Draft > vmkernel > options > COMMON
SETTINGS > memory_tiering and change the setting to TRUE.

4. From the Configuration pane, verify Memory Tiering is configured in the configuration.

5. Run the pre-check on Cluster/Host and Apply the Changes.

This step will perform Pre-Check and Remediate operations to validate and apply the
configuration changes to the cluster and hosts. The results of the Pre-Check and
Remediate steps will be briefly displayed.

Configuring Memory Tiering (vSphere PowerCLI)


Using the vSphere PowerCLI scripts to configure Memory Tiering for each host requires the
following steps:

1. Connect to the host

$hostname = "<host ip address>"

Connect-VIServer $hostname
$vmHost = Get-VMHost $hostname
$esxcli = Get-EsxCli -VMHost $vmHost -V2

2. Enable MemoryTiering in the ESXi boot options

$arguments = $esxcli.system.settings.kernel.set.CreateArgs()
$arguments.setting = "MemoryTiering"
$arguments.value = "TRUE"
$esxcli.system.settings.kernel.set.invoke($arguments)

3. Restart the host

Restart-VMHost $hostname -RunAsync -Confirm:$false -Force

4. Reconnect to the host and exit the maintenance mode

Connect-VIServer $hostname
# Refresh the EsxCli object after the reboot; the pre-reboot object is stale.
$vmHost = Get-VMHost $hostname
$esxcli = Get-EsxCli -VMHost $vmHost -V2
$arguments = $esxcli.system.maintenanceMode.set.CreateArgs()
$arguments.enable = "False"
$esxcli.system.maintenanceMode.set.invoke($arguments)

Configuring Memory Tiering (vSphere ESXCLI)


Using the vSphere ESXCLI to configure Memory Tiering for each host requires the following
steps:

1. Connect to the host

2. Enable MemoryTiering as an ESXi boot option.

esxcli system settings kernel set -s MemoryTiering -v TRUE

3. Restart the host


reboot

4. Reconnect to the host and exit maintenance mode.

esxcli system maintenanceMode set --enable false

Checking Memory Tiering has been correctly configured on each host (vSphere Client)

Using the vSphere Client to check that Memory Tiering has been configured and is enabled for
each host requires the following steps:

1. From the Hosts pane, verify Memory Tiering shows as Software

2. From the Configure tab, under Hardware Overview, under Memory, Tier 0 will indicate
the amount of DRAM in the system, Tier 1 will indicate the amount of NVMe storage
being used as tiered memory, and Total will show the total memory for the host as a sum
of Tier 0 and Tier 1.

System refers to the amount of memory used by ESXi and other vSphere-related
services; that memory is not available for VMs to use. To determine the memory
capacity available for VMs, use the formula VM Capacity = Total - System.
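
The same check can be scripted; a minimal PowerCLI sketch using the $esxcli object from the
earlier sections. The output field names are assumptions based on the esxcli system settings
kernel list output and may differ in your environment:

# Confirm the MemoryTiering VMkernel option is configured and active.
$esxcli.system.settings.kernel.list.invoke() |
    Where-Object { $_.Name -eq "MemoryTiering" } |
    Select-Object Name, Configured, Runtime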

Configuring Optional Features


This section contains information on how to configure various optional features of Memory
Tiering over NVMe. These features are only meant to be evaluated once the cluster has been
configured following the steps in the previous section and confirmed as operating normally.

Enabling the Use of Large Pages


As noted in the VM Profile Type Recommendations section, VMs are configured with Large
Pages disabled by default. This behavior is only for the tech preview since performance has
only been optimized for VMs configured to use 4K Pages (i.e. small pages).

Large Pages can be enabled to evaluate how workloads behave, but it is important not
to over-weight the performance results since performance with Large Pages is still
being tuned.

To enable using Large Pages (i.e. 2 MB Pages) for specific VMs, you will need to:

1. Login to the vSphere Client for the cluster

2. Power off the VM that will be changed to use Large Pages

3. Navigate to Edit VM settings > VM Options > Advanced > Edit Configuration Parameters
> Add Parameter

4. Add or update the key value pair:

Key                                     Value
monitor_control.disable_mmu_largepages  FALSE

5. Save the settings.

6. Power on the VM

7. Repeat this process for any VMs that will be changed to use Large Pages

Note that the mixing of VMs using Large Pages and using Small Pages (4K) is supported.
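
The same per-VM option can also be set from PowerCLI; a hedged sketch, with a placeholder
VM name (the VM must be powered off first):

# Enable 2 MB Large Pages for one VM by setting the advanced option.
# -Force overwrites the option if it already exists.
$vm = Get-VM -Name "vdi-test-01"
New-AdvancedSetting -Entity $vm -Name "monitor_control.disable_mmu_largepages" `
    -Value "FALSE" -Confirm:$false -Force

Setting the value back to TRUE restores the small-page default, as described in the next
section.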

Disabling the Use of Large Pages


The following procedure is only needed if you had previously configured a VM to use Large
Pages.

To restore the default behavior of using Small Pages (i.e. 4K Pages) for specific VMs, you will
need to:

1. Login to the vSphere Client for the cluster.

2. Power off the VM that will be changed to use Small Pages

3. Navigate to Edit VM settings > VM Options > Advanced > Edit Configuration Parameters
> Add Parameter

4. Add or update the key value pair:

Key                                     Value
monitor_control.disable_mmu_largepages  TRUE

5. Save the settings

6. Power on the VM

7. Repeat this process for any VMs that will be changed to use Small Pages

Note that the mixing of VMs using Large Pages and using Small Pages (4K) is supported.

Configuring the DRAM to NVMe Ratio


As noted in the NVMe Device Recommendations section, by default, hosts are configured to
use a DRAM to NVMe ratio of 4:1. This can be configured per host to evaluate performance
when using different ratios.

The host advanced setting for Mem.TierNvmePct sets the amount of NVMe to be used as
tiered memory using a percentage equivalent of the total amount of DRAM. A host reboot is
required for any changes to this setting to take effect.

For example, setting the value to 25 configures an amount of NVMe tiered memory equivalent
to 25% of the total amount of DRAM, i.e. a DRAM to NVMe ratio of 4:1. The table below shows
the effect of common values on a host with 1 TB of DRAM:

Mem.TierNvmePct   DRAM to NVMe ratio   NVMe used as tiered memory
25                4:1                  256 GB
50                2:1                  512 GB
100               1:1                  1 TB

It is recommended that the amount of NVMe configured as tiered memory does not exceed the
total amount of DRAM.

Configuring the DRAM to NVMe Ratio (vSphere Client)

Using the vSphere Client for the cluster to configure the DRAM to NVMe ratio requires the
following steps:

1. Login to the vSphere Client for the cluster

2. Navigate to Host > Manage > System > Advanced settings

3. Filter or find the option Mem.TierNvmePct

4. Edit the option Mem.TierNvmePct and set it to a value between 1 and 400. This
value is the percentage of the total DRAM capacity to be made available as NVMe
tiered memory. The default value is 25.

5. Save the settings

6. Reboot the host

7. Repeat these steps for each host as needed

Configuring the DRAM to NVMe Ratio (vSphere ESXCLI)

Using the vSphere ESXCLI to configure the DRAM to NVMe ratio requires the following steps:

1. Connect to the host - see Getting Started with ESXCLI 8.0

2. Set the advanced configuration option /Mem/TierNvmePct to a value between 1 and
400. This value is the percentage of the total DRAM capacity to be made available as
NVMe tiered memory. The default value is 25.

esxcfg-advcfg -s <percentage value> /Mem/TierNvmePct

Example: esxcfg-advcfg -s 50 /Mem/TierNvmePct

3. Reboot the host.

4. Repeat these steps for each host as needed

Configuring the DRAM to NVMe Ratio (vSphere PowerCLI)

Using the vSphere PowerCLI scripts to configure the DRAM to NVMe ratio requires the following
PowerCLI script commands:

1. Get the Host Advanced Configuration

Get-VMHostAdvancedConfiguration -Name "Mem.TierNvmePct"

2. Update the Host Advanced Configuration

Get-VMHost <host IP address> | Set-VMHostAdvancedConfiguration -Name "Mem.TierNvmePct" -Value 50

3. Reboot the host

Restart-VMHost <host IP address> -RunAsync -Confirm:$false -Force

4. Repeat these steps for each host as needed

Disabling the Tech Preview


Disabling Memory Tiering for a cluster and hosts requires the following steps.

1. Putting each host into maintenance mode.

It is important that no changes are made to partitions on NVMe devices used as tiered
memory while VMs are operational.

2. Configuring each host to not use Memory Tiering.

This can be done by setting the VMKernel Configuration Option for Memory Tiering to
FALSE or by removing the same Configuration Option.

See the Configuring Memory Tiering for each host section for how that was set initially to
enable Memory Tiering.

3. Configuring any NVMe devices on each host to remove the Tier Partition.

This step will make an NVMe device that was used as tiered memory available for other
uses.

Delete any existing partitions on the NVMe device using the partedUtil command-line
utility - see KB Article 1036609. This can also be performed from the vSphere Client -
see the VMware Doc Erase ESXi Storage Devices.

4. Rebooting each host for the changes to take effect and exiting maintenance mode.

This step will completely disable Memory Tiering.
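
For step 2, the VMkernel option can be turned off with the same PowerCLI pattern used to
enable it (the $esxcli object comes from Get-EsxCli -V2, as in the earlier sections):

# Set the MemoryTiering boot option back to FALSE, then reboot the host.
$arguments = $esxcli.system.settings.kernel.set.CreateArgs()
$arguments.setting = "MemoryTiering"
$arguments.value = "FALSE"
$esxcli.system.settings.kernel.set.invoke($arguments)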

Troubleshooting
How to report issues and get help
Issues related to Memory Tiering, including any performance issues, can be reported to the
VMware by Broadcom support teams using the standard Support Request process. Issues and
questions can also be posted on the VMware Technology Network (VMTN) Community for
vSphere.

Total Memory Capacity only contains the DRAM capacity and no additional NVMe
capacity

1. This could be caused by Memory Tiering not being configured correctly, an NVMe device
not being configured correctly, a failed NVMe device, or a corrupted Tier Partition on the
NVMe device.
2. Refer to the Configuring Memory Tiering for each host section to verify the Memory
Tiering configuration and NVMe device configuration. Repeat the configuration steps in
that section for the host and NVMe device to be used as tiered memory.

NVMe Driver Configuration Error Message “The device specified does not have a tier
partition. Aborting partition cleanup”

1. This is caused by trying to use the esxcli command to delete a non-tier partition.
2. To resolve this, use the partedUtil command-line utility to delete any non-tier partitions.
See KB Article 1036609.

NVMe Driver Configuration Error Message “Selected device does not have a tier device
partition. Aborting partition deletion.”

1. This is caused by trying to use the esxcli command to delete a partition on a device that
does not have any partitions.
2. This can be ignored.

NVMe Driver Configuration Error Message “Selected device already has an existing partition.
Aborting partition creation. Please ensure the device name is correct and clean-up any existing
partitions.”

1. This is caused by trying to use the esxcli command to create a new Tier Partition while
there is already an existing partition, including an existing Tier Partition.
2. To resolve this, use the partedUtil command-line utility to delete any non-tier partitions.
See KB Article 1036609.
3. Follow the steps in this document to create a new Tier Partition on the device.
