
Dell VxRail 8.0.x
Administration Guide

October 2024
Rev. 11
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2023 - 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Revision history..........................................................................................................................................................................8

Chapter 1: Introduction................................................................................................................. 9
Dell Technologies Support.................................................................................................................................................9
Register for a Dell Technologies Support account................................................................................................ 9
Support resources......................................................................................................................................................... 9
Use SolVe Online for VxRail procedures.................................................................................................................10
Locate your VxRail serial number...................................................................................................................................10
Locate your VxRail serial number in VxRail Manager.......................................................................................... 10
Locate your physical VxRail serial number............................................................................................................. 10
Access VxRail content using the QRL.....................................................................................................................10

Chapter 2: VxRail components and features................................................................................. 12


VxRail cluster overview.................................................................................................................................................... 12
VxRail advanced features.................................................................................................................................................13
VxRail deployment options for VMware vCenter Server......................................................................................... 14

Chapter 3: VxRail Manager overview............................................................................................ 16


Configure parameters for VxRail clusters.................................................................................................................... 16
Expand a cluster................................................................................................................................................................. 17
Configure VxRail satellite nodes..................................................................................................................................... 18
Monitor the health of your VxRail..................................................................................................................................19
Shut down a VxRail cluster........................................................................................................................................19
Add a VxRail node to a cluster.................................................................................................................................. 19
Remove a VxRail node from a cluster.................................................................................................................... 20
Configure iDRAC..........................................................................................................................................................20
Configure automated renewal of VxRail Manager certificate.......................................................................... 20

Chapter 4: Manage VxRail entitlements....................................................................................... 21

Chapter 5: Manage VLAN IDs and VxRail IP addresses................................................................. 22


Add or remove the upstream DNS for internal DNS (for VxRail versions earlier than 8.0.210).................... 23
Add or remove the upstream DNS for internal DNS (VxRail 8.0.210 and later)................................................ 24
Change the IP address of the DNS server..................................................................................................................26
Change the IP address of the NTP server..................................................................................................................26
Customize the default network IP address for docker ........................................................................................... 27
Restore the VxRail Manager VM............................................................................................................................. 28

Chapter 6: Manage VxRail passwords.......................................................................................... 29


Change the password of the iDRAC root user...........................................................................................................30
Change the VMware ESXi host management user password (VxRail 8.0.310 and earlier) ........................... 30
Change the password of the VxRail management user .......................................................................................... 31

Chapter 7: Manage VxRail cluster settings.................................................................................. 32


Configure external storage for standard clusters..................................................................................................... 32

Convert one VMware VDS to two VMware VDS...................................................................................................... 34
Identify the port groups............................................................................................................................................. 34
Convert one VMware VDS with two uplinks to two VMware VDS with two uplinks.......................................35
Convert one VMware VDS with four uplinks to two VMware VDS with four uplinks/two uplinks...............36
Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks ..................................... 37
Create a VMware VDS and assign two uplinks.................................................................................................... 37
Add existing VxRail nodes to VDS2.........................................................................................................................37
Create the port group for VMware vSAN in VDS2............................................................................................. 38
Create port group for VMware vSphere vMotion in VDS2...............................................................................38
Unassign uplink3 in VDS1...........................................................................................................................................38
Assign the released VMNIC to uplink1 in VDS2....................................................................................................38
Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups....................................................... 39
Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups ................................................. 39
Unassign uplink4 in VDS1...........................................................................................................................................40
Assign the released VMNIC to uplink2 in VDS2...................................................................................................40
Enable DPU offloads on VxRail...................................................................................................................................... 40
Enable the DPU offload after Day 1 VxRail deployment..................................................................................... 41
Add a VxRail node........................................................................................................................................................42
Remove VxRail nodes................................................................................................................................................. 43
Remediate the CPU core count after node addition or replacement ..................................................................45
Update the cluster status..........................................................................................................................................47
Trigger a rolling update.............................................................................................................................................. 49
Repoint the VMware vCenter Server to a VMware vCenter Server in a different domain............................50
Repoint a single VMware vCenter Server node to an existing domain...........................................................51
Back up each VxRail node (optional)......................................................................................................................52
Repoint the VMware vCenter Server A of domain 1 to domain 2................................................................... 52
Update the VMware vCenter Server SSL certificates from VMware vCenter Server B.......................... 53
Refresh the node certificates in the VMware vCenter Server A.................................................................... 53
Repoint the VMware vCenter Server node to a new domain.......................................................................... 54
Submit install base updates for VxRail.........................................................................................................................55
View APEX AIOps Infrastructure Observability information in VxRail..................................................................55

Chapter 8: Manage network settings...........................................................................................56


Configure a VxRail node to support the PCIe adapter port....................................................................................56
Configure jumbo frames.................................................................................................................................................. 58
Convert a VxRail-managed VMware VDS to a customer-managed VMware VDS............................................61
Enable a VxRail node to support the PCIE adapter port without an NDC connection.................................... 62
Enable dynamic link aggregation for two ports on a VxRail network for a VxRail-managed VMware
VDS...................................................................................................................................................................................63
Verify the version of the VxRail cluster................................................................................................................. 64
Verify the health state of the VxRail cluster........................................................................................................ 64
Verify the VMware VDS health status................................................................................................................... 64
Verify the VMware VDS uplinks...............................................................................................................................65
Confirm isolation of the VxRail port group............................................................................................................65
Identify the NICs for LAG..........................................................................................................................................66
Identify NIC assignment to node ports.................................................................................................................. 67
Identify the switch ports that are targeted for LAG using iDRAC.................................................................. 67
Prepare the switches for LAG ................................................................................................................................ 68
Configure the first switch for LAG......................................................................................................................... 69
Configure the second ToR switch for LAG...........................................................................................................69

Identify the load-balancing policy on the switches............................................................................................. 70
Configure the LACP policy on the VxRail VDS.....................................................................................................70
Verify the port flags.....................................................................................................................................................71
Migrate the uplink to a LAG port..............................................................................................................................71
Migrate the LACP policy to the standby uplink....................................................................................................72
Move the second VMNIC to LAG............................................................................................................................ 74
Verify LAG connectivity on VxRail nodes.............................................................................................................. 74
Verify that LAG is configured in the VMware VDS............................................................................................. 75
Enable dynamic link aggregation for four ports on a VxRail network for a VxRail-managed VMware
VDS...................................................................................................................................................................................75
Verify the VxRail version on the VxRail cluster....................................................................................................75
Verify the health state of the VxRail cluster........................................................................................................ 76
Verify the VMware VDS health status................................................................................................................... 76
Verify the VMware VDS uplinks...............................................................................................................................76
Confirm uplink isolation of the VxRail port group................................................................................................ 77
Identify the NICs for LAG.......................................................................................................................................... 77
Identify NIC assignment to node ports.................................................................................................................. 78
Identify switch ports for LAG................................................................................................................................... 78
Prepare the switches for LAG .................................................................................................................................79
Configure switch ports for link aggregation.......................................................................................................... 81
Configure the LACP policy on the VxRail VDS..................................................................................................... 81
Verify that port flags are all individual on the switch......................................................................................... 82
Migrate the LACP policy to standby uplink...........................................................................................................82
Change LAG to the active uplink............................................................................................................................. 84
Migrate the active uplink to a link aggregation port........................................................................................... 84
Verify link aggregation connectivity....................................................................................................................... 85
Enable dynamic link aggregation for four ports on a VxRail network for a customer-managed
VMware VDS..................................................................................................................................................................86
Verify the VxRail version on the VxRail cluster....................................................................................................86
Verify the health state of the VxRail cluster........................................................................................................ 86
Verify the VMware VDS health status................................................................................................................... 86
Verify the VMware VDS uplinks............................................................................................................................... 87
Confirm isolation of the VxRail port group............................................................................................................ 87
Identify the NICs for LAG..........................................................................................................................................88
Identify NIC assignment to node ports.................................................................................................................. 89
Identify switch ports for LAG...................................................................................................................................89
Prepare the switches for link aggregation ........................................................................................................... 90
Identify the load-balancing policy on the switches............................................................................................. 92
Configure the LACP policy on the VxRail VDS.....................................................................................................92
Migrate the LACP policy to standby uplink...........................................................................................................93
Migrate an unused uplink to a LAG port................................................................................................................ 95
Configure the first switch for LAG......................................................................................................................... 96
Verify LAG connectivity on the switch.................................................................................................................. 96
Verify link aggregation connectivity on VxRail nodes.........................................................................................97
Move VMware vSAN or VMware vSphere vMotion traffic to LAG................................................................ 97
Verify that LAG is configured in the VMware VDS.............................................................................................98
Move the second VMNIC to LAG............................................................................................................................99
Configure the second ToR switch for LAG...........................................................................................................99
Verify LAG connectivity on the second switch.................................................................................................. 100
Verify LAG connectivity on VxRail nodes............................................................................................................ 100

Enable network redundancy across NDC and PCIe ports...................................................................................... 101
Verify that the VxRail version supports network redundancy........................................................................ 103
Verify that the VxRail cluster is healthy...............................................................................................................103
Verify the VxRail physical network compatibility............................................................................................... 103
Verify the physical switch port configuration..................................................................................................... 104
Verify active uplink on the VMware VDS port groups post migration..........................................................106
Add uplinks to the VMware VDS............................................................................................................................106
Migrate the VxRail network traffic to a new VMNIC........................................................................................106
Set the port group teaming and failover policies............................................................................................... 108
Remove the uplinks from the VMware VDS....................................................................................................... 109
Reset the VMware vSphere alerts for network uplink redundancy.............................................................. 109
Enable VMware vSAN RDMA in the VxRail cluster (VxRail 8.0.210 and later)................................................. 110
Enable VMware vSAN RDMA in the VxRail cluster (VxRail versions earlier than 8.0.210)............................ 113
Migrate the satellite node to a VMware VDS............................................................................................................114
Capture the satellite node VMware standard switch settings........................................................................ 114
Create the VMware VDS for the satellite node.................................................................................................. 115
Set the MTU on the VMware VDS.........................................................................................................................116
Create the VMware VDS port groups for the satellite node........................................................................... 116
Migrate the satellite node to the new VMware VDS......................................................................................... 117
Modify the VMware VDS port group teaming and failover policy........................................................................ 118
Optimize cross-site traffic for VxRail.......................................................................................................................... 119
Configure telemetry settings using curl commands ..........................................................................................121
Configure telemetry settings from VxRail Manager...........................................................................................121

Chapter 9: Manage witness settings.......................................................................................... 123


Change the hostname and IP address of the witness sled................................................................................... 123
Change the IP address of the VxRail-managed witness sled..........................................................................123
Change the hostname of the witness sled.......................................................................................................... 129
Change the hostname and IP address of the VxRail-managed Witness VM.................................................... 132
Change the IP address of the VxRail-managed Witness VM.......................................................................... 133
Change the hostname of the VxRail-managed witness VM for a customer-managed DNS server...... 138
Change the hostname of the VxRail-managed witness VM for a VxRail-managed DNS server............139
Change the hostname of the customer-managed witness host...........................................................................141
Collect the VxRail-supplied witness configuration.................................................................................................. 142
Separate witness traffic on an existing stretched cluster.....................................................................................143

Chapter 10: Collect log bundles..................................................................................................149


Collect the VxRail Manager log bundle.......................................................................................................................149
Collect log bundles from VxRail Manager..................................................................................................................150
Collect the VMware vCenter Server log bundle....................................................................................................... 151
Collect the VMware ESXi log bundle.......................................................................................................................... 152
Collect the iDRAC log bundle........................................................................................................................................152
Collect the platform log bundle....................................................................................................................................153
Collect the log bundle with node selection............................................................................................................... 154
Collect the log bundle with component selection....................................................................................................154
Collect the full log bundle.............................................................................................................................................. 155
Collect the witness log bundle..................................................................................................................................... 156
Delete log bundles from VxRail Manager................................................................................................................... 157
Collect the satellite node log bundles from VxRail Manager.................................................................................157

Delete the satellite node bundles from VxRail Manager ....................................................................................... 157
Set the PostgreSQL log destination to the system log..........................................................................................157
Renew the PostgreSQL certificate............................................................................................................................. 159

Chapter 11: Manage certificates................................................................................................. 161


Import the VMware vCenter Server certificates into the VxRail Manager trust store...................................161
Import the VMware ESXi host certificates to VxRail Manager............................................................................ 164
Import VMware vSphere SSL certificates to VxRail Manager............................................................................. 165

Chapter 12: Rename VxRail components.....................................................................................168


Change the FQDN of the VMware vCenter Server Appliance............................................................................. 168

Chapter 13: Remove VxRail nodes...............................................................................................173


Verify the VxRail cluster health.................................................................................................................................... 173
Verify the capacity, CPU, and memory requirements.............................................................................................173
Remove the node.............................................................................................................................................................174
Reboot VxRail nodes....................................................................................................................................................... 175
Reboot VxRail nodes sequentially................................................................................................................................ 175

Chapter 14: Restore the VMware vCenter Server from a file-based backup................................ 177

Chapter 15: VxRail Manager file-based backup........................................................................... 191


Back up the VxRail Manager manually........................................................................................................................ 191
Back up VxRail Manager................................................................................................................................................ 193
Configure automatic backup for the VxRail Manager.............................................................................................193

Chapter 16: Replace and add VxRail hardware............................................................................ 195

Chapter 17: Set up external storage for a dynamic node cluster................................................. 196

Chapter 18: Upgrade your VxRail................................................................................................ 197


Upgrade workflow for LCM...........................................................................................................................................197
Generate the Update Advisor Report.........................................................................................................................199
Update Advisor Report...................................................................................................................................................199

Chapter 19: Broadcom products used with VxRail...................................................................... 205

Revision history
Date Revision Description of change
October 2024 11 License information updated.
August 2024 10 Updated for VxRail 8.0.300.
July 2024 9 Updated for VxRail 8.0.230.
May 2024 8 Updated with CloudIQ rebranding changes.
May 2024 7 Updated with licensing information.
March 2024 6 Updated for VxRail 8.0.210.
November 2023 5 Updated for VxRail 8.0.200 and subscription licensing.
August 2023 4 Updated with additional procedures from SolVe.
March 2023 3 Updated for VxRail 8.0.020.
January 2023 2 Updated for VxRail 8.0.010.
January 2023 1 Initial release for VxRail 8.0.000.

Chapter 1: Introduction
This document describes some of the administrative tasks that you can perform for VxRail.
This document assumes familiarity with:
● Dell Technologies systems and software
● Broadcom virtualization products
● Data center appliances and infrastructure
● SolVe Online for VxRail
This document is intended for customers, field personnel, and partners who want to manage and operate VxRail clusters.
See the VxRail Documentation Quick Reference List for a complete list of VxRail documentation.

Dell Technologies Support


Create a Support account to access support resources for your VxRail. If you already have an account, register your VxRail to
access the available resources. You can link your Support account with VxRail Manager and access support resources without
having to log in separately.

Register for a Dell Technologies Support account


Create a Support account to obtain VxRail documentation and software updates.

About this task


If you already have an account, link your Support account with VxRail Manager and access resources without having to log in
separately.
After you register, you can:
● Access or download the SolVe Desktop application for customized procedures to replace hardware components and upgrade
software components.
● Link your Support account with VxRail Manager to access resources.
For information about how to access a Support account or to upgrade an existing account, see KB 21768.

Steps
1. Go to Dell Technologies Support.
2. Click Create an Account and follow the steps to create an account.
It may take approximately 48 hours to receive a confirmation of account creation.

Support resources
Support resources are available for your VxRail.
Use the following resources to obtain support for your VxRail:
● In the VMware vSphere Web Client, select VxRail. Use the Support functions on the VxRail Dashboard.
● Go to Dell Technologies Support.

Use SolVe Online for VxRail procedures
To avoid potential data loss, always use SolVe Online for VxRail to generate procedures before you replace any hardware
components or upgrade software.
CAUTION: If you do not use SolVe Online for VxRail to generate procedures to replace hardware components or
perform software upgrades, data loss may occur for VxRail.
You must have a Dell Technologies Support account to use SolVe Online for VxRail.

Locate your VxRail serial number


If you contact Dell Technologies Support for your VxRail, provide the VxRail serial number, also known as the Product Serial
Number Tag (PSNT).
Identify the VxRail serial number in VMware vSphere Web Client or locate the serial number that is printed on the physical
VxRail.

Locate your VxRail serial number in VxRail Manager


The PSNT is the VxRail serial number in VxRail Manager.

Steps
1. On the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Monitor tab.
3. Expand VxRail, and click Physical View to view the serial number.

Locate your physical VxRail serial number


Locate the serial number on your VxRail.

Steps
1. On the upper right corner of the VxRail chassis, locate the luggage tag.
2. Pull out the blue-tabbed luggage tag.
3. Locate the serial number label on the pull-out tag.
The Product Serial Number Tag (PSNT) is the 14-digit number that is on the front edge of the luggage tag.

Access VxRail content using the QRL


Use the Service Tag or QRL code on the Dell QRL site to access information for VxRail 15G and later models.

About this task


If your VxRail has a QRL that is added to the luggage tag, you can use this tag to obtain factory configuration and warranty
information. You can also enter the Service Tag to access information.

Steps
1. On the VxRail luggage tag, locate the QRL or Service Tag.

Figure 1. QRL code
2. Using the camera on your phone or laptop, scan the QRL code on the luggage tag to access information specific to your
VxRail. You can also go to qrl.dell.com and enter the Service Tag.

Chapter 2: VxRail components and features
VxRail is built on PowerEdge servers and uses HCI software to provide virtualization, compute, and storage in a scalable
system. VxRail provides centralized management, orchestration, and life cycle management. VxRail can be rapidly deployed
into an existing data center environment, where it is ready to deploy applications and services.
The following table provides an overview of VxRail components and features:

Table 1. VxRail components and features


Components Description
Management VxRail Manager is a plug-in for the VMware vCenter Server to manage VxRail clusters from the VMware
vSphere Web Client. VxRail Manager provides the following features:
● Performs diagnostics by automating physical views of each node, down to the component level.
● Automatically detects new nodes and adds the nodes to a cluster.
● Automates Day 0, Day 1, and Day 2 operations.
● Provides a single point of support, KB articles, user forums, and best practices.
● Provides the SDDC building blocks software stack that includes compute, network, storage, and
management.
● Monitors system health with deep hardware intelligence and a UI.
● Displays software versions and updates, and upgrades system software.
● Accesses qualified software products with VxRail Market.
● Replaces hardware, adds drives, and cycles power to the cluster or nodes.
● Continuously Validated State (CVS) monitors the VxRail compliance state and reports detected drifts.
VxRail SaaS multicluster management provides centralized data collection and analytics to streamline
monitoring of VxRail clusters, improve serviceability, and upgrade clusters. Use this information to
manage the performance and capacity of your engineered HCI.
Storage ● VMware vSAN represents VMware vSAN Original Storage Architecture (OSA).
● VMware vSAN Express Storage Architecture (ESA) for VxRail 8.0.0 and later. VxRail 8.0.010 does not
support VMware vSAN ESA.
● VxRail dynamic nodes
○ PowerStore, PowerMax, and Dell Unity
○ PowerFlex
○ Another VMware vSAN cluster through VMware vSAN HCI Mesh
● Virtualization infrastructure administrators manage storage on a per-VM basis. Storage policies are
defined at the VM level for provisioning and load balancing.
Virtualization ● VMware vSphere, including VMware ESXi
● VMware vCenter Server
VMs RecoverPoint for VMs, among other applications.

VxRail leverages VMware vSphere and VMware vSAN to provide server virtualization and software-defined storage. Through
the logical and physical networks, individual nodes act as a single system providing scalability, resiliency, and workload balance.
The VxRail software bundle in the compute nodes contains VxRail Manager, VMware vCenter Server, VMware vSAN, and
VMware vSphere. Broadcom components are installed with temporary licenses that expire after 60 days.
For more information, see documentation at VMware Docs by Broadcom.

VxRail cluster overview


VxRail consists of server nodes that are designed and engineered for VxRail. A VxRail cluster depends on adjacent ToR Ethernet
switches to support cluster operations. You can customize and allocate the nodes to applications and services that are based on



defined business and operational requirements. All physical compute, network, and storage resources in VxRail are managed as a
single shared pool.
Local disk drives on each node are used to form a VMware vSAN data store as the primary storage resource for application
workloads. You can also customize the nodes without local disk drives to use external data center resources for primary storage.
VxRail supports the following types of clusters and satellite nodes:

Table 2. Clusters and satellite nodes

VxRail cluster with VMware vSAN: Known as the standard cluster. It starts with a minimum of three nodes and can scale to
64 nodes. All the nodes provide the physical compute and storage resources to support the application workload. Every slot
in the node contains disk drives that meet performance and capacity requirements for the application workload. When the
cluster is initialized, the nodes are formed into a local VMware vSAN data store. You can expand the cluster with additional
compute and storage resources. VMware vSAN supports VMware vSAN Original Storage Architecture (OSA). VMware vSAN
Express Storage Architecture (ESA) is supported for VxRail 8.0.x. VxRail 8.0.010 does not support VMware vSAN ESA.

VxRail dynamic node cluster: Starts with a minimum of two nodes and can scale to a maximum of 64 nodes. Dynamic clusters
do not have local drives and instead use the following external storage resources to support workloads and applications:
● PowerStore, PowerMax, and Dell Unity
● PowerFlex
● Another VMware vSAN cluster through VMware vSAN HCI Mesh
● FC

VxRail stretched cluster with VMware vSAN: Supports synchronous I/O on a local VMware vSAN data store across two
geographically separated sites. The VMware vSAN stretched cluster enables site-level failure protection with no loss of
service or data. It requires a witness for monitoring and strict network guidelines. See the VxRail Architecture Overview for
more information.

VxRail 2-node cluster with VMware vSAN: Supports small-scale deployments with reduced workload and availability
requirements using two nodes. The two-node cluster also requires a witness and strict network guidelines. You can convert a
two-node ROBO cluster to a standard VxRail three-node cluster and then expand to 64 nodes.
NOTE: VxRail 8.0.010 does not support two-node ROBO clusters.

Satellite node: A centrally located VxRail cluster monitors and manages satellite nodes that are deployed locally and
remotely. Satellite nodes use the same PowerEdge servers as the other VxRail cluster types and follow the same engineering,
qualification, and manufacturing processes. VxRail Manager supports the software LCM of satellite nodes. Satellite nodes
require a single IP address to enable connectivity to a central cluster.

See the Dell VxRail Network Planning Guide for more information about VxRail clusters.

VxRail advanced features


VxRail provides advanced features such as automatic deployment, automatic scale-out, fault tolerance, and diagnostic logging.

Automatic deployment
After you set up your system and configure network settings, VxRail Manager automates the installation and configuration of all
nodes into a cluster.



Automatic scale-out
VxRail provides automated scale-out by detecting an unconfigured node when powered on and adding a node to the cluster. The
following scale-out options are available:
● With multinode expansion, you reduce the time required to expand your cluster by adding up to six VxRail nodes into a
cluster in parallel.
● If you are using VMware Loudmouth, VxRail Manager uses autodiscovery capabilities that are based on the RFC-recognized
Zero Network Configuration (ZeroConf) protocol. Loudmouth requires IPv6 multicast, which is limited to the management
VLAN that the nodes use for communication (a verification sketch follows this list). VMware Loudmouth:
○ Runs on each VMware ESXi host device and on the VxRail Manager VM.
○ Enables you to automatically discover and configure VxRail on your network.
○ Enables VxRail Manager to discover all nodes and automate the configuration process.
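A minimal verification sketch for the Loudmouth prerequisites above, assuming SSH access to a VxRail-imaged ESXi node and
that vmk0 is the management VMkernel port; the Loudmouth service path is an assumption and may differ by release:

    # Confirm IPv6 is enabled on the management VMkernel interface (vmk0 assumed),
    # since Loudmouth discovery relies on IPv6 multicast on the management VLAN.
    esxcli network ip interface ipv6 get -i vmk0
    # Check that the Loudmouth discovery daemon is running (service path assumed).
    /etc/init.d/loudmouth status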

Node failure tolerance


VxRail tolerates node failures when using VMware vSAN or VMware vSAN ESA, as defined by the VMware vSAN policy. VxRail
implements the following standard VMware vSAN policy of one failure by default:
● An entire node can fail, and the system continues to function.
● A drive failure cannot affect more than one node.
● One cache drive can affect as many as six capacity drives (HDDs or SSDs).
● VMware vSAN ESA does not use cache drives.
● One network port on any node can fail without affecting the node.
VxRail Manager configures network failover through the virtual switch configuration in VMware ESXi during the initial setup.
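As a quick check of the one-failure default described above, you can inspect the default vSAN policy from the ESXi command
line. This is a minimal sketch; output format varies by ESXi release:

    # Run on any ESXi host in the cluster; hostFailuresToTolerate=1 in the output
    # reflects the default one-failure policy noted above.
    esxcli vsan policy getdefault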

Logging and log bundles


VxRail Manager provides logging and log bundles that provide operation and event information about your VxRail cluster.

VxRail deployment options for VMware vCenter Server
A VxRail cluster can join an existing customer-managed VMware vCenter Server during the initial configuration. With a
customer-managed VMware vCenter Server, you can manage multiple VxRail clusters from a single interface. A customer-
managed VMware vCenter Server can be hosted on the VxRail cluster it is managing or outside of that cluster within the
customer environment.
Depending on the VMware vCenter Server location and source, the scope of VxRail management may differ. The following table
describes the types of management:

Table 3. Management types

VMware vCenter Server | Cluster type | Internal to VxRail cluster | External to VxRail cluster | VxRail scope of management
VxRail-managed | Regular | Default and preferred | Not supported | Multiple clusters
VxRail-managed | Stretched | Supported | Not supported | Multiple clusters
VxRail-managed | vSAN 2-node | Supported (VxRail 7.0.410 and later) | Not supported | Multiple clusters
Customer-managed | Regular | Supported | Supported | Multiple clusters
Customer-managed | Stretched | Supported | Supported and preferred | Multiple clusters
Customer-managed | vSAN 2-node | Supported (VxRail 7.0.410 and later) | Supported (VxRail 7.0.410 and later) | Multiple clusters



To join an existing customer-managed VMware vCenter Server, enter an existing data center and a nonconflicting cluster name
during the initial configuration. VxRail joins the data center as a VMware vSAN cluster with the specified cluster name.
When using your VxRail with a customer-managed VMware vCenter Server, verify that:
● The customer-managed VMware vCenter Server version is listed in the KB 000520355.
● A customer-supplied license is installed.



Chapter 3: VxRail Manager overview
VxRail Manager is registered with the VMware vCenter Server when you install or upgrade the VxRail version.
Use VxRail Manager and SolVe Online for VxRail to perform the following tasks:
● Configure, add, or remove hosts
● Shut down VxRail clusters
● Configure satellite nodes
● Configure iDRAC
● View service connectivity and system health information

Configure parameters for VxRail clusters


Use VxRail Manager to manage parameters for VxRail clusters.
The following table lists the VxRail cluster parameters:

Table 4. VxRail cluster parameters


Parameter Description
System Indicates the version of VxRail Manager software running and enables the VxRail update.
Updates Perform cluster-level VxRail upgrades and view compliance reports.
Certificate Update the VxRail certificate.
Market Access qualified applications to install and run on your VxRail cluster.
Hosts View or modify the hosts within the VxRail cluster.
Support Displays the linked Dell Technologies Support account and enables you to link or change to a
new account.
Connectivity Displays the linked Dell Technologies Support account and provides a link to change to a
new account. Enables service connectivity.
Networking Displays the proxy status and enables you to configure proxy settings for Internet
connections and traffic throttling.
Health Monitoring Enables or disables the system health monitoring feature for maintenance purposes.
Troubleshooting Displays the last several collected logs and generates a customized log bundle by log type
and node.

Service connectivity
Service connectivity provides secure, automated access between Dell Technologies Support and VxRail. You can enable service
connectivity in direct connection mode or through an external secure connection gateway. Enable remote support connectivity
for VxRail using the VMware vSphere Web Client. Remote support connectivity is required for APEX AIOps Infrastructure
Observability. Using service connectivity, you can:
● Provide usage data to the Dell Technologies customer experience improvement program.
● Determine the level of data about your VxRail environment that is collected. Environmental usage, performance, capacity,
and configuration information are the different types of data that are collected.
Dell Technologies uses this information to improve your experience with VxRail.
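As a hedged example, the collection level can also be read through the VxRail Manager REST API. The /v1/telemetry/tier
endpoint and the credentials below are assumptions; verify the path against the VxRail API guide for your release:

    # Query the current telemetry collection tier from VxRail Manager
    # (hostname and account are placeholders; endpoint path is assumed).
    curl -k -u 'administrator@vsphere.local:PASSWORD' \
      https://vxm.example.com/rest/vxm/v1/telemetry/tier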



SaaS multicluster management features
To access the SaaS multicluster management and analytics features of VxRail, log in to APEX AIOps Infrastructure Observability
and enable remote support connectivity on each cluster.

Convert a VxRail-managed VMware vCenter Server to a customer-managed VMware vCenter Server
Converting a VxRail-managed VMware vCenter Server to a customer-managed VMware vCenter Server is a one-way
conversion. You cannot convert a customer-managed VMware vCenter Server back to a VxRail-managed VMware vCenter
Server once it has been converted.
A VxRail-managed VMware vCenter Server is licensed by VxRail. If a VxRail-managed VMware vCenter Server is converted to
a customer-managed VMware vCenter Server, the customer-managed VMware vCenter Server is not supported by the VxRail
license. After the conversion process, provide a customer-supplied VMware vCenter Server license. For more information about
obtaining licenses, see Manage VxRail licenses.
After you convert a VxRail-managed VMware vCenter Server to a customer-managed VMware vCenter Server, you cannot use
VxRail Manager to manage the VMware vCenter Server life cycle.

Enable VMware vSphere Lifecycle Manager (optional)


Use the VMware vSphere Web Client to enable VMware vSphere Lifecycle Manager (vLCM).
The following guidelines apply after you enable VMware vLCM:
● If VMware vLCM is enabled on a cluster:
○ The VxRail native LCM backend cannot be used for host upgrades on that cluster.
○ Manually upgrade the VMware vSAN disk format after the hosts are upgraded.
● You cannot disable VMware vLCM.

Update system software


To update system software, generate a procedure using SolVe Online for VxRail. You can also install and use the SolVe Desktop
application on your Windows system. Dell Technologies continually updates the information in SolVe to ensure that the latest
versions, procedures, and notes are available.
If your cluster is in an unhealthy state or has critical health alarms, you may not be able to update your system software.
Contact your sales representative or reseller or open a service request to update your system.
For major upgrades from VxRail version 4.7.000 or 7.0.000 to VxRail 8.0.x, obtain VMware vSphere 8.0 licenses for the VMware
vCenter Server, VMware vSAN, or VMware vSphere from the VMware licensing portal.

Expand a cluster
With VxRail automated installation and scale-out features, you can expand your cluster from three nodes. Use multinode
expansion to increase compute and storage capacity and to add up to six nodes simultaneously.
VxRail supports expansion of the following clusters:
● The VxRail VMware vSAN cluster configuration is three to 64 nodes. Expansion of a cluster through node addition may lead
to stranded assets where excess compute and storage resources cannot be shared outside of the cluster. If your workloads
require a precise balance of compute and storage resources, use a dynamic cluster.
● The dynamic node cluster configuration is two to 64 nodes.
● The VxRail 2-node ROBO cluster configuration consists of two nodes. You can convert a two-node ROBO cluster into a
standard VxRail 3-node cluster and expand to 64 nodes.



Deploy a mixed cluster in VxRail
Follow best practices when you deploy a mixed cluster:
● For most VxRail models, the first three nodes in a cluster must be the same type with an identical configuration. For 2-node
clusters, both nodes must be the same type with an identical configuration.
● VxRail G560 requires three nodes.
● All nodes in the cluster must be running the same VxRail software version.
● The version must meet the minimum for the newest hardware model node that is being added.
● All nodes must match in hardware model, configuration, memory, processor, drive size, number of drives, and drive type.
● The 15G PowerEdge server must be running VxRail 7.0.210 or VxRail 8.0.0 or later.
● Do not mix 10 GbE and 25 GbE bandwidth in the same cluster.
● Do not use hybrid nodes in clusters with all-flash or all-NVMe nodes.
● VxRail Intel-based nodes can only be added into a cluster with other Intel-based nodes.
● VxRail AMD-based nodes can only be added into a cluster with other AMD-based nodes.

Expand a cluster
The following actions are not permitted when adding a node to a VxRail cluster:
● Add a VIB to the cluster, such as RecoverPoint for VMs, VMware NSX, NVIDIA GPU, or other third-party VIBs.
● Configure jumbo frames on the cluster.
● Enable VMware vSAN encryption.
● Install external storage targets in the cluster, such as iSCSI, NFS, or FC.
● Install an additional VMware VDS.
● Configure a stretched cluster.
● Perform security hardening on the cluster.
If any change is made after the initial cluster deployment, place the new node in maintenance mode and apply matching
settings.

Configure VxRail satellite nodes


Using VxRail Manager, you can configure certain parameters that apply to the hosts in your VxRail cluster.
VxRail uses satellite nodes to provide simplicity, agility, and automation. Satellite nodes are used to address more edge use
cases with single node deployments. Using satellite nodes, you can extend the VxRail operational model and efficiencies to edge
sites while automating day-to-day operations, health monitoring, and life cycle management. This service is provided from a
centralized location without the need for local technical or specialized resources.
With VxRail version 8.0.0, a VxRail cluster with a customer-managed VMware vCenter Server and VMware vSAN ESA can
manage the satellite nodes. The deployed VxRail Manager VM can control all satellite nodes from a centralized host management
location in the VMware vCenter Server. You can add, remove, and update satellite nodes from one access point using the VxRail
Manager. The host folder is used to logically group the VxRail satellite nodes together.
VxRail version 8.0.010 does not support satellite nodes or VMware vSAN ESA.
From the VMware vSphere Web Client, you can do the following:
● Configure iDRAC.
● Add, edit, or remove a host folder.
● Add a node to a folder.
● Upgrade the satellite nodes in a folder.
● Remove a host device.
Go to VxRail Manager for configuration steps that use the VMware vSphere Web Client. For more information about using the
VMware vSphere Web Client, see VMware Docs.



Monitor the health of your VxRail
Monitor the health of your VxRail by viewing the health of components in VMware vSphere Client.

Service connectivity
You can verify your VxRail connectivity heartbeat, which is the last time that your system has communicated using service
connectivity. You can also review the configuration data that was sent to service connectivity.
Your VxRail can use service connectivity by connecting directly to the Dell backend (Dell Support Team that handles requests)
or through secure connect gateway. Use VxRail Manager to enable service connectivity on your VxRail using VMware vSphere
Web Client.

Physical system health


VxRail Manager enables you to monitor the physical health of the VxRail. All tasks are performed using the VMware vSphere
Web Client.
You can monitor the following VxRail components:
● Health, status, and event information
● Drives
● Nodes
● Power supply
● NIC status
For more information about using the VMware vSphere Web Client, see VMware Docs.

Shut down a VxRail cluster


Shut down your VxRail cluster from VxRail Manager.

About this task


If your customer-managed VMware vCenter Server is hosted on VxRail, you cannot use the cluster shutdown functionality. You
must perform a manual shutdown and use the start-up procedure to prevent a login issue after restart. When you shut down a
cluster, VxRail Manager automatically performs the following steps:

Steps
1. Shuts down related VMs and services.
2. Performs system health diagnostics and maintenance mode diagnostics.
3. Indicates any errors or conditions that prevent shutting down.

Add a VxRail node to a cluster


Add a VxRail node to a cluster.

Prerequisites
Before adding a VxRail host to a cluster, verify that the nodes are the same type, family, and configuration in the VMware vSAN
ESA initial release.

Steps
To add a node to the cluster, see the Dell VxRail 8.0.x Admin Guide.



Remove a VxRail node from a cluster
After a node is removed from a cluster, you must image the node before you add or repurpose the node. Do not use the node
until it is imaged.

Steps
To remove a VxRail node from a cluster, generate a step-by-step procedure using SolVe Online for VxRail.

Configure iDRAC
Configure iDRAC for a VxRail host.

Steps
1. In the VMware vSphere Web Client, select the Inventory icon.
2. Select a host and click the Configure tab.
3. Select VxRail > iDRAC Configuration.
4. Click Edit next to IPv4 Settings or IPv6 Settings.
5. Modify the settings and click Apply.
6. Click Edit next to VLAN Settings.
7. Modify the settings and click Apply.
8. To add an iDRAC user, click Add, enter user information, and click Apply.

Configure automated renewal of VxRail Manager certificate


VxRail Manager automatically enrolls the VxRail Manager certificate. This procedure is provided if you need to edit the automatic
renewal.

About this task



Steps
1. From VMware vSphere Web Client, select Inventory icon.
2. Select the VxRail cluster on which you want to configure automatic renewal of VxRail Manager certificate.
3. Click the Configure tab.
4. Select VxRail > Certificate in the left window.
5. Click EDIT AUTOMATED RENEWAL.
6. In the Edit Automated Renewal window, click Enable or Disable.
7. Enter the Certificate Authority Server URL, Challenge Password, Certificate Validation Frequency, and Renew
Certificate Before Expiration, and then click APPLY.



4
Manage VxRail entitlements
VxRail is deployed with temporary licenses for Broadcom products that expire after 60 days.
Activate subscription licenses on the VxRail cluster before the 60-day grace period expires to ensure uninterrupted cluster
operations.
Contact your Dell Technologies account team or Broadcom account team to determine what options are available for your
organization.



5
Manage VLAN IDs and VxRail IP addresses
Change VLAN IDs and VxRail IP addresses using the following links.
Dynamic node clusters do not support VMware vSAN or witness traffic.
If you change a subnet, the changes must be within the same subnet. Changes outside the subnet are not supported.

Change VLAN IDs


Use the following links to change VLAN IDs:

Table 5. Change the VLAN ID for VxRail components

● VM networks, VMware vSphere vMotion, VMware vSAN, and VMware vCenter Server Appliance: Change the VLAN ID of the VM Networks
● Management and VMware vCenter Server Network: Configure Virtual Machine Networking on a VMware VDS
● Witness port group in the L3 configuration for a 2-node cluster: Deploying a vSAN Witness Appliance

Repoint NTP or DNS server IP addresses


Use the public API to repoint the NTP server IP address or DNS server IP address. For more information about how to change
the DNS server IP address, see DNS server IP on VxRail 8.0 releases using the REST API. To repoint to a new DNS server IP
address, see VxRail API - Set DNS of VxRail cluster.

Change a hostname or IP address


Changes must always be made within the same subnet; changes outside the existing subnet are not supported.
Use the following links to change VxRail IP addresses:

Table 6. Change a hostname or IP addresses

● Witness traffic in an L2 configuration for a 2-node cluster: Change the IP Address of Witness Traffic in an L2 configuration for a 2-node cluster
● Witness traffic in an L3 configuration for a 2-node cluster: Change the IP Address of Witness Traffic in an L3 configuration for a 2-node cluster

Witness traffic does not apply to dynamic node clusters.


Contact Dell Support to change the following hostnames and IP addresses:
● Internal VMware vCenter Server Appliance VM IP address
● Internal VMware Platform Service Controller VM IP address
● VxRail Manager VM hostname and IP address



● VMware vCenter Server VM
● VMware vSAN Network for the customer-managed VMware vCenter Server Appliance
● VMware vSAN Network for the VxRail-managed VMware vCenter Server Appliance

Add or remove the upstream DNS for internal DNS (VxRail versions earlier than 8.0.210)

Add or remove the upstream DNS when using internal DNS for a VxRail cluster. If a cluster uses external DNS, resolve the
FQDN outside the cluster. If a cluster uses internal DNS, add the record manually.

Prerequisites
Verify that the VxRail cluster is using the internal DNS. To convert the internal DNS to external DNS, see Change the IP address
of the DNS server.
Use the Python script to add or remove the upstream DNS:
● Download the Python upstream_dns_operation.py script (.zip).
● Extract the file from DL100623_upstream_dns_operation.zip.

About this task


If there are multiple clusters that are deployed in the same domain and subnet, the internal DNS does not forward external
queries for these clusters. If an external host in the same domain and subnet must be resolved, use an external DNS server.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VxRail Manager as mystic and su to root.
2. To add the upstream DNS, enter:
python upstream_dns_operation.py add -s <upstream_dns_ipaddress>

3. To verify that the upstream DNS was added, enter:


nslookup <new_FQDN>

The new FQDN must be resolved by the upstream DNS, not by the internal DNS.
4. To remove the upstream DNS server, enter:
python upstream_dns_operation.py remove -s <upstream_dns_ipaddress>

5. To view the DNS help options, enter:


python upstream_dns_operation.py -h
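For example, assuming a hypothetical upstream DNS server at 10.10.10.53 and a record fileserver01.example.local that exists only on that server (both values are placeholders for your environment), the sequence might look like the following:

python upstream_dns_operation.py add -s 10.10.10.53

nslookup fileserver01.example.local

python upstream_dns_operation.py remove -s 10.10.10.53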

6. DNS queries for an external domain or different subnet on the upstream DNS server are not forwarded through VxRail
Manager as the internal DNS server. This behavior is set up by default to enhance security. To add the upstream DNS server
on the VMware vCenter Server and VMware ESXi host manually, perform the following steps:
For more information, see KB 226207.
a. To add an upstream DNS server on the VMware vCenter Server, go to https://<vCenter_Server_Ip>:5480.
b. Select Networking and click Edit.
c. On the Edit Network Settings window, under Edit Settings, select Hostname and DNS.
d. Click Enter DNS settings manually and enter the upstream DNS server after the internal DNS server IP address. Use
commas to separate addresses. Click Next and Finish, and then wait until the DNS is updated.
e. SSH to the VMware vCenter Server. Use nslookup to ensure that the upstream DNS server can be queried from the
VMware vCenter Server.



Figure 2. Example output

f. Log in to the VMware vSphere Web Client as an administrator.


g. From the Inventory icon, select the VxRail cluster and a host. Perform steps h through k for each host.
h. Click the Configure tab, and select Networking > TCP/IP Configuration.
i. Select Default and click Edit.
j. Select DNS configuration, click Enter Settings manually, enter the upstream DNS server in Alternate DNS
server, and click OK.
k. SSH to the VMware ESXi node. Use the ping command to ensure that the FQDN is resolved by the upstream DNS
server.

Figure 3. Example output

l. Repeat steps h through k for each host in the cluster.

Add or remove the upstream DNS for internal DNS (VxRail 8.0.210 and later)

Add or remove the upstream DNS when using internal DNS for a VxRail cluster. If a cluster uses external DNS, resolve the
FQDN outside the cluster. If a cluster uses internal DNS, add the record manually.

Prerequisites
Verify that the VxRail cluster is using the internal DNS. To convert the internal DNS to external DNS, see Change the IP address
of the DNS server.

About this task


For VxRail 8.0.210 and later, IPv6 and dual-stack environments are supported. Use the VxRail Manager or the API to add or edit
the upstream DNS servers.
If there are multiple clusters that are deployed in the same domain and subnet, the internal DNS does not forward external
queries for these clusters. If an external host in the same domain and subnet must be resolved, use an external DNS server.



This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VMware vSphere Web Client as administrator.
2. From the Inventory icon, select the cluster and click the Configure tab.
3. Click VxRail > Settings.
4. Under DNS Server, click ACTIONS > Edit Upstream DNS.
5. Enter the IP address and click APPLY.
6. DNS queries for an external domain or different subnet on the upstream DNS server are not forwarded through VxRail
Manager as the internal DNS server. This behavior is set up by default to enhance security. To add the upstream DNS server
on the VMware vCenter Server and VMware ESXi host manually, perform the following steps:
For more information, see KB 226207.
a. To add an upstream DNS server on the VMware vCenter Server, go to https://<vCenter_Server_Ip>:5480.
b. Select Networking and click Edit.
c. On the Edit Network Settings window, under Edit Settings, select Hostname and DNS.
d. Click Enter DNS settings manually and enter the upstream DNS server after the internal DNS server IP address. Use
commas to separate addresses. Click Next and Finish, and then wait until the DNS is updated.
e. SSH to the VMware vCenter Server. Use nslookup to ensure that the upstream DNS server can be queried from the
VMware vCenter Server.

Figure 4. Example output

f. Log in to the VMware vSphere Web Client as an administrator.


g. From the Inventory icon, select the VxRail cluster and a host and perform steps h through k for every host in the cluster.
h. Click the Configure tab, and select Networking > TCP/IP Configuration.
i. Select Default and click Edit.
j. Select DNS configuration and click Enter Settings manually to enter the upstream DNS server on Alternate DNS
server and press OK.
k. SSH to the VMware ESXi node. Use the ping command to ensure that the FQDN is resolved by the upstream DNS
server.

Figure 5. Example output



l. Repeat steps h through k for each host in the cluster.

Change the IP address of the DNS server


Change the IP address of the external DNS server or the internal DNS server that was configured during installation.

About this task


The internal DNS that was configured during installation uses VxRail Manager as the DNS server. You can convert the internal
DNS server to an external DNS server. Use this procedure to change the DNS server IP address or to deploy a new DNS server;
the changes repoint the VxRail cluster to the new DNS IP address. You can deploy more than one DNS server.
For VCF environments, go to SolVe Online for VxRail to generate a VCF procedure to change the IP address of the DNS server.

CAUTION: Do not perform this task in a VCF environment.

NOTE: The maximum number of DNS servers for IPv4 and IPv6 environments is two. Ensure that all records are in the DNS
servers.

NOTE: The maximum number of DNS servers is three for dual-stack environments and must contain one IPv4 and one IPv6
address. Ensure that all records are in the DNS servers.
This procedure applies to VxRail 8.0.210 and later clusters that are managed by a VxRail-managed or customer-managed
VMware vCenter Server with an external DNS server.
For VxRail versions earlier than 8.0.210, to repoint DNS servers using the public API, see the VxRail API documentation.
See the VxRail 8.x Support Matrix for a list of the supported versions.
This procedure is intended for customers, Dell Technologies service providers who are authorized to work on a VxRail Cluster,
and VxRail administrators.

Steps
1. To convert an internal DNS to an external DNS, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under DNS Server, click ACTIONS > Convert to External DNS Server.
e. Enter the IP address and click APPLY.
2. To repoint DNS servers, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under DNS Server, click Edit.
e. Enter the IP address and click APPLY.

Change the IP address of the NTP server


Repoint the IP address of the NTP server that was configured during installation.

About this task


You can change the NTP server IP address or deploy a new NTP server; the cluster must then point to the new NTP server IP address.
Time synchronization is essential for VxRail clusters. When changing the NTP server, verify that all services such as Active
Directory, DNS, and your own workstation are synchronized.

CAUTION: Do not perform this task in a VCF environment.

Go to SolVe Online for VxRail to generate a VCF procedure to change the IP address of the NTP server.



This procedure applies to VxRail 8.0.xxx and later clusters that are managed by a VxRail-managed or customer-managed
VMware vCenter Server with an external NTP server. See the VxRail 8.x Support Matrix for a list of the
supported versions.
This procedure is intended for customers, Dell Technologies service providers who are authorized to work on a VxRail cluster,
and VxRail administrators.

Steps
1. For VxRail 8.0.210 and later, to repoint an NTP server, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under NTP Server, click Edit.
e. Enter the IP address or FQDN and click APPLY.
For dual-stack environments, provide at least one IPv4 and one IPv6 address, or FQDN, which can be resolved to one
IPv4 and one IPv6 address.
2. For versions earlier than VxRail 8.0.210, to repoint NTP servers using the public API, check the API on VxRail API.

Customize the default network IP address for docker


Configure the default network interface for the RKE2 cluster.

Prerequisites
Before you run the script, create a snapshot of the VxRail Manager VM.
1. Log in to the VMware vSphere Web Client and select the Inventory icon.
2. To take a snapshot of the VxRail Manager VM, right-click VxRail Manager > Snapshots > Take Snapshot.
3. Enter a name and click OK.

About this task


This procedure applies to the VxRail cluster running VxRail 8.0.x and later. See the VxRail 8.0.x Support Matrix for a list of
supported versions.
To change the configuration of the dummy0 interface, you must specify options in /etc/sysconfig/network/ifcfg-
dummy0. You can customize the dummy0 network interface. The default for VxRail Manager has the dummy0 interface with
the IP address 172.28.177.1/32. If there is a conflict with your IP address on your LAN, specify another IP address for the
dummy0 interface.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. See KB 225002 to download the attached ZIP file and rename it to rke2-scripts.zip.
2. Upload the .zip file to /home/mystic/ in VxRail Manager and extract the rke2-scripts.zip.
3. Using SSH, log in to the VxRail Manager as mystic.
4. To switch to root, enter:
su root

5. To view and edit the configuration of the dummy0 interface, open:

/etc/sysconfig/network/ifcfg-dummy0

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='172.28.177.1/32'

6. To change the IP address of the dummy0 interface for VxRail Manager from 172.28.177.1/32, update the IPADDR field
with a new IP address.
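For example, assuming 192.168.210.1/32 is unused in your environment (a placeholder address; choose any address that does not conflict with your LAN), the edited file would read:

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.210.1/32'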



7. To restart the network service, enter:
systemctl restart network

Wait for a few seconds and verify that the network is restarted.
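To confirm that the dummy0 interface picked up the new address, you can list it with a standard Linux command (not VxRail-specific):

ip addr show dummy0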
8. To check the RKE2 version, enter:
# rke2 -v

9. To restart the RKE2 cluster and run the RKE2 precheck, enter:
● If the RKE2 version is v1.21.x, enter:
# bash /usr/local/bin/rke2-precheck.sh
● If the RKE2 version is later than v1.21.x, enter:
# bash /home/mystic/rke2-scripts/rke2-precheck_fix.sh
10. To change the CIDR of the RKE2 cluster:
By default, the VxRail Manager is configured with CIDRs for RKE2 services and pods of 172.28.176.0/24 and
172.28.175.0/24. If there is an IP address conflict with your LAN configuration, specify another IP address range for
the RKE2 CIDRs.

If the RKE2 version is v1.21.x, enter:


# bash /home/mystic/rke2-scripts/rke2-reset-cidr-v1.21.sh -s="<xx.xx.xx.xx/xx>"
-c="<xx.xx.xx.xx/xx>"

If the RKE2 version is later than v1.21.x, enter:


# bash /home/mystic/rke2-scripts/rke2-reset-cidr_fix.sh -s="<xx.xx.xx.xx/xx>"
-c="<xx.xx.xx.xx/xx>"

Where:
-c --cluster-cidr="<xx.xx.xx.xx/xx>"

-s --service-cidr="<xx.xx.xx.xx/xx>"
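For example, to move the service and pod CIDRs to the hypothetical unused ranges 10.96.10.0/24 and 10.96.11.0/24 on an RKE2 version later than v1.21.x (placeholder ranges; choose ranges that do not conflict with your LAN), enter:

# bash /home/mystic/rke2-scripts/rke2-reset-cidr_fix.sh -s="10.96.10.0/24" -c="10.96.11.0/24"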

Wait a few seconds for the RKE2 cluster to restart.

If the VxRail Manager VM is damaged from the RKE2 script, go to Restore the VxRail Manager VM to restore the VM.

Restore the VxRail Manager VM


If the VxRail Manager VM is damaged after you run the RKE2 script, use the snapshot that you created before running the
script to recover the VxRail Manager VM.

Prerequisites
You must have created a snapshot to restore the VM.

Steps
1. Log in to the VMware vSphere Web Client and select the Inventory icon.
2. Select the VxRail Manager VM and click the Snapshots tab.
3. Go to the snapshot that you created in the snapshot tree and click Revert.



6
Manage VxRail passwords
For VxRail components, accounts are set up with a default password during deployment. Most default passwords expire after 90
days and should be changed after deployment is complete. You can only change a default password if it has not expired. When
a management account changes or expires, VxRail Manager mutes health monitoring and displays alerts. After VxRail Manager
passwords are updated, the system returns to a normal state and unmutes health monitoring.
See KB 158231 for account and password best practices for VxRail.
The following accounts are set up during deployment with default passwords:

Table 7. Default passwords

● VMware vCenter Server root user: The default password is set when you deploy the VMware vCenter Server. The default password expires after 90 days. You can change the default password and the number of days for expiration. See Change the VMware vCenter Server root password and settings.
● VMware ESXi host root user: Existing VMware ESXi root account for each host that is used for script execution and file uploading. VMware ESXi is installed on every satellite node; you must assign a password for the root account. See Change the VMware ESXi host root password.
● VMware vCenter Server SSO user: Users in the vsphere.local domain can change their VMware vCenter Server SSO passwords from the VMware vSphere Web Client. The default user account name is administrator@vsphere.local for customer-managed and VxRail-managed VMware vCenter Servers. You cannot change a password that has expired; if your password expired, contact the Administrator group. For the vsphere.local domain, use the VMware vSphere Web Client. For other domains, see Change the VMware vCenter Server SSO password.
● VMware vCenter Server Appliance root user: The default root password for the VMware vCenter Server instance is set during deployment. The default password expires after 90 days. See Change the VMware vCenter Server Appliance root password. You can change the expiry time for an account by logging in as root to the VMware vCenter Server Bash shell and running chage -M number_of_days -W warning_until_expiration user_name. To increase the expiration time of the root password to infinity, run the chage -M -1 -E -1 root command.
● VMware ESXi host management user: See ESXi Passwords and Account Lockout for more information. See Change the password of the VMware ESXi host management user.
● VxRail management user: You can change the default password of the VxRail management user. Follow the requirements for changing the password. See Change the password of the VxRail management user.
● VxRail root user: Default user passwords are applied during installation and deployment. Use the passwd command.
● VxRail mystic user: Default user passwords are applied during installation and deployment. Use the passwd command.
● VxRail service user (for VxRail 7.0.350 and later): Default user passwords are applied during installation and deployment. Use the passwd command.
● iDRAC root username and password: A default iDRAC root account password calvin is entered during deployment. See KB 133536. See Change the password of the iDRAC root user.
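For example, to set the VMware vCenter Server Appliance root password to expire after 180 days with a 14-day warning (illustrative values only), run the following from the appliance Bash shell:

chage -M 180 -W 14 root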

Change the password of the iDRAC root user


A default iDRAC root account password is entered during deployment. You can change this password after deployment.

About this task


The default password for the iDRAC user is calvin.
To adjust the password complexity requirements, go to the iDRAC UI and select Settings > Users > Global User Settings >
Password Settings > Policy Settings.
This procedure is intended for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail
cluster. This procedure applies to the VxRail cluster running VxRail 8.0.210 and later.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Click the Inventory icon.
3. Select the target VMware ESXi host and click the Configure tab.
4. Under VxRail, click iDRAC Configuration.
5. In iDRAC Settings, click Edit for users.
6. In the Edit Credentials wizard, enter the password information and click Apply.

Change the VMware ESXi host management user password (VxRail 8.0.310 and earlier)
Change the VMware ESXi host management user password in the VMware ESXi host client. After you change the password,
apply changes in the VMware vSphere Web Client.

Prerequisites
For more information, go to VMware Docs by Broadcom and search for ESXi Passwords and Account Lockout.

About this task


This procedure applies for VxRail 8.0.310 and earlier.
This procedure is intended for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VMware ESXi host client as root.
2. Select Host > Manage and then select Security & users > Users.
3. Select the VxRail management user and click Edit user.
4. In the Edit User window, enter a password in the Password field and click Save.
5. To apply the password changes, in the VMware vSphere Web Client, perform the following:
a. Under the Inventory icon, select the target cluster, and click the Configure tab.



b. Under VxRail, for versions earlier than VxRail 8.0.210, click System. For VxRail 8.0.210 and later, click Security.
c. Click Update passwords, and then enter the new password and click SUBMIT and FINISH.

Change the password of the VxRail management user


There are specific requirements that must be followed when you change the default password of the management user.

About this task


The following requirements apply for passwords:
● Eight to 20 characters
● One lowercase letter
● One uppercase letter
● One numeric character
● One special character
For VxRail versions earlier than 8.0.210, use the VMware vSphere Web Client to change and apply the password (step 1). For
VxRail 8.0.210 and later, use the VxRail password management UI (step 2).
This procedure is intended for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. For VxRail versions earlier than 8.0.210, to change and apply the VMware management user password, perform the
following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Click Administration from the main menu.
c. Under Single Sign On, click Users and Groups.
d. From the Domain drop-down list, select vsphere.local.
e. Select the VxRail Management username and click EDIT.
f. In the Edit User window, enter and confirm the password and then click Save.
g. To apply the changes, select the target cluster, and click the Configure tab.
h. Under VxRail, click System.
i. Click Update passwords.
j. In the Update Passwords wizard, enter the new password and click SUBMIT, and then click FINISH.
2. For VxRail 8.0.210 and later, the VxRail password management UI is supported for a VxRail-managed VMware vCenter
Server.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the target cluster and click the Configure tab.
c. Under VxRail, click Security > Credentials.
d. Click EDIT.
e. In the Edit Credentials wizard, enter the new password and click Apply.



7
Manage VxRail cluster settings
Use the following links to manage cluster settings:
● To configure external storage for the dynamic node cluster, see Configure External Storage of the Dynamic Node Cluster -
IPv4.
● Change the VxRail Cluster EVC Mode
● Fault Tolerance on VxRail
● Managing the VMware vCenter Server for AD authentication
● Join or leave an Active Directory Domain
● VxRail Change Default VDS NIOC Configuration

Configure external storage for standard clusters


After you deploy two VMware vSAN clusters, manually mount the remote VMware vSAN datastore of the other VMware vSAN cluster.

Prerequisites
Verify that two VMware vSAN clusters are deployed in the same VMware data center.
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster and click the Configure tab.
3. Under Remote Datastores, verify that two VMware vSAN clusters are deployed in the same VMware data center.

About this task


This procedure applies to VxRail 7.0.480 and later and VxRail 8.0.200 and later.
To ensure connectivity in an L3 topology, verify that the VMware vSAN override gateway is configured for each server cluster
node.
If the server cluster is running a VxRail version earlier than 8.0.200 (VMware vSphere 8.x), ensure that there is a static route on
the server cluster nodes to reach the VMware vSAN network of the client cluster.
This procedure is intended for customers and Dell Technologies service providers who are authorized to work on a VxRail
cluster.

Steps
1. To ensure that the VMware vSAN override gateway is set for the server cluster nodes if both clusters are on an L3 network,
perform one of the following:
● If the server cluster is running VxRail 7.0.480 and later or VxRail 8.0.200 or later, go to step 2.
● If the server cluster is running an earlier version than VxRail 7.0.480 or VxRail 8.0.200, go to step 3.
2. From the Inventory icon, select the VMware VDS and click the Configure tab, and then select Topology.
a. Select the VMware vSAN traffic setting on each node and click the edit icon. On the Edit Settings window, check
Override default gateway for this adapter on IPv4 and click OK.
b. If the override gateway on the server cluster is not configured for each node, select the VMware vSAN port group.
c. Select the hosts and click Edit Settings to configure the VMkernel adapter.
d. Under IPv4 settings, click Use IPv4 settings and then enable and configure the default gateway.



Figure 6. IPv4 settings

e. On the Ready to complete window, click FINISH.


f. To configure the IPv4 static route, enter:
esxcli network ip route ipv4 add -n <hci_mesh_vsan_cluster_subnet>/<netmask_length> -g
<server_cluster_vsan_gateway>
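For example, if the client cluster vSAN network is 192.168.50.0/24 and the server cluster vSAN gateway is 192.168.40.1 (placeholder values for illustration), the command would be:

esxcli network ip route ipv4 add -n 192.168.50.0/24 -g 192.168.40.1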

3. For versions earlier than VxRail 7.0.480 or VxRail 8.0.200 only, to set a static route on the server cluster nodes to reach the
VMware vSAN network of the client cluster, perform the following:
a. Select the configured node.
b. Click the Configure tab and select System > Services.
c. Select SSH or ESXi Shell and click START.
If the SSH service is enabled, you can log in to the configured node CLI using the SSH client. If the VMware ESXi Shell
service is enabled, you can log in to the configured node CLI using DCUI with Alt and F1.
d. Log in to the configured node as root.
e. To check the IPv4 static route, enter: esxcli network ip route ipv4 list
4. On the Ready to complete page, click Finish.
5. To mount the remote VMware vSAN data store on another VMware vSAN cluster, perform the following:
a. Select a cluster, then click the Configure tab.
b. Select Remote Datastores and click MOUNT REMOTE DATASTORE.

Figure 7. Remote data stores

c. On the Mount Remote Datastore window, select the data store and click NEXT.
d. On the Check compatibility window, click FINISH.



Convert one VMware VDS to two VMware VDS
Convert one VMware VDS to two VMware VDS on Day 2 for VxRail traffic. You can change the default NIC layout of VxRail
nodes for greater flexibility and higher availability through a NIC-level redundancy solution.

Prerequisites
Before you convert a VMware VDS, perform the following:
● Verify that the node that is configured has PCIe NICs of the same speed.
● Validate that all network VLAN and MTU configurations are properly set on the physical switches before making any network
profile changes.

CAUTION: Misconfiguration may lead to data unavailability or loss.

● Confirm that the new uplinks from newly configured ports comply with existing VLAN and MTU configurations.
● Verify that the cluster is in a healthy state.
● Configure the remote VMware vSAN cluster connection before the VMware VDS configuration in a dynamic node cluster
with the VSAN_HCI_MESH storage type.

About this task


You can convert a customer-managed VMware VDS or VxRail-managed VMware VDS for a standard cluster deployment.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster. This procedure applies to VxRail versions 7.0.240 or 8.0 and later.
See VxRail 7.x Support Matrix or VxRail 8.x Support Matrix for a list of supported versions.

Identify the port groups


Identify the port groups to switch from the default NIC layout of VxRail nodes to a more flexible and highly available NIC level
redundancy solution.

About this task


Default names are used to identify port groups for Management, VxRail discovery, VMware vSAN, and VMware vSphere
vMotion.
The following table describes the port group types, default names, and VMkernel port groups:

Table 8. Port group types, default names, and VMkernel port groups
Port group type Port group default name VMkernel port group
Management Management Network-xxxxxx vmk2
vSAN Virtual SAN-xxxxxxxx vmk3
vMotion vSphere vMotion-xxxxxxxxxxx vmk4
VxRail discovery VxRail Management-xxxxxx vmk0

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a node.
3. Select the Configure tab.
4. Click Networking > VMkernel adapters.



Figure 8. VMkernel adapters

5. In the VMkernel adapters window, under Network Label, view the port group name.
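Alternatively, you can list the VMkernel adapters and their port groups from the shell of a node using a standard esxcli command (not VxRail-specific):

esxcli network ip interface list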

Convert one VMware VDS with two uplinks to two VMware VDS with two uplinks
Two VMware VDS permanently handle all traffic. Two additional ports are added for VMware vSAN or VMware vSphere
vMotion traffic, and a new VDS2 is created to use these ports. The same MTU configuration is used for all traffic during the
conversion procedure.

About this task


This procedure uses the same tasks as Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks with
a few modifications.

Steps
1. Use the following table to perform the first four tasks:

Table 9. Procedures for conversion

● Create a VMware VDS and assign two uplinks: Same entries as previous procedure
● Add existing VxRail nodes to VDS2: Same entries as previous procedure
● Create the port group for VMware vSAN in VDS2: Set uplink1 status as Active/Standby
● Create port group for VMware vSphere vMotion in VDS2: Set uplink2 status as Active/Standby

2. To assign a new VMNIC to uplink1/uplink2 in VDS2, perform the following:


a. From the VMware vSphere Web Client, log in as administrator.
b. Under the Inventory icon in the top-left menu bar, select a data center.
c. Click the Networks tab.
d. Select Distributed Switches to view VDS2.
e. From Actions menu, select Add and Manage Hosts.
f. From Select task page, select Manage host networking and click NEXT.
g. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.



h. Click OK and then click Next.
i. On the Manage physical adapters page, select an active physical NIC from the On other switches/unclaimed list to
assign an uplink to the adapter.
j. Click Assign uplink.
k. Select uplink1 and click OK.
l. Repeat step f to assign a new VMNIC to uplink2.
m. Click Assign uplink.
n. Select uplink2 and click OK and Next.
3. Use the table to perform the following tasks:

Table 10. Migration procedures

● Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups: Same entries as previous procedure
● Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups: Same entries as previous procedure

Convert one VMware VDS with four uplinks to two VMware VDS with four uplinks/two uplinks
Allocate different VMNIC ports for VMware management, VMware vSAN, and VMware vSphere vMotion traffic separation. The
same MTU configuration is used for all traffic during the conversion procedure. Separate the VMware vSphere vMotion to VDS2
with two extra ports.

About this task


This procedure uses the same tasks as Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks with
a few modifications.

Steps
1. Use the following table to perform the first four tasks:

Table 11. Convert uplink procedures

● Create a VMware VDS and assign two uplinks: Same entries as previous procedure
● Add existing VxRail nodes to VDS2: Same entries as previous procedure
● Create the port group for VMware vSAN in VDS2: Set uplink1 status as Active/Standby
● Create port group for VMware vSphere vMotion in VDS2: Set uplink2 status as Active/Standby

2. To assign a new VMNIC to uplink1/uplink2 in VDS2, perform the following:


a. From the VMware vSphere Web Client, log in as administrator.
b. Under the Inventory icon in the top-left menu bar, select a data center.
c. Click the Networks tab.
d. Select Distributed Switches to view VDS2.
e. From Actions menu, select Add and Manage Hosts.
f. From Select task page, select Manage host networking and click NEXT.
g. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
h. Click OK and then click Next.
i. On the Manage physical adapters page, select an active physical NIC from the On other switches/unclaimed list to
assign an uplink to the adapter.
j. Click Assign uplink.



k. Select uplink1 and click OK.
l. Repeat step f to assign a new VMNIC to uplink2.
m. Click Assign uplink.
n. Select uplink2 and click OK and Next.
3. Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups using the same entries as the previous procedure.
The VxRail Physical View page does not support PCIe adapter display; the missing PCIe port display information is a known
issue.
See Configure Physical Network Adapters on a vSphere Distributed Switch for more information.

Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks
Two VMware VDS permanently handle all traffic. The conversion procedure only supports using the same MTU configuration for
all traffic.

Create a VMware VDS and assign two uplinks


Create the VMware VDS as VDS2 and set the uplinks to 2. Edit the uplinks to be uplink1 and uplink2.

Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Select the Networks tab.
4. Select Distributed Switch.
5. Under the Actions menu, select Distributed Switch > New Distributed Switch.
6. Enter the name and location and click Next.
7. Select the same version of the existing VMware VDS and click Next.
8. Set the number of uplinks to 2 and click Next.
9. Review settings and click FINISH.
10. From the left menu, select the new VMware VDS and click the Actions menu.
11. Select Settings > Edit Settings....
12. Go to the Uplinks tab and modify Uplink 1 to uplink1 and Uplink 2 to uplink2 to adhere to the unified name rule
and click OK.

Add existing VxRail nodes to VDS2


Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Select the data center and click Networks.
3. Select Distributed Switches to view VDS2.
4. From Actions menu, select Add Host to launch the wizard.
5. From Select task page, select Add hosts and click NEXT.
6. From Select hosts page, select New hosts and choose the associated hosts to add the VxRail nodes to the VDS2
distributed switch.
7. Click OK.
8. Click Next to go to the management physical adapters.
9. Click OK or Next.



Create the port group for VMware vSAN in VDS2
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From the Actions menu, select Distributed Port Group > New Distributed Port Group.
6. In the Configure settings step, assign the VLAN that is used for VMware vSAN in VDS1 to the new port group.
7. Click NEXT after verifying Customize default policies configuration.
8. In the Teaming and failover step, move uplink1 to Standby uplinks and click NEXT.
9. Follow the instructions on the screen to complete the remaining steps and finish the configuration.

Create port group for VMware vSphere vMotion in VDS2


Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From the Actions menu, select Distributed Port Group > New Distributed Port Group.
6. In the Configure settings step, assign the VLAN that is used for VMware vSphere vMotion in VDS1 to the new port group.
7. Click NEXT after verifying the Customize default policies configuration.
8. In the Teaming and failover step, move uplink2 to Standby uplinks and click NEXT.
9. Follow the instructions on the screen to complete the remaining steps and finish the configuration.

Unassign uplink3 in VDS1


Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. Select the VMNIC assigned to uplink3 and click Unassign adapter.
10. Click Next to complete the configuration.

Assign the released VMNIC to uplink1 in VDS2


Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.



6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. On the Manage physical adapters page, select the active physical NIC that was released in Unassign uplink3 in VDS1.
10. Click Assign uplink.
11. Select uplink1 and click OK.
12. Click Next twice to complete the process.

Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups
The VMware vSAN VMkernel is represented as vmk3.

Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Select VMware vSAN vmk3 on each host and click Assign port group.
11. Select the newly created VMware vSAN port group and click OK.
12. Click Next twice and then click Finish.

Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups
The VMware vMotion VMkernel is represented as vmk4.

Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Select VMware vMotion vmk4 on each host and click Assign port group. Select the newly created VMware vSphere
vMotion port group from Create port group for VMware vSphere vMotion in VDS2 and click OK.
11. Click Next twice and then click Finish.



Unassign uplink4 in VDS1
Steps
1. From the VMware vSphere Web Client, click Networking and go to VDS1.
2. From Actions menu, select Add and Manage Hosts.
3. From Select task page, select Manage host networking and click NEXT.
4. From Select hosts page, select Attached hosts and choose hosts that are linked to the distributed switch.
5. Click OK and then click Next.
6. Select the VMNIC assigned to uplink4 and click Unassign adapter.
7. Click Next to complete the configuration.

Assign the released VMNIC to uplink2 in VDS2


Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS2.
5. From Actions menu, select Add and Manage Hosts.
6. From Select task page, select Manage host networking and click NEXT.
7. From Select hosts page, select Attached hosts and choose hosts that are linked to the VMware VDS.
8. Click OK and then click Next.
9. On the Manage physical adapters page, select the active physical NIC that was released in Unassign uplink4 in VDS1.
10. Click Assign uplink.
11. Select uplink2 and click OK.
12. Click Next twice to complete the process.

Next steps
The summary of the Hosts and Clusters page displays alerts for network uplink redundancy loss on the reconfigured nodes. Click
Reset to Green to clear the alert.

Enable DPU offloads on VxRail


Enable DPU offloads on VxRail.

Prerequisites
● Do not involve the DPU NICs in the Day 1 bring up.
● Create the VMware VDS in Day 2.
● Use V670F, P670N, and E660F to build your VxRail cluster.

About this task


VxRail supports Pensando and BF-2 NVIDIA DPUs.
This procedure applies to the VxRail cluster running VMware vSphere 8.0.x and VxRail 8.0.010 through 8.0.230.
DPUs are not supported on VxRail 8.0.300 and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.



Enable the DPU offload after Day 1 VxRail deployment
Enable the DPU offload after the Day 1 bring up.

Steps
1. On the Physical adapters page, verify that the DPU Backed column is marked for DPU adapters.

Figure 9. Physical adapters

2. Select Networking > Datacenter.


3. From the Actions menu, select Distributed Switch > New Distributed Switch.
a. From the left-menu, click Name and location and enter the details and click Next.
b. From the left-menu, click Select version and select the VMware VDS version as 8.0.0.
c. From the left-menu, click Configure settings and select the associated DPU vendor (Pensando or BF-2 NVIDIA).
d. Create the distributed virtual port group (DVPG) and manage the teaming policy if needed.
4. Right-click the DPU-VDS and select Add and Manage Hosts.
a. On the Select task window, select Add hosts and click NEXT.
b. On the Select hosts window, select all the compatible hosts and click NEXT.
c. On the Manage physical adapters window, select the physical adapters from the drop-down menu and assign to the
uplinks. Click NEXT.
You can use only the compatible DPU adapters.



Figure 10. Manage physical adapters

d. OPTIONAL: Assign the VMkernel adapters to the specified DVPG.


e. If you are not using the DVPG, from the Migrate VM Networking window, click NEXT.
f. On the Ready to complete window, click FINISH.
The VMware VDS is deployed and configured so that the VxRail is prepared to support the DPU offload.

NOTE: The VxRail nodes should be integrated with VMware NSX to leverage any network offload functionality.

Add a VxRail node


Add a node only for a VxRail that is equipped with Pensando or NVIDIA DPUs.

Prerequisites
● Verify that the nodes are the same type, family, and configuration in the VxRail vSAN ESA initial release.
● Obtain access to the management system that is used to communicate with the VxRail.
● Ensure that the VxRail node that you add is compatible with the VxRail version 8.0.010.
● Ensure that you have the compatible DPUs to add a node.
● Ensure that the node you add is identical to the existing nodes.

Steps
1. Log in to the VMware vSphere Web Client as administrator.
2. From the Inventory icon, select a VMware vSAN cluster.
3. From the Configure tab, select VxRail > Health Monitoring and verify that the Health Monitoring Status is set to
Enable.
4. Select VxRail > Hosts.
5. Click ADD.
● If the new node version matches the cluster version, select the host. To discover the VxRail hosts by Loudmouth mode,
configure the ToR switches and power on the hosts.
● If the new node version is lower than the cluster version and the node is compatible, add the new node to the cluster.
The new node is upgraded to the cluster level during the node addition.
● If the new node is not compatible, upgrade the corresponding subcomponent, or downgrade before you add the node to
the VxRail cluster.
● If no new hosts are found, and you want to add a node using the IP address and credentials, click ADD.
6. To add the node manually, in the Add Hosts screen, enter the ESXi IP Address and the ESXi Root Password.
7. Click VALIDATE.
8. Click ADD.



9. If using host discovery to add a node, in the Add VxRail Hosts window, select the nodes to add to your VxRail cluster and
click NEXT to configure new nodes.
NOTE: You can add a maximum of six nodes at a time.

10. In the vCenter User Credentials window, enter the VMware vCenter Server user credentials. Click NEXT.
11. In the NIC Configuration window, select a configuration, and select NICs and VMNICs. Click NEXT.
Select the proper NIC configuration and define the NIC-mapping configuration plan for the new hosts.
The default NIC configuration is from the node that you configured first in the VxRail cluster. The default values of the
VMNIC for the new nodes must align with the selected NIC configuration.
Default values must satisfy the common configuration requirement.

NOTE: If the VxRail cluster uses an external DNS server, all the nodes in the cluster must have DNS hostname and IP
address lookup records.

12. In the Host Settings window, enter the ESXi Host Configuration settings for the hosts and click NEXT.
13. OPTIONAL: In the Host Location window, to customize the host location, enter the Rack Name, Rack Position, and click
NEXT.
14. In the Network Settings window, enter the VMware vSAN IPv4 Address and VMware vSphere vMotion IPv4 Address.
Click NEXT.
NOTE: A dynamic node cluster with a Fibre Channel array does not display the VMware vSAN fields.

15. In the Validate window, review the details and click VALIDATE. Click BACK to make any changes.
VxRail validates the configuration details and if the validation passes, a success message appears on the screen.
16. In the Validate window, select Yes to put the hosts in maintenance mode and click FINISH.
NOTE: You must select Put Hosts in Maintenance Mode option to add the nodes to VCF on a VxRail environment.

17. Monitor the progress of each host that is added to the VxRail cluster.
18. When the expansion is complete, a success message appears. If a supported lower version of the node is added, the
node is upgraded to the cluster level.

Remove VxRail nodes


Remove nodes to decommission the older generation VxRail nodes and migrate them to the new generation VxRail.
This procedure applies to the VxRail cluster running the VxRail version 8.0.010.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

CAUTION: You cannot use this task to replace a node. Node removal does not destroy the VxRail cluster.

Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failure to Tolerate
(FTT) setting.
The following table lists the minimum number of VMware ESXi nodes in the VxRail cluster before node removal:

Table 12. VMware ESXi nodes


VMware vSAN RAID and FTT Minimum nodes
RAID 1, FTT = 1 4
RAID 1, FTT = 2 6

RAID 5, FTT = 1 (For All flash VxRail only) 5
RAID 6, FTT = 2 (For All flash VxRail only) 7

Verify the VxRail cluster health


Verify the VxRail cluster health status.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms display, acknowledge them and click Reset to Green at the node and cluster levels before you remove the node.

Verify the capacity, CPU, and memory requirements


Before removing the node, verify that the capacity, CPU, and memory are sufficient to allow the VxRail cluster to continue
running without any issue.

About this task


If the VMware vSAN used capacity percentage is over 80 percent, do not remove the node, as it may lead to VMware vSAN
performance issues.
Use the following formula to determine whether cluster requirements can be met after the node removal:
vSAN used capacity % = used total / (current capacity - capacity to be removed)
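For example, if the cluster reports 40 TB used out of 100 TB of capacity and the node to be removed contributes 20 TB, the projected usage is 40 / (100 - 20) = 50 percent, which is below the 80 percent threshold, so the removal can proceed. The capacity values are illustrative only.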
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute percentage used:
CPU_used_% = Consumed_Cluster_CPU /(CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory /(Memory_capacity - Plan_to_Remove_Memory_sum)
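For example, if the cluster consumes 200 GHz of CPU against 400 GHz of capacity and the node to be removed provides 80 GHz, the projected CPU usage is 200 / (400 - 80) = 62.5 percent. The same calculation applies to memory; the values are illustrative only.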

4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.



Remove the node
Place the node into maintenance mode before you remove the node.

Prerequisites
Before you remove the node, perform the following steps to place the node in to maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click the host that you want to remove and select Maintenance Mode > Enter
Maintenance Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful and click ENTER MAINTENANCE MODE and click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.

About this task


You can reboot hosts immediately or schedule a reboot.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.
2. In the Remove Host from Cluster window, enter the VMware vCenter Server administrator and root account information.
3. After the account information is entered, click VERIFY CREDENTIALS .
4. When the validation is complete, click APPLY to create the Run Node Removal task.
5. After the precheck successfully completes, the host shuts down and is removed.
6. For L3 deployment: If you have removed all the nodes of a segment, select the unused port group on VMware VDS and click
Delete.

Next steps
To enable SSH access, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left-menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you removed the node or you added the same removed node back into the cluster but
with a new IP address, on the VMware vCenter Server, to update dnsmasq, enter:

# service dnsmasq restart
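To confirm that the service restarted cleanly, you can run a standard systemd status check (not VxRail-specific):

systemctl status dnsmasq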

Remediate the CPU core count after node addition or replacement
Get the cluster CPU drifts to determine if you need to perform this procedure.

Prerequisites
● Verify that you have a PowerEdge 15G or higher model with an Intel CPU configuration.
● Enable the cluster DRS.
● Obtain the API guide.



About this task
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This procedure applies to VxRail 8.0.xxx and later. See the VxRail 8.x Support Matrix for a list of supported versions.
During the cluster bring-up, the administrator@vsphere.local account and the password are configured.

Steps
1. To get the cluster CPU drifts, use the GET method to invoke the REST API:
curl -k -XGET -u <username>:<password> https://localhost/rest/vxm/private/v1/cluster/i2e_config

2. If the driftConfiguration in the API response is empty, do not perform this procedure.

Figure 11. Drift configuration

If driftConfiguration is not empty, view the CPU core count under desiredConfiguration:



Figure 12. Desired configuration

3. If the driftConfiguration is not empty, continue to Update the cluster status.

Update the cluster status


If cluster CPU drifts are populated, update the cluster status.

Prerequisites
● Verify that you have a PowerEdge 15G or higher model with an Intel CPU configuration.
● Enable the cluster DRS.



Steps
1. On the VMware vSphere Web Client, log in as administrator.
2. Select the VxRail cluster, and then click the Configure tab. Under Services, click vSphere DRS.

Figure 13. VMware vSphere DRS

3. In the right pane, view the VMware vSphere DRS configuration. If cluster DRS is off or the Automation Level is not fully automated, click EDIT.
a. In Edit Cluster Settings, enable vSphere DRS. For the Automation Level, use the drop-down menu to select Fully
Automated.

Figure 14. Edit cluster settings

b. Click OK to save the settings.


4. Check the mode status for each node. If a node is in Maintenance Mode, right-click the node and select Maintenance Mode > Exit Maintenance Mode.



Figure 15. Maintenance mode

Trigger a rolling update


Prerequisites
● Verify that you have a PowerEdge 15G or higher model with Intel CPU configuration.
● Enable the cluster DRS.
● See the VxRail API guide.

Steps
1. To prepare the rolling update API request body, verify that the desired CPU core count is set as the enabledCores value:

{
"desiredConfiguration": {
"cpu": {
"enabledCores": <enable core count>
}
}
}

2. To trigger the rolling update process, invoke the REST API:


curl -k -XPATCH -u <username>:<password> https://<vxm-ip>/rest/vxm/private/v1/cluster/i2e_config --data-raw '{
    "desiredConfiguration": {
        "cpu": {
            "enabledCores": <enable core count>
        }
    }
}'
You will get a request ID.


3. To check the task status, invoke the REST API GET method:
curl -k -XGET -u <username>:<password> https://127.0.0.1/rest/vxm/v1/requests/<request_id>
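
The response reports the task state and progress. The following sketch is illustrative only; see the VxRail API guide for the authoritative response schema:

{
    "id": "<request_id>",
    "state": "COMPLETED",
    "progress": 100
}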



Repoint the VMware vCenter Server to a VMware vCenter Server in a different domain
Move a VMware vCenter Server from one VMware vSphere domain to another VMware vSphere domain. Tagging and licensing
are retained and migrated to the new domain.

Prerequisites
● To avoid data loss, take a file-based backup of each node before repointing.
● Be familiar with the UNIX or LINUX commands, and the VMware vSphere management interface.

About this task


Repointing is supported with the VMware vCenter Server 6.7 U1 and later. The following use cases are supported:
● Migrate a VMware vCenter Server from an existing domain to another existing domain with or without replication. The migrated VMware vCenter Server moves from the existing single sign-on domain and joins the existing domain as another instance using Enhanced Linked Mode (ELM).
● Migrate a VMware vCenter Server from an existing domain to a new domain (where the migrated VMware vCenter Server is
the first instance). In this case, there is no replication partner.
This procedure is not applicable to dynamic node clusters. This procedure applies to the VxRail cluster running the VxRail 8.0.x
and later. The VxRail VMware vCenter Server with an external DNS manages the VxRail version 8.0.x or later cluster. See the
VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Use the command syntax in the following table to perform the repointing:

Table 13. Domain repointing command syntax

Argument                            Description
-m, --mode                          The mode (pre-check or execute) in which the command runs.
-spa, --src-psc-admin               SSO administrator username for the source VMware vCenter Server. Do not append @domain.
-dpf, --dest-psc-fqdn               The FQDN of the VMware vCenter Server to repoint.
-dpa, --dest-psc-admin              SSO administrator username for the destination VMware vCenter Server. Do not append @domain.
-ddn, --dest-domain-name            SSO domain name of the destination VMware vCenter Server.
-dpr, --dest-psc-rhttps             (Optional) HTTPS port for the destination VMware vCenter Server. If not set, the default port is 443.
-dvf, --dest-vc-fqdn                The FQDN of the VMware vCenter Server pointing to a destination VMware vCenter Server. The VMware vCenter Server is used to check for component data conflicts in pre-check mode. If not provided, conflict checks are skipped and the default resolution (COPY) is applied for any conflicts that are found during the import process. This argument is optional only if the destination domain does not have a VMware vCenter Server.
-sea, --src-emb-admin               Administrator for the VMware vCenter Server with an embedded Platform Services Controller. Do not append @domain to the administrator ID.
-rpf, --replication-partner-fqdn    (Optional) The FQDN of the replication partner node to which the VMware vCenter Server is replicated.
-rpr, --replication-partner-rhttps  (Optional) The HTTPS port for the replication node. If not set, the default port is 443.
-rpa, --replication-partner-admin   (Optional) SSO administrator username of the replication partner VMware vCenter Server.
-dvr, --dest-vc-rhttps              (Optional) The HTTPS port for the VMware vCenter Server pointing to the destination VMware vCenter Server. If not set, the default port is 443.
--ignore-snapshot                   (Optional) Ignore the snapshot warning.
--no-check-certs                    (Optional) Ignore the certificate validation.
                                    (Optional) Retrieves the command execution detail.
-h, --help                          (Optional) Displays the help message for the cmsso-util domain repoint command.

Repoint a single VMware vCenter Server node to an existing domain
You can repoint a single VMware vCenter Server from one VMware SSO domain to an existing VMware SSO domain without a
replication partner. Each VMware SSO domain contains a single VMware vCenter Server.

Prerequisites
Power on both VMware vCenter Server nodes (A and B) before beginning the repointing process.

Steps
1. Using SSH, log in to the VMware vCenter Server as root.
2. To access the VMware vCenter Server A of domain 1, enter:
ssh root@<vcenter_a_ip_address>

3. To perform the precheck from domain 1 to domain 2, enter:


cmsso-util domain-repoint -m pre-check --src-emb-admin administrator --replication-partner-fqdn <vcenter_a_ipaddress_domain2> --replication-partner-admin PSC_Admin_of_destination_node --dest-domain-name destination_PSC_domain

Enter Source embedded vCenter Server Admin Password:


Enter Replication partner Platform Services Controller Admin Password:

The domain-repoint operation will export License, Tags, Authorization data
before repoint and import after repoint.

WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User 'administrator@vsphere.local' will be assigned administrator role on the
source vCenter Server
The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation

MH2HL-2PH9N-08C70-19573

Repoint Node Information:


Source embedded vCenter Server: c3-vc.rackk01.local
Replication partner Platform Services Controller: c2-vc.rackk01.local
Thumbprint: 5C:04:EE:F2:E4:83:F0:D7:0D:AD:3A:F3:34:A5:D1:46:BE:E0:45:77

All Repoint configuration settings are correct: proceed? [Y|y|N|n]: y



Starting License pre-check
Starting Authz Data export
Starting Tagging Data export
Conflict data, if any, can be found under /storage/domain-data/Conflict*.json
Pre-checks successful

The precheck writes the conflicts to the /storage/domain-data directory.

4. OPTIONAL: Review conflicts and apply the same resolution for all the conflicts, or apply a separate resolution for each
conflict.
The conflict resolutions are:
● Copy: Creates a copy of the data in the target domain.
● Skip: Skips copying the data in the target domain.
● Merge: Merges the conflict without creating duplicates.

Back up each VxRail node (optional)


To ensure no loss of data, take a file-based backup of each node before repointing.

Steps
1. Log in to the VMware vCenter Server as root.
2. Click Backup.
The table under Activity displays the latest backup version from the VMware vCenter Server.
3. Click Backup Now.
4. OPTIONAL: Click Use backup location and username from backup schedule and perform the following:
a. Enter the backup location details.
b. OPTIONAL: Enter an encryption password if you want to encrypt your backup file.
To encrypt the backup data, use the encryption password.

c. OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
d. OPTIONAL: In the Description field, enter a description for the backup.
e. Click Start.

Repoint the VMware vCenter Server A of domain 1 to domain 2


Repoint the VMware vCenter Server A of domain 1 to domain 2.

Steps
1. To repoint the VMware vCenter Server A of domain 1 to domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-fqdn <vcenterb_fqdn_domain2> --replication-partner-admin PSC_Admin_of_destination_node --dest-domain-name destination_PSC_domain

Enter Source embedded vCenter Server Admin Password:


Enter Replication partner Platform Services Controller Admin Password:

The domain-repoint operation will export License, Tags, Authorization data
before repoint and import after repoint.

WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User 'administrator@vsphere.local' will be assigned administrator role on the
source vCenter Server system.

The default resolution for Tags and Authorization conflicts is Copy,
unless overridden in the conflict files generated during pre-check.



Solutions and plugins registered with vCenter Server must be re-registered.

Before running the Repoint operation, you should backup all nodes. You can use
file based backups to restore in case of failure. By using the Repoint tool
you agree to take the responsibility for creating backups. Otherwise you should
cancel this operation.

The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation

MH2HL-2PH9N-08C70-0R80K-19573

Repoint Node Information:


Source embedded vCenter Server: c3-vc.rackk01.local

Replication partner Platform Services Controller: c3-vc.rackk01.local


Thumbprint: B7:C0:FF:9D:C8:A1:64:AB:1B:24:8C:1C:AB:4D:86:62:1D:E6:A5:64

All Repoint configuration settings are correct: proceed? [Y|y|N|n]: y

Starting License export ... Done


Export Service Data ... Done
Uninstalling Platform Controller Services ... Done
Stopping all services ... Done
Updating registry settings ... Done
Re-installing Platform Controller Services ... Done
Registering Infra Services ... Done
Starting License import ... Done
Starting Authz Data import ... Done
Starting Tagging Data import ... Done
Starting CLS import ... Done
Starting WCP service import phase... ... Done
Starting NSXD import ... Done
Applying target domain CEIP participation preference ... Done
Starting all services ... Done
Repoint successful. ... Done

2. View the Summary in the VMware vSphere Client.

Update the VMware vCenter Server SSL certificates from VMware vCenter Server B
To update SSL certificates, go to Import VMware vSphere SSL certificates to VxRail Manager.

Refresh the node certificates in the VMware vCenter Server A


Refresh the node CA certificates in the VMware vCenter Server A.

Steps
1. Log in to the VMware vCenter Server as root.
2. Select Host > Configure > System > Certificate.
3. Click REFRESH CA CERTIFICATES and wait for the task to complete.
4. Repeat these steps on all the nodes in the VMware vCenter Server A.



Repoint the VMware vCenter Server node to a new domain
Repoint the VMware vCenter Server from an existing domain to a newly created domain.

Steps
1. Shut down the node (VMware vCenter Server A) that is being repointed to a different domain.
2. Decommission the VMware vCenter Server node that is repointed.
For example, to decommission the VMware vCenter Server A, log in to the VMware vCenter Server B (on the original
domain) and enter:
ssh root@<vcenter_ip_address>
cmsso-util unregister --node-pnid <vcentera_fqdn> --username VC_B_sso_administrator@sso_domain.com --passwd VC_B_sso_adminuser_password

Solution users, computer account and service endpoints will be unregistered


2021-01-29T03:15:10.144Z Running command: ['/usr/lib/vmware-vmafd/bin/dir-cli', 'service',
'list', '--login', 'administrator@vsphere.local']
2021-01-29T03:15:10.167Z Done running command
Stopping all the services ...
All services stopped.
Starting all the services ...
Started all the services.
Success

3. Power on the VMware vCenter Server A.


4. Optionally, to prevent data loss, take a file-based backup of each node before repointing the VMware vCenter Server.
a. Log in to the VMware vCenter Server management interface as root.
b. Click Backup.
The table under Activity displays the latest backup version that is taken of the VMware vCenter Server.

c. Click Backup Now to open the wizard.


d. OPTIONAL: Click Use backup location and username from backup schedule to use the information from a scheduled
backup.
● Enter the backup location details.
● OPTIONAL: Enter an encryption password if you want to encrypt your backup file.

To encrypt the backup data, you must use the encryption password.
● OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
● OPTIONAL: In the Description field, enter a description for the backup.
● Click Start.
5. To repoint the VMware vCenter Server A to new domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin administrator --dest-domain-name
destination_PSC_domain

Enter Source embedded vCenter Server Admin Password:


The domain-repoint operation will export License, Tags, Authorization data before
repoint and import after repoint.
WARNING: Global Permissions for the source vCenter Server system will be lost.

6. Update the VMware vCenter Server A SSL certificates from its VMware vCenter Server.
See Import VMware vSphere SSL certificates to VxRail Manager to update the certificates.
7. See Refresh the node certificates in the VMware vCenter Server A to refresh the node certificates.
For VMware documentation, see VMware docs.



Submit install base updates for VxRail
This section provides information about how to submit install base updates.

About this task


Detailed information about product registration, move or party changes, and other install base maintenance updates is available
for Dell partners.
This procedure is intended for Dell Technologies employees and partners who are authorized to work on a VxRail cluster.
This is applicable for VxRail 8.0 and later. See VxRail 8.x Support Matrix for a list of supported versions.

Steps
1. For Dell Technologies employees, see KB 197636 for information that is related to submitting install base updates for VxRail. For more information, see Product Registration and Install Base Maintenance Job Aid.
2. For partners, view the video tutorial Dell Partner Product Registration Process and see the Deployment Operations Guide.

View APEX AIOps Infrastructure Observability information in VxRail
The APEX AIOps Infrastructure Observability web portal provides cloud-based multicluster management and analytics for your VxRail.

Prerequisites
Bring up the VxRail cluster and verify that there are no critical alarms and that VMware vSAN is healthy.

About this task


This procedure applies to VxRail versions 7.0.410 and 8.0.020 and later.
See the VxRail 7.x Support Matrix or the VxRail 8.x Support Matrix for a list of the supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Open the VMware vSphere Web Client and select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Select VxRail > Support.
4. Under VxRail HCI System Software SAAS multi-cluster management, a description of the information is displayed with
a link to a demo.



8
Manage network settings
You can change the default VMware VDS NIOC configuration, change NIC ports, and share network traffic with VMware vSAN.
Use the following links to manage some network settings:

Table 14. Network settings

Network setting                                     Link
Change the default VMware VDS NIOC configuration    VxRail Change Default VDS NIOC configuration
Share network traffic with VMware vSAN              Configure Bandwidth Allocation for System Traffic

Configure a VxRail node to support the PCIe adapter port

You can use an advanced NIC definition with flexible configurations without an NDC connection. Configure the VxRail initialization and node expansion to use the PCIe adapter.

Prerequisites
Before you configure the node:
● Go to the Day 1 public API to verify that the NIC profiles in the API are ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS.
● Verify that the node has enough spare PCIe NICs for configuration.
● Configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using only the PCIe adapter, disable the NDC or OCP ports. To avoid network interruptions, log in through the iDRAC console and use the DCUI to configure the NDC or OCP ports.

About this task


Use the PCIe adapters only if NDC adapters are not used for VxRail management and discovery. Adjust the PCIe adapter
configuration before starting the VxRail initialization.
This procedure applies for VxRail clusters that are running VxRail 8.0.x or later. See the VxRail 8.0.x Support Matrix for a list of
supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the iDRAC console as root.
2. Press Alt-F1 to switch to the CLI mode.
3. To verify the status and locate which VMNICs belong to PCIe adapters, enter:
esxcfg-nics -l
Check the PCI column to identify different PCIe adapters.
4. To view the current NIC teaming policy of a vSwitch, enter:
esxcli network vswitch standard policy failover get -v vSwitch0
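
The output of the failover query resembles the following sketch; the adapter names are illustrative. The Active Adapters line shows the VMNICs that currently carry traffic on vSwitch0:

Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic0, vmnic1
Standby Adapters:
Unused Adapters: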
5. Select one of the PCIe ports and add the PCIe VMNIC into the default VMware vSwitch0.



2-port NDC and 2-port PCIe adapters are used in the VxRail E560F model. VMNIC2 and VMNIC3 are the PCIe adapter ports that are planned for use.
● Identify the PCIe NIC to configure the active and standby uplinks.
● Identify the NDC or OCP NICs to be removed from the VMware vSwitch.

To configure the VxRail node before deployment, one port from the PCIe adapter is required.

6. To add one PCIe VMNIC into the port groups, enter:


esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0

esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a
vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a
vmnic2

7. To add an additional PCIe NIC for the VxRail networking as a standby uplink, enter:
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3
8. After the nodes are configured, ping the VxRail management IP address. Perform one of the following to start the
deployment.
● For the VMware vCenter Server UI, perform the following:
○ In the VDS Settings step, select the custom VDS configuration.
○ In the Uplink Definition checklist, select two PCIe adapter ports and complete the VxRail deployment.
● If you are using the API to perform the initialization, only ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS NIC profiles are supported.
9. To expand the VxRail cluster host, perform the following:
a. Complete all the procedures on the new node.
b. Perform the node expansion using the VMware vCenter Server UI or API.
10. To expand the VxRail satellite host, perform the following:
a. Ensure that there are two adjacent PCIe adapter ports with the same network speed of at least 1 Gb/s.
b. Remove unused ports from the vSwitch0 and add the PCIe adapter ports. For example, to remove the VMNIC0 and
VMNIC1 from vSwitch0, enter:
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0

esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0

c. Verify that at least one PCIe adapter port is Active and the other is Standby. For example, to add VMNIC2 to vSwitch0
and configure it as an Active PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0

esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2



esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a vmnic2

For example, to add VMNIC3 to vSwitch0 and configure it as a Standby PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Management Network" -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Private Management Network" -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -s vmnic3

d. Use the VMware vCenter Server wizard or API to expand the node.
NOTE: The VxRail physical view page does not display the PCIe adapter information.

See Configure Physical Network Adapters on a VMware VDS for more information.

Configure jumbo frames


VxRail supports jumbo frames on VMware vSAN, management, VMware vSphere vMotion, iSCSI, and NFS traffic types.

Prerequisites
● Verify that the VxRail cluster is healthy and all nodes are running.
● On the Windows client, install the following:
○ PowerShell 5.1.14409.1005
○ Posh-SSH 2.0.2 for PowerShell
○ VMware.PowerCLI 12.2.0 build 17538434 for PowerShell
● Download the enablejumboframe_movevc_70100.ps1 script.
● When you enable the jumbo frames on the VMware VDS, uplinks are cycled up and down for approximately 20-40 seconds.
For critical applications, shut down and power on all the user VMs.
● The scripts power off and power on the VxRail Manager and the user VMs. If some VM services prevent the VM from
shutting down, manually shut down the VM. If the script fails after you power off the VMs, power on the VMs and retry.
● Do not power off the VxRail-managed VMware vCenter Server.
● If connectivity to the VMware vCenter Server fails due to a certificate error, enter:
C:\Users\stshell\Downloads> Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

Perform operation?
Performing operation 'Update PowerCLI configuration.'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): Y

Scope     ProxyPolicy     DefaultVIServerMode  InvalidCertificateAction  DisplayDeprecationWarnings  WebOperationTimeoutSeconds
-----     -----------     -------------------  ------------------------  --------------------------  --------------------------
Session   UseSystemProxy  Multiple             Ignore                    True                        300
User                                           Ignore
AllUsers

● Set the security protocol to Tls12 by entering:



C:\Users\stshell\Downloads>[Net.ServicePointManager]::SecurityProtocol =
[Net.SecurityProtocolType]::Tls12
● On the physical switch, set the MTU value to 9216 for any switch ports in the VxRail network.

About this task


The MTU setting on the physical switch must be larger than the MTU on the virtual switch to accommodate the packet header and footer overhead. The maximum MTU value depends on the physical switch limitation. VMware ESXi supports an MTU size of up to 9000 bytes. A jumbo frame MTU is any value greater than 1500.
● To enable jumbo frames on the VMware VDS, see Jumbo Frames.
● To disable the network rollback, see VMware vSphere Networking Rollback - Disable Network Rollback.
This procedure applies to the VxRail cluster running the VxRail 8.0.x and later. See the VxRail 8.0.x Support Matrix for a list of
supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
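
After the MTU is raised end to end, you can spot-check a jumbo frame path from the VMware ESXi shell on any node with vmkping. The interface and target address below are illustrative; -d prevents fragmentation, and the 8972-byte payload leaves room for the 28 bytes of IP and ICMP headers within a 9000-byte frame:

vmkping -I vmk2 -d -s 8972 192.168.101.211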

Steps
1. To enable jumbo frames for the VMware vCenter Server, perform the following:
a. Enter enablejumboframe_movevc_70100.ps1 with the following parameters:

Table 15. Command parameters


Command parameter
vCenterServer <vcenter_ipaddress>

vcUser <vcenter_username>

vcPwd <vcenter_password>

vxVDS <vds_name>

vxCluster <cluster_name>

MTU <size>
Optional: Enter the MTU size. The MTU value range is 1280–9000 bytes.

validIP <ip_address>
Use the IP address from the vmkping for the jumbo frame validation.

skipValid If skipValid is selected, ignore the validIP.

vcNotInCluster If used, VMware vCenter Server is not a VM in the selected cluster.

retryTimes <retry_times>
To retry the failed steps in the script, the minimum value is 3.

VMK <vmk interface>


The source VMkernel interface that the vmkping uses to test the jumbo frames.
The default value is vmk2.

vxmIP <vxrail_mgr_ipaddr>
VxRail Manager VM skips the power off. When your cluster uses internal DNS, this
field is required.

For example:
● Internal VMware vCenter Server with external DNS (VMware vCenter Server is a VM in the VxRail Cluster):
.\enablejumboframe_movevc_70100.ps1 -MTU 9000 -vCenterServer 192.168.101.201
-vcUser "[email protected]" -vcPwd "Testvxrail123!" -vxVDS
"VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-
d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -validIP 192.168.101.211 -retryTimes 5



● Internal VMware vCenter Server with internal DNS (VMware vCenterServer is a VM in the VxRail Cluster):
.\enablejumboframe_movevc_70100.ps1 -MTU 9000 -vCenterServer 192.168.101.201
-vcUser "[email protected]" -vcPwd "Testvxrail123!" -vxVDS
"VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-
d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vxmIP 192.168.101.200 -validIP
192.168.101.211 -retryTimes 5
● External VMware vCenter Server with external DNS (VMware vCenter Server is not in the VxRail Cluster):
.\enablejumboframe_movevc_70100.ps1 -skipValid -MTU 9000 -vCenterServer
192.168.101.201 -vcUser "[email protected]" -vcPwd "Testvxrail123!"
-vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-
d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vcNotInCluster
2. To enable jumbo frames when you add or replace a node, perform the following:
a. When you add a node to a cluster with jumbo frames enabled, select Put Hosts in Maintenance Mode.
b. Run enablejumboframe_movevc_70100.ps1 with the following parameters:

Table 16. Command parameters


Command parameter
vCenterServer <vcenter_ipaddress>

vcUser <vcenter_username>

vcPwd <vcenter_password>

vxVDS <vds_name>

vxCluster <cluster_name>

hostMode <host_mode>

addHostName <name>

MTU <MTU_size>
Optional: The MTU value range is 1280–9000 bytes.

validIP <ip_address>
Use the IP address from the vmkping for the jumbo frame validation.

skipValid If skipValid is selected, ignore the validIP.

vcNotInCluster If used, VMware vCenter Server is not a VM in the selected cluster.

VMK <vmk interface>


The source VMkernel interface that the vmkping uses to test the jumbo frames.
The default value is vmk2.

vxmIP <vxrail_mgr_ipaddr>
VxRail Manager VM skips the power off. When your cluster uses internal DNS, this
field is required.

For example:
.\enablejumboframe_movevc_70100.ps1 -skipValid -MTU 9000 -vCenterServer
192.168.101.201 -vcUser "[email protected]" -vcPwd "Testvxrail123!"
-vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-
Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vcNotInCluster -hostMode -addHostName
"engdell1-01.localdomain.local"

c. After the node is added, exit Maintenance Mode.


If you configure the cluster with the internal DNS, the VMware vCenter Server temporarily loses connectivity to the hosts
after restarting the VxRail Manager.
Power on the VxRail Manager VM if it is not powered on automatically after the procedure.



Convert a VxRail-managed VMware VDS to a
customer-managed VMware VDS
Convert VxRail-managed VMware VDS to a customer-managed VMware VDS on a customer-managed VMware vCenter Server.

Prerequisites
Obtain access to the customer-managed VMware VDS and VxRail Manager.
Before you begin the conversion, take a snapshot of all the service VMs:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click VxRail Manager and select Snapshots > Take Snapshot.
4. Enter a name and click OK.
5. Repeat these steps for the remaining service VMs.

About this task


This procedure applies to the VxRail cluster running VxRail version 8.0.x or later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is not supported in a VCF environment.

CAUTION: Do not perform this task in a VCF environment.

Steps
1. Using SSH, log in to VxRail Manager as mystic.
2. To connect to the database, enter:
psql -U postgres vxrail
3. To view the VMware VDS status in the database, enter:
select * from configuration.configuration where key='customer_supplied_vds';

id | category | key | value


----+-----------+------------------------+-------
84 | setting | customer_supplied_vds | false
(1 row)

Optional: If the above query returns null for the customer-managed VMware VDS, to add a row, enter:

INSERT INTO configuration.configuration (category,key,value)

VALUES ('setting','customer_supplied_vds','true');

4. To convert to a customer-managed VMware VDS, set the value to true by entering:


update configuration.configuration set value='true' where key='customer_supplied_vds';
5. To confirm the status of the VMware VDS, enter:
select * from configuration.configuration where key='customer_supplied_vds';

id | category | key | value


----+-----------+------------------------+-------
84 | setting | customer_supplied_vds | true
(1 row)

6. To exit the database, enter: \q



7. Optional: To migrate the VMware VDS to two VMware VDS, see Convert one VMware VDS to two VMware VDS.

Enable a VxRail node to support the PCIe adapter port without an NDC connection

VxRail 7.0.130 and later supports an advanced NIC definition to use NICs with flexible configurations. This section describes how to configure VxRail initialization and node expansion to use a PCIe adapter and how to modify the PCIe adapter configuration.

Prerequisites
● Standard cluster deployment running VxRail 7.0.130 or later.
● NIC profiles in the API: ADVANCED_VXRAIL_SUPPLIED_VDS and ADVANCED_CUSTOMER_SUPPLIED_VDS.
● The new node must have enough spare PCIe NICs for configuration.
● Configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using only PCIe adapters, the NDC ports should not be in a connected or active state. To avoid network interruption, configure the NDC ports using the DCUI through the iDRAC console.

About this task


For 16G nodes with VxRail 7.0.460 or 8.0.210 or later, PCIe adapters can only be installed from the factory and configured
without the NDC connections. Predefined options are supported for NIC profiles.
Use PCIe adapters only if no NDC adapters are used for VxRail management and discovery. You must adjust the PCIe adapter configuration before starting the VxRail initialization procedure.
This procedure is intended for customers, Dell service providers who are authorized to work on VxRail clusters, and VxRail administrators. It applies to a VxRail 7.0.130 or later cluster that is managed by either a VxRail-managed or a customer-managed VMware vCenter Server.

Steps
1. Log in to the node iDRAC interface and open the console.
2. Press Alt+F1 to switch to CLI mode.
3. Log in to the CLI as root.
4. To check the VMNIC status and locate the VMNICs from PCIe, enter: esxcfg-nics -l
Check the PCI column to identify the different PCIe adapters in the result.
5. Select one of the PCIe ports and add it into vSwitch0 as shown in the next step.
6. In the following example, 2-port NDC and 2-port PCIe adapters are used on a VxRail E560F. VMNIC2 and VMNIC3 are the ports that are planned for use from the PCIe adapters. Only one port from the PCIe adapter must be configured before the VxRail deployment. To configure and add the PCIe VMNIC into the default vSwitch0, perform the following:
a. Enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0

esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a
vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private VM Network"
-a vmnic2

b. Repeat the above step on all nodes planned for deployment.


After all nodes are configured, ping the VxRail management IP address and go to the UI or API to start deployment.



7. After VxRail initialization is complete, perform the following:
a. In the VMware VDS configuration setting, select Custom.
b. In the uplink definition checklist, select the proper PCIE adapter port and complete VxRail Deployment Wizard.
If you are using the API to perform the initialization, only the ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS NIC profiles are supported.
8. For VxRail node expansion, perform the following:
a. Expand the cluster host and complete the entire procedure for new node.
b. Perform the node expansion using the Wizard or API.
c. To expand a satellite node, ensure that there are at least two adjacent PCIe ports with the same network speed of at least 1 Gb/s.
d. Remove unused ports from vSwitch0 and add the PCIe ports. At least one adapter must be active and one standby.
For example, to remove VMNIC0 and VMNIC1 from vSwitch0, enter the following:
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0
esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0
e. To add VMNIC2 to vSwitch0, and configure it as an active adapter, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -a vmnic2

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a
vmnic2

esxcli network vswitch standard portgroup policy failover set -p "Private VM Network"
-a vmnic2

f. To add VMNIC3 to vSwitch0, and configure it as a standby adapter, enter:


esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0

esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -s vmnic3

esxcli network vswitch standard portgroup policy failover set -p "VM Network" -s
vmnic3

esxcli network vswitch standard portgroup policy failover set -p "Private VM Network"
-s vmnic3

9. Perform the node expansion using the UI wizard or API. Known issue: The VxRail Physical View page does not display PCIe adapter information. See Configure Physical Network Adapters on a vSphere Distributed Switch for more information.

Enable dynamic link aggregation for two ports on a VxRail network for a VxRail-managed VMware VDS
Enable dynamic link aggregation on a VxRail network running the VxRail 8.0.x or later versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.



Verify the version of the VxRail cluster
The VxRail cluster must be running VxRail 8.0.x or later versions to enable LAG with two ports that support VxRail networking.

About this task


Verify that the firmware on the Dell switch is later than 10.5.3.0 and set each port with LACP individual function. For a non-Dell
switch, check each port with LACP individual function.

Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster, and click the Configure tab.
4. Expand VxRail and click System.

Verify the health state of the VxRail cluster


Verify that the VxRail cluster is healthy.

About this task

CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.

Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.

Verify the VMware VDS health status


The VMware VDS must be in a healthy state.

About this task

CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.

Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS switch is in a healthy state.
8. Disable the VMware VDS health service.
The health check can report incorrect network conditions after LAG is enabled.



Verify the VMware VDS uplinks
Verify that the minimum number of uplinks are assigned to support VxRail.

About this task


This procedure applies to all port groups that are transferred to LAG traffic.
If you have multiple ports or NICs, reallocate some port groups to LAG traffic. The other port groups remain on dedicated uplinks.
The following minimum uplinks are required for a VxRail cluster configuration:
● For one VMware VDS, two uplinks are required.
● For two VMware VDS, two uplinks per VMware VDS are required.
CAUTION: Do not proceed with this task unless the required minimum uplinks are assigned to support the VxRail
network.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network.
3. Select Settings > Edit Settings.
4. On the Edit Settings window, select the Uplinks tab.
5. Verify the uplinks and click OK.

Confirm isolation of the VxRail port group


LAG is supported on all VxRail networks in this configuration. You can apply LAG on one or more of these networks.

About this task


CAUTION: The uplinks in an active-standby state on the VxRail networks targeted for link aggregation must be
isolated to continue with this procedure.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the port group and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.



Figure 16. Distributed Port Groups - teaming and failover

6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.

Identify the NICs for LAG


Identify the NICs that are targeted for LAG.

About this task


If you have already identified the switch ports that support LAG, you can skip this task.

Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.



Figure 17. VMNICs assigned to each uplink

Identify NIC assignment to node ports


Identify assignment of the NICs to node ports.

Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
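
The output resembles the following sketch (values illustrative). Record the MAC Address column so that you can match each VMNIC to a switch port in the next task:

Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
vmnic1  0000:19:00.1  ixgben  Up            Up           10000  Full    b8:59:9f:58:49:7d  9000  Intel(R) 10GbE adapter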

Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.

Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.

Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC or OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch
Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each of VxRail node to discover the switch port connections.



Prepare the switches for LAG
To enable multichassis link aggregation across a pair of switches, configure VLT between the switches. VLT supports the
aggregation of the ports terminating on separate switches.

Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.

About this task


For Dell operating system 10, VLT configures a logical connection to enable LAG across a pair of switches. The command syntax
that is shown in this task is based on Dell operating system 10. The command differs from model to model and vendor to vendor.
See your switch vendor documentation or contact your technical support team for more information.
For the Dell switch model, confirm that the firmware is greater than 10.5.3.0 and set each port with LACP individual function.
For a non-Dell switch, check each port with LACP individual function.
For a multichassis LAG, configure a VLT trunk between the switches.

Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt

!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30

2. Configure a port channel to support LAG with the node ports.


● A port channel is configured for each node in the VxRail cluster.
● For a multichassis link aggregation, port channels are configured on both switches.
● For a multichassis link aggregation, the port channel ID values must match on both switches.
● Define the VLAN or VLANs for the VxRail networks that are targeted for link aggregation.

To view the configuration on a port channel, enter: show running-configuration interface port-channel
100

interface port-channel 100


description "Node 1 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 100
lacp individual

interface port-channel 101
 description "Node 2 VPC"


no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 101
lacp individual

3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network loss during STP convergence. The command syntax differs from model to model and vendor to vendor; contact your physical switch vendor for detailed configuration information. For example (a configuration sketch follows the list):
Cisco switch:
● spanning-tree portfast (for an access port)



● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
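
As a sketch, on a Dell operating system 10 switch, the trunked port channel from the earlier example could be set to edge mode as follows; the port-channel ID is the one configured for the node and is illustrative here:

interface port-channel 100
 spanning-tree port type edge trunk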

Configure the first switch for LAG


The switch port that connects the VMNIC being moved to the LACP policy is added to the port channel. In this example, move VMNIC1 to the LAG and then move the LAG into the port channel for each node.

Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>

ethernet1/1/3 crkm01esx03.crk.v... b8:59:9f:58:44:a5 vmnic1


ethernet1/1/6 crkm01esx04.crk.v... b8:59:9f:58:45:55 vmnic1
ethernet1/1/9 crkm01esx01.crk.v... b8:59:9f:58:49:7d vmnic1
ethernet1/1/12 crkm01esx02.crk.v... b8:59:9f:58:49:dd vmnic1

3. To configure the switch interface and set the channel group to Active, enter:

interface ethernet 1/1/9

channel-group 101 mode active

4. Repeat these steps for each switch interface that is configured into the LACP policy.

Configure the second ToR switch for LAG


After you move a VMNIC to the LAG on the VMware VDS, the switch interface that is connected to that VMNIC is added to the port channel. Move the second VMNIC into the port channel for each node. Migrate the second switch interface that supports the VMware vSAN or VMware vSphere vMotion network to a port channel.

Steps
1. Open a console session to the second ToR switch.
2. To confirm the VMNIC connections using LLDP, enter:
show lldp neighbors | grep <vmnic>

26-II-TOR-A# show lldp neighbors | grep crkm01 | grep vmnic2


ethernet1/1/1 crkm01esx03.crk.v... 04:3f:72:c3:77:78 vmnic2
ethernet1/1/5 crkm01esx04.crk.v... 04:3f:72:c3:77:7c vmnic2
ethernet1/1/7 crkm01esx01.crk.v... 04:3f:72:c3:77:28 vmnic2
ethernet1/1/10 crkm01esx02.crk.v... 04:3f:72:c2:09:2c vmnic2

3. To configure the switch interface, enter:


26-II-TOR-A(config)# interface ethernet 1/1/7

4. To set the channel group to active, enter:


26-II-TOR-A(conf-if-eth1/1/7)# channel-group 101 mode active

5. For the remaining interfaces, set the channel group to active.



Identify the load-balancing policy on the switches
The command syntax that is shown is based on Dell Operating System 10. The command differs from model to model and
vendor to vendor.

About this task


See your switch vendor documentation or contact your technical support team for more information.

Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance

Load-Balancing Configuration For LAG and ECMP:


----------------------------------------------
IPV4 Load Balancing : Enabled
IPV6 Load Balancing : Enabled
MAC Load Balancing : Enabled
TCP-UDP Load Balancing : Enabled
Ingress Port Load Balancing : Disabled
IPV4 FIELDS : source-ip destination-ip protocol vlan-id l4-destination-port
l4-source-port
IPV6 FIELDS : source-ip destination-ip protocol vlan-id l4-destination-port
l4-source-port
MAC FIELDS : source-mac destination-mac ethertype vlan-id
TCP-UDP FIELDS : l4-destination-port l4-source-port

2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the VxRail network.
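
After the LACP policy is created on the VMware VDS (see the next task), you can also query the LACP configuration from a VxRail node to cross-check the load-balancing mode against the switch. This command is a sketch and assumes the esxcli LACP namespace is available in your ESXi build:

esxcli network vswitch dvs vmware lacp config get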

Configure the LACP policy on the VxRail VDS


Configure the LACP policy on the VxRail VDS.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.



Figure 18. Topology

Verify the port flags


Verify that the port flag is set to individual on each switch.

Steps
1. To check the flag setting on the switch, enter:
show port-channel summary

Figure 19. Flag settings

2. Verify that IND is displayed next to each of the ports.
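
On a Dell operating system 10 switch, the summary resembles the following sketch (IDs and member ports illustrative); the (IND) suffix confirms that a member port is operating as an individual link:

Group Port-Channel        Type  Protocol  Member Ports
------------------------------------------------------
100   port-channel100(U)  Eth   DYNAMIC   1/1/9(IND)
101   port-channel101(U)  Eth   DYNAMIC   1/1/12(IND)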

Migrate the uplink to a LAG port


Assign one of the standby VMNICs to the LAG ports. Verify that the LAG ports peer with the switch ports.

Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, select all the member hosts and click NEXT.



Figure 20. Select hosts

4. On the Manage physical adapters page, assign one VMNIC to the LAG on each host and click NEXT.

Figure 21. Manage physical adapters

5. Skip the Manage VMkernel adapters and Migrate VM networking pages.


6. On the Ready to complete page, review the uplink reassignment and click FINISH.

Migrate the LACP policy to the standby uplink


Migrate the LACP policy to the standby uplink on the target port group.

Steps
1. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink.
2. Select Distributed Port Group > Manage Distributed Port Groups.
3. On the Select port group policies page, select Teaming and failover, and then click Next.
4. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
5. On the Teaming and failover page, under the Failover order section, use the UP and DOWN arrows to migrate the uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.



c. Repeat steps a and b for all port groups.

Figure 22. Distributed port group settings

6. On the Ready to complete page, review the changes, and click FINISH.
7. A warning message displays while migrating the physical adapters. Click OK to discard the issues and proceed or Cancel to
review your changes.
8. Verify that one of the ports is connected to the LAG. Yellow connections in the following example indicate that connections are applied to all port groups.

Figure 23. LAG connectivity to ports

9. To view the status of the switch, enter:


show port-channel summary



Figure 24. Switch status

10. Verify that IND and P display next to each of the ports.

Move the second VMNIC to LAG


Migrate the second VMNIC that supports all the port groups to LAG.

Steps
1. Right-click the VMware VDS and select Add and Manage Hosts.
2. On the Select task window, select Manage host networking and click NEXT.
3. On the Select hosts window, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. On the Manage physical adapters window, perform the following:
● For uplinks transferred to the LAG, select the VMNIC associated with the uplink and assign it to lag1-1 in the topology that carries all port group traffic.
● Replace vmnic2, which still uses the original uplink, with an unassigned LAG port.
5. Skip the remaining screens and click Finish.
6. To verify the switch status, enter: show port-channel summary
7. Verify that all connections are migrated to the LAG.
vmnic1 and vmnic5 support the network that is targeted for link aggregation. They were unassigned from uplink2 and uplink4 and reassigned to the two ports that are attached to the LACP policy.

Verify LAG connectivity on VxRail nodes


Verify the LACP connectivity on the VMware VDS.

Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get

DVSwitch LAGID NIC Rx Errors Rx LACPDUs Tx Errors Tx LACPDUs


------------------- ---------- -------- ---------- ---------- --------- ----------
crk-m01-c01-vds01 3247427758 vmnic1 0 21 0 89

3. Repeat this procedure on the other VxRail nodes to validate the LACP status.
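
In addition to the counters, you can check the negotiated LACP state of each uplink. This command is a sketch and assumes the same esxcli LACP namespace as the stats query above:

esxcli network vswitch dvs vmware lacp status get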



Verify that LAG is configured in the VMware VDS
Verify that LAG is active on the VMware VDS port groups.

Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.

Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.

Figure 25. LAG uplinks

Enable dynamic link aggregation for four ports on a VxRail network for a VxRail-managed VMware VDS
Enable dynamic LAG on a VxRail network running the VxRail 8.0.x or later versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Verify the VxRail version on the VxRail cluster


About this task
The VxRail cluster must be running VxRail 8.0.x or later versions.

Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster and click the Configure tab.
4. Expand VxRail, and click System.



Verify the health state of the VxRail cluster
Verify that the VxRail cluster is healthy.

About this task

CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.

Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.

Verify the VMware VDS health status


The VMware VDS must be in a healthy state.

About this task

CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.

Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS switch is in a healthy state.
8. Disable the VMware VDS health service.
The health check can falsely detect incorrect network conditions while LAG is enabled.

Verify the VMware VDS uplinks


Verify that the minimum number of uplinks are assigned to support VxRail.

About this task


The following minimum uplinks are required for a VxRail cluster configuration:
● For one VMware VDS supporting the VxRail cluster, a Day 1 deployment with four high-speed ports requires four uplinks, and a Day 1 deployment with two high-speed ports requires two uplinks.
● For two VMware VDS, two uplinks per VMware VDS are required for link aggregation.
CAUTION: Do not proceed with this task unless four ports per node are assigned to support the VxRail network.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.



2. Right-click the VMware VDS that supports the VxRail cluster network that is targeted for LAG.
3. Select Settings > Edit Settings.
4. Select Uplinks.
5. Verify that the number of uplinks that are assigned to the VMware VDS support LAG.

Confirm uplink isolation of the VxRail port group


Confirm uplink isolation of the VxRail port group that is targeted for LAG.

About this task


Link aggregation is supported on the non-management VxRail networks only. These networks are the VMware vSAN and
VMware vMotion networks. Link aggregation can be applied to either or both of these networks.
CAUTION: The uplinks in an active-standby state on the VxRail networks targeted for link aggregation must be
isolated to continue with this procedure.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS switch that supports the VxRail cluster network, and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the selected port group, and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.

Identify the NICs for LAG


Identify the NICs that are targeted for LAG.

About this task


If you have already identified the switch ports that support LAG, you can skip this task.

Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.



Figure 26. VMNICs assigned to each uplink

Identify NIC assignment to node ports


Identify assignment of the NICs to node ports.

Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
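The output resembles the following abridged and hypothetical listing; adapter names, drivers, speeds, and descriptions depend on your hardware. The MAC Address column is what you match against the switch-side LLDP output:

Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  -----------
vmnic0  0000:19:00.0  icen    Up            Up           25000  Full    e4:43:4b:5e:01:e0  1500  Intel(R) ...
vmnic1  0000:19:00.1  icen    Up            Up           25000  Full    e4:43:4b:5e:01:e1  1500  Intel(R) ...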

Identify switch ports for LAG


Identify switch ports that are targeted for LAG using LLDP or iDRAC.

About this task


To use LLDP to identify the switch ports, the ToR switches must support LLDP discovery.
If LLDP discovery is not supported, use iDRAC to identify the switch ports. Go to Identify the switch ports that are targeted for
LAG using iDRAC to perform this procedure.

Identify the switch ports that are targeted for LAG using LLDP
The ToR switches must support LLDP discovery to identify the switch ports. Do not perform this task if the switch does not
support LLDP discovery.

About this task


Skip this task and go to Identify the switch ports that are targeted for LAG using iDRAC if you do not have console access to
the ToR switches or if the ToR switches do not support LLDP discovery.



The command syntax in this task is based on Dell OS10. The command differs from model to model and vendor to vendor.
Contact your technical support team or see your switch vendor documentation.

Steps
1. Open a console session to the ToR switches that support the VxRail cluster.
2. To identify the VMNICs that are connected for each node, enter:
show lldp neighbors | grep <hostname>

ethernet1/1/1  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e0  vmnic0
ethernet1/1/2  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5f  vmnic5
ethernet1/1/1  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5e  vmnic4
ethernet1/1/2  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e1  vmnic1

● In this example, VMNIC0 and VMNIC4 are assigned to the VxRail network that is not targeted for LAG. VMNIC1 and VMNIC5 are assigned to the VxRail network that is targeted for LAG.
● VMNIC1 and VMNIC5 are connected to separate switches.
● The MAC address for each pairing is different, which indicates that the source adapter for one NIC port is on the NDC and the other NIC port is on a PCIe adapter card.
3. Use the VMNIC values captured from the switch topology view in the vSphere Client to identify the switch ports planned for link aggregation.
4. Repeat the query for each VMware ESXi hostname to discover the NICs.

Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.

Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.

Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC addresses under Switch Connection ID differ between views, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.

Prepare the switches for LAG


To enable multichassis link aggregation across a pair of switches, configure VLT between the switches. VLT supports the
aggregation of the ports terminating on separate switches.

Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.



About this task
For Dell operating system 10 (OS10), VLT configures a logical connection to enable LAG across a pair of switches. The command syntax that is shown in this task is based on Dell OS10. The command differs from model to model and vendor to vendor. See your switch vendor documentation or contact your technical support team for more information.
For a Dell switch, confirm that the firmware version is later than 10.5.3.0 and enable the LACP individual function on each port. For a non-Dell switch, verify that each port supports the LACP individual function.
For a multichassis LAG, configure a VLT trunk between the switches.

Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt

!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30

2. Configure a port channel to support LAG with the node ports.


● A port channel is configured for each node in the VxRail cluster.
● For a multichassis link aggregation, port channels are configured on both switches.
● For a multichassis link aggregation, the port channel ID values must match on both switches.
● Define the VLAN or VLANs for the VxRail networks that are targeted for link aggregation.

To view the configuration on a port channel, enter:
show running-configuration interface port-channel 100

interface port-channel 100
description "Node 1 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 100
lacp individual

interface port-channel 101
description "Node 2 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 101
lacp individual

3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network loss during STP convergence. The command to set STP portfast depends on the switch model and vendor; contact your physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)



Configure switch ports for link aggregation
A port channel is configured on a switch port for each VxRail node. For a multichassis link aggregation, switch ports on both
switches are configured to port channels.

About this task


The examples in this task are for reference only. Consult with your switch vendor or reference the technical guide for your
switch models to complete this task. In this example using Dell OS10, the channel group is set to active mode on each switch
port.

Steps
1. Open a console to the adjacent ToR switches.
2. To configure each switch port to peer with a pair of node ports for link aggregation, enter:
configure terminal
interface ethernet 1/1/2
channel-group 100 mode active
exit

Configure the LACP policy on the VxRail VDS


Configure the LACP policy on the VxRail VDS.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.



Figure 27. Topology

Verify that port flags are all individual on the switch


View the output and verify that each port displays (IND).

Steps
1. To view the port channels on each switch, enter:
show port-channel summary

2. Verify that each port displays (IND).
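The summary output should resemble the following minimal illustration, assuming Dell OS10 with hypothetical group numbers and interfaces; the (IND) flag confirms that each member is still operating as an individual link:

Group  Port-Channel         Type  Protocol  Member Ports
---------------------------------------------------------------------------
101    port-channel101 (U)  Eth   DYNAMIC   1/1/9 (IND)
102    port-channel102 (U)  Eth   DYNAMIC   1/1/12 (IND)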

Migrate the LACP policy to standby uplink


Migrate the LACP policy to the standby uplink on the target port group.

Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and failover, and then click Next.
5. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
6. On the Teaming and failover page, under Failover order, use the UP and DOWN arrows to migrate between the uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat these steps for all port groups.



Figure 28. Distributed Port Group - Edit Settings

7. On the Ready to complete page, review the changes, and click FINISH.
8. A warning message displays while migrating the physical adapters. Click OK to dismiss the warnings and proceed, or click Cancel to review your changes.
9. Verify that one of the ports is connected to LAG. The yellow connections in the following example indicate that the connections are applied to all port groups.

Figure 29. Port group connections

10. To view the status of the switch, enter:


show port-channel summary

11. Verify that (IND) and (P) are displayed next to each of the ports.



Figure 30. Port status output

Change LAG to the active uplink


Set the LAG as Active in the Teaming and Failover Order of the distributed port group.

Steps
1. Select the VMware VDS.
2. Right-click the port group where LAG is the standby and click Edit settings.
3. Select Teaming and failover.
4. In the Failover order, use the up and down arrows to move the LAG to the Active list and all stand-alone uplinks to the Unused list. Leave the Standby list empty, and click OK.
5. Repeat for all the port groups that use LAG.

Migrate the active uplink to a link aggregation port


The LAG ports must peer with the switch ports to complete the link aggregation process.

About this task


Assigning the NICs to LAG ports should be temporary.

Steps
1. Right-click the VMware VDS and select Add and Manage Hosts > Manage Host Networking.
2. On the next screen, select Attached hosts.
3. In the Select Member Hosts window, select all hosts in the VxRail cluster and click OK.
4. On the next screen, repeat the steps for the two NICs targeted for link aggregation.
a. Select one VMNIC on the first host.
b. Select Unassign adapter.
c. Check Apply this operation to all other hosts and click Unassign.
d. Select the same NIC under the On other switches/unclaimed list and click Assign uplink.
e. Select a port that is assigned to the LACP policy.
f. Check Apply uplink assignment to rest of the hosts and click OK.
g. Select the next NIC targeted for link aggregation and repeat the steps.
5. Review the uplink reassignment.



Figure 31. Uplink reassignments

In this example, vmnic1 and vmnic5 support the network that is targeted for link aggregation. Both VMNICs were unassigned from uplink2 and uplink4 and reassigned to the two ports that are attached to the LACP policy.
6. Skip the rest of the screens and click Finish.

Verify link aggregation connectivity


Verify LAG on the port channels to validate LACP status.

Steps
1. To verify the port channels on each switch, enter:
show port-channel summary

2. To view the LACP counters on the switches, enter:


show lacp counters

Check the output for errors.
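Healthy counters resemble the following illustration, which mirrors the output format shown later in this guide; interface numbers and counts are placeholders. A nonzero value in the Err Pkts column indicates an LACP negotiation problem:

                LACPDUs      Port Marker   Marker Response  LACPDUs
                Sent  Recv   Sent  Recv    Sent   Recvs     Err Pkts
--------------------------------------------------------------------------
ethernet1/1/9   0     0      0     0       18     15        0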


3. Reopen the VMware ESXi console session on one of the VxRail nodes.
4. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get

DVSwitch           LAGID       NIC     Rx Errors  Rx LACPDUs  Tx Errors  Tx LACPDUs
-----------------  ----------  ------  ---------  ----------  ---------  ----------
crk-m01-c01-vds01  3247427758  vmnic2  0          17          0          62
crk-m01-c01-vds01  3247427758  vmnic1  0          243         0          312

5. Repeat these steps on the other VxRail nodes to validate the LACP status.
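In addition to the counters, you can query the negotiation state directly from the same esxcli namespace (an optional check on a VMware ESXi host):

esxcli network vswitch dvs vmware lacp status get

The output lists each LAG port with its local and partner information; a port that has completed negotiation shows matching partner details for the connected switch.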



Enable dynamic link aggregation for four ports on a VxRail network for a customer-managed VMware VDS
Enable dynamic LAG on a VxRail network running VxRail 8.0.x or later.

About this task


This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Verify the VxRail version on the VxRail cluster


About this task
The VxRail cluster must be running VxRail 8.0.x or later versions to enable LAG with two, four, six, or eight ports that support
VxRail networking.

Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster and click the Configure tab.
4. Expand VxRail, and click System.

Verify the health state of the VxRail cluster


Verify that the VxRail cluster is healthy.

About this task

CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.

Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.

Verify the VMware VDS health status


The VMware VDS must be in a healthy state.

About this task

CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.

Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.



b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS switch is in a healthy state.
8. Disable the VMware VDS health service.
The health check can falsely detect incorrect network conditions while LAG is enabled.

Verify the VMware VDS uplinks


Verify that the minimum number of uplinks are assigned to support VxRail.

About this task


The following minimum uplinks are required for a VxRail cluster configuration:
● For one VMware VDS supporting the VxRail cluster, there must be two uplinks.
● For two VMware VDS, two uplinks per VMware VDS are required for link aggregation.
CAUTION: Do not proceed with this task unless four ports per node are assigned to support the VxRail network.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network that is targeted for LAG.
3. Select Settings > Edit Settings.
4. Select Uplinks.
5. Verify that the number of uplinks that are assigned to the VMware VDS support LAG.

Confirm isolation of the VxRail port group


LAG is supported on all networks. You can apply LAG to either or both networks.

About this task


CAUTION: The uplinks in an active-standby state on the VxRail networks targeted for link aggregation must be
isolated to continue with this procedure.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the port group and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.



Figure 32. Distributed Port Groups - teaming and failover

6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.

Identify the NICs for LAG


Identify the NICs that are targeted for LAG.

About this task


If you have already identified the switch ports that support LAG, you can skip this task.

Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.



Figure 33. VMNICs assigned to each uplink

Identify NIC assignment to node ports


Identify assignment of the NICs to node ports.

Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
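If the MAC addresses alone do not make the NDC-to-PCIe pairing obvious, the PCI bus address of a NIC can help. The following is a supplementary check; vmnic4 is an example name:

esxcli network nic get -n vmnic4

In the Driver Info section of the output, the Bus Info field reports the PCI address, which distinguishes ports on the NDC from ports on a PCIe adapter card.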

Identify switch ports for LAG


Identify switch ports that are targeted for LAG using LLDP or iDRAC.

About this task


To use LLDP to identify the switch ports, the ToR switches must support LLDP discovery.
If LLDP discovery is not supported, use iDRAC to identify the switch ports. Go to Identify the switch ports that are targeted for
LAG using iDRAC to perform this procedure.

Identify the switch ports that are targeted for LAG using LLDP
The ToR switches must support LLDP discovery to identify the switch ports. Do not perform this task if the switch does not
support LLDP discovery.

About this task


Skip this task and go to Identify the switch ports that are targeted for LAG using iDRAC if you do not have console access to
the ToR switches or if the ToR switches do not support LLDP discovery.



The command syntax in this task is based on Dell OS10. The command differs from model to model and vendor to vendor.
Contact your technical support team or see your switch vendor documentation.

Steps
1. Open a console session to the ToR switches that support the VxRail cluster.
2. To identify the VMNICs that are connected for each node, enter:
show lldp neighbors | grep <hostname>

ethernet1/1/1  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e0  vmnic0
ethernet1/1/2  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5f  vmnic5
ethernet1/1/1  mrm-wd-n4.mrmvxra...  f4:e9:d4:09:7d:5e  vmnic4
ethernet1/1/2  mrm-wd-n4.mrmvxra...  e4:43:4b:5e:01:e1  vmnic1

● In this example, VMNIC0 and VMNIC4 are assigned to the VxRail network that is not targeted for LAG. VMNIC1 and VMNIC5 are assigned to the VxRail network that is targeted for LAG.
● VMNIC1 and VMNIC5 are connected to separate switches.
● The MAC address for each pairing is different, which indicates that the source adapter for one NIC port is on the NDC and the other NIC port is on a PCIe adapter card.
3. Use the VMNIC values captured from the switch topology view in the vSphere Client to identify the switch ports planned for link aggregation.
4. Repeat the query for each VMware ESXi hostname to discover the NICs.

Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.

Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.

Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC addresses under Switch Connection ID differ between views, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.

Prepare the switches for link aggregation


To enable multichassis link aggregation across a pair of switches, configure VLT between the switches. VLT supports the
aggregation of the ports terminating on separate switches.

Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
Connect the Ethernet cables between one or two pairs of ports on each switch.



About this task
For Dell operating system 10 (OS10), VLT configures a logical connection to enable LAG across a pair of switches. The command syntax that is shown in this task is based on Dell OS10. The command differs from model to model and vendor to vendor. See your switch vendor documentation or contact your technical support team for more information.
For a Dell switch, confirm that the firmware version is later than 10.5.3.0 and enable the LACP individual function on each port. For a non-Dell switch, verify that each port supports the LACP individual function.
For a multichassis LAG, configure a VLT trunk between the switches.

Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt

!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30

2. Configure a port channel to support LAG with the node ports.


● A port channel is configured for each node in the VxRail cluster.
● For a multichassis link aggregation, port channels are configured on both switches.
● For a multichassis link aggregation, the port channel ID values must match on both switches.
● Define the VLAN or VLANs for the VxRail networks that are targeted for link aggregation.
● For each port channel, LACP individual function is enabled.

To view the configuration on a port channel, enter:
show running-configuration interface port-channel 100

interface port-channel 100
description "Node 2 VPC"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
mtu 9216
vlt-port-channel 100
lacp individual

3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network loss during STP convergence. The command to set STP portfast depends on the switch model and vendor; contact your physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)



Identify the load-balancing policy on the switches
The command syntax that is shown is based on Dell Operating System 10. The command differs from model to model and
vendor to vendor.

About this task


See your switch vendor documentation or contact your technical support team for more information.

Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance

Load-Balancing Configuration For LAG and ECMP:
----------------------------------------------
IPV4 Load Balancing          : Enabled
IPV6 Load Balancing          : Enabled
MAC Load Balancing           : Enabled
TCP-UDP Load Balancing       : Enabled
Ingress Port Load Balancing  : Disabled
IPV4 FIELDS    : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
IPV6 FIELDS    : source-ip destination-ip protocol vlan-id l4-destination-port l4-source-port
MAC FIELDS     : source-mac destination-mac ethertype vlan-id
TCP-UDP FIELDS : l4-destination-port l4-source-port

2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the VxRail network.

Configure the LACP policy on the VxRail VDS


Configure the LACP policy on the VxRail VDS.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.



Figure 34. Topology

Migrate the LACP policy to standby uplink


Migrate the LACP policy to the standby uplink on the target port group.

Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and failover, and then click Next.
5. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
6. On the Teaming and failover page, under Failover order, use the UP and DOWN arrows to migrate between the uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat these steps for all port groups.



Figure 35. Distributed Port Group - Edit Settings

7. On the Ready to complete page, review the changes, and click FINISH.
8. A warning message displays while migrating the physical adapters. Click OK to dismiss the warnings and proceed, or click Cancel to review your changes.
9. Verify that one of the ports is connected to LAG. The yellow connections in the following example indicate that the connections are applied to all port groups.

Figure 36. Port group connections

10. To view the status of the switch, enter:


show port-channel summary

11. Verify that (IND) and (P) are displayed next to each of the ports.



Figure 37. Port status output

Migrate an unused uplink to a LAG port


You can temporarily assign the VMNICs to LAG ports. The LAG ports must peer with the switch ports to complete the LAG
process.

Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, select all the Member hosts and click NEXT.

Figure 38. Add and Manage Hosts

4. On the Manage physical adapters page, select one VMNIC to assign an uplink on each host.
5. Repeat the process of assigning uplinks to all the hosts, and click Next.

Figure 39. Assign uplink

6. Review the uplink reassignment.



In the above example, vmnic1 and vmnic5, which support the network that is targeted for link aggregation, were
unassigned from uplink2 and uplink4 and reassigned to the two ports that are attached to the LACP policy.
7. Skip the rest of the screens and click FINISH.

Configure the first switch for LAG


The port on the switch that connects the VMNIC moved to the LACP policy is added to the port channel. In this example, move VMNIC1 to the LAG and then move the LAG into the port channel for each node.

Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>

ethernet1/1/3   crkm01esx03.crk.v...  b8:59:9f:58:44:a5  vmnic1
ethernet1/1/6   crkm01esx04.crk.v...  b8:59:9f:58:45:55  vmnic1
ethernet1/1/9   crkm01esx01.crk.v...  b8:59:9f:58:49:7d  vmnic1
ethernet1/1/12  crkm01esx02.crk.v...  b8:59:9f:58:49:dd  vmnic1

3. To configure the switch interface and set the channel group to Active, enter:

interface ethernet 1/1/9

channel-group 101 mode active

4. Repeat these steps for each switch interface that is configured into the LACP policy.

Verify LAG connectivity on the switch


Verify the port channel and LACP counters on ToR switches.

Steps
1. To verify the port channels of the switch, enter:

show port-channel summary

Flags: D - Down  I - member up but inactive  P - member up and active
       U - Up (port-channel)  F - Fallback Activated
---------------------------------------------------------------------------
Group  Port-Channel         Type  Protocol  Member Ports
---------------------------------------------------------------------------
101    port-channel101 (U)  Eth   DYNAMIC   1/1/9 (P)
102    port-channel102 (U)  Eth   DYNAMIC   1/1/12 (P)
103    port-channel103 (U)  Eth   DYNAMIC   1/1/3 (P)
104    port-channel104 (U)  Eth   DYNAMIC   1/1/6 (P)

2. To view the LACP counters on the switches for errors, enter:

show lacp counter

                LACPDUs      Port Marker   Marker Response  LACPDUs
                Sent  Recv   Sent  Recv    Sent   Recvs     Err Pkts
--------------------------------------------------------------------------
ethernet1/1/9   0     0      0     0       18     15        0
ethernet1/1/12  0     0      0     0       17     14        0
ethernet1/1/3   0     0      0     0       16     13        0
ethernet1/1/6   0     0      0     0       15     10        0

3. For a multichassis LAG, to verify the port channel status for both the VLT peers, enter:
show vlt <id> vlt-port-detail

Verify link aggregation connectivity on VxRail nodes


Verify the LACP connectivity on the VMware VDS.

Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get

DVSwitch           LAGID       NIC     Rx Errors  Rx LACPDUs  Tx Errors  Tx LACPDUs
-----------------  ----------  ------  ---------  ----------  ---------  ----------
crk-m01-c01-vds01  3247427758  vmnic1  0          21          0          89

3. Repeat these steps on the other VxRail nodes to validate the LACP status.

Move VMware vSAN or VMware vSphere vMotion traffic to LAG


Once the LAG is enabled with a single connected interface, you can migrate the VMware vSAN or VMware vSphere vMotion
traffic to the LAG.

Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that is targeted for LAG.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and Failover, and then click Next.
5. On the Select port groups page, select the VMware vSAN or VMware vSphere vMotion distributed port groups and click
Next.
6. On the Teaming and failover page, click MOVE UP and MOVE DOWN to move the LACP policy to Active uplinks and all
the other uplinks to Unused uplinks, and then click Next.



Figure 40. Teaming and failover

7. On the Ready to complete page, review the changes, and click FINISH.

Verify that LAG is configured in the VMware VDS


Verify that LAG is active on the VMware VDS port groups.

Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.

Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.

Figure 41. LAG uplinks



Move the second VMNIC to LAG
Migrate the second VMNIC that supports VMware vSAN and VMware vSphere vMotion traffic to LAG.

Steps
1. Right-click the VMware VDS, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. For the second port on the LAG, select the VMNIC associated with the uplink that is not used for VMware vSAN and
VMware vSphere vMotion.
5. Select the VMNIC on the first host.
6. Select Unassign adapter.
7. Enable Apply this operation to all other hosts.
8. Click UNASSIGN.
9. Select the same NIC under the On other switches/unclaimed list.
10. Select Assign uplink.
11. Assign the uplink to an available port on the LAG.
12. Select Apply uplink assignment to rest of the hosts and click OK.
13. Review the uplink assignment.

Figure 42. Uplink assignment

In this example, vmnic2 has been unassigned from uplink2 and reassigned to the second port that is attached to the LAG.

14. Skip the remaining screens and click Finish.

Configure the second ToR switch for LAG


After you move the VMNIC to the LAG on the VMware VDS, the switch interface that is connected to the VMNIC is added to the port channel. Move the second VMNIC into the port channel for each node by migrating the second switch interface that supports the VMware vSAN or VMware vSphere vMotion traffic to the port channel.

Steps
1. Open a console session to the second ToR switch.
2. To confirm the switch port for each VMNIC connection using LLDP, enter:



show lldp neighbors | grep <vmnic>

26-II-TOR-A# show lldp neighbors | grep crkm01 | grep vmnic2
ethernet1/1/1   crkm01esx03.crk.v...  04:3f:72:c3:77:78  vmnic2
ethernet1/1/5   crkm01esx04.crk.v...  04:3f:72:c3:77:7c  vmnic2
ethernet1/1/7   crkm01esx01.crk.v...  04:3f:72:c3:77:28  vmnic2
ethernet1/1/10  crkm01esx02.crk.v...  04:3f:72:c2:09:2c  vmnic2

3. To configure the switch interface, enter:


26-II-TOR-A(config)# interface ethernet 1/1/7

4. To set the channel group to active, enter:


26-II-TOR-A(conf-if-eth1/1/7)# channel-group 101 mode active

5. For the remaining interfaces, set the channel group to active.

Verify LAG connectivity on the second switch


Verify the port channel and LACP counters on a ToR switch.

Steps
1. To verify that the switch port channels are up and active, enter:
show port-channel summary

Flags: D - Down  I - member up but inactive  P - member up and active
       U - Up (port-channel)  F - Fallback Activated
---------------------------------------------------------------------------
Group  Port-Channel         Type  Protocol  Member Ports
---------------------------------------------------------------------------
101    port-channel101 (U)  Eth   DYNAMIC   1/1/7 (P)
102    port-channel102 (U)  Eth   DYNAMIC   1/1/10 (P)
103    port-channel103 (U)  Eth   DYNAMIC   1/1/1 (P)
104    port-channel104 (U)  Eth   DYNAMIC   1/1/5 (P)

2. To view the LACP counters for errors, enter:


show lacp counter

                LACPDUs      Port Marker   Marker Response  LACPDUs
                Sent  Recv   Sent  Recv    Sent   Recvs     Err Pkts
--------------------------------------------------------------------------
ethernet1/1/7   0     0      0     0       14     11        0
ethernet1/1/10  0     0      0     0       13     9         0
ethernet1/1/1   0     0      0     0       12     10        0
ethernet1/1/5   0     0      0     0       10     7         0

3. For a multichassis LAG, to verify that the port channel status for both VLT peers is active, enter:
show vlt <id> vlt-port-detail

Verify LAG connectivity on VxRail nodes


Verify the LACP connectivity on the VMware VDS.

Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:



esxcli network vswitch dvs vmware lacp stats get

DVSwitch           LAGID       NIC     Rx Errors  Rx LACPDUs  Tx Errors  Tx LACPDUs
-----------------  ----------  ------  ---------  ----------  ---------  ----------
crk-m01-c01-vds01  3247427758  vmnic1  0          21          0          89

3. Repeat this procedure on the other VxRail nodes to validate the LACP status.

Enable network redundancy across NDC and PCIe ports
Enable network redundancy after the VxRail deployment. Migrate the VxRail network traffic on a node from the NDC port to
both NDC and PCIe ports.
You must be able to configure the adjacent ToR switches to complete this task.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.

Network redundancy options


Review the network redundancy options and select the option that fits your requirements. Use the following examples to populate the planning grid for your configuration.
The following table provides an example of four NDC ports to two NDC and two PCIe ports:

Table 17. Example of four NDC ports to two NDC and two PCIe ports
Uplink  | Starting uplink configuration      | Starting VMNIC assignment | Ending uplink configuration        | Ending VMNIC assignment
uplink1 | Management                         | VMNIC0 (NDC)              | Management                         | VMNIC0 (NDC)
uplink2 | Management                         | VMNIC1 (NDC)              | Management                         | VMNIC4 (PCIe)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)              | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC)              | VMware vSAN/VMware vSphere vMotion | VMNIC5 (PCIe)

The following table provides an example of two NDC ports to one NDC and one PCIe port:

Table 18. Example of two NDC ports to one NDC and one PCIe port
Uplink  | Starting uplink configuration                 | Starting VMNIC assignment | Ending uplink configuration                   | Ending VMNIC assignment
uplink1 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIe)
uplink3 | N/A                                           | N/A                       | N/A                                           | N/A
uplink4 | N/A                                           | N/A                       | N/A                                           | N/A

The following table provides an example of two NDC ports to two NDC and two PCIe ports:

Table 19. Example of two NDC ports to two NDC and two PCIe ports
Uplink  | Starting uplink configuration                 | Starting VMNIC assignment | Ending uplink configuration        | Ending VMNIC assignment
uplink1 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)              | Management                         | VMNIC0 (NDC)
uplink2 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)              | Management                         | VMNIC4 (PCIe)
uplink3 | N/A                                           | N/A                       | VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC)
uplink4 | N/A                                           | N/A                       | VMware vSAN/VMware vSphere vMotion | VMNIC5 (PCIe)

The following table provides an example of four NDC ports to one NDC and one PCIe port:

Table 20. Example of four NDC ports to one NDC and one PCIe port
Uplink  | Starting uplink configuration      | Starting VMNIC assignment | Ending uplink configuration                   | Ending VMNIC assignment
uplink1 | Management                         | VMNIC0 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management                         | VMNIC1 (NDC)              | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIe)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)              | N/A                                           | N/A
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC)              | N/A                                           | N/A

The following table provides an example of N ports to N ports:

Table 21. Example of N ports to N ports
Uplink  | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 | Management                    | VMNIC0 (NDC)              | Management                  | VMNIC0 (NDC)
uplink2 | Management                    | VMNIC1 (NDC)              | Management                  | VMNIC6 (PCIe)
uplink3 | VMware vSAN                   | VMNIC2 (NDC)              | VMware vSAN                 | VMNIC2 (NDC)
uplink4 | VMware vSAN                   | VMNIC3 (NDC)              | VMware vSAN                 | VMNIC7 (PCIe)
uplink5 | VMware vSphere vMotion        | VMNIC4 (NDC)              | VMware vSphere vMotion      | VMNIC4 (NDC)
uplink6 | VMware vSphere vMotion        | VMNIC5 (NDC)              | VMware vSphere vMotion      | VMNIC8 (PCIe)

Populate the grid with the uplink names and VMNIC names.



Verify that the VxRail version supports network redundancy
Check your VxRail version to determine whether network redundancy is supported.

Steps
1. Open the VMware vSphere Web Client and connect to the VMware vCenter Server instance that supports the VxRail
cluster.
2. Select Home > Hosts and Clusters.
3. Select the VxRail cluster to enable network redundancy.
4. Select Configure > VxRail > System.
5. Confirm that the VxRail version supports network redundancy.

Verify that the VxRail cluster is healthy


Validate the VxRail cluster health status.

Prerequisites
Verify access to the VMware vCenter Server that supports the VxRail cluster.

Steps
1. From the VMware vSphere Web Client, select the VxRail cluster in which you want to enable network redundancy.
2. Select the Monitor tab.
3. From the left-menu, select VxRail > Physical View.
4. Verify that the Health State is healthy.

Verify the VxRail physical network compatibility


Check the physical network adapters of the VxRail nodes to verify the planned ending network configuration.

Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. Select Home > Hosts and Clusters > VxRail Cluster.
3. From the VxRail clusters, select a node.
4. Select Configure > Networking > Physical adapters.
5. View the physical adapters serving as an uplink to the VMware VDS. In the following figure, VMNIC 0, VMNIC 1, VMNIC 2,
and VMNIC 3 are connected to a single VMware VDS at a connection speed of 10 Gbps. There are four NDC ports. If your
cluster has only two NDC ports, only two VMNICs are visible.



Figure 43. Physical adapters

6. View the unused physical adapters. In the following figure, VMNIC 4 and VMNIC 5 are PCIe network ports. The connection
speed is 10 Gbps and is compatible with the NDC ports.

Figure 44. Physical adapters with PCIe network ports

7. Repeat these steps for each node in the VxRail cluster.

Verify the physical switch port configuration


Validate the physical switch port configuration. Repeat the steps in this task for each port on each switch that supports VxRail network traffic.

Prerequisites
Ensure that you have access to the adjacent ToR switches.
To discover the VxRail node connections, your switch operating system must support the LLDP neighbor functionality.



About this task
The command syntax that is shown in this task is based on Dell OS10. As the command differs from model to model and vendor
to vendor, contact your technical support team or see your switch vendor documentation for more details.

Steps
1. Open a console session to one of the Ethernet switches that supports the VxRail cluster.
2. To verify the ports that are connected to the VxRail nodes and VMNIC assignment, enter:
show lldp neighbors | grep vmnic

The following sample outputs are from two different switches:

18KK-TOR-A# show lldp neighbors | grep vmnic
ethernet1/1/3  mrm-md-nl.mrmvxa...  e4:43:4b:5e:04:f0  vmnic0
ethernet1/1/4  mrm-md-nl.mrmvxa...  e4:43:4b:5e:04:f2  vmnic2
ethernet1/1/5  mrm-md-n3.mrmvxa...  e4:43:4b:5e:07:90  vmnic0
ethernet1/1/6  mrm-md-n3.mrmvxa...  e4:43:4b:5e:07:92  vmnic2
ethernet1/1/7  mrm-md-n2.mrmvxa...  e4:43:4b:5f:84:50  vmnic0
ethernet1/1/8  mrm-md-n2.mrmvxa...  e4:43:4b:5f:84:52  vmnic2

18KK-TOR-B# show lldp neighbors | grep vmnic
ethernet1/1/3  mrm-md-nl.mrmvxa...  e4:43:4b:5e:04:f1  vmnic1
ethernet1/1/4  mrm-md-nl.mrmvxa...  e4:43:4b:5e:04:f3  vmnic3
ethernet1/1/5  mrm-md-n3.mrmvxa...  e4:43:4b:5e:07:91  vmnic1
ethernet1/1/6  mrm-md-n3.mrmvxa...  e4:43:4b:5e:07:93  vmnic3
ethernet1/1/7  mrm-md-n2.mrmvxa...  e4:43:4b:5f:84:51  vmnic1
ethernet1/1/8  mrm-md-n2.mrmvxa...  e4:43:4b:5f:84:53  vmnic3

3. Identify a switch port that supports VxRail network traffic.


4. Identify an unused switch port that is planned as the target port to enable network redundancy.
5. To ensure that the switch port that supports the VxRail network traffic after the migration has a compatible configuration, perform the following:
a. Verify that the VLANs used for VxRail networks (external management, internal management, VMware vSphere vMotion,
VMware vSAN, and guest networks) are compatible on both the switch ports.
b. Verify that the other switch port settings are compatible on both the switch ports.
c. If your final configuration reduces the number of uplinks, verify that the VLANs and the other switch port settings are consolidated into the target switch ports.
The following is a sample switch configuration for a source NDC port and a target PCIe port:

interface ethernet1/1/3
description VxRail-NDC-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit

interface ethernet1/1/16
description VxRail-PCIe-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit



NOTE: Configure the Ethernet ports before you enable the network redundancy for the VxRail cluster.

Verify active uplink on the VMware VDS port groups post migration
Verify that at least one uplink in each VMware VDS port group is active after the migration.

Prerequisites
Ensure that you have access to the planning grid table in Enable network redundancy across NDC and PCIe ports.
Review the planning grid table that is populated with the starting and ending network configuration to identify any uplinks that are disconnected as part of the uplink reassignment process.

Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA Distributed Switch.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. Select Teaming and Failover.
5. Select all the VMware VDS port groups.
6. Verify that at least one of the active uplinks in the failover order is not disconnected during the migration task.
7. If an uplink under Active uplinks gets disconnected during the migration, modify the failover order to move an uplink that does not get disconnected during the migration to Active uplinks.

Add uplinks to the VMware VDS


Add the VMware VDS uplinks before migrating the VMNICs.

Prerequisites
Review the planning grid table populated in Enable network redundancy across NDC and PCIe ports.

Steps
1. To add the uplinks to the VMware VDS, perform the following:
a. From the VMware vSphere Web Client, select Networking inventory view.
b. Right-click the VMware HCIA Distributed Switch and select Settings > Edit Settings.
c. Click Uplinks to display the existing uplinks.
d. Click ADD to add the uplinks according to the planning grid table populated in Enable network redundancy across NDC
and PCIe ports and click OK.
2. Skip this task if you are removing uplinks or leaving the uplinks unchanged.

Migrate the VxRail network traffic to a new VMNIC


Change the VxRail network traffic to use a new VMNIC.

Prerequisites
Review the planning grid table in Enable network redundancy across NDC and PCIe ports.

Steps
1. From the VMware vSphere Web Client, select Networking.
2. From the VxRail Datacenter menu, right-click VMware HCIA Distributed Switch.
3. Click Add and Manage Hosts... and click Manage host networking.
4. Select all the hosts in the VxRail cluster and click NEXT.
5. From the left-menu, select Manage physical adapters to review the existing VMNICs and uplinks mapping.



In the example below:
● Four uplinks on the VMware VDS are linked to four VMNICs.
● VMNIC0 to VMNIC3 are backed by ports on an NDC physical adapter.
● VMNIC4 and VMNIC5 are unassigned and backed by ports on a PCIe adapter.

Figure 45. Manage physical adapters

6. Use the planning grid table in Enable network redundancy across NDC and PCIe ports to set and update the VMNIC and
uplink mapping.
In the example below:
● VMNIC1 from an NDC-based adapter is unassigned from uplink2.
● VMNIC3 from an NDC-based adapter is unassigned from uplink4.
● VMNIC4 from a PCIe-based adapter is assigned to uplink2.
● VMNIC5 from a PCIe-based adapter is assigned to uplink4.

Figure 46. Manage physical adapters

7. Click NEXT.
8. From the VMware HCIA Distributed Switch > Add and Manage Hosts menu, click Manage VMkernel adapters. Do not
migrate any network on the Manage VMkernel adapters window.



Figure 47. Manage VMkernel adapters

9. Click NEXT.
10. From the Migrate VM networking window, click NEXT > FINISH.
Monitor the network migration progress until it is complete.

Set the port group teaming and failover policies


Configure the teaming and failover settings for the VMware VDS port groups.

Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the VMNICs that are assigned and unassigned to the
VMware VDS port groups. Identify the ending uplinks from the planning grid table and the VMware VDS port groups that are
assigned to each uplink.

Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA distributed switch.
3. Select a VMware VDS port group to modify for the network reconfiguration.
4. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
5. Move any uplinks that are unused, or that are removed as part of the reconfiguration process, to Unused uplinks, and click OK.



Figure 48. Distributed port group settings

6. Select the next VMware VDS port group that you plan to modify for the network reconfiguration.
7. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
8. Move any uplinks that are unused, or that are removed as part of the reconfiguration process, to Unused uplinks, and click OK.

Remove the uplinks from the VMware VDS


Remove the unused uplinks from the VMware VDS, for example, when moving from four NDC ports to one NDC and one PCIe port.

Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the uplinks that are removed from the VMware VDS port groups. Identify any uplinks that are listed in the starting network configuration column of the planning grid table but are not listed in the ending network configuration.

Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware VDS.
3. Select Settings > Edit Settings.
4. Click Uplinks.
5. Next to each uplink you want to remove, click REMOVE, and then click OK.

Reset the VMware vSphere alerts for network uplink redundancy


Reset the network uplink redundancy alerts.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select vCenter > Hosts and Clusters.
3. Select the VxRail cluster to perform the network migration.
4. Select a host in the VxRail cluster and select the Summary view.



5. Select Reset to Green to silence alarms.
6. Repeat these steps for each host in the VxRail cluster.

Enable VMware vSAN RDMA in the VxRail cluster (VxRail 8.0.210 and later)
For VxRail 8.0.210 and later, VMware vSphere has decoupled VMware vSAN Remote Direct Memory Access (RDMA) from large-scale cluster support. RDMA allows direct access from the memory of one system to the memory of another without involving the operating system or CPU. The memory transfer is offloaded to the RDMA-enabled host channel adapters.

Prerequisites
● Complete the VxRail cluster Day 1 bring-up.
● Verify that there are no critical alarms in the cluster.
● Verify that the VMware vSAN is in a healthy state.
● Configure the DCB-capable switch. Verify that the RDMA-enabled physical NIC is configured for lossless traffic.
● To ensure a lossless SAN, configure the data center bridging (DCB) mode as IEEE.
○ Set the priority flow control (PFC) value to CoS priority 3, per VMware.
○ See the operation guide from the physical switch vendor to set up the outside network environment to match the data
center cluster network strategy and topology.
● Disable the VMware vSAN large-scale cluster support (LSCS) feature. VxRail enables VMware vSAN LSCS as a default
setting during the VxRail cluster setup. LSCS conflicts with the VMware vSAN RDMA and must be disabled to use the
VMware vSAN RDMA.

About this task


This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
This procedure applies to the VxRail cluster running the VxRail version 8.0.210 and later.
The physical NIC cards depend on the project requirements. Only Mellanox NICs are supported. The RDMA pNIC is dedicated
to the storage network. All hosts in the cluster must support RDMA. If any host loses RDMA support, the entire VMware vSAN
cluster switches to TCP.
See VMware vSphere RDMA for more information.
For VxRail releases 8.0.210 and later, there is a VMware vSAN feature conflict between LSCS and RDMA. When configuring VMware vSAN RDMA, LSCS must be disabled. See KB 200153.
VMware removed both the VMware vSAN UI for LSCS configuration from VMware vCenter Server and the SDK interface that allowed the VMware vSAN RDMA configuration.
NOTE: If VMware vSAN LSCS is enabled before configuring RDMA for the VxRail 8.0.210 and later releases, the RDMA configuration option is disabled and not supported.
For VxRail releases 7.0.400 and later, the VMware vSAN UI for LSCS is not present in the VMware vCenter Server. VMware has re-enabled the SDK interface to allow configuration of the VMware vSAN LSCS feature so that VMware vSAN RDMA can be set up.
To create VMware vSAN clusters with up to 64 nodes, see KB 2110081 and follow the SDK steps for large-scale configurations. See the Set-VsanClusterConfiguration commands for more information.

Steps
1. To place the host into maintenance mode and configure advanced settings, perform the following:
a. Enter System Setup and select Device Settings.
b. Select a device.



Figure 49. Device setup

c. Next to NIC +RDMA Mode, click Enabled and click Finish.


d. Repeat these steps for each NIC card to enable NIC +RDMA mode.
e. On the Main Configuration Page, set each NIC card Auto-negotiation Protocol to IEEE and Consortium. Set
DCBX Mode to Enabled (IEEE only).

Figure 50. Main Configuration Page

f. For VxRail releases 8.0.xxx and earlier, disable large cluster support at the cluster level.



Figure 51. Large cluster support

esxcli system settings advanced set -o /VSAN/goto11 -i 0
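
To confirm the change before rebooting, you can read the setting back (a read-only check; the exact output fields vary by VMware ESXi build):

esxcli system settings advanced list -o /VSAN/goto11

The Int Value field should report 0 when large cluster support is disabled.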


2. To adjust the TCP/IP heap size, if needed, enter:
esxcli system settings advanced set -o /Net/TcpipHeapMax -i XXXX

a. Manually reboot the host.


b. Repeat the process for each host in the cluster.
c. Disable LSCS to add a node to the cluster.
3. Verify that the physical NIC is applied as RDMA adapters.
For Mellanox NIC, see Configure RoCEv2 Lossless Fabric for VMware ESXi 6.5 and above.
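
To confirm from the host command line that the RDMA adapters are visible (a read-only sketch, assuming SSH access to the host; device names such as vmrdma0 depend on the installed NICs), enter:

esxcli rdma device list

The output lists each RDMA device with its driver, paired uplink, and link state.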
4. The following step is only required if the switch detects multiple peers during the DCBx negotiation. On a Dell physical switch,
if the PFC operational status is down or disabled, a Multiple peers Detected message is displayed. To disable the
multi-LLDP neighbor, perform the following:
a. To disable the VxRail-supplied VIB service port-lldp in the VMware ESXi host, enter:

/etc/init.d/port-lldpd disable

Disabling Port LLDP Service daemon


Port LLDP Service successfully disabled

b. Place the host into maintenance mode and individually reboot each host.

For Mellanox NIC, see the vendor documentation on disabling the hardware DCBx from Mellanox for VMware.

5. To enable RDMA support in the VMware vSAN service, perform the following:
a. Select Configure > vSAN > Services.
b. Under the Network section, click EDIT and enable the RDMA support.
Verify that there are no critical alarms in the VxRail cluster. Verify that the VMware vSAN and RDMA configurations are
healthy.
c. To verify the VMware vSAN health and the RDMA configuration health status, select Monitor > vSAN > System
Health > RDMA Configuration Health.
d. Under RDMA Configuration Health, check the health status.



Enable VMware vSAN RDMA in the VxRail cluster
(VxRail versions earlier than 8.0.210)
VMware vSphere supports VMware vSAN Remote Direct Memory Access (RDMA). RDMA allows direct access from the
memory of one system to the memory of another without using the operating system or CPU. The memory transfer is offloaded
to the host channel adapters with RDMA enabled.

Prerequisites
● Complete the VxRail cluster Day 1 bring-up.
● Verify that there are no critical alarms in the cluster.
● Verify that the VMware vSAN is in a healthy state.
● Configure the DCB-capable switch. Verify that the RDMA-enabled physical NIC is configured for lossless traffic.
● To ensure a lossless SAN, configure the data center bridging (DCB) mode as IEEE.
○ Set the priority flow control (PFC) value to CoS priority 3, per VMware.
○ See the operation guide from the physical switch vendor to set up the outside network environment to match the data
center cluster network strategy and topology.
● Disable the VMware vSAN large-scale cluster support (LSCS) feature. VxRail enables VMware vSAN LSCS as a default
setting during the VxRail cluster setup. LSCS conflicts with the VMware vSAN RDMA and must be disabled to use the
VMware vSAN RDMA.

About this task


This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster. This procedure applies to the VxRail clusters running VxRail versions earlier than 8.0.210.
The physical NIC cards depend on the project requirements. Only Mellanox NICs are supported.
The RDMA pNIC is dedicated to the storage network.
All hosts in the cluster must support RDMA. If any host loses RDMA support, the entire VMware vSAN cluster switches to TCP.
See VMware vSphere RDMA for more information.

Steps
1. The VMware vSAN interface for LSCS in the VMware vCenter Server is not present. VMware has re-enabled the SDK
interface to allow configuration options for the VMware vSAN LSCS feature to set up VMware vSAN RDMA. See KB 2110081
and follow the SDK steps for large-scale configurations.
See Set-VsanClusterConfiguration commands for more information.
2. To place the host into maintenance mode and configure advanced settings, enter:
esxcli system settings advanced set -o /VSAN/goto11 -i 0
a. To adjust the TCP/IP heap size, if needed, enter:
esxcli system settings advanced set -o /Net/TcpipHeapMax -i XXXX

b. Manually reboot the host.


c. Repeat the process for each host in the cluster.
d. Disable LSCS to add a node to the cluster.
3. Verify that the physical NIC is applied as RDMA adapters.
For Mellanox NIC, see Configure RoCEv2 lossless fabric for VMware ESXi 6.5 and above.
4. The following step is only required if the switch detects multiple peers during the DCBx negotiation. On a Dell physical switch,
if the PFC operational status is down or disabled, a Multiple peers Detected message is displayed. To disable the
multi-LLDP neighbor, perform the following:
a. To disable the VxRail-supplied VIB service port-lldp in the VMware ESXi host, enter:

/etc/init.d/port-lldpd disable

Disabling Port LLDP Service daemon


Port LLDP Service successfully disabled



b. Place the host into maintenance mode and individually reboot each host.

For Mellanox NIC, see the vendor documentation on disabling the hardware DCBx from Mellanox for VMware.

5. To enable RDMA support in the VMware vSAN service, perform the following:
a. Select Configure > vSAN > Services.
b. Under the Network section, click EDIT and enable the RDMA support.
Verify that there are no critical alarms in the VxRail cluster. Verify that the VMware vSAN and RDMA configurations are
healthy.
c. To verify the VMware vSAN health and the RDMA configuration health status, select Monitor > vSAN > System
Health > RDMA Configuration Health.
d. Under RDMA Configuration Health, check the health status.

Migrate the satellite node to a VMware VDS


A satellite node is deployed with a VMware standard switch by default. Migrate the VMware standard switch to a VMware VDS
that is managed by the VMware vCenter Server instance.

Prerequisites
To set up the satellite node, you must:
● Verify that the VxRail management cluster is deployed.
● Verify that the satellite node is added into a folder that manages the VMware VDS.

About this task


This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later. VxRail 8.0.010 does not support VMware
vSAN ESA or satellite nodes.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Capture the satellite node VMware standard switch settings


Capture the satellite node VMware standard switch settings to create the VMware VDS.

About this task


The VMware VDS that is managed by the VMware vCenter Server instance uses these same settings.

Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. From the left-menu, select Networking.
3. Select the Virtual switches tab and locate the VMware standard switch that supports the satellite node.
4. Click Edit Settings.
5. Identify and capture the MTU.
6. Identify and capture the VMNIC that is connected to the VMware VDS.
7. Identify and capture the NIC teaming policy.



Figure 52. NIC teaming

8. Select the Port groups tab.


a. Identify and capture the VLAN that is assigned to the management network and VM network. The VLANs must be the
same.

Figure 53. Port groups tab

b. Identify and capture any port groups and VLANs that are assigned for the guest networks or other management
networks with at least one active port.
9. Select the VMkernel NICs tab and capture the name of each VMkernel NIC and the name of the port group assignment.
10. Exit the VMware ESXi session.
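
If you prefer to capture the same settings from the command line, the standard esxcli network commands report them (a convenience sketch, assuming SSH access to the satellite node; the UI steps above remain the documented method):

esxcli network vswitch standard list
esxcli network vswitch standard policy failover get -v <vswitch_name>
esxcli network vswitch standard portgroup list
esxcli network ip interface list

The first command reports the MTU and uplinks, the failover get command reports the NIC teaming policy, the portgroup list reports the VLAN IDs, and the interface list reports the VMkernel NICs.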

Create the VMware VDS for the satellite node


Create a VMware VDS for the satellite node.

Steps
1. Log in to the VMware vSphere Web Client of the management cluster as an administrator.
2. Select Networking.
3. From the vSphere Client menu, select Inventory.
4. Select the data center that contains the satellite node folder.
5. Right-click the data center and select Distributed Switch > New Distributed Switch.
6. Enter a name for the VMware VDS and click NEXT.
7. Select the latest version that is compatible with the VMware ESXi version on the satellite node and click NEXT.
8. Set the number of uplinks to match the number of uplinks on the satellite node VMware standard switch.



9. Click NEXT and then FINISH.

Set the MTU on the VMware VDS


Configure the MTU value on the VMware VDS.

Steps
1. In the VMware vSphere Web Client, select the new VMware VDS.
2. Right-click the VMware VDS and select Settings > Edit Settings.
3. In the Edit Settings window, select Advanced.
4. Set the MTU to match the satellite node VMware standard switch and click OK.

Create the VMware VDS port groups for the satellite node
Create a VMware VDS port group on the VMware VDS that supports satellite node networking. Repeat these steps to add the
new port group to the VMware VDS.

Steps
1. Locate the first port group that was captured on the satellite node on the VMware standard switch.
2. In the VMware vSphere Web Client, select the new VMware VDS.
3. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.
4. Under Name and Location, perform the following:
a. Enter the distributed port group name. The name can be the same as, or correlate with, the port group on the satellite
node VMware standard switch.
b. Click NEXT.
b. Click NEXT.
5. Under Configure Settings, to set the properties of the new port group, perform the following:
a. For the VLAN Type, select VLAN.
b. Enter the VLAN ID.
The VLAN ID must match the port group VLAN ID on the satellite node VMware standard switch.
c. Select Customize default policies configuration.
6. From Teaming and Failover, set the policy that matches the settings that are captured on the satellite node VMware
standard switch.

Figure 54. Teaming and failover



7. Proceed through the remaining screens and click FINISH.
8. Select the next port group that is captured from the satellite node VMware standard switch and repeat these steps.

Migrate the satellite node to the new VMware VDS


Add the satellite node into the new VMware VDS.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select VxRail-Datacenter > VMware HCIA Distributed Switch.
3. Right-click the VMware VDS for the satellite node and select Add Hosts....
4. From the Add hosts wizard, enter host information and click ADD HOST.
5. Under Select hosts, select the satellite node.
6. Under Manage VMkernel adapters, to migrate the VMkernel from the satellite node VMware standard switch to the port
groups on the VMware vCenter Server VDS, perform the following:
a. Select the first VMkernel to assign to a port group.
b. Click ASSIGN PORT GROUP.
c. Select the port group from the drop-down.

Figure 55. Manage VMkernel adapters

d. Click ASSIGN.
e. Repeat these steps for the next VMkernel on the list.
7. Under Migrate VM Networking, to migrate the VMs to the new port group on the VMware VDS, perform the following:
a. Select the first VM.
b. Migrate the NIC from the source port group on the satellite node VMware standard switch to the new port group on the
VMware VDS.
c. Repeat these steps for the remaining VMs in the list.
8. Click FINISH.

Next steps
Verify the VMware VDS.
1. Connect to the VMware vSphere Web Client.
2. Select Home > Hosts and Clusters.
3. Select Configure > Virtual Switches.
4. Select the satellite node and verify the new VMware VDS.



Modify the VMware VDS port group teaming and
failover policy
Modify the port group teaming and failover policy for the VMware VDS that supports VxRail networks.

About this task


This procedure applies to VxRail clusters running VxRail 8.0.x and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To connect to the VMware VDS, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking.
c. Select the VMware VDS that supports the VxRail cluster that you plan to modify.
2. To identify the number of uplinks that support the VMware VDS, perform the following:
a. From the Home screen, select the Actions drop-down menu.
b. Select Settings > Edit Settings and click the Uplinks tab to view the number of uplinks that are assigned to the
VMware VDS.
The options for the failover settings are based on the number of uplinks.
3. To configure to the port group teaming and failover policy, perform the following:
a. From the VMware VDS, select the port group to modify.
b. Select Configure and from the left-menu, click Properties > EDIT.
4. From the left-menu, select Teaming and failover to view the existing port group policy.
5. Select the Load balancing policy that meets the requirements for the network traffic on the port group.

Table 22. Load balancing

Load-balancing option | Description | Supported
Route based on originating virtual port | Forwards the network traffic through the originating uplink. There is no load balancing based on the network traffic. | Yes
Use explicit failover order | Uses the highest-order uplink that passes the failover detection. There is no load balancing based on the network traffic. | Yes
Route based on source MAC hash | The uplink is selected based on the VM MAC address. There is no load balancing based on the network traffic. | Yes
Route based on physical NIC load | Monitors the network traffic and adjusts overloaded uplinks by moving the network traffic to another uplink. | Yes
Route based on IP hash | Depends on the logical link setting of the physical switch port adapters, which is not supported in VxRail. | No

6. Select the failover order for teaming and failover policy.


a. Select the table based on the number of uplinks that are configured on the VxRail VMware VDS.
b. Use the name of the VMware VDS port group to map the corresponding row in the selected table.
● The second column displays the supported settings where the uplinks are configured as active/active.
● The third column displays the supported settings where the uplinks are configured as active/standby.
The following table lists the supported failover options for the VxRail port groups with two configured uplink ports:

Table 23. VMware VDS port groups with two uplinks

VMware VDS port group | Active/Active | Active/Standby
Management Network | uplink1, uplink2 | uplink1, uplink2
VMware vCenter Server | uplink1, uplink2 | uplink1, uplink2
VMware vSAN Network | uplink2, uplink1 | uplink2, uplink1
VMware vSphere vMotion Network | uplink1, uplink2 | uplink1, uplink2
VxRail Management Network | uplink1, uplink2 | uplink1, uplink2
VMware Guest Network | uplink1, uplink2 | uplink1, uplink2

The following table lists the supported failover options for the VxRail port groups with four configured uplink ports:

Table 24. VMware VDS port groups with four uplinks

VMware VDS port group | Active/Active | Active/Standby
Management Network | uplink2, uplink1 | uplink2, uplink1
VMware vCenter Server | uplink1, uplink2 | uplink1, uplink2
VMware vSAN Network | uplink3, uplink4 | uplink3, uplink4
VMware vSphere vMotion Network | uplink4, uplink3 | uplink4, uplink3
VxRail Management Network | uplink2, uplink1 | uplink2, uplink1
VMware Guest Network | uplink1, uplink2 | uplink1, uplink2

You cannot configure the unused uplinks into the failover order setting.
7. To configure an active/active failover order, perform the following:
a. Select the uplink under Standby uplinks.
b. Use the UP arrow to move the uplink to Active uplinks.
8. To configure an active/standby failover order, perform the following:
a. Under Active uplinks, select the uplink that should be in standby mode per the supported failover order for this port
group.
b. Use the DOWN arrow to move the uplink to the Standby uplinks setting.
9. To complete the policy update, click OK.
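
To spot-check the resulting distributed switch configuration from an individual host, you can list the VMware VDS details (a read-only sketch, assuming SSH access to a node; per-port-group teaming settings are still best reviewed in the VMware vSphere Web Client):

esxcli network vswitch dvs vmware list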

Optimize cross-site traffic for VxRail


Telemetry settings allow you to collect system running data, such as performance metrics and alarms. The data is sent back
through remote support connectivity for analysis to provide advanced system health status.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Remote-office branch-office sites


Remote-office branch-office (ROBO) sites deploy a centralized VMware vCenter Server with limited bandwidth between the
cluster and the VMware vCenter Server. Distributing a life cycle management (LCM) bundle of around 4 GB from the repository
to the cluster consumes that limited bandwidth. The process becomes time-consuming and may trigger a distribution failure,
which causes network congestion.
To centrally perform LCM for each ROBO site, the customer can provide a jump box service to store the upgrade bundle and
trigger the LCM locally from the UI to upload the bundle, which decreases the traffic.



The following figure shows a simple topology with a centrally shared VMware vCenter Server:

Figure 56. Simple topology with a centrally shared VMware vCenter Server

Telemetry settings
The following table describes the data that is collected and the amount of daily traffic between VxRail Manager and the VMware
vCenter Server:

Table 25. Telemetry levels

Telemetry level | Daily traffic between the VxRail Manager and the VMware vCenter Server
LIGHT | 11 MB
BASIC | 64 MB
ADVANCED | 75 MB
NONE | 0 MB

NOTE: Telemetry settings are different on the API as shown in the table.

You can manage telemetry settings using the VxRail onboard API, client URL (curl) commands, or through VxRail Manager. To
modify telemetry settings using the VxRail onboard API:
● Verify that you have access to the REST API.
● Verify the IP address of the VxRail Manager onboard API.

Limitations for a ROBO environment with a T1 line


The following limitations apply for a ROBO environment with a T1 line (network speed of 1.544 Mbps):
● Collecting VMware vCenter Server log details is not supported in a ROBO environment between the VMware vCenter
Server and the VxRail clusters.



● The backup and restore script consumes extra bandwidth between the VMware vCenter Server and the VxRail clusters. You
can temporarily use VMware snapshots instead of a backup.

Configure telemetry settings using curl commands


Configure or disable telemetry settings using client URL (curl) commands.

Prerequisites
Verify that you have the following:
● Username and password for the curl command
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application

Steps
1. To view the telemetry setting, enter:
curl -k -H "Content-Type: application/json" -X GET --user username:password https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier

{"level":"BASIC"}

2. To modify the telemetry level using the POST request method, enter:
curl -k -X POST -H "Content-type: application/json" -d '{"level":"BASIC"}' --user management:tell1103@ https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier

-k: turn off verification of the certificate
-d: data
-X: HTTP method
-H: header
--user: credentials, separated by ":". The value management:tell1103@ is only an example; enter the credentials for
your setup.

Sample request body:
{
"level":"BASIC"
}

3. To disable telemetry, set the level to NONE.
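
For example, a disable request reuses the POST form from step 2 with the level set to NONE (substitute the credentials and VxRail Manager address for your setup):

curl -k -X POST -H "Content-type: application/json" -d '{"level":"NONE"}' --user <username>:<password> https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier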

Configure telemetry settings from VxRail Manager


Select telemetry settings to define the level of data that is collected for your VxRail environment.

Prerequisites
Verify that you have the following:
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application



Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the cluster and from the Configure left-menu, select VxRail > Support.
3. If the remote support connectivity is enabled, click Edit > Edit Customer Improvement to redirect you to the Customer
Improvement Program page.
4. Select the telemetry setting and click NEXT > FINISH.



9
Manage witness settings
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Change the hostname and IP address of the witness sled
This procedure targets the VMware ESXi host on the VxRail-supplied witness sled. The witness sled is hardware. This procedure
does not change network settings on the witness VM. A shutdown of the witness VM is required to make the update.

About this task


This procedure applies to VxRail 7.0.420 and later clusters.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Change the IP address of the VxRail-managed witness sled


Modify the IP address of the VMware ESXi host on the VxRail-managed witness sled.

Prerequisites
Before you change the IP address of the witness sled, perform the following:
● Do not update the witness sled DNS entry with the new IP address until instructed to in the steps.
● Verify the health status of the sled to avoid running in a degraded state.
● Verify that the DNS mapping is correct.
● Verify that the health monitoring status is disabled.

About this task


DNS must be configured properly or this task may not work.

Steps
1. To shut down the witness VM, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select VxRail data center and click Witness Folder Cluster. Select the witness sled
and click VMware vSAN Witness Appliance.
c. Click the shutdown icon at the upper right corner of the screen and click YES to confirm.
2. To remove the witness sled from the VMware vCenter Server, perform the following:
a. Right-click the sled and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the sled again and select Remove from Inventory.
3. To change the IP address for the witness sled, perform the following:
a. Log in to the VMware ESXi of the witness sled through the management IP address.
b. From the Networking left-menu, under the VMkernel NICs tab, click vmk2.



Figure 57. VMware ESXi host clients

c. Click Edit Settings.

Figure 58. Edit settings

d. When the wizard opens, configure the new IP address and click Save.
NOTE: The management session disconnects immediately when you click Save. To reconnect, use the updated IP
address, or change the address from the ESXi shell command line through the iDRAC remote console.
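
If the session drops before you can reconnect, the same update can be made from the ESXi shell through the iDRAC remote console (a sketch; vmk2 is the VMkernel NIC from substep b, and the address values are placeholders for your new settings):

esxcli network ip interface ipv4 set -i vmk2 -I <new_ipaddr> -N <netmask> -t static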

4. Determine how the DNS is managed before you update your DNS server with a new DNS mapping, and perform one of the
following:
● For a customer-managed DNS server, add a DNS entry where the new witness sled IP address is mapped to the
original witness sled FQDN. Delete the old entry of the witness sled. Continue to step 5.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.

127.0.0.1 localhost localhost.localdom

20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc --- Original witness sled IP address in the DNS entry
b. To add a DNS entry where the new witness sled IP address is mapped to the original witness sled FQDN, enter:
<new_sled_ipaddr> <original_sled_fqdn> <original_sled_host>

127.0.0.1 localhost localhost.localdom

20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc
20.12.91.203 c1-psc.rackE11.local c1-psc --- Map the original FQDN to the new witness sled IP address
c. Delete the old DNS entry for the witness sled in the DNS server. Save the changes and quit.

127.0.0.1 localhost localhost.localdom

20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc --- Delete this old DNS entry
20.12.91.203 c1-psc.rackE11.local c1-psc

d. To restart the DNS service, enter:


systemctl restart dnsmasq
e. To verify the FQDN mapping to the new witness sled IP address, enter:
dig <witness_sled_fqdn> +short

NOTE: You can also use the nslookup command.

5. To clear the DNS cache on the VMware vCenter Server, perform the following:
a. Using SSH, log in to the VxRail vCenter Server as root.
b. To restart the DNS service, enter:
systemctl restart dnsmasq
c. To verify the FQDN mapping to the new witness sled IP address, enter:
dig <witness_sled_fqdn> +short

NOTE: You can also use the nslookup command.

6. To add the witness sled to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host.
c. To add the witness sled, use the witness sled FQDN.



Figure 59. Name and location

d. Follow the steps in the wizard to add the witness sled.


e. From the Witness Folder cluster, right-click the witness sled and select Maintenance Mode > Exit Maintenance Mode.
f. From the Witness Folder cluster, select the witness sled and verify that the IP address is changed to the new IP
address.

Figure 60. Configure VMkernel adapters

7. To power on the witness VM, perform the following:


a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select the VxRail data center and click Witness Folder Cluster. Select the witness
sled and click VMware vSAN Witness Appliance.
c. Under Summary tab, click the Power on icon on the upper right corner of the screen and wait for the witness VM to
power on.
8. To change the witness sled platform service binding IP address, perform the following:
a. Log in to the VMware ESXi host client of the witness sled using the management IP address.
b. From the Manage left-menu, select the Services tab.
c. Click the Start icon to turn on the SSH service on the witness sled.



Figure 61. VMware host client

d. Using SSH, log in to the witness sled.


e. To edit the platform configuration file and change the IP address to the new witness sled IP address, enter:
For versions earlier than VxRail 8.0.300, enter:
vi /etc/config/vxrail/platform.conf

[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock

[backend]
max_workers = 12

[restservice]
bind = 20.12.91.202---Original witness sled IP address in the platform.conf file

[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock

[backend]
max_workers = 12

[restservice]
bind = 20.12.91.203---New witness sled IP address

For VxRail 8.0.300 and later, enter:


vi /opt/platformsvc/vital/platform.conf

[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-linzhi.sock

[backend]
max_workers = 12

[restservice]
bind = 20.12.91.202---Original witness sled IP address in the platform.conf file

[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-linzhi.sock

[backend]
max_workers = 12



[restservice]
bind = 20.12.91.203---New witness sled IP address

f. To restart the platform service for versions earlier than VxRail 8.0.300, enter:
/etc/init.d/vxrail-pservice restart

To restart the platform service for VxRail 8.0.300 and later, enter:
esxcli daemon control restart -s platformsvc

9. To verify the health status, perform the following:


a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health. Verify that the VxRail cluster is in a healthy state.
10. After you change the IP address, update the witness VM moid by performing the following:
a. Go to the Managed Object Browser (MOB) and check the VM moid on the witness sled host:

Figure 62. Moid ID

You can also use the API to get the moid ID by entering:
curl -X POST --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H
"accept: application/json" -H "Content-Type: application/json" -d
'{"query":"{ multiVirtualmachines(name: \"VMware vSAN Witness Appliance\",
hostname:\"<witness_sled_hostname>\", datacentername: \"<datacenter_name>\",
clustername: \"<cluster_name>\", host:\"<vcenter_hostname>\", username:
\"<vcenter_admin_username>\", password: \"<vcenter_admin_password>\") {moid config
{ name uuid }}}"}' http://localhost/rest/vxm/internal/do/v1/vm/query

Using the moid, to set the witness VM moid to VxRail, enter:

psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where server_type='WITNESS';"

Alternatively, to get the id of the witness VM from the VxRail database, enter:

psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm where server_type='WITNESS';"

Using the id from the output, to update the witness VM moid, enter:



curl -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H "accept: application/json" -H "Content-Type: application/json" -d '{"moref_id": "<vm_moid>"}' http://localhost/rest/vxm/internal/do/v1/vxm/system-vm/<id>

Change the hostname of the witness sled


Modify your DNS server with a new mapping for the witness sled.

Prerequisites
Verify that DNS has been configured properly or this task may not work.
Before you change the hostname of the witness sled, verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Health monitoring status is disabled.

About this task


You can change the hostname of the witness sled with a customer-managed DNS server or a VxRail-managed DNS server.
The procedure depends on how the DNS server is managed.

Steps
1. Perform one of the following to determine how the DNS is managed:
● If the DNS server is customer-managed, add a DNS server entry where the new FQDN is mapped to the original witness
sled IP address. Continue to step 2.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.

127.0.0.1 localhost localhost.localdom


20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc --- Original DNS entry of the witness sled

b. To add a DNS entry where the new FQDN is mapped with the original witness sled IP address, enter:
<sled_ipaddr> <new_sled_fqdn> <new_sled_host>

For example: 172.16.10.105 witness-sled-new.vv009.local witness-sled-new

127.0.0.1 localhost localhost.localdom


20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc
20.12.91.202 c1-psc-witness-new.rackE11.local c1-psc-witness-new
c. To restart the DNS service, enter: systemctl restart dnsmasq
d. To verify the new DNS entry, enter:
dig <new_sled_fqdn> +short

NOTE: You can also use the nslookup command.

2. To shut down the witness VM, perform the following:


a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select VxRail data center and select a witness sled. Click VMware vSAN Witness
Appliance.



Figure 63. Witness summary

c. Click the Shutdown icon at the upper right corner of the screen. Click YES to confirm.
3. To remove the witness VMware ESXi host from the VMware vCenter Server, perform the following:
a. Right-click the witness sled and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness sled again and select Remove from Inventory.
4. To change the hostname for the witness sled, perform the following:
a. Log in to the VMware ESXi host client of the witness sled through the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.

Figure 64. Default TCP/IP stack

c. Click Edit Settings.

Figure 65. Default TCP/IP stack - edit settings

d. When the wizard opens, enter the new Host name and click Save.
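
If you have shell access to the sled, the hostname can also be set with esxcli (a sketch; <new_sled_fqdn> is the FQDN that is mapped in DNS in step 1):

esxcli system hostname set --fqdn <new_sled_fqdn>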



5. To add the witness sled to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. Use the new witness sled FQDN to add the witness sled.

Figure 66. Witness sled FQDN

d. Follow the steps in the wizard to add the witness sled.


e. From the Witness Folder cluster, right-click the witness sled and select Maintenance Mode > Exit Maintenance Mode.
f. From the Witness Folder cluster, select the witness sled and verify that the FQDN is new.
6. To power on the witness VM, perform the following:
a. From the VxRail cluster left-menu, select VxRail data center and select a witness sled.
b. Click VMware vSAN Witness Appliance.
c. Under the Summary tab, click the Power on icon on the upper right corner of the screen and wait for the witness VM
to power on.
7. Determine how the DNS is managed before you remove an entry and perform one of the following:
● If the DNS server is customer-managed, delete the old DNS entry where the old FQDN is mapped to the witness sled IP
address. Go to step 8.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file and delete the old DNS entry that maps the old FQDN to the witness sled
IP address. Save the changes and quit.

127.0.0.1 localhost localhost.localdom


20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-psc.rackE11.local c1-psc--- Delete this original DNS entry
20.12.91.202 c1-psc-witness-new.rackE11.local c1-psc-witness-new

b. To restart the DNS service, enter: systemctl restart dnsmasq


8. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health to verify that the VxRail cluster is in a healthy state.
9. After you change the hostname, update the witness VM moid by performing the following:
a. Go to the MOB and check the VM moid on the witness sled host:



Figure 67. MOID on the witness sled host

Alternatively, you can query by API:


curl -X POST --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H
"accept: application/json" -H "Content-Type: application/json" -d
'{"query":"{ multiVirtualmachines(name: \"VMware vSAN Witness Appliance\",
hostname:\"<witness_sled_hostname>\", datacentername: \"<datacenter_name>\",
clustername: \"<cluster_name>\", host:\"<vcenter_hostname>\", username:
\"<vcenter_admin_username>\", password: \"<vcenter_admin_password>\") {moid config
{ name uuid }}}"}' http://localhost/rest/vxm/internal/do/v1/vm/query

10. Using the moid, to set the witness VM moid to VxRail, enter:
psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where server_type='WITNESS';"

Alternatively, to get the id of the witness VM from the VxRail database, enter:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm where server_type='WITNESS';"

Using the id from the output, to update the witness VM moid, enter:

curl -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H "accept: application/json" -H "Content-Type: application/json" -d '{"moref_id": "<vm_moid>"}' http://localhost/rest/vxm/internal/do/v1/vxm/system-vm/<id>

Change the hostname and IP address of the VxRail-managed Witness VM
For the VxRail-managed Witness VM, you can change the IP address of the Witness VM or the hostname for the DNS server.

About this task


You can change the IP address of the VxRail-managed Witness VM. The Witness VM is deployed at Day 1 and imported into the
Witness folder. The Witness VM is added as a host.
Change the hostname of the VxRail-managed witness VM for a customer-managed DNS server. Add a DNS server entry where
the new FQDN is mapped to the original witness sled IP address.



Change the hostname of the VxRail-managed witness VM host or reconfigure the VxRail-managed witness VM from IP address
to FQDN.

Change the IP address of the VxRail-managed Witness VM


In the VxRail data center, the Witness folder cluster contains the Witness ESXi host. The Witness VM is deployed on the
Witness ESXi host.

Prerequisites
● Disable the stretched cluster.
● Remove the Witness VM.

NOTE: The VxRail-managed Witness VM is also known as the mapping host. The VMware ESXi operating system is
running on this VM. When the VxRail-managed Witness VM is added to the witness folder, it is displayed as a VMware
ESXi host. If the VxRail-managed Witness VM IP address is changed, the VMware ESXi host IP address is also changed.
The VMware ESXi host IP address must be removed and added back using the new IP address.

● Modify the IP address and restart the network.


● Add the Witness VM as a host with the new IP address.
● Update the VxRail Manager database.
● Configure the stretched cluster.
● Verify that the DNS name on the Witness VM is localhost.localdomain.
If the DNS name does not match, you cannot change the IP address of the Witness VM.

About this task


This procedure applies to VxRail 8.0.200 and later for VMware vSAN 2-node clusters or stretched clusters with a
VxRail-managed Witness VM on VD-4000W.
If the DNS name in the VMware vSphere Web Client is not localhost.localdomain, you cannot change the IP address of
the Witness VM.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VMware vSphere Web Client as an administrator and select the Inventory icon.
2. To verify the health status, select a cluster and select the Monitor tab. Select vSAN > Skyline Health.
3. To disable the stretched cluster, perform the following:
a. Select the VxRail cluster and click the Configure tab.
b. Select vSAN > Fault Domains.

Figure 68. Configure fault domains

c. From Fault Domains window, click DISABLE STRETCHED CLUSTER and click REMOVE.
4. To remove the Witness VM mapping host, perform the following:



a. Right-click the VMware vSAN Witness Appliance and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the VMware vSAN Witness Appliance again and select Remove from Inventory.
5. To modify the IP address and restart the network, perform the following:
a. Select the VxRail data center and select the Witness Folder > VMware vSAN Witness Appliance.
b. On the Summary tab, click LAUNCH WEB CONSOLE.
c. On the console, press F2 and log in to the Witness VM using the credentials that were set on Day 1.
d. Under System Customization, select Configure Management Network.
e. Enter IPv4 Configuration information and press Y to confirm the changes.

Figure 69. IPv4 configuration

6. To add the Witness VM as a host with the new IP address, perform the following:
NOTE: There is no procedure to change the management IP of a physical node where the customer-managed Witness
VM is running.

a. Right-click the Witness Folder cluster and select Add Host.


b. In the Add Host wizard, select Name and location and enter the new IP address of the host to add to the VMware
vCenter Server. Click NEXT.



Figure 70. Name and location

c. Accept the default entries for Connection settings, Host summary, Assign License, and Lockdown mode and click
NEXT.
d. For VM location wizard, select the folder location and click NEXT.
e. From the Witness Folder cluster, right-click the witness host and select Maintenance Mode > Exit Maintenance
Mode.
7. To update the VxRail Manager database, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Connect to the database and enter:
psql -U postgres vxrail
c. To query the witness sled IP address, enter:
select * from configuration.configuration where key = 'witness_vm_host';

id | category | key | value


-----------------------------------------------
89 | setting | witness_vm_host | 20.12.91.109--- old witness sled VM IP address
(1 row)

d. To update the witness VM IP address, enter:


update configuration.configuration set value = '<new_IP>'
where key = 'witness_vm_host';

select * from configuration.configuration where key = 'witness_vm_host';

id | category | key | value


-----------------------------------------------
89 | setting | witness_vm_host | 20.12.91.112--- new witness sled VM IP address
(1 row)

e. To exit the database, enter: \q.


8. To configure the stretched cluster, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the VxRail cluster and click the Configure tab.



c. Select vSAN > Fault Domains.
d. From the Fault Domains wizard, click CONFIGURE STRETCHED CLUSTER.
e. Follow the wizard steps and click NEXT twice. Click FINISH.
9. To verify the IP address change, perform the following:
a. Log in to the Witness VM with the new IP address from the web console.
b. To verify the Witness VM IP address configuration, enter:
esxcli network ip interface ipv4 get

Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- ------------ ------------ -------------- ------------ ----------- --------
vmk0 20.12.91.112 255.255.255.0 20.12.91.255 STATIC 20.12.91.1 false
vmk1 192.168.101.33 255.255.255.0 192.168.101.255 STATIC 20.12.91.1 false

c. Log in to the VMware vCenter Server MOB as an administrator.


d. Under Properties, click content.

Figure 71. Content

e. Select the rootFolder: datacenter and examine the following values:

Figure 72. Data center

● childEntity: datacenter-3

Figure 73. Child entity data center


● hostFolder: group-h5

Figure 74. Host folder group


● childEntity: group-h12 (Witness Folder cluster)



Figure 75. Witness folder cluster
● childEntity: domain-s62 (Witness VM)

Figure 76. Witness VM


● Host: host-64

Figure 77. Host


● VM: vm-67

Figure 78. VM
● guest: guest

Figure 79. Guest


● ipAddress: <new_IP_address>

Figure 80. New IP address

Verify that the IP address is new.


10. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select the Monitor tab. Click vSAN > Skyline Health to verify that the VxRail cluster is in a healthy
state.



Change the hostname of the VxRail-managed witness VM for a
customer-managed DNS server
Add a DNS server entry where the new FQDN is mapped to the original witness sled IP address.

Prerequisites
Verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Health monitoring status is disabled.

About this task


This procedure is for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail cluster.

Steps
1. To remove the witness VMware ESXi VM host from the VMware vCenter Server, perform the following:
a. Right-click the witness VM host and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness VM host again and select Remove from Inventory.
c. Continue to step 2.
2. To change or add the hostname for the witness VM host, perform the following:
a. For the witness VM host, log in to the VMware ESXi host client using the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
c. On the Default TCP/IP stack window, click Edit Settings.
d. When the wizard opens, enter a hostname and click Save.
3. To add the witness VM host to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. Use the new witness VM FQDN to add the witness VM host.
d. Select Compose a new image on host life cycle options.
e. Follow the steps in the wizard to add the witness VM host.
f. From the Witness Folder cluster, right-click the host and select Maintenance Mode > Exit Maintenance Mode.
4. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health to retest and verify that the VxRail cluster is in a healthy
state.


5. After you change the hostname, update the witness VM moid by performing the following:
a. Go to the MOB and check the VM moid on the witness sled host:



Figure 81. MOID on the witness sled host

Alternatively, you can query by API:


curl -X POST --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H
"accept: application/json" -H "Content-Type: application/json" -d
'{"query":"{ multiVirtualmachines(name: \"VMware vSAN Witness Appliance\",
hostname:\"<witness_sled_hostname>\", datacentername: \"<datacenter_name>\",
clustername: \"<cluster_name>\", host:\"<vcenter_hostname>\", username:
\"<vcenter_admin_username>\", password: \"<vcenter_admin_password>\") {moid config
{ name uuid }}}"}' http://localhost/rest/vxm/internal/do/v1/vm/query

6. Using the moid, to set the witness VM moid to VxRail, enter:


psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where
server_type='WITNESS';"

Alternatively, to get the id of the witness VM from the VxRail database, enter:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm
where server_type='WITNESS';"

Using the id from the output, to update the witness VM moid, enter:

curl -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H "accept: application/json" -H "Content-Type: application/json" -d '{"moref_id": "<vm_moid>"}' http://localhost/rest/vxm/internal/do/v1/vxm/system-vm/<id>

Change the hostname of the VxRail-managed witness VM for a VxRail-managed DNS server
Change the hostname of the VxRail-managed witness VM host or reconfigure the VxRail-managed witness VM from IP address
to FQDN.

Prerequisites
Verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Disable health monitoring status.



About this task
This procedure is for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail cluster.

Steps
1. For a VxRail-managed DNS server, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Use an editor to open the /etc/hosts file.

127.0.0.1 localhost localhost.localdom


20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-witness-vm.rackE11.local c1-witness-vm --- Original DNS entry of the Witness VM

c. To add a DNS entry where the new FQDN is mapped with the original witness VM IP address, enter:
<vm_ipaddr> <new_vm_fqdn> <new_vm_host>

127.0.0.1 localhost localhost.localdom

20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-witness-vm.rackE11.local c1-witness-vm
20.12.91.202 c1-witness-nw-vm.rackE11.local c1-witness-nw-vm --- New FQDN mapped to the original Witness VM IP address

d. To restart the DNS service, enter: systemctl restart dnsmasq


e. To verify the DNS entry, either use the nslookup command or enter:
dig <new_sled_fqdn> +short
2. To change or add the hostname for the witness VM host, perform the following:
a. For the witness VM host, log in to the VMware ESXi host client using the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
c. On the Default TCP/IP stack window, click Edit Settings.
d. When the wizard opens, enter a hostname and click Save.
3. To add the witness VM host to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. Use the new witness VM FQDN to add the witness VM host.
d. Select Compose a new image on host life cycle options.
e. Follow the steps in the wizard to add the witness VM host.
f. From the Witness Folder cluster, right-click the host and select Maintenance Mode > Exit Maintenance Mode.
4. If the DNS server is VxRail-managed, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Use an editor to open the /etc/hosts file and delete the old DNS entry that maps the old FQDN to the witness
VM IP address. Save the changes and quit.

127.0.0.1 localhost localhost.localdom


20.12.91.200 c1-vxm.rackE11.local c1-vxm
20.12.91.201 c1-vc.rackE11.local c1-vc
20.12.91.101 c1-esx01.rackE11.local c1-esx01
20.12.91.102 c1-esx02.rackE11.local c1-esx02
20.12.91.202 c1-witness-vm.rackE11.local c1-witness-vm --- Delete this original DNS entry
20.12.91.202 c1-witness-nw-vm.rackE11.local c1-witness-nw-vm

c. To restart the DNS service, enter: systemctl restart dnsmasq


5. After you change the hostname, update the witness VM moid by performing the following:
a. Go to the MOB and check the VM moid on the witness sled host:



Figure 82. MOID on the witness sled host

Alternatively, you can query by API:


curl -X POST --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H
"accept: application/json" -H "Content-Type: application/json" -d
'{"query":"{ multiVirtualmachines(name: \"VMware vSAN Witness Appliance\",
hostname:\"<witness_sled_hostname>\", datacentername: \"<datacenter_name>\",
clustername: \"<cluster_name>\", host:\"<vcenter_hostname>\", username:
\"<vcenter_admin_username>\", password: \"<vcenter_admin_password>\") {moid config
{ name uuid }}}"}' http://localhost/rest/vxm/internal/do/v1/vm/query

6. Using the moid, to set the witness VM moid to VxRail, enter:


psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where
server_type='WITNESS';"

Alternatively, to get the id of the witness VM from the VxRail database, enter:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm
where server_type='WITNESS';"

Using the id from the output, to update the witness VM moid, enter:

curl -X PUT --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H "accept: application/json" -H "Content-Type: application/json" -d '{"moref_id": "<vm_moid>"}' http://localhost/rest/vxm/internal/do/v1/vxm/system-vm/<id>

Change the hostname of the customer-managed witness host
Follow the steps to change the hostname of the customer-managed witness host or reconfigure the customer-managed witness
host from IP to FQDN. Modify your DNS server with a new mapping for the customer-managed witness VM.

About this task


This procedure is for customers, Dell Technologies employees, and partners who are authorized to work on a VxRail cluster.



Prerequisites
Before you change the hostname of the witness host, verify the following:
● Using an external DNS server: Repoint the DNS to an external DNS.
● Verify that DNS has been configured properly or this task may not work.
● Verify that the VxRail cluster is in a healthy state.
● Disable health monitoring status.

Steps
1. To remove the witness VMware ESXi host from the VMware vCenter Server, perform the following:
a. Right-click the witness host and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness host again and select Remove from Inventory.
2. To change or add the hostname for the witness VM host, perform the following:
a. Log in to the VMware ESXi host client of the witness VM host using the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.

Figure 83. VMware ESXi Host Client- Default TCP/IP stack

c. Click Edit Settings.


d. When the wizard opens, enter a Host name and click Save.
3. To add the witness host to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the witness folder cluster or VxRail data center and select Add Host.
c. Use the new witness FQDN to add the witness host.
d. Follow the steps in the wizard to add the witness host.
e. From the Witness Folder cluster, right-click the witness host and select Maintenance Mode > Exit Maintenance Mode.
f. From the witness folder cluster, select the witness host and verify that the FQDN is new.
4. Delete the DNS entry where the old FQDN is mapped to the witness host IP address on the customer-managed DNS server.
5. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health to verify that the VxRail cluster is in a healthy state.

Collect the VxRail-supplied witness configuration


Collect the witness configuration details from the VxRail configuration file. The VxRail configuration file is contained within the
configuration report that is stored on the VMware vSphere Web Client.

About this task


This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To download the VxRail configuration .xml file from the current configuration report, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the Inventory icon.



c. Select the VMware vSAN cluster.
d. Click Configure tab.
e. Select VxRail > System.
f. Click DOWNLOAD to download the VxRail configuration .xml file and save the file to your local repository.
2. To collect the witness configuration from the VxRail configuration .xml, perform the following:
a. Open the VxRail configuration XML file.
b. Search for WitnessNode in the configuration report and collect the details as shown from the following XML file:

Figure 84. Witness node configuration report
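
If you prefer the command line, a quick text search of the downloaded file surfaces the same block (assuming a Unix-like shell; substitute the name of the file that you saved in step 1):

grep -n -A 8 "WitnessNode" <vxrail_config>.xml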

Separate witness traffic on an existing stretched cluster
VxRail stretched clusters can be deployed with an alternate VMkernel interface that is designated to handle traffic aimed at
the witness instead of the VMware vSAN tagged VMkernel interface. This feature allows for more flexible network
configurations by allowing separate networks for node-to-node and node-to-witness traffic.

Prerequisites
● Provision a dedicated VLAN or subnet at each data site for witness traffic. The VLAN at each data site should be different.
For example, VLAN 19 at Site-1 on subnet 172.18.19.0/24 and VLAN 20 at Site-2 on subnet 172.18.20.0/24.
● For both sites, create the VLAN on the ToR switches and add to the trunk ports going to the nodes.



● Create gateways for each witness traffic VLAN and verify network connectivity between the witness subnet at each data
site and the witness site.
● Set static routes on existing stretched cluster deployments.

About this task


If the stretched cluster is running a release earlier than VMware vSAN 6.7, upgrade it first. You can then configure a witness traffic network that is different from the VMware vSAN traffic network on existing stretched cluster deployments.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To create a port group on each data site, perform the following:
a. Log in to the VMware vCenter Web Client.
b. From the main menu, click the Networking icon.
c. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.

Figure 85. Distributed port group

d. In the New Distributed Port Group wizard, enter the name for the port group. Click NEXT.
e. In Configure settings, enter or select the following:
● From the VLAN type drop-down menu, select VLAN.
● Enter the VLAN ID.
● Select Customize default policies configuration and click NEXT.



Figure 86. Configure settings

f. In the Teaming and Failover window, modify the Failover order of the uplinks to match the existing failover order of
the management traffic. Click NEXT.

Figure 87. Teaming and failover

g. For the remaining steps, accept the default settings by clicking NEXT.
h. In Ready to Complete, review the selections and click FINISH.



i. Repeat these steps for the second data site.
2. To create VMkernel interfaces on data nodes for the WTS network at each data site, perform the following:
a. Select Networking.
b. Right-click the Port Group you created earlier. For example, Site1_WTS_PG or Site2_WTS_PG.
c. Select Add VMkernel Adapters.
d. Click + Attached Hosts and select the specific hosts to be used for the data site. Click OK and then click NEXT.
e. Leave the default settings on Configure VMkernel adapter and click NEXT.
f. Enter the IP address and the subnet mask of the WTS network and click NEXT.

Figure 88. IPv4 settings

g. In Ready to Complete, review your selections and click FINISH.


h. Repeat these steps on the second data site.
3. To enable witness service on each node, perform the following:
a. Select the node.
b. Select the Configure tab and click Networking > VMkernel adapters view to determine the VMkernel interface for
witness traffic.
c. Enable SSH and use an SSH client to log in to the node as root.
d. To set the traffic type to witness, enter:
esxcli vsan network ip add -i vmk<#> -T=witness

To verify the traffic type, enter:


esxcli vsan network list
e. Repeat these steps for each node in the cluster.
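For reference, after the traffic type is set in step 3, the esxcli vsan network list output for the witness interface should resemble the following abbreviated sketch (field names and ordering vary by VMware ESXi release):

Interface
   VmkNic Name: vmk5
   IP Protocol: IP
   Traffic Type: witness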
4. To remove the witness host disk group, perform the following:
a. Select the VxRail cluster and select the Configure tab.
b. Click vSAN > Disk Management.
c. In the Disk Group window, locate the witness host and select its disk group. Click Delete.
d. Click DELETE from the Remove Disk Group window.



Figure 89. Configure

5. To disable the stretched cluster, perform the following:


a. Log in to the VMware vCenter vSphere Web Client.
b. Select the VxRail cluster and select the Configure tab. Click vSAN > Fault Domains.
c. In the Stretched Cluster window, click DISABLE.
d. From the Remove Witness Host window, click REMOVE.

Figure 90. Stretched cluster

6. When using L3 switching for the witnessPg port group, add static routes so that the witness and the VMware ESXi hosts can communicate. After witness traffic is separated, reset the static routes on each node and on the witness host.
a. Enable SSH on the node.
b. To determine the existing static route on the node for the vSAN network (vmk3), enter:
esxcli network ip route ipv4 list

c. To remove the existing static route on the node, enter:


esxcli network ip route ipv4 remove -n <witness_vsan_subnet>/24 -g
<local_vsan_gateway>

d. To add a static route on the node for the witness traffic network (vmk5), depending on which site the node is associated with, enter:
● For Site-1, enter:
esxcli network ip route ipv4 add -n <witness_vsan_subnet>/24 -g <site1_witness_traffic_gateway>
● For Site-2, enter:
esxcli network ip route ipv4 add -n <witness_vsan_subnet>/24 -g <site2_witness_traffic_gateway>

Repeat this task for each node in the cluster.


e. Enable SSH on the witness ESXi host.
f. To determine the existing static route on the witness ESXi host for the vSAN network (vmk1), enter:
esxcli network ip route ipv4 list
g. To remove the existing static route on the witness ESXi host for the vSAN network, enter:
esxcli network ip route ipv4 remove -n <data_hosts_vsan_subnet>/24 -g <local_vsan_gateway>
h. To add a static route on the witness ESXi host for the witness traffic network for Site-1, enter:
esxcli network ip route ipv4 add -n <site1_witness_traffic_subnet>/24 -g <witness_site_gateway>
i. To add a static route for Site-2, enter:
esxcli network ip route ipv4 add -n <site2_witness_traffic_subnet>/24 -g <witness_site_gateway>
To validate the network setup, send a ping in both directions:
● From any data node on Site-1 and Site-2, enter: vmkping -I vmk5 <vsan_witness_ipaddr>
● From the witness host, ping the witness traffic IP address of any data node on Site-1 or Site-2 by entering: vmkping -I vmk1 <site1or2_witness_traffic_node_ipaddr>
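As a worked example using the sample networks from the prerequisites (VLAN 19 at Site-1 on 172.18.19.0/24 and VLAN 20 at Site-2 on 172.18.20.0/24), and assuming a hypothetical witness vSAN subnet of 172.18.30.0/24 with .1 as the gateway address on each subnet, the commands would be:

# On a Site-1 data node: route witness-bound traffic through the local WTS gateway
esxcli network ip route ipv4 add -n 172.18.30.0/24 -g 172.18.19.1
# On the witness ESXi host: route traffic for each data site WTS subnet through the witness site gateway
esxcli network ip route ipv4 add -n 172.18.19.0/24 -g 172.18.30.1
esxcli network ip route ipv4 add -n 172.18.20.0/24 -g 172.18.30.1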
7. To reconfigure the stretched cluster, perform the following:
a. Log in to the VMware vCenter vSphere Web Client.
b. Select the VxRail cluster and select the Configure tab. Click vSAN > Fault Domains.
c. In the Stretched Cluster window, click CONFIGURE and follow the instructions in the wizard.



10
Collect log bundles
You can collect logs using the full log bundle method or the light log bundle method. The full log collection method is time
consuming, and the light log bundle method contains only the VxRail Manager logs. To accelerate the diagnostic process, the
component and node selection of the log bundle is part of the VxRail Manager log bundle collection.

About this task


This feature provides a new REST API to obtain the logs. The following apply for collecting the log bundle:
● Node selection is only supported with VMware ESXi, iDRAC, and PTAgent log collection.
● iDRAC log collection and PTAgent log collection are supported on Dell 14G and later platforms.
● Witness log collection does not support dynamic node clusters.
● Platform log collection is only supported on a Dell platform.
● VMware vCenter Server log bundle collection is not supported in T1 network configurations. You can collect the VMware
vCenter Server log bundle directly from the VMware vCenter Server.
You can generate the VxRail Manager, VMware vCenter Server, VMware ESXi, iDRAC, and PTAgent log bundle with the node
specification.
This procedure applies to VxRail clusters running VxRail 4.5.3xx, 4.7.xx, 7.0.3xx or later, and 8.0.x and later, managed by either a VxRail-managed or a customer-managed VMware vCenter Server.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
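As noted above, log collection is also exposed through a REST API. The following is a minimal sketch of such a request; the endpoint path and payload shown here are assumptions based on VxRail public API conventions, so verify them against the onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html for your release:

curl -k -u 'administrator@vsphere.local:<password>' -H 'Content-Type: application/json' -X POST 'https://<vxm_ipaddr>/rest/vxm/v1/support/logs' -d '{"types": ["vxm"]}'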

Collect the VxRail Manager log bundle


Collect the VxRail Manager log bundle.

Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
You can SSH to VxRail Manager or log in to the VMware vSphere Web Client and launch the VxRail Manager VM on the web
console.
2. To switch to root, enter:
su root
3. To generate the VxRail Manager log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -v

/mystic/generateLogBundle.py --vxm

/mystic/generateLogBundle.py --types vxm

dellvxm:~ # /mystic/generateLogBundle.py -v
Start to collect log bundle.
types: vxm
The request id for collecting log bundle is
a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa
Start looping a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip

4. To verify the log bundle file, enter:



ls -l <file_path>
For example:
ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip

-rw-rw-rw- 1 root root 64092471 August 5 21:41 /tmp/mystic/dc/


VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip

Collect log bundles from VxRail Manager


Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the VMware vSAN cluster or the VMware vSphere cluster.
3. For VxRail versions earlier than 8.0.210, click the Configure tab and select VxRail > Troubleshooting.
4. Under Log Collection, click CREATE and select the log types.

Figure 91. Log Collection

5. For VxRail 8.0.210 and later, click the Configure tab and select VxRail > Support.
6. Under Support, click the Troubleshooting tab and click CREATE to select the log types.



Figure 92. Log collection

7. When finished, select the generated log bundle and click Download.
The <witness> log type is only for the 2-node ROBO environment and does not work in a normal cluster configuration.

Collect the VMware vCenter Server log bundle


Collect the VMware vCenter Server log bundle.

Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root

3. To generate the VMware vCenter Server log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -c

/mystic/generateLogBundle.py --vcenter

/mystic/generateLogBundle.py --types vcenter

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -c
Start to collect log bundle.
types: vcenter
The request id for collecting log bundle is
0e9d3cb3-89c3-49d7-921c-b35fed410fe1
Start looping 0e9d3cb3-89c3-49d7-921c-b35fed410fe1 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_18_14_29_49.zip

4. To verify the VMware vCenter Server log bundle file, enter:


ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip
-rw-rw-rw- 1 root root 246578225 August 5 21:58 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip



Collect the VMware ESXi log bundle
Collect the VMware ESXi log bundle.

Steps
1. Log in to the VxRail Manager CLI as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -e
/mystic/generateLogBundle.py --esxi
/mystic/generateLogBundle.py --types esxi

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
2807b8ca-5d84-4578-9409-d6eb5389ff8b
Start looping 2807b8ca-5d84-4578-9409-d6eb5389ff8b until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip

4. To verify the VMware ESXi log bundle file, enter:


ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
-rw-rw-rw- 1 root root 3019014 August 5 22:27 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip

If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.

vxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
7c260275-1921-4e2f-8408-95d6cef88a35
Start looping 7c260275-1921-4e2f-8408-95d6cef88a35 until request finished.
Failed to generate esxi log bundle on host esx-c.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-a.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-b.122-powerx.dell.com due to internal
error. See KB000200163.

Collect the iDRAC log bundle


Generate and collect the iDRAC log bundle.

Steps
1. Log in to the VxRail Manager as mystic and switch to root (su root).
2. To generate an iDRAC log bundle, enter any of the following commands:



/mystic/generateLogBundle.py -i

/mystic/generateLogBundle.py --idrac

/mystic/generateLogBundle.py --types idrac

Start to collect log bundle.


types: idrac
The request id for collecting log bundle is
6baa1017-b44f-4c2b-9310-fa1605cc976a
Start looping 6baa1017-b44f-4c2b-9310-fa1605cc976a until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_27_49.zip

3. To verify the iDRAC log bundle file, enter:


ls -l <file_path>

-rw-rw-rw- 1 root root 3019014 August 5 22:27 /tmp/mystic/dc/


VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_27_49.zip

Collect the platform log bundle


Collect the platform log bundle.

Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the platform log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -p

/mystic/generateLogBundle.py --platform

/mystic/generateLogBundle.py --types ptagent

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -p
Start to collect log bundle.
types: platform
The request id for collecting log bundle is
50661fb1-d552-47ef-be8f-e42ffc08d07f
Start looping 50661fb1-d552-47ef-be8f-e42ffc08d07f until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_5291caa7-8938-
fa82-169c-8b010f5d1658_2022-10-08_12_53_48.zip

4. To verify the PTAgent log bundle file, enter:


ls -l <file_path>



Collect the log bundle with node selection
Collect the log bundle with node selection for VMware ESXi, iDRAC, and platforms.

Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle with node selection, enter any of the following commands:
/mystic/generateLogBundle.py -e --nodes 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --esxi --nodes 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --types esxi --nodes 2C49DN2, 3F89DN2

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -e --nodes 2C49DN2, 3F89DN2


Start to collect log bundle.
types: esxi
nodes: 2C49DN2, 3F89DN2
The request id for collecting log bundle is
e8778824-2cc8-407d-9912-be8d73261d85
Start looping e8778824-2cc8-407d-9912-be8d73261d85 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip

4. To verify the log bundle file, enter:


ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
-rw-rw-rw- 1 root root 485734016 August 7 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip

If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.

vxm:~ # /mystic/generateLogBundle.py -e --nodes 4100003


Start to collect log bundle.
types: esxi
nodes: 4100003
The request id for collecting log bundle is
55d2c7db-0247-4842-bd06-0deea9b8bc35
Start looping 55d2c7db-0247-4842-bd06-0deea9b8bc35 until request finished.
Failed to generate esxi log bundle on host esx01.poda.powerx.dell.com due to internal
error. See KB000200163.

Collect the log bundle with component selection


Collect the log bundle with component selection for VxRail Manager, VMware vCenter Server, VMware ESXi, iDRAC, platform,
and witness.

Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:



su root
3. To generate the log bundle with VxRail Manager and VMware vCenter Server types selected, enter any of the following
commands:
/mystic/generateLogBundle.py -vc

/mystic/generateLogBundle.py -v -c

/mystic/generateLogBundle.py --vxm --vcenter

/mystic/generateLogBundle.py --types vxm,vcenter

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -vc


Start to collect log bundle.
types: vxm,vcenter
The request id for collecting log bundle is
15e2d374-a38f-4296-a1e0-1bc42f3398a4
Start looping 15e2d374-a38f-4296-a1e0-1bc42f3398a4 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_521ffa8e-70f7-793e-
ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip

4. To verify the log bundle file, enter:


ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521ffa8e-70f7-793e-
ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
-rw-rw-rw- 1 root root 691083648 August 20 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_521ffa8e-70f7-793e-ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip

Collect the full log bundle


Collect the full log bundle.

Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the full log bundle, enter any of the following commands:
/mystic/generateLogBundle.py

/mystic/generateLogBundle

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle
Start to collect log bundle.
types: vxm,vcenter,esxi,idrac,platform
The request id for collecting log bundle is
99419c45-3a75-4956-9470-255e94239175
Start looping 99419c45-3a75-4956-9470-255e94239175 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip

4. To verify the full log bundle file, enter:



ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
-rw-rw-rw- 1 root root 991840517 August 7 14:17 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip

If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.

vxm:~ # /mystic/generateLogBundle.py
Start to collect log bundle.
types: idrac,vcenter,platform,vxm,esxi
The request id for collecting log bundle is
8335787c-2641-48d3-9869-675f20489c38
Start looping 8335787c-2641-48d3-9869-675f20489c38 until request finished.
Collect log budle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_52e92049-182f-40ab-
f117-4103dab9dc16_2023-04-06_22_08_37.zip
Warning
Failed to generate esxi log
bundle on host esx01.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log
bundle on host esx02.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log
bundle on host esx03.poda.powerx.dell.com due to internal error. See KB000200163.

Collect the witness log bundle


Collect the witness log bundle.

Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the witness log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -w

/mystic/generateLogBundle.py --witness

/mystic/generateLogBundle.py --types witness

Wait for the command to finish. The following file path displays:

dellvxm:~ # /mystic/generateLogBundle.py -w
Start to collect log bundle.
types: witness
The request id for collecting log bundle is
5e4517fc-76f7-400a-85d1-64856a2aa46a
Start looping 5e4517fc-76f7-400a-85d1-64856a2aa46a until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-
ebe7df507143_2022_09_21_05_20_34.zip

4. To verify the witness log bundle file, enter:


ls -l <file_path>

dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-
ebe7df507143_2022_09_21_05_20_34.zip



-rw-rw-rw- 1 root root 236810339 September 21 05:23 /tmp/mystic/dc/
VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-ebe7df507143_2022_09_21_05_20_34.zip

The <witness> log type is only for the 2-node ROBO environment and does not work in a normal cluster
configuration.
The witness log bundle is not in the full log bundle collection option. The witness log bundle collection must be performed
separately.

Delete log bundles from VxRail Manager


Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the VMware vSAN cluster or the VMware vSphere cluster.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Select the generated log bundle and click Delete.

Collect the satellite node log bundles from VxRail Manager

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster that contains the satellite node.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Click Create and select the log type and the node to start the log collection.
5. Upon successful completion of the log collection, select the generated log bundle and click Download.

Delete the satellite node bundles from VxRail Manager


Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster that contains the satellite node.
3. Click the Configure tab and select VxRail > Troubleshooting.
4. Select the generated log bundle and click Delete.

Set the PostgreSQL log destination to the system log


Edit the postgresql.conf file to set the destination.

About this task


This procedure applies to VxRail 8.0.210 and later. See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Prerequisites
Be familiar with UNIX and Linux commands and obtain root credentials.



Steps
1. Use SSH to log in to VxRail Manager.
2. To edit the postgresql.conf file, as root, perform the following:
a. To open the file, enter:
# vi /var/lib/pgsql/data/postgresql.conf

b. Add the following lines to the file:

log_destination='syslog'
syslog_facility='LOCAL0'
syslog_ident='postgres'

c. Save changes.
3. To reload the configuration file, as root, enter:
#systemctl reload postgresql

4. To verify the changes, as root, perform the following:


a. Enter:
#psql -U postgres -c "SHOW syslog_facility"
 syslog_facility
-----------------
 local0
(1 row)

b. Enter:

#psql -U postgres -c "SHOW log_destination"


 log_destination
-----------------
 syslog
(1 row)

c. Enter:
#cat /var/log/messages | grep postgres



Figure 93. Command output

Complete all tasks to ensure that the PostgreSQL log destination matches the source VxRail Manager after you run the
vxm_backup_restore.py script.

Renew the PostgreSQL certificate


Renew the PostgreSQL certificate.

About this task


This procedure applies to VxRail 8.0.210 and later. See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Prerequisites
Be familiar with UNIX and Linux commands and obtain root credentials.

Steps
1. To create a certificate using an existing key, enter:
#cd /var/lib/pgsql

#DOMAIN=`hostname -d`

#SHORT_NAME=`hostname -s`

#openssl req -new -nodes -out new-server.csr -keyout new-server.key -subj "/CN=${SHORT_NAME}.${DOMAIN}/O=vxrail"

#openssl x509 -req -in new-server.csr -CA /var/lib/pgsql/data/root.crt -CAkey /var/lib/pgsql/data/root.key -CAcreateserial -out new-server.crt -days 365

2. To replace an old certificate with a new certificate, enter:


#cp new-server.crt /var/lib/pgsql/data/server.crt



#cp new-server.key /var/lib/pgsql/data/server.key

#chown postgres:postgres /var/lib/pgsql/data/server.crt

#chmod a-w /var/lib/pgsql/data/server.crt

#chown postgres:postgres /var/lib/pgsql/data/server.key

#chmod a-w /var/lib/pgsql/data/server.key

3. To restart the PostgreSQL service, enter:


#systemctl restart postgresql.service
# journalctl -xeu postgresql.service
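To confirm that the renewed certificate is in place, you can inspect its subject and validity window with openssl (a quick check that assumes the default file location used above):

#openssl x509 -in /var/lib/pgsql/data/server.crt -noout -subject -dates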



11
Manage certificates
To replace the VxRail Manager SSL certificate, use the VMware vCenter Server.

Import the VMware vCenter Server certificates into the VxRail Manager trust store

After deployment, if you replace an SSL certificate, update the VxRail Manager trust store with the new certificate authorities (CAs). VxRail Manager recognizes the VMware vSphere components only after the new CAs are imported into the trust store, which can be done using REST.

About this task


This procedure applies to VxRail clusters running VxRail 8.0.x and later.

CAUTION: Do not perform these steps in a VMware VCF environment.

This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. For VxRail 8.0.210 and later, to import a VMware vCenter Server certificate into the VxRail Manager trust store using a
script, perform the following:
a. Use SSH to log in to VxRail Manager and switch to root.
b. To run the import script, enter:

For versions earlier than VxRail 8.0.300, enter: python /mystic/ssl/cert_util.py

For VxRail 8.0.300 and later, enter: mcp_python /mystic/ssl/cert_util.py

c. From the Inventory icon, select the VxRail cluster and click the Configure tab.
d. Select VxRail > Security > Certificates.
e. Click ALL TRUST STORE CERTIFICATES.
2. For VxRail versions earlier than VxRail 8.0.210, to import a certificate, first obtain the fingerprint list from the VxRail Manager
trust store by performing the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. From the VxRail REST API left-menu, select Certificates > Get a list of fingerprints retrieved from....
c. Enter the username and password, and then click Send Request.



Figure 94. Fingerprint list

d. Copy the fingerprint value from the response window.


3. To get the certificate content from a specific fingerprint from the VxRail Manager trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Search the VxRail Manager trust store.
c. Enter the username, password, fingerprint, and click Send Request.
4. To import the VMware vCenter Server certificates into the VxRail Manager trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Import certificates into the VxRail.
c. Enter the username and password.
d. Place the certificate and the certificate revocation list (CRL) content in the body.
For example:
"-----BEGIN CERTIFICATE-----\nMIIEHzCCAwegAwIBAgIJANx5901VXVVVMA0GCSqGSIb3
DQEBCwUAMIGaMQswCQYD\nVQQDDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAEZ\
nFgVsb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNV\nBAoME2M0LXZjLnJhY2t
M
MDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVl\ncmluZzAeFw0yMjAzMjcwNjA3NTVaFw0zMjAzMjQw
N
jA3NTVaMIGaMQswCQYDVQQD\nDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAEZFg
Vs\nb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNVBAoM\nE2M0LXZjLnJhY2tM
M
DMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVlcmlu\nZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCA
Q
oCggEBALSoNvUmgFYouBS6qjgp\nwb8NZdeT1Gv4r2/wbWNr332iP1A/ffv5Kq66AbaaNDu+0G6NSsdh/
IPDI31
YtaAP\n0VN7xvwuUJsYeCCwzldQE3tm/M4Xe0h/Tw//GodYRIkC/
5uYxKxm4hRCPu7Qvs8/\n2q1ypGclpzj5U5
lXOoxHy4JsmX9Argqee3F0mT9l0bHqGBlNu+cWtK0Hwh7eTaUj\nyhJ+pHVf8SHvQQnxIYSlo1e0o3lQnGv+TX
c
LctbKzmsHMPVjYOletqs/
9aCSsEgO\ncxhjSIxGwwgRI5BLGhakoLXHznyWsJ81vc0TBvMock2hPOV7VOhGpNib
BMB6Fz+j\nC3cCAwEAAaNmMGQwHQYDVR0OBBYEFCaeddsZQeRukQL/
pfUX2MbCFk30MB8GA1Ud\nEQQYMBaBDmV
tYWlsQGFjbWUuY29thwR/AAABMA4GA1UdDwEB/wQEAwIBBjASBgNV\nHRMBAf8ECDAGAQH/
AgEAMA0GCSqGSIb3
DQEBCwUAA4IBAQBbbnY6I8d/qVCExT89\nthbae5n81hzFFtL0t36HzmSkcCLZnU/
w8cWuILabJCSYbJYREGcGr
vKkplF9Bfsp\nw/



u4Y1nwHrLWmfX1spNWgEWFGbSzE2qxFLIozNBKcMS1+CvZP6fIc1CfqjvMTEt2\nyNGbR+gt
BG5Are3K6VMZPihSCcWqu7XMsX9yCVdpOFCbV5m27JxYMwleOA220io6\nI3PJVAvCsRNoaBu7UiWEmjAsqj0m
1
v4+c3XG+2QquJ6CGHrfgoxGQDormUXGbxvp\neUq86TgxcbH76LzmLTywJzQ/
DFYm3bBHOgzCH2F0Ra7jz46gnu
uOPqWtJ4pU1Ghj\nm2rf\n-----END CERTIFICATE-----\n-----BEGIN X509 CRL-----\nMIICFTCB/
gIB
ATANBgkqhkiG9w0BAQsFADCBmjELMAkGA1UEAwwCQ0ExFzAVBgoJ\nkiaJk/
IsZAEZFgd2c3BoZXJlMRUwEwYKC
ZImiZPyLGQBGRYFbG9jYWwxCzAJBgNV\nBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRwwGgYDVQQKDBNjN
C
12Yy5yYWNr\nTDAzLmxvY2FsMRswGQYDVQQLDBJWTXdhcmUgRW5naW5lZXJpbmcXDTIyMDMzMTAx\nNTc1NVoX
D
TIyMDQzMDAxNTc1NVqgLzAtMAoGA1UdFAQDAgEFMB8GA1UdIwQYMBaA\nFCaeddsZQeRukQL/
pfUX2MbCFk30MA
0GCSqGSIb3DQEBCwUAA4IBAQBJ4QhmJQb/\nl/lU9FhYGcQEgFyBFEH9d6G2y66yPrJ/
40sCpUb7JMkdr7l2bYN
n1eRHljYBEkrx\n9KMX/
l5RkG+JTeZdHWkGQNB3U+qFvNANUYuOXYPwRoCVgiAoKs98YMzx8TKcluOE\nsHa8Ur
Cx5fy1gvPsreK9ODxdU9CpNjavfcV2sFkw07mmCDGGvX9GUc7y5JtFH50y\nAcVKVisZ5sT1yHRlJ0MOg1NGM0
8
VV2DpHUaZmNh7MgEx8/hNJlz2skQ0Zc8EVEzR\n3ULUC3/
djyXZP3QQ3PlKRgwaziPq8kRk+8jQby8ZipMtW4IH
S2WvvFvPDXWzgH/J\nE6TJVaqfezuc\n-----END X509 CRL-----"

You can also place only the certificate content in the body.
For example:
"-----BEGIN CERTIFICATE-----\nMIIEHzCCAwegAwIBAgIJANx5901VXVVVMA0GCSqGS
Ib3DQEBCwUAMIGaMQswCQYD\nVQQDDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZA
EZ\nFgVsb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNV\nBAoME2M0LXZjLnJh
Y
2tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVl\ncmluZzAeFw0yMjAzMjcwNjA3NTVaFw0zMjAzM
j
QwNjA3NTVaMIGaMQswCQYDVQQD\nDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAE
ZFgVs\nb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNVBAoM\nE2M0LXZjLnJhY
2
tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVlcmlu\nZzCCASIwDQYJKoZIhvcNAQEBBQADggEPAD
C
CAQoCggEBALSoNvUmgFYouBS6qjgp\nwb8NZdeT1Gv4r2/wbWNr332iP1A/ffv5Kq66AbaaNDu+0G6NSsdh/
IPD
I31YtaAP\n0VN7xvwuUJsYeCCwzldQE3tm/M4Xe0h/Tw//GodYRIkC/
5uYxKxm4hRCPu7Qvs8/\n2q1ypGclpzj
5U5lXOoxHy4JsmX9Argqee3F0mT9l0bHqGBlNu+cWtK0Hwh7eTaUj\nyhJ+pHVf8SHvQQnxIYSlo1e0o3lQnGv
+
TXcLctbKzmsHMPVjYOletqs/
9aCSsEgO\ncxhjSIxGwwgRI5BLGhakoLXHznyWsJ81vc0TBvMock2hPOV7VOhGp
NibBMB6Fz+j\nC3cCAwEAAaNmMGQwHQYDVR0OBBYEFCaeddsZQeRukQL/
pfUX2MbCFk30MB8GA1Ud\nEQQYMBaB
DmVtYWlsQGFjbWUuY29thwR/AAABMA4GA1UdDwEB/wQEAwIBBjASBgNV\nHRMBAf8ECDAGAQH/
AgEAMA0GCSqGS
Ib3DQEBCwUAA4IBAQBbbnY6I8d/qVCExT89\nthbae5n81hzFFtL0t36HzmSkcCLZnU/
w8cWuILabJCSYbJYREG
cGrvKkplF9Bfsp\nw/
u4Y1nwHrLWmfX1spNWgEWFGbSzE2qxFLIozNBKcMS1+CvZP6fIc1CfqjvMTEt2\nyNGbR
+gtBG5Are3K6VMZPihSCcWqu7XMsX9yCVdpOFCbV5m27JxYMwleOA220io6\nI3PJVAvCsRNoaBu7UiWEmjAsq
j
0m1v4+c3XG+2QquJ6CGHrfgoxGQDormUXGbxvp\neUq86TgxcbH76LzmLTywJzQ/
DFYm3bBHOgzCH2F0Ra7jz46
gnuuOPqWtJ4pU1Ghj\nm2rf\n-----END CERTIFICATE-----"

e. Click Send Request.



5. To delete the VMware vCenter Server certificate and the CRL files by a specific fingerprint from the trust store, perform the
following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. Go to Certificates and click Delete the certificate file from....
c. Enter the username, password, fingerprint, and click Send Request.

Import the VMware ESXi host certificates to VxRail Manager

Import the VMware ESXi host certificates into VxRail.

Prerequisites
● Verify that the VMware ESXi host network is available when you replace the VMware ESXi host certificates into VxRail
Manager.
● Obtain the root password.

About this task


After the VxRail deployment, if you replace VMware ESXi certificates in the VxRail clusters in the VMware vSphere
environment, import them into the VxRail Manager. You can import multiple certificates simultaneously.
See the VxRail 8.0.x Support Matrix for a list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To replace certificates on a node, enter:
cd /mystic/ssl/

For versions earlier than VxRail 8.0.300, enter: python certificate_replacement.py -sn <node_sn1>
<node_sn2>

For VxRail 8.0.300 and later, enter: mcp_python certificate_replacement.py -sn <node_sn1> <node_sn2>

The updated certificates are stored under the /var/lib/vmware-marvin/trust/host directory in VxRail Manager. If
a host fails, check the failed host network using the failed host's serial number.
When the update is complete, the following table shows the results:

Table 26. Certificate results


Result Description
new The first time that you download the VMware ESXi host certificate.
update The VMware ESXi host certificate that is downloaded is different from the original one. The VMware
ESXi host certificate is updated successfully.
identical The VMware ESXi host certificate that is downloaded is identical to the original one. No action is
required.

4. To manually import the VMware vCenter Server SSL certificate on the VxRail Manager, see KB 000077894.



Import VMware vSphere SSL certificates to VxRail Manager
If an SSL certificate has been replaced in the VxRail cluster VMware vSphere environment, update the VxRail Manager with the
new certificate authorities (CAs). VxRail Manager cannot recognize the VMware vSphere components until the new CAs are
installed.

Prerequisites
Verify that the VMware ESXi host network is available during the replacement of the VMware ESXi host certificates into VxRail
Manager.

About this task

CAUTION: Do not perform this task in a VMware VCF environment.

This procedure applies to VxRail 8.0.x and later clusters managed by a VxRail-managed VMware vCenter Server. See the VxRail 8.0.x Support Matrix for a
list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. For VxRail 8.0.210 and later, go to step 14.
2. For Enhanced Link Mode, to retrieve the new CA certificates from the VMware vCenter Server, perform the following:
a. Log in to the VMware vCenter Server.
b. Click Download trusted root CA certificate in the bottom-right corner, or right-click the link and save it as a ZIP file.

Figure 95. Trusted root CA certificates

c. A download.zip file is downloaded to your local machine that contains the CA certificates (.<digit> files) and the
revocation lists (.r<digit> files).
NOTE: The revocation files are not used in this task.

3. Use FTP or SCP to transfer the download.zip to the VxRail Manager and select the target directory such as /tmp.
4. SSH to the root account in VxRail Manager. To extract the download, enter:
cd /tmp

unzip download.zip

cd certs

ls *



The certs directory in download.zip contains three subfolders: lin, mac, and win. For VxRail, the files under the lin
subfolder are used.
5. Using the list of CA certificate files, for each distinct file name (ignore the digit extension), convert the file to a new distinct
CA file. The input file is the distinct file name with the largest number as the digit extension. For example, if the list of
certificate filenames is:
1285cf8e.0 1285cf8e.r0
The 1285cf8e.0 file must be converted.
a. To convert the file to DER format and output to a new file, enter:
openssl x509 -outform der -in /tmp/certs/lin/<file>.<highestdigit> -out /tmp/certs/lin/newcertfile<x>
b. Repeat these steps for each distinct CA certificate file.
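If there are many CA files, a small shell loop can perform the conversion. This is a sketch that assumes each CA file to convert ends in .0, which is the common case; adapt the glob if your files carry higher digit extensions as described above. The newcertfile_ prefix is an arbitrary naming choice:

cd /tmp/certs/lin
for f in *.0; do
  # convert each PEM CA file to DER; the suffix keeps the output names distinct
  openssl x509 -outform der -in "$f" -out "newcertfile_${f%.0}"
done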
6. For each distinct file name (ignore the digit extension), rename the revocation list (r<digit> files) so that the file extension
starts from .r0, while the file name remains the same as before.
For example, if the list of certificate files is: e2cd3e88.0 e2cd3e88.r1 e2cd3e88.r2
The e2cd3e88.r1 and e2cd3e88.r2 files must be renamed.
a. To rename the files, enter:
mv /tmp/certs/lin/e2cd3e88.r1 /tmp/certs/lin/e2cd3e88.r0
mv /tmp/certs/lin/e2cd3e88.r2 /tmp/certs/lin/e2cd3e88.r1

b. Repeat these steps for each distinct revocation list.


7. To copy the new certificate file to /var/lib/vmware-marvin/trust/lin, enter:
cp -f /tmp/certs/lin/* /var/lib/vmware-marvin/trust/lin
8. To change the permission and ownership of the new certificate file, enter:
chmod 777 /var/lib/vmware-marvin/trust/lin/*

chown tcserver:pivotal /var/lib/vmware-marvin/trust/lin/*

9. Select the VxRail cluster and click the Configure tab. Select VxRail > Health Monitoring and enable the health monitoring
status.
10. To restart the marvin and runjars services, enter:
service vmware-marvin restart

systemctl status vmware-marvin

service runjars restart

systemctl status runjars

11. To change the permission on the new certificate file to -rw-r--r--, enter:
chmod 644 /var/lib/vmware-marvin/trust/lin/*
12. To restart the ms-day2 service, obtain the root credentials and switch to root by entering:

su root

a. To start a new instance of the ms-day2 service, enter:

kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/ms-day2 --replicas=0

kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/ms-day2 --replicas=1

b. To check that the previous ms-day2 service is terminated, enter:

kubectl get pods | grep day2

13. To update the VMware ESXi host certificates in VxRail Manager, enter:
cd /mystic/ssl/
For versions earlier than VxRail 8.0.300, enter: python certificate_replacement.py

For VxRail 8.0.300 and later, enter: mcp_python certificate_replacement.py

start to replace certificate
replace certificate job:certificatesImportHostJob, state:IN_PROGRESS
replace certificate job:certificatesImportHostJob, state:IN_PROGRESS
replace certificate job:certificatesImportHostJob, state:IN_PROGRESS
replace certificate job:certificatesImportHostJob, state:IN_PROGRESS
replace certificate job:certificatesImportHostJob, state:COMPLETED
replace certificates SUCCESS with failed hosts:[], successful hosts:['V010103', 'V010104', 'V010102', 'V010101']
● If the default VMware vCenter Server management account does not have sufficient permissions to get the VMware ESXi host certificates, for versions earlier than VxRail 8.0.300, use python certificate_replacement.py -u. For VxRail 8.0.300 and later, use mcp_python certificate_replacement.py -u. You can provide another VMware vCenter Server account.
● Keep the ESXi host network available during the replacement of ESXi host certificates into VxRail Manager. The updated certificates are stored under the VxRail Manager directory /var/lib/vmware-marvin/trust/host. If any host fails, check the failed host network according to the failed host's serial number.
14. For VxRail 8.0.210 and later, log in to the VxRail Manager console as root and enter:

For versions earlier than VxRail 8.0.300, enter: python /mystic/ssl/cert_util.py

For VxRail 8.0.300 and later, enter: mcp_python /mystic/ssl/cert_util.py

If the default VMware vCenter Server management account does not have sufficient permissions to get the VMware ESXi host
certificates, for versions earlier than VxRail 8.0.300, enter python certificate_replacement.py -u; for VxRail
8.0.300 and later, enter mcp_python certificate_replacement.py -u to provide another VMware vCenter Server
account.

Next steps
For more information about replacing certificates, see KB 77894.
See Managing Certificates Using the vSphere Certificate Manager Utility.



12
Rename VxRail components
You can use the VMware vCenter Server to rename many components. Links to additional procedures are provided.
Use the VMware vCenter Server to rename the following components:
● VxRail data center
● VM folder
● VxRail cluster
● Witness port group - 2-node cluster
● VMware ESXi hostname and IP address
Use the following links to rename other VxRail components:
● To rename the VMware VDS or dvPortGroup, see Renaming a VMware VDS/dvPortGroup while virtual machines are
connected.
● To rename the vSAN datastore, see Rename the VxRail vSAN Datastore.
● To rename the VxRail VM, VxRail-managed VMware vCenter Server Appliance, and customer-managed VMware vCenter
Server Appliance, see General Virtual Machine Options.

Change the FQDN of the VMware vCenter Server Appliance

Change the VMware vCenter Server Appliance FQDN.

Prerequisites
● Back up the VMware vCenter Server in the VMware SSO domain.
● Unregister products from the VMware vCenter Server. Reregister the products after the FQDN change is complete.
● Delete the VMware vCenter High Availability (vCHA) configuration and reconfigure after the FQDN change is complete.
● If you rename the VMware vCenter Server, rejoin it to the Microsoft AD.
● Verify that the FQDN or hostname resolves to the provided IP address (DNS A records).
● Do not unregister the VxRail Manager VMware vCenter Server plug-in.

About this task


This procedure applies to VxRail 8.0.x and later clusters managed by VMware vCenter Server.
See the VxRail 8.0.x Support Matrix for a list of supported versions.
VxRail 8.0.210 supports IPv4, IPv6, and dual-stack environments.
The following table shows supported features:

Table 27. Features


Supported features Not supported features
Enhanced Linked Mode (ELM) Pure IP address customer-managed VMware vCenter Server without FQDN
Change FQDN to a different domain Change the VMware vCenter Server Appliance FQDN on VMware VCF.

This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. For internal DNS, to configure VxRail Manager to add the VMware vCenter Server Appliance FQDN DNS record, perform the
following:



a. Using SSH, log in to the VxRail Manager as mystic and su to root.
b. Update /etc/hosts and add an entry to the VMware vCenter Server FQDN.
For the following IPv4 sample output, the new FQDN entry is 172.16.10.211 vcnew.testfqdn.local vcnew.

127.0.0.1 localhost localhost.localdom

172.16.10.211 vc.testfqdn.local vc

172.16.10.211 vcnew.testfqdn.local vcnew

172.16.10.150 vxm.testfqdn.local vxm

172.16.10.111 vcluster101-esx01.testfqdn.local vcluster101-esx01


172.16.10.112 vcluster101-esx02.testfqdn.local vcluster101-esx02
172.16.10.113 vcluster101-esx03.testfqdn.local vcluster101-esx03

For IPv6, the following example output is provided:

127.0.0.1 localhost localhost.localdom


fc00::20:18:38:200 v6cluster329-vxm.vv003.local v6cluster329-vxm
fc00::20:18:38:201 v6cluster329-vcsa.vv003.local v6cluster329-vcsa
fc00::20:18:38:101 v6cluster329-esx01.vv003.local v6cluster329-esx01

For dual-stack environments, the following example output is provided:

127.0.0.1 localhost localhost.localdom

172.16.10.200 vcluster101-vxm.vv003.local vcluster101-vxm

fc00::20:16:10:200 vcluster101-vxm.vv003.local vcluster101-vxm

172.16.10.201 vcluster101-vcsa.vv003.local vcluster101-vcsa

fc00::20:16:10:201 vcluster101-vcsa.vv003.local vcluster101-vcsa

NOTE: If the new FQDN is in a different top-level domain, the new top-level domain and IP address CIDR in the auth-zone
entry in /etc/dnsmasq.conf must be updated or added.

c. To edit the file, enter:


vi /etc/dnsmasq.conf

d. To locate and add a line for the new auth-zone with the new FQDN and IP CIDR, enter:
auth-server=127.0.0.1,eth0

auth-zone=<new top-level domain>,<new IP CIDR>

For example, for IPv4:


auth-server=127.0.0.1, eth0

auth-zone=vv003.local,172.16.0.0/16

For example, for IPv6:


auth-server=127.0.0.1, ::1, eth0

auth-zone=vv003.local,fc00::20:18:0:0/96,fc00::20:19:0:0/96

For example, for dual-stack:


auth-server=127.0.0.1,::1,eth0

auth-zone=vv003.local,fc00::20:18:0:0/96,fc00::20:19:0:0/96

e. To restart the dnsmasq service, enter:



systemctl restart dnsmasq
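After dnsmasq restarts, you can confirm that the new record resolves locally before continuing (a quick check using the sample names above; substitute your own FQDN):

nslookup vcnew.testfqdn.local 127.0.0.1

If nslookup is not available on the VxRail Manager VM, getent hosts vcnew.testfqdn.local provides an equivalent check.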
2. To update the FQDN in VMware Application Management Interface (VAMI), perform the following:
a. Log in to the VMware vCenter Server as root on port 5480.
b. Click Networking.
c. Under Network Settings, click Edit.
d. In the wizard, select NIC 0 (management network) and click NEXT.
e. Change the VMware vCenter Server hostname or FQDN to its new name and click NEXT.
f. Enter the SSO administrator (nonroot user) credentials for the VMware vCenter Server.
NOTE: Do not use the root credentials to log in to the VMware vCenter Server.

g. Review the changes that are made to the VMware vCenter Server's FQDN and IP address settings.
h. Acknowledge that the VMware vCenter Server backup is performed.
Perform these additional steps after the FQDN of the VMware vCenter Server is changed. Do not unregister the VxRail
VMware vCenter Server plug-in.
3. Wait for the FQDN change procedure to complete.
After the changes are complete, an alert displays allowing the automatic redirection back to the VAMI on port 5480 within
10 seconds. Click Redirect Now to skip the automatic redirect.

4. Log in to the VMware vCenter Server as root on port 5480 and confirm that the configuration is complete.
5. To renew the node certificates in the VMware vSphere Web Client, perform the following:
a. From the Inventory icon, select a VxRail cluster, and then select a host within the cluster.
b. Click the Configure tab and select System > Certificate.
c. Under Certificate Management, click Renew.
NOTE: Each certificate must be updated manually on each node, including the witness node for certain clusters.

6. To restart the vxrail-platform-service or platformsvc for each node, perform the following:
a. From the Inventory icon, select a VxRail cluster, and then select a host within the cluster.
b. Click the Configure tab.
c. For versions earlier than VxRail 8.0.300, select System > Services and select vxrail-platform-service to restart
the service. For VxRail 8.0.300 and later, select System > Services and select platformsvc to restart the service.
7. (OPTIONAL) To update the VxRail Manager database for the TLD change, perform the following:
a. Connect to the database and enter:
psql -U postgres vxrail
b. To confirm your existing TLD, enter:
select * from configuration.configuration where key='system_tld';
c. To update your new FQDN value, enter:
update configuration.configuration set value='new_FQDN' where key='system_tld';
d. To verify your new FQDN, enter:
select * from configuration.configuration where key='system_tld';
8. To update the VMware vCenter Server Appliance FQDN information in the VxRail Manager using the root credentials,
perform the following:
a. To obtain the existing VMware vCenter Server host value, enter:
curl --location --request GET 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vcenter_host' --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock

b. To update the VMware vCenter Server host value with the new FQDN, enter:
curl --location --request PUT 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vcenter_host' --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock --data-raw '{"value": "<New_VC_FQDN>"}'

9. Using the API, to download and update the certificates, enter:



curl -k -X POST -H "Content-Type: application/json" --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -d @- << EOF http://localhost/rest/vxm/internal/operation/v1/vxm/download-vc-certs/execute

{
"vc_info": {
"host": "<New_VC_FQDN>",
"username": "[email protected]",
"password": "<password>",
"port": 443
},
"auto_accept_vc_cert": true
}
EOF

{"result": {"vc_certificate_management_mode": "vmca", "vc_certificate": {"type":


"VC", "valid": true, "thumbprint":
"B7:43:CE:13:84:92:FA:0D:FF:03:ED:E7:B7:BB:48:09:D4:24:FF:5C", "data": {"validity":
{"from": 1667109980, "to": 1982469980}, "public_key": {"algorithm": "rsaEncryption",
"modulus":
"00:9b:ae:38:58:c4:8f:97:59:e8:c8:d5:28:ca:aa:1a:7e:d3:46:5d:c9:ad:e2:22:22:3f:48:32:8
8:17:3c:3f:2c:85:52:b8:a7:c7:69:6e:9d:61:b7:eb:24:c9:80:91:07:9c:43:9e:1f:01:46:09:b6:
44:2d:34:77:ff:6f:ed:d7:fd:5b:65:c1:e8:85:c8:51:86:4b:ae:b5:96:fd:c6:5e:03:81:1d:da:3a
:b8:8c:86:2a:9e:99:19:48:1f:16:37:41:bb:27:f2:ec:c8:e0:f5:1d:49:8e:80:df:49:c4:0b:de:1
a:61:5a:0a:9b:f6:9c:9c:5e:3c:24:84:e2:da:58:fe:c8:90:02:70:12:78:e8:21:47:4e:19:79:49:
0a:3b:3a:12:87:9b:ed:9e:45:01:b2:93:c6:ec:b5:4e:6e:a4:c8:37:25:69:df:21:e7:e7:34:d4:6e
:0a:fe:f1:83:b6:ce:31:5d:8c:37:61:8a:98:fb:e6:51:0b:98:48:9c:4c:ad:41:65:f7:47:d6:2b:1
7:72:be:80:ee:97:47:b6:3b:98:0f:b5:9e:d3:fa:8d:c3:b3:e3:70:d6:15:dd:8d:32:2a:b9:83:3d:
3b:85:3f:5d:cc:2d:44:db:f7:e0:40:83:a9:f0:be:97:6d:43:19:9d:e4:a3:12:af:1c:c4:17:cc:15
:28:8b:81:a0:8e:ba:1e:dd:e9:68:83:51:c4:69:5c:39:b2:c6:74:d2:b6:c3:dc:9b:27:65:53:6d:6
7:a5:ae:25:07:ab:8f:de:ed:f7:6f:b0:f7:71:7f:8d:ee:30:20:3c:a5:c4:2c:9a:93:dd:71:72:ba:
0c:08:70:8a:16:a0:2e:66:cf:34:ad:b7:b0:85:e7:7d:90:83:b0:b3:24:cb:8d:6b:16:6c:65:5c:72
:f2:45:95:dc:6c:37:01:06:c9:ad:4c:12:a1:4d:74:c4:97:eb:17:5b:50:d0:00:66:3e:fc:c8:d8:f
c:27:d9:e1:3a:16:b2:21:ef:a6:5b:c1:c9", "length": "(3072 bit)"}, "extensions":
{"key_usage": "Certificate Sign, CRL Sign", "subject_alternative_name":
"email:[email protected], IP Address:127.0.0.1", "subject_key_identifier":
"DF:DD:0C:91:75:92:26:B6:A8:4E:74:2B:A3:D9:27:4E:40:DD:DD:68"}, "version": "3 (0x2)",
"serial_number": "e9:68:06:7b:75:59:ce:bd", "signature_algorithm":
"sha256WithRSAEncryption", "issuer": "CN = CA, DC = vsphere, DC = local, C = US, ST =
California, O = c3-vc.rackH04.local, OU = VMware Engineering", "subject": "CN = CA,
DC = vsphere, DC = local, C = US, ST = California, O = c3-vc.rackH04.local, OU =
VMware Engineering"}}}}

curl -k -X POST -H "Content-Type: application/json" --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -d @- << EOF http://localhost/rest/vxm/internal/operation/v1/vxm/download-vc-certs/execute

{
"vc_info": {
"host": "vcnew1.testfqdn.local",
"username": "[email protected]",
"password": "password",
"port": 443
},
"auto_accept_vc_cert": true
}
EOF

{"result": {"vc_certificate_management_mode": "vmca", "vc_certificate": {"type":


"VC", "valid": true, "thumbprint":
"B7:43:CE:13:84:92:FA:0D:FF:03:ED:E7:B7:BB:48:09:D4:24:FF:5C", "data": {"validity":
{"from": 1667109980, "to": 1982469980}, "public_key": {"algorithm": "rsaEncryption",
"modulus":
"00:9b:ae:38:58:c4:8f:97:59:e8:c8:d5:28:ca:aa:1a:7e:d3:46:5d:c9:ad:e2:22:22:3f:48:32:8
8:17:3c:3f:2c:85:52:b8:a7:c7:69:6e:9d:61:b7:eb:24:c9:80:91:07:9c:43:9e:1f:01:46:09:b6:
44:2d:34:77:ff:6f:ed:d7:fd:5b:65:c1:e8:85:c8:51:86:4b:ae:b5:96:fd:c6:5e:03:81:1d:da:3a
:b8:8c:86:2a:9e:99:19:48:1f:16:37:41:bb:27:f2:ec:c8:e0:f5:1d:49:8e:80:df:49:c4:0b:de:1
a:61:5a:0a:9b:f6:9c:9c:5e:3c:24:84:e2:da:58:fe:c8:90:02:70:12:78:e8:21:47:4e:19:79:49:



0a:3b:3a:12:87:9b:ed:9e:45:01:b2:93:c6:ec:b5:4e:6e:a4:c8:37:25:69:df:21:e7:e7:34:d4:6e
:0a:fe:f1:83:b6:ce:31:5d:8c:37:61:8a:98:fb:e6:51:0b:98:48:9c:4c:ad:41:65:f7:47:d6:2b:1
7:72:be:80:ee:97:47:b6:3b:98:0f:b5:9e:d3:fa:8d:c3:b3:e3:70:d6:15:dd:8d:32:2a:b9:83:3d:
3b:85:3f:5d:cc:2d:44:db:f7:e0:40:83:a9:f0:be:97:6d:43:19:9d:e4:a3:12:af:1c:c4:17:cc:15
:28:8b:81:a0:8e:ba:1e:dd:e9:68:83:51:c4:69:5c:39:b2:c6:74:d2:b6:c3:dc:9b:27:65:53:6d:6
7:a5:ae:25:07:ab:8f:de:ed:f7:6f:b0:f7:71:7f:8d:ee:30:20:3c:a5:c4:2c:9a:93:dd:71:72:ba:
0c:08:70:8a:16:a0:2e:66:cf:34:ad:b7:b0:85:e7:7d:90:83:b0:b3:24:cb:8d:6b:16:6c:65:5c:72
:f2:45:95:dc:6c:37:01:06:c9:ad:4c:12:a1:4d:74:c4:97:eb:17:5b:50:d0:00:66:3e:fc:c8:d8:f
c:27:d9:e1:3a:16:b2:21:ef:a6:5b:c1:c9", "length": "(3072 bit)"}, "extensions":
{"key_usage": "Certificate Sign, CRL Sign", "subject_alternative_name":
"email:[email protected], IP Address:127.0.0.1", "subject_key_identifier":
"DF:DD:0C:91:75:92:26:B6:A8:4E:74:2B:A3:D9:27:4E:40:DD:DD:68"}, "version": "3 (0x2)",
"serial_number": "e9:68:06:7b:75:59:ce:bd", "signature_algorithm":
"sha256WithRSAEncryption", "issuer": "CN = CA, DC = vsphere, DC = local, C = US, ST =
California, O = c3-vc.rackH04.local, OU = VMware Engineering", "subject": "CN = CA,
DC = vsphere, DC = local, C = US, ST = California, O = c3-vc.rackH04.local, OU =
VMware Engineering"}}}}

10. To restart the vmware-marvin service, enter:


systemctl restart vmware-marvin
systemctl restart runjars

11. Clear the cache to ensure that the VxRail Manager information is updated correctly.
12. To generate a base64 string for the username:password, enter:
# echo -n "[email protected]:password" | base64

# YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk

13. To create a POST request in the VxRail Manager, enter:


curl --location --request POST 'https://127.0.0.1/rest/vxm/private/pv/cache/' --header 'Content-Type: application/json' --header 'Authorization: Basic YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk' -k
Clear the cache in the VxRail Manager with the authorization basic string generated in Step 12.
14. (OPTIONAL) For internal DNS only, to clean up the VMware vCenter Server FQDN records, perform the following:
a. Using SSH, log in to the VxRail Manager as root.
b. Update the /etc/hosts file. Remove the unused entry for the old VMware vCenter Server FQDN. In the following
example, the old FQDN entry is 172.16.10.211 vc.testfqdn.local vc.

127.0.0.1 localhost localhost.localdom

172.16.10.211 vc.testfqdn.local vc    <-- Delete the unused entry

172.16.10.211 vcnew.testfqdn.local vcnew

172.16.10.150 vxm.testfqdn.local vxm

172.16.10.111 vcluster101-esx01.testfqdn.local vcluster101-esx01

172.16.10.112 vcluster101-esx02.testfqdn.local vcluster101-esx02

172.16.10.113 vcluster101-esx03.testfqdn.local vcluster101-esx03

NOTE: You can use this step to delete the old record for the IPv6 or dual-stack environment.

c. To restart the dnsmasq service, enter:


systemctl restart dnsmasq

Next steps
For more information, see:
● KB 77894 to manually import the VMware vCenter Server SSL certificate on the VxRail Manager.
● Managing Certificates Using the vSphere Certificate Manager Utility
● Changing your vCenter Server's FQDN



13
Remove VxRail nodes
Remove nodes to decommission an older generation of VxRail nodes and migrate them to a newer generation of VxRail.
This procedure applies to the VxRail cluster running the VxRail version 8.0.x and later.
You cannot use this task to replace a node. Node removal does not destroy the VxRail cluster.
NOTE: VxRail version 8.0.010 does not support VMware vSAN ESA or satellite nodes.

This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failures to
Tolerate (FTT) setting. The following table lists the minimum number of VMware ESXi nodes in the VxRail cluster before
node removal:

Table 28. VMware vSAN RAID and minimum nodes

VMware vSAN RAID and FTT                      Minimum nodes
RAID 1, FTT = 1                               4
RAID 1, FTT = 2                               6
RAID 5, FTT = 1 (All-flash VxRail only)       5
RAID 6, FTT = 2 (All-flash VxRail only)       7
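For example, RAID 1 with FTT = 1 requires at least three nodes for VMware vSAN to satisfy the policy, so the cluster must contain at least four nodes before one node can be removed.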

Verify the VxRail cluster health


Verify the VxRail cluster health status.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms display, acknowledge them and click Reset to Green at the node and cluster levels before you remove the node.

Verify the capacity, CPU, and memory requirements


Before removing the node, verify that the capacity, CPU, and memory are sufficient to allow the VxRail cluster to continue
running without any issue.

About this task


If the VMware vSAN used capacity percentage is over 80 percent, do not remove the node, as doing so may lead to VMware
vSAN performance issues.



Use the following formula to determine whether cluster requirements can be met after the node removal:
vSAN_used_capacity_% = used_capacity / (current_capacity - capacity_to_be_removed)
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
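For example, using illustrative numbers only: if the cluster reports 40 TB used against 100 TB of total capacity, and the node to be removed contributes 20 TB, the projected usage is 40 / (100 - 20) = 50 percent, which is below the 80 percent threshold, so the node removal can proceed.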

Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute the percentage used (a worked example follows these steps):
CPU_used_% = Consumed_Cluster_CPU / (CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory / (Memory_capacity - Plan_to_Remove_Memory_sum)

4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.
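For example, using illustrative numbers only: if the cluster consumes 120 GHz of CPU against 300 GHz of capacity, and the node to be removed provides 60 GHz, then CPU_used_% = 120 / (300 - 60) = 50 percent. Apply the same calculation to memory before you remove the node.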

Remove the node


Place the node into maintenance mode before you remove the node.

Prerequisites
Before you remove the node, perform the following steps to place the node into maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click the host that you want to remove and select Maintenance Mode > Enter Maintenance
Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful, click ENTER MAINTENANCE MODE, and then click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.

About this task

This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.



2. In the Remove Host from Cluster window, enter the VMware vCenter Server administrator and root account information.
3. After the account information is entered, click VERIFY CREDENTIALS.
4. When the validation is complete, click APPLY to create the Run Node Removal task.
5. After the precheck completes successfully, the host shuts down and is removed.
6. For L3 deployment: If you have removed all the nodes of a segment, select the unused port group on VMware VDS and click
Delete.

Next steps
To enable SSH access on the VMware vCenter Server, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you remove a node, or after you add the removed node back into the cluster with a new
IP address, update dnsmasq on the VMware vCenter Server by entering:
# service dnsmasq restart

Reboot VxRail nodes


Reboot the nodes from a cluster.

About this task


You can reboot hosts immediately or schedule a reboot.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Select VxRail > Hosts.
4. From the Cluster Hosts window, check the hosts that you want to reboot and click REBOOT.
5. For Reboot Hosts, select Reboot Now and click Next.
6. On the Prechecks window, view the prechecks and click NEXT.
7. On the Summary window, click REBOOT NOW.

Reboot VxRail nodes sequentially


Rebooting nodes sequentially avoids restarting large clusters using automation and orchestration, which can lead to issues
that may halt an upgrade.

About this task


Select the nodes and click Reboot to open the wizard. The wizard provides options to reboot immediately or schedule the
reboot for a later time. Once this selection is made, the wizard runs a precheck, and the reboot cycles begin. Node reboots
improve the update cycle success rates by clearing issues like memory utilization or restarting any potentially hung processes.
This procedure applies to VxRail clusters running VxRail version 8.0.210 and later.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Prerequisites
To view when the nodes were rebooted last, perform the following:
1. Log in to the VMware vSphere Web Client as an administrator.



2. Under the Inventory icon, select the VxRail cluster and click the Configure tab.
3. Under VxRail, select Hosts and view the information in the Last Reboot column.
You can also see when the hosts were last rebooted using the Update Advisor Report. See KB 213039 for information about
how to generate the report. When viewing the report, ensure that the Group by component box is enabled to see the Last
Reboot column.

Steps
1. To perform an immediate reboot, select the VxRail cluster and click the Configure tab.
2. Under VxRail, select Hosts and view the information in the Last Reboot column.
3. Check the box to select the nodes and click REBOOT.
NOTE: A reboot may take up to 10 minutes.

4. Click Reboot Now and Next.



14
Restore the VMware vCenter Server from a
file-based backup
Use a current file-based backup to restore the VMware vCenter Server in the original cluster.

Prerequisites
● Create a file-based backup.
● Verify that your system meets the minimum software and hardware requirements. See System Requirements for the vCenter
Server Appliance and Platform Services Controller Appliance.
● Download and mount the VMware vCenter Server Appliance Installer. See Download and Mount the vCenter Server Installer.
● To restore a VMware vCenter Server HA cluster, first power off the active, passive, and witness nodes.
● Verify that the target VMware ESXi host is in lockdown or maintenance mode and that it is not part of a fully automated
DRS cluster.
● Check if the DRS cluster of a VMware vCenter Server inventory has a VMware ESXi host that is not in lockdown or
maintenance mode.
● Configure the forward and reverse DNS records for the IP address before you assign a static IP address to the VMware
vCenter Server Appliance.
● Power off the backed-up VMware vCenter Server.

About this task


Deploy the OVA file from the VMware vCenter Server Appliance UI installer during the restoration process:
● Use the VMware vSphere Web Client or VMware Host Client to deploy the OVA file for the new VMware vCenter Server
Appliance or Platform Services Controller appliance as an alternative to using the UI installer for the first stage of the restore
process.
● Use the VMware vSphere Web Client to deploy the OVA file on a VMware ESXi host or VMware vCenter Server instance 5.5
or 6.0. Once the deployment is complete, log in to the appliance management interface of the newly deployed appliance to
proceed with the second stage of the restore process.
This procedure applies to VxRail clusters running VxRail version 8.0.100 and later. See the VxRail 8.0 Support Matrix for a list
of the supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.

Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click the VxRail cluster and select Deploy OVF Template to launch the wizard.
4. From Select an OVF template, select Local file and then click UPLOAD FILES.
5. Select the VMware vCenter Server OVA file and click NEXT.
6. Enter a VM name and click NEXT.
7. Select the node where the VMware vCenter Server is installed and then click NEXT.
8. Verify that all details are correct. Ignore certificate warnings and click NEXT.
9. Accept all license agreements and click NEXT.
10. Select the appropriate configuration for the VMware vCenter Server environment and then click NEXT.
11. Select the VxRail vSAN datastore and then click NEXT.



Figure 96. Data store storage

12. Select the VMware vCenter Server Network as the Destination Network.
13. Enter the following network configurations in Customize template based on the network requirements of the end user:



Figure 97. Customize the template

14. Verify that the setup details are correct and then click FINISH.
15. Locate the host from the VMware vCenter Server Appliance window.
16. Log in to the VMware ESXi host that the initial VMware vCenter Server is running on and then click Shut down.
17. Access the new VMware vCenter Server VM on the VMware ESXi host and then click Power on.
18. Launch the VMware vCenter console and verify the network configurations.
NOTE: If the configuration information fails to deploy successfully, reconfigure it in the VMware vCenter Server
console.



Figure 98. VMware ESXi host client

Figure 99. Open a browser console to the VM

Verify that the VMware vCenter IP Address, Subnet Mask, and Default Gateway are correct. If not, update them.



Figure 100. IP configuration

Verify that the DNS configuration is correct. If incorrect, update the DNS and hostname.



Figure 101. DNS configuration

Save the changes and exit from the VMware vCenter Server after you modify the IP address or DNS configurations.



Figure 102. Configure Management Network

19. Go to the newly deployed VMware vCenter Server at https://<FQDN>:5480 and click Restore.



Figure 103. VMware vCenter Server Installer

20. Log in as root to the VMware vCenter Server Appliance.


21. Enter the backup file server Location, Username, and Password.
a. Enter the encrypted password for the backup file, if the backup file is encrypted.
b. Enter backup server path/backup_vc_vxm_timestamp/vCenter/sn_hostname/
M_vCenter_version_backup_time as the VMware vCenter backup path.
22. Review the information, click FINISH, and then click OK in the warning message that displays.
23. To ensure a successful VMware vCenter Server restore, wait until the restore process is complete and click CLOSE.
24. To update VMware vCenter Server information in the VxRail database, perform the following:
a. Open a browser and log in to the VMware vCenter MOB.



Figure 104. VMware vCenter Server MOB

b. Click content.

Figure 105. Content

c. Click rootFolder and select the data center.



Figure 106. Data center root folder

d. Click datacenter.

Figure 107. Data center

e. Click host folder.



Figure 108. Host folder

f. Click childEntity and select the VxRail vSAN cluster.

Figure 109. VxRail vSAN cluster

g. Locate the hosts.



Figure 110. Hosts

h. Locate VMware vCenter Server in one host and click VMware vCenter Server Appliance.

Figure 111. VMware vCenter Server Appliance

i. Click summary.

Figure 112. Summary

j. Click config.



Figure 113. Configure

k. Record the VM name and the UID.

Figure 114. VM name and the UID

l. Use SSH to log in to the VxRail Manager, and then enter the following (an optional verification query is shown after these steps):



psql -U postgres vxrail -c "UPDATE system.system_vm SET uuid='[uuid]', moref_id='[vm]' WHERE server_type='VCENTER';"

For example:

psql -U postgres vxrail -c "UPDATE system.system_vm SET uuid='564d8002-6cbb-3e6d-0f39-72d41a01d5a4', moref_id='vm-2022' WHERE server_type='VCENTER';"

m. Log in to the VMware vCenter Server and verify that VxRail is connected.
n. For a dual-stack environment, after you complete the process, log in to the VMware vCenter Server Appliance
management interface (VAMI) as root at https://<vCSA_ip_addr>:5480.
o. Go to Networking and verify that the DNS server list contains at least one IPv4 and one IPv6 address. If a DNS server
is lost, log in to https://<vCSA_ip_addr>, select the VxRail cluster, click the Configure tab, and then select VxRail >
Settings > DNS server.
p. Apply the current DNS server to sync the DNS server on all the hosts and VMs.
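As an optional check after step l, you can confirm that the database record was updated. The following sketch queries only the table and columns referenced above; no additional schema details are assumed:

psql -U postgres vxrail -c "SELECT uuid, moref_id FROM system.system_vm WHERE server_type='VCENTER';"

The output should show the UUID and MoRef ID that you recorded from the MOB.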



15
VxRail Manager file-based backup
Use a backup script on the VxRail Manager VM to archive the VxRail Manager configuration files, database tables, and the logs.
Run the script manually or schedule it for automatic backups. Backups are stored in a folder on the VxRail primary data store.
Apply the backup to restore the VxRail Manager configuration files and database tables onto a newly deployed VxRail Manager
VM.
The vxm_backup_restore.py and vxm_backup_restore_limited_bandwidth.py scripts are used for backups.
The scripts are identical except that the latter is designed for limited Internet bandwidth, for example, a two-node
VxRail cluster at a ROBO site with T1 lines, where the backup process takes time to complete and can impact traffic to
and from the cluster. The vxm_backup_restore.py script uses the VMware vCenter Server as a pass-through, and the
vxm_backup_restore_limited_bandwidth.py script directly accesses the primary datastore on the host for both
upload and download operations.
If you use the vxm_backup_restore_limited_bandwidth.py script, use this task but substitute the script name in
each command (see the example following this introduction). See KB 203882 for instructions to run a sed command before
running the vxm_backup_restore_limited_bandwidth.py script. If you have a dynamic node cluster and the primary
storage type is VMware vSAN HCI mesh, provision the primary storage first. See KB 185917.
This procedure applies to VxRail clusters running VxRail 8.0.x and later. See the VxRail 8.0 Support Matrix for a list of
supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
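For illustration only (the authoritative steps, including the required sed command, are in KB 203882): substituting the script name means running the limited-bandwidth script with the same options that are shown for vxm_backup_restore.py throughout this chapter. For example, to create a backup:

For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore_limited_bandwidth.py -b

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore_limited_bandwidth.py -b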

Back up the VxRail Manager manually


If you have a limited bandwidth environment, use the vxm_backup_restore_limited_bandwidth.py script.

About this task


Wait for a few minutes for the backup to finish. Services are available after the backup completes.
CAUTION: Access to some VxRail features during the backup process may not be available because the script
restarts services.

Steps
1. To access the VxRail Manager bash shell, log in to the VMware vSphere Web Client as administrator and perform the
following:
a. From the Inventory icon, select the VxRail Manager VM.
b. On the Summary tab, click LAUNCH REMOTE CONSOLE.
c. Log in to the VxRail Manager as root or log in to the VxRail Manager VM as mystic and su to root.
2. You can create a backup with or without VxRail Manager logs. Select one of the following:
● To create a backup without the VxRail Manager logs, enter:
cd /mystic/vxm_backup_restore/

For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -b

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -b

NOTE: If your environment has limited bandwidth, use the vxm_backup_restore_limited_bandwidth.py script.

● To create a backup that includes VxRail Manager logs, enter:


cd /mystic/vxm_backup_restore/



For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -b --keeplog

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -b --keeplog

NOTE: You may not be able to access some VxRail features during the backup process because the script restarts
services. Wait two to three minutes until the backup finishes and the services are ready to be used.

3. To verify that the backup is complete and to list the backup copies, enter:
cd /mystic/vxm_backup_restore/

For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -l

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -l

4. (OPTIONAL) To list the existing services, enter:


cd /mystic/vxm_backup_restore/

For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -d

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -d

5. The following steps are required only after a first run and after an upgrade. After the first backup, manually back
up the recoveryBundle.zip to the primary data store. For an upgraded VxRail, replace the old
recoveryBundle.zip with the new one.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
6. To back up the recoveryBundle.zip, SSH in to the VxRail Manager VM, log in as mystic and su to root.
a. For the VMware vSAN cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

b. For the dynamic node cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@[hostIP]:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@[hostIP]:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/



Back up VxRail Manager
Back up VxRail Manager from the cluster.

Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Under VxRail Integrated Backup, select the STATUS tab.
4. Click CREATE BACKUP.

Configure automatic backup for the VxRail Manager


VxRail Manager performs backups according to the defined backup policy.

Prerequisites
Before you schedule the backup, manually back up the recoveryBundle.zip file to the primary data store. This step is
required only after a first run and after an upgrade. For an upgraded VxRail, replace the old recoveryBundle.zip file
with the new one.
1. To back up the recoveryBundle.zip to the primary data store manually, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
2. To create the backup folder on the primary data store, log in to the host:

ssh root@<host_ipaddr>

If lockdown mode is enabled, enter:

ssh vxrailmanagement@<host_ipaddr>

a. For the VMware vSAN cluster, enter:

# mkdir /vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

b. For the dynamic node cluster, enter:

# mkdir /vmfs/volumes/<primary storage name>/VxRail_backup_folder_*****/
3. Use SSH to log in to the VxRail Manager VM as mystic and su to root.
a. For the VMware vSAN cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/

b. For the dynamic node cluster, enter:

# scp /data/store2/recovery/recoveryBundle.zip root@<host_ipaddr>:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/

If lockdown mode is enabled, enter:

# scp /data/store2/recovery/recoveryBundle.zip vxrailmanagement@<host_ipaddr>:/vmfs/volumes/<primary storage name>/VxRail_backup_folder_******/

Next steps
To stop the automatic backup, enter:
cd /mystic/vxm_backup_restore/

For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -c --period manual

For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -c --period manual
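A minimal sketch of the reverse operation, assuming the same -c --period option also accepts schedule values (the supported values are not documented in this guide; verify them in the KB documentation or the script help before use):

cd /mystic/vxm_backup_restore/
mcp_python vxm_backup_restore.py -c --period <schedule>

Here <schedule> is a hypothetical placeholder for a supported period value; --period manual, shown above, returns the backup policy to manual backups only.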



16
Replace and add VxRail hardware
You can use VxRail Manager to add and remove disks. Qualified VxRail customers can use SolVe Online for VxRail to add or
replace other customer-replaceable components.
The automated replace and add disk workflow is not supported with VMware vSAN ESA.
Generate step-by-step hardware component procedures in SolVe Online for VxRail before replacing any hardware components
or performing upgrade procedures. For hardware-specific information, see Dell Technologies Support.
See the following KB articles before replacing or adding a disk:
● Dell VxRail: VxRail SAS3 to SAS4 Drive Transition
● Dell VxRail: Confirm vSAN object health status before replacing disk

CAUTION: If SolVe Online for VxRail is not used to generate procedures, VxRail is at risk for potential data loss.

Replace VxRail disks with VxRail Manager


The following requirements apply when you replace disks in VxRail:
● When you replace a drive in a disk group, you can use a larger capacity drive.
● When you expand a disk group, you can use larger capacity drives than what exists in a disk group.
● Do not use VMware vCenter Server or any other tool with the automated hardware replacement process.
● With the automated hard drive replacement workflow, only one hard drive can be replaced at a time, with no other devices.
● With the automated SSD replacement workflow, only one SSD can be replaced at a time, with no other devices.

Add disks with VxRail Manager


You add an HDD or SSD in the exact location that you selected in VxRail Manager. Using the automated HDD or SSD add
workflow, only one hard drive or SSD can be added at a time (with no other devices). Automated disk addition for VxRail
version 8.0.0 is not supported for VMware vSAN ESA. For VMware vSAN ESA disk addition procedures, contact Dell Support.



17
Set up external storage for a dynamic node
cluster
For a dynamic node cluster, you must use external storage along with the VxRail onboard storage resources.
See the appropriate Support Matrix on the Dell Technologies Support Site for the supported storage of dynamic clusters.
VxRail supports the following:
● NFS
● VMFS over iSCSI/FC/FC-NVMe/TCP-NVMe
● vVol over NFS/iSCSI/FC
● FC-NVMe over vVol
● PowerFlex
● VMware vSAN HCI-Mesh
You can use the following external storage arrays to provide primary storage for your VxRail cluster:
● Dell Unity
● PowerStore
● PowerMax
● PowerFlex
● VMware vSAN HCI-Mesh
For more information about the configuration of the external storage, see VxRail Configure External Storage of Dynamic Node
Cluster.
External storage does not impact the following VxRail features:
● Upgrades
● Reset
● Cluster shutdown
You can scale VxRail compute resources separately from storage capacity to improve overall hardware usage levels.



18
Upgrade your VxRail
Use SolVe Online for VxRail to upgrade firmware, software, or hardware. You can also expand your VxRail using procedures
generated in SolVe.
See SolVe Online for VxRail for a complete set of upgrade procedures.
See the Update Advisor Report (UAR) for more information about upgrades and the life cycle management process.

Firmware upgrades
You can upgrade firmware on VxRail models G560/G56F. For other VxRail models, contact Dell Support. You can also upgrade
firmware on the chassis.

Hardware upgrade/expansion
You can upgrade or expand the following components:
● Convert a 2-node cluster to a 3-node cluster
● Expand a compute node
● Expand a satellite node
● Expand a capacity drive (HDD/SSD)
● Add a disk group
● Upgrade an SSD
● Expand a manual disk
● Add an NVMe disk
● Upgrade system memory
● Upgrade a NIC
● Upgrade the GPU
● Upgrade from a TPM 1.2 to TPM 2.0 module

Software upgrade
Select your VxRail model to upgrade your software. When you perform a software upgrade, you download a bundle that includes
VxRail Manager which performs the upgrade. VxRail Manager assesses the current software version running on your VxRail and
identifies the differences. Only the sections that are identified as different are upgraded with the new version software.

Upgrade workflow for LCM


For VxRail 7.0.450 and VxRail 8.0.110 and later updates, the LCM workflow provides planning, upgrade remediation (if
needed), execution, and validation.
The LCM workflow includes a planning component and an upgrade readiness indicator to improve upgrades.

Figure 115. LCM workflow

For connected clusters, VxRail Manager automatically retrieves the recommended upgrade bundle information with the latest
upgrade prechecks. Upgrade prechecks run every 24 hours against the desired state of the target bundle. An upgrade readiness
indicator displays green, yellow, or red to indicate the level of remediation that is required before the upgrade is attempted.
Detailed upgrade precheck results are available in the Update Advisor Report. The Update Advisor Report describes the impact
of updating to the target VxRail version.
The Update Advisor Report is comprehensive and exportable and provides the following features:
● Cluster update readiness status
● Cluster update duration estimates
● Last VxRail backup insights
● Link to release notes
● Upgrade precheck outputs with KB links for remediation help
● Analysis of component-level drift
● Custom component information
See Generate the Update Advisor Report and Update Advisor Report for more information.
For unconnected clusters, go to Dell Support to download two small upgrade files instead of the full upgrade bundle. To
generate the Update Advisor Report, locate and download the files for the latest upgrade prechecks and the metadata file of
the target bundle, and then upload them using the UI workflow in local update.
When the cluster is ready for an update, you must download the full upgrade bundle.

LCM modes
VxRail has the following LCM modes which are abstracted by the VxRail API:
● Legacy LCM (ESXCLI) Mode: VxRail orchestrates life cycle management and the continually validated state using the
ESXCLI.
● vLCM Mode (Recommended): VxRail orchestrates life cycle management and the continually validated state using the
VMware vSphere LCM (vLCM) API. This mode provides additional update capabilities, including Quick Boot and ESXi Live
Patch. If vLCM mode is enabled, you cannot revert to Legacy LCM (ESXCLI) Mode.
For VxRail 7.0.240 and later, there are limited cases where VMware vLCM provides benefits. For VxRail 8.0.210 and later,
additional use cases can leverage vLCM enablement for more capabilities.

Figure 116. LCM modes

Upgrade components that are not managed by VxRail


The VxRail UI workflow allows updates to third-party GPUs, FC HBAs, or VMware NSX components during a single
maintenance cycle combined with a VxRail LCM update.
The following table describes the release support for this feature:

Table 29. Components and VxRail releases

Custom component                             VxRail release         Description
NVIDIA GPU                                   7.0.240                Requires VMware vLCM to be enabled.
FC HBA                                       7.0.240 or 8.0.110     Any LCM mode.
NVIDIA and FC HBAs                           7.0.450 or 8.0.110     Any LCM mode.
VMware NSX/Tanzu VIBs (default LCM mode)     7.0.450 or 8.0.110     VMware NSX/Tanzu VIBs can be staged with the custom component UI when in default LCM mode.
VMware NSX/Tanzu VIBs (default vLCM mode)    7.0.450 or 8.0.110     If VMware NSX is already installed before you enable VMware vLCM, see KB 190928. If VMware NSX is installed after you enable VMware vLCM, no further action is required; VMware NSX Manager manages all aspects of VMware NSX life cycle management.

Generate the Update Advisor Report


Before you perform a software update, generate an Update Advisor Report to provide valuable information that supports the
update.

About this task


You can generate an Update Advisor Report from VxRail Manager as shown in this task. You can also generate an Update
Advisor Report from Internet Updates, where you can select upgrade targets. If you select upgrade targets, an Update Advisor
Report is generated automatically each day. By default, Internet updates run the Update Advisor Report against the
recommended target state every 24 hours.

Steps
1. From the VMware vSphere Web Client, under the Inventory icon, select your VxRail cluster.
2. Click the Configure tab, then under VxRail, click Updates.
3. Under Updates, click LOCAL UPDATES. Then under VxRail Upgrades, click Plan and Update (Recommended).
4. Under Installer Metadata File, select a metadata file and click UPLOAD. Click UPLOAD again on the confirmation window.
5. After the file is uploaded from the Create Update Advisor Report window, click CREATE.
6. When the report finishes generating, click NEXT, then select the LOCAL UPDATES tab and click CREATE REPORT.

Update Advisor Report


You can access the Update Advisor Report from the Updates tab of VxRail Manager. The report has four sections: the report
summary, VxRail components, the VxRail precheck version, and VxRail custom components. The report is fully exportable in
HTML format.

Report Summary
The report summary is comprehensive and contains the following:
● Cluster update readiness status
● Report timestamp
● Cluster name, current and target states
● Update Type
● Cluster update duration estimates
● Insights from the last VxRail backup
● Link to release notes



Figure 117. Report summary

VxRail Components
The following figure shows VxRail components:

Figure 118. VxRail components

The following figure shows expanded details for VxRail components:



Figure 119. VxRail components - expanded view

The following figure shows VxRail components with group by component disabled:

Figure 120. VxRail components with group by component disabled

VxRail Precheck Version


The VxRail Precheck Version section provides the upgrade precheck outputs with KB links for remediation help:



Figure 121. VxRail precheck

When you expand details, the following is displayed:

Figure 122. VxRail precheck - expanded

The following additional details are provided:



Figure 123. Additional information

When you disable Group by component, the following is displayed:

Figure 124. VxRail precheck with group by component disabled

VxRail Custom Components


The custom component information is displayed:



Figure 125. VxRail custom components



19
Broadcom products used with VxRail
Broadcom products can be ordered with VxRail or purchased separately.
All documentation is provided on VMware Docs by Broadcom. The following table provides links to Broadcom documentation:

Table 30. Broadcom components


VMware product                        Documentation
VMware Horizon                        ● VMware Horizon Documentation
                                      ● VMware Horizon Release Notes
VMware vSphere Remote Office and      ● VMware Validated Design Documentation
Branch Office                         ● SDDC Architectures
                                      ● Overview of ROBO SDDC
VMware Cloud Foundation               ● VMware Cloud Foundation Documentation
                                      ● VMware Cloud Foundation Release Notes

