Dell VxRail 8.0.x Administration Guide
October 2024
Rev. 11
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2023 - 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Revision history
Chapter 1: Introduction
Dell Technologies Support
Register for a Dell Technologies Support account
Support resources
Use SolVe Online for VxRail procedures
Locate your VxRail serial number
Locate your VxRail serial number in VxRail Manager
Locate your physical VxRail serial number
Access VxRail content using the QRL
Convert one VMware VDS to two VMware VDS
Identify the port groups
Convert one VMware VDS with two uplinks to two VMware VDS with two uplinks
Convert one VMware VDS with four uplinks to two VMware VDS with four uplinks/two uplinks
Convert one VMware VDS with four uplinks to two VMware VDS with two uplinks
Create a VMware VDS and assign two uplinks
Add existing VxRail nodes to VDS2
Create the port group for VMware vSAN in VDS2
Create port group for VMware vSphere vMotion in VDS2
Unassign uplink3 in VDS1
Assign the released VMNIC to uplink1 in VDS2
Migrate the VMware vSAN VMkernel from VDS1 to VDS2 port groups
Migrate the VMware vMotion VMkernel from VDS1 to VDS2 port groups
Unassign uplink4 in VDS1
Assign the released VMNIC to uplink2 in VDS2
Enable DPU offloads on VxRail
Enable the DPU offload after Day 1 VxRail deployment
Add a VxRail node
Remove VxRail nodes
Remediate the CPU core count after node addition or replacement
Update the cluster status
Trigger a rolling update
Repoint the VMware vCenter Server to a VMware vCenter Server in a different domain
Repoint a single VMware vCenter Server node to an existing domain
Back up each VxRail node (optional)
Repoint the VMware vCenter Server A of domain 1 to domain 2
Update the VMware vCenter Server SSL certificates from VMware vCenter Server B
Refresh the node certificates in the VMware vCenter Server A
Repoint the VMware vCenter Server node to a new domain
Submit install base updates for VxRail
View APEX AIOps Infrastructure Observability information in VxRail
Identify the load-balancing policy on the switches
Configure the LACP policy on the VxRail VDS
Verify the port flags
Migrate the uplink to a LAG port
Migrate the LACP policy to the standby uplink
Move the second VMNIC to LAG
Verify LAG connectivity on VxRail nodes
Verify that LAG is configured in the VMware VDS
Enable dynamic link aggregation for four ports on a VxRail network for a VxRail-managed VMware VDS
Verify the VxRail version on the VxRail cluster
Verify the health state of the VxRail cluster
Verify the VMware VDS health status
Verify the VMware VDS uplinks
Confirm uplink isolation of the VxRail port group
Identify the NICs for LAG
Identify NIC assignment to node ports
Identify switch ports for LAG
Prepare the switches for LAG
Configure switch ports for link aggregation
Configure the LACP policy on the VxRail VDS
Verify that port flags are all individual on the switch
Migrate the LACP policy to standby uplink
Change LAG to the active uplink
Migrate the active uplink to a link aggregation port
Verify link aggregation connectivity
Enable dynamic link aggregation for four ports on a VxRail network for a customer-managed VMware VDS
Verify the VxRail version on the VxRail cluster
Verify the health state of the VxRail cluster
Verify the VMware VDS health status
Verify the VMware VDS uplinks
Confirm isolation of the VxRail port group
Identify the NICs for LAG
Identify NIC assignment to node ports
Identify switch ports for LAG
Prepare the switches for link aggregation
Identify the load-balancing policy on the switches
Configure the LACP policy on the VxRail VDS
Migrate the LACP policy to standby uplink
Migrate an unused uplink to a LAG port
Configure the first switch for LAG
Verify LAG connectivity on the switch
Verify link aggregation connectivity on VxRail nodes
Move VMware vSAN or VMware vSphere vMotion traffic to LAG
Verify that LAG is configured in the VMware VDS
Move the second VMNIC to LAG
Configure the second ToR switch for LAG
Verify LAG connectivity on the second switch
Verify LAG connectivity on VxRail nodes
Enable network redundancy across NDC and PCIe ports
Verify that the VxRail version supports network redundancy
Verify that the VxRail cluster is healthy
Verify the VxRail physical network compatibility
Verify the physical switch port configuration
Verify active uplink on the VMware VDS port groups post migration
Add uplinks to the VMware VDS
Migrate the VxRail network traffic to a new VMNIC
Set the port group teaming and failover policies
Remove the uplinks from the VMware VDS
Reset the VMware vSphere alerts for network uplink redundancy
Enable VMware vSAN RDMA in the VxRail cluster (VxRail 8.0.210 and later)
Enable VMware vSAN RDMA in the VxRail cluster (VxRail versions earlier than 8.0.210)
Migrate the satellite node to a VMware VDS
Capture the satellite node VMware standard switch settings
Create the VMware VDS for the satellite node
Set the MTU on the VMware VDS
Create the VMware VDS port groups for the satellite node
Migrate the satellite node to the new VMware VDS
Modify the VMware VDS port group teaming and failover policy
Optimize cross-site traffic for VxRail
Configure telemetry settings using curl commands
Configure telemetry settings from VxRail Manager
Delete the satellite node bundles from VxRail Manager
Set the PostgreSQL log destination to the system log
Renew the PostgreSQL certificate
Chapter 14: Restore the VMware vCenter Server from a file-based backup
Chapter 17: Set up external storage for a dynamic node cluster
Revision history
Date Revision Description of change
October 2024 11 License information updated.
August 2024 10 Updated for VxRail 8.0.300.
July 2024 9 Updated for VxRail 8.0.230.
May 2024 8 Updated with CloudIQ rebranding changes.
May 2024 7 Updated with licensing information.
March 2024 6 Updated for VxRail 8.0.210.
November 2023 5 Updated for VxRail 8.0.200 and subscription licensing.
August 2023 4 Updated with additional procedures from SolVe.
March 2023 3 Updated for VxRail 8.0.020.
January 2023 2 Updated for VxRail 8.0.010.
January 2023 1 Initial release for VxRail 8.0.000.
Chapter 1: Introduction
This document describes some of the administrative tasks that you can perform for VxRail.
This document is designed for readers who are familiar with:
● Dell Technologies systems and software
● Broadcom virtualization products
● Data center appliances and infrastructure
● SolVe Online for VxRail
This document is intended for customers, field personnel, and partners who want to manage and operate VxRail clusters.
See the VxRail Documentation Quick Reference List for a complete list of VxRail documentation.
Register for a Dell Technologies Support account
Steps
1. Go to Dell Technologies Support.
2. Click Create an Account and follow the steps to create an account.
It may take approximately 48 hours to receive a confirmation of account creation.
Support resources
Support resources are available for your VxRail.
Use the following resources to obtain support for your VxRail:
● In the VMware vSphere Web Client, select VxRail. Use the Support functions on the VxRail Dashboard.
● Go to Dell Technologies Support.
Use SolVe Online for VxRail procedures
To avoid potential data loss, always use SolVe Online for VxRail to generate procedures before you replace any hardware
components or upgrade software.
CAUTION: If you do not use SolVe Online for VxRail to generate procedures to replace hardware components or
perform software upgrades, data loss may occur for VxRail.
You must have a Dell Technologies Support account to use SolVe Online for VxRail.
Locate your VxRail serial number in VxRail Manager
Steps
1. On the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Monitor tab.
3. Expand VxRail, and click Physical View to view the serial number.
Locate your physical VxRail serial number
Steps
1. On the upper right corner of the VxRail chassis, locate the luggage tag.
2. Pull out the blue-tabbed luggage tag.
3. Locate the serial number label on the pull-out tag.
The Product Serial Number Tag (PSNT) is the 14-digit number that is on the front edge of the luggage tag.
Access VxRail content using the QRL
Steps
1. On the VxRail luggage tag, locate the QRL or Service Tag.
Figure 1. QRL code
2. Using the camera on your phone or laptop, scan the QRL code on the Service Tag to access information specific to your
VxRail. You can also go to qrl.dell.com and enter the Service Tag information.
Chapter 2: VxRail components and features
VxRail is built on PowerEdge servers and uses HCI software to provide virtualization, compute, and storage in a scalable
system. VxRail provides centralized management, orchestration, and life cycle management. VxRail can be rapidly deployed
into an existing data center environment, where it is ready to deploy applications and services.
VxRail leverages VMware vSphere and VMware vSAN to provide server virtualization and software-defined storage. Through
the logical and physical networks, individual nodes act as a single system providing scalability, resiliency, and workload balance.
The VxRail software bundle in the compute nodes contains VxRail Manager, VMware vCenter Server, VMware vSAN, and
VMware vSphere. Broadcom components are installed with temporary licenses that expire after 60 days.
For more information, see documentation at VMware Docs by Broadcom.
The following table provides an overview of VxRail components and features:
VMware vSAN ESA: VMware vSAN Express Storage Architecture (ESA) is supported for VxRail 8.0.x. VxRail 8.0.010 does not
support VMware vSAN ESA.
VxRail dynamic node cluster: A dynamic node VxRail cluster starts with a minimum of two nodes and can scale to a maximum
of 64 nodes. Dynamic clusters do not have local drives and instead use the following external storage resources to support
workload and applications:
● PowerStore, PowerMax, and Dell Unity
● PowerFlex
● Another VMware vSAN cluster through VMware vSAN HCI Mesh
● FC
VxRail stretched cluster with VMware vSAN: Supports synchronous I/O on a local VMware vSAN data store on two sites that
are separated geographically. The VMware vSAN stretched cluster enables site-level failure protection with no loss of service
or loss of data. The VMware vSAN stretched cluster requires a witness for monitoring and strict network guidelines. See the
VxRail Architecture Overview for more information.
VxRail 2-node cluster with VMware vSAN: Supports small-scale deployments with reduced workload and availability
requirements with two nodes. The two-node cluster also requires a witness and strict network guidelines. You can convert a
two-node ROBO cluster to a standard VxRail three-node cluster and then expand to 64 nodes.
NOTE: VxRail 8.0.010 does not support two-node ROBO clusters.
Satellite node: A centrally located VxRail cluster monitors and manages satellite nodes that are deployed locally and
remotely. Satellite nodes use the same PowerEdge servers as the other VxRail cluster types and the same engineering,
qualification, and manufacturing processes. VxRail Manager supports the software LCM of satellite nodes. Satellite nodes
require a single IP address to enable connectivity to a central cluster.
See the Dell VxRail Network Planning Guide for more information about VxRail clusters.
Automatic deployment
After you set up your system and configure network settings, VxRail Manager automates the installation and configuration of all
nodes into a cluster.
Service connectivity
Service connectivity provides secure, automated access between Dell Technologies Support and VxRail. You can enable service
connectivity in direct connection mode or through an external secure connection gateway. Enable remote support connectivity
for VxRail using the VMware vSphere Web Client. Remote support connectivity is required for APEX AIOps Infrastructure
Observability. Using service connectivity, you can:
● Provide usage data to the Dell Technologies customer experience improvement program.
● Determine the level of data about your VxRail environment that is collected. Environmental usage, performance, capacity,
and configuration information are the different types of data that are collected.
Dell Technologies uses this information to improve your experience with VxRail.
Expand a cluster
You can use the VxRail automated installation and scale-out features or multinode expansion to expand your clusters. Use the
automated installation and scale-out features to expand a cluster from three nodes. Use multinode expansion for higher compute
and storage capacity, and to add up to six nodes simultaneously.
VxRail supports expansion of the following clusters:
● The VxRail VMware vSAN cluster configuration is three to 64 nodes. Expansion of a cluster through node addition may lead
to stranded assets where excess compute and storage resources cannot be shared outside of the cluster. If your workloads
require a precise balance of compute and storage resources, use a dynamic cluster.
● The dynamic node cluster configuration is two to 64 nodes.
● The VxRail 2-node ROBO cluster configuration consists of two nodes. You can convert a two-node ROBO cluster into a
standard VxRail 3-node cluster and expand to 64 nodes.
Expand a cluster
The following actions are not permitted when adding a node in a VxRail cluster:
● Add a VIB to the cluster, such as RecoverPoint for VMs, VMware NSX, NVIDIA GPU, or other third-party VIBs.
● Configure jumbo frames on the cluster.
● Enable VMware vSAN encryption.
● Install external storage targets in the cluster, such as iSCSI, NFS, or FC.
● Install an additional VMware VDS.
● Configure a stretched cluster.
● Perform security hardening on the cluster.
If any of these changes were made after the initial cluster deployment, place the new node in maintenance mode and apply
matching settings.
Service connectivity
You can verify your VxRail connectivity heartbeat, which is the last time that your system has communicated using service
connectivity. You can also review the configuration data that was sent to service connectivity.
Your VxRail can use service connectivity by connecting directly to the Dell backend (Dell Support Team that handles requests)
or through secure connect gateway. Use VxRail Manager to enable service connectivity on your VxRail using VMware vSphere
Web Client.
Steps
1. Shuts down related VMs and services.
2. Performs system health diagnostics and maintenance mode diagnostics.
3. Indicates any errors or conditions that prevent shutting down.
Prerequisites
Before adding a VxRail host to a cluster, verify that the nodes are the same type, family, and configuration in the VMware vSAN
ESA initial release.
Steps
To add a node to the cluster, see the Dell VxRail 8.0.x Admin Guide.
Steps
To remove a VxRail node from a cluster, generate a step-by-step procedure using SolVe Online for VxRail.
Configure iDRAC
Configure iDRAC for a VxRail host.
Steps
1. In the VMware vSphere Web Client, select the Inventory icon.
2. Select a host and click the Configure tab.
3. Select VxRail > iDRAC Configuration.
4. Click Edit next to IPv4 Settings or IPv6 Settings.
5. Modify the settings and click Apply.
6. Click Edit next to VLAN Settings.
7. Modify the settings and click Apply.
8. To add an iDRAC user, click Add, enter user information, and click Apply.
Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster on which you want to configure automatic renewal of the VxRail Manager certificate.
3. Click the Configure tab.
4. Select VxRail > Certificate in the left window.
5. Click EDIT AUTOMATED RENEWAL.
6. In the Edit Automated Renewal window, click Enable or Disable.
7. Enter the Certificate Authority Server URL, Challenge Password, Certificate Validation Frequency and Renew
Certificate Before Expiration, and then click APPLY.
Prerequisites
Verify that the VxRail cluster is using the internal DNS. To convert the internal DNS to external DNS, see Change the IP address
of the DNS server.
Use the Python script to add or remove the upstream DNS server:
● Download the upstream_dns_operation.py script (.zip).
● Extract the file from DL100623_upstream_dns_operation.zip.
Steps
1. Log in to the VxRail Manager as mystic and su to root.
2. To add the upstream DNS, enter:
python upstream_dns_operation.py add -s <upstream_dns_ipaddress>
The new FQDN must be resolved by the upstream DNS server and not by the internal DNS server.
4. To remove the upstream DNS server, enter:
python upstream_dns_operation.py remove -s <upstream_dns_ipaddress>
6. DNS queries for an external domain or a different subnet that is hosted on the upstream DNS server are not forwarded through
VxRail Manager, which acts as the internal DNS server. This behavior is set by default to enhance security. To add the upstream
DNS server on the VMware vCenter Server and VMware ESXi host manually, perform the following steps:
For more information, see KB 226207.
a. To add an upstream DNS server on the VMware vCenter Server, go to https://<vCenter_Server_IP>:5480.
b. Select Networking and click Edit.
c. On the Edit Network Settings window, under Edit Settings, select Hostname and DNS.
d. Click Enter DNS settings manually and enter the upstream DNS server after the internal DNS server IP address. Use commas to
separate the addresses. Click Next and Finish, and wait until the DNS is updated.
e. SSH to the VMware vCenter Server. Use nslookup to ensure that the upstream DNS server can be queried from the
VMware vCenter Server.
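The following is a minimal verification sketch for the nslookup check in the previous sub-step, assuming SSH access to the VMware vCenter Server; the hostnames are placeholders and not part of the procedure:
# Query a record that is hosted on the upstream DNS server
nslookup fileserver.corp.example.com
# Query a VxRail record that is still served by the internal DNS server
nslookup vxrail-node01.vxrail.local
If both queries resolve, the upstream DNS server can be queried from the VMware vCenter Server.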
Prerequisites
Verify that the VxRail cluster is using the internal DNS. To convert the internal DNS to external DNS, see Change the IP address
of the DNS server.
Steps
1. Log in to the VMware vSphere Web Client as administrator.
2. From the Inventory icon, select the cluster and click the Configure tab.
3. Click VxRail > Settings.
4. Under DNS Server, click ACTIONS > Edit Upstream DNS.
5. Enter the IP address and click APPLY.
6. DNS queries for an external domain or a different subnet that is hosted on the upstream DNS server are not forwarded through
VxRail Manager, which acts as the internal DNS server. This behavior is set by default to enhance security. To add the upstream
DNS server on the VMware vCenter Server and VMware ESXi host manually, perform the following steps:
For more information, see KB 226207.
a. To add an upstream DNS server on the VMware vCenter Server, go to https://<vCenter_Server_IP>:5480.
b. Select Networking and click Edit.
c. On the Edit Network Settings window, under Edit Settings, select Hostname and DNS.
d. Click Enter DNS settings manually and enter the upstream DNS server after the internal DNS server IP address. Use commas to
separate the addresses. Click Next and Finish, and wait until the DNS is updated.
e. SSH to the VMware vCenter Server. Use nslookup to ensure that the upstream DNS server can be queried from the
VMware vCenter Server.
NOTE: The maximum number of DNS servers for IPv4 and IPv6 environments is two. Ensure that all records are in the DNS
servers.
NOTE: The maximum number of DNS servers is three for dual-stack environments and must contain one IPv4 and one IPv6
address. Ensure that all records are in the DNS servers.
This procedure applies to VxRail 8.0.210 and later clusters that are managed by a VxRail-managed VMware vCenter Server or a
customer-managed VMware vCenter Server with an external DNS server.
For VxRail versions earlier than 8.0.210, to repoint DNS servers using the public API, see the VxRail API documentation.
See the VxRail 8.x Support Matrix for a list of the supported versions.
This procedure is intended for customers, Dell Technologies service providers who are authorized to work on a VxRail Cluster,
and VxRail administrators.
Steps
1. To convert an internal DNS to an external DNS, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under DNS Server, click ACTIONS > Convert to External DNS Server.
e. Enter the IP address and click APPLY.
2. To repoint DNS servers, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under DNS Server, click Edit.
e. Enter the IP address and click APPLY.
Go to SolVe Online for VxRail to generate a VCF procedure to change the IP address of the NTP server.
Steps
1. For VxRail 8.0.210 and later, to repoint an NTP server, perform the following:
a. Log in to the VMware vSphere Web Client as administrator.
b. Select the cluster and click the Configure tab.
c. Click VxRail > Settings.
d. Under NTP Server, click Edit.
e. Enter the IP address or FQDN and click APPLY.
For dual-stack environments, provide at least one IPv4 and one IPv6 address, or FQDN, which can be resolved to one
IPv4 and one IPv6 address.
2. For versions earlier than VxRail 8.0.210, to repoint NTP servers using the public API, see the VxRail API documentation.
Prerequisites
Before you run the script, create a snapshot of the VxRail Manager VM.
1. Log in to the VMware vSphere Web Client and select the Inventory icon.
2. To take a snapshot of the VxRail Manager VM, right-click VxRail Manager > Snapshots > Take Snapshot.
3. Enter a name and click OK.
Steps
1. See KB 225002 to download the attached ZIP file and rename it to rke2-scripts.zip.
2. Upload the .zip file to /home/mystic/ in VxRail Manager and extract the rke2-scripts.zip.
3. Using SSH, log in to the VxRail Manager as mystic.
4. To switch to root, enter:
su root
5. Open the dummy0 interface configuration file for VxRail Manager:
/etc/sysconfig/network/ifcfg-dummy0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='172.28.177.1/32'
6. To change the IP address of the dummy0 interface for VxRail Manager from 172.28.177.1/32, update the IPADDR field
with a new IP address.
Wait for a few seconds and verify that the network is restarted.
8. To check the RKE2 version, enter:
# rke2 -v
9. To restart the RKE2 cluster and run the RKE2 precheck, enter:
● If the RKE2 version is v1.21.x, enter:
# bash /usr/local/bin/rke2-precheck.sh
● If the RKE2 version is later than v1.21.x, enter:
# bash /home/mystic/rke2-scripts/rke2-precheck_fix.sh
10. To change the CIDR of the RKE2 cluster, enter:
By default, the VxRail Manager is configured with CIDR for RKE2 services and pods with IP addresses as
172.28.176.0/24 and 172.28.175.0/24. If there is an IP address conflict with your LAN configuration, specify
another IP address range for the CIDR for RKE2.
Where:
-c, --cluster-cidr="<xx.xx.xx.xx/xx>"
-s, --service-cidr="<xx.xx.xx.xx/xx>"
If the VxRail Manager VM is damaged by the RKE2 script, go to Restore the VxRail Manager VM to restore the VM.
Prerequisites
You must have created a snapshot to restore the VM.
Steps
1. Log in to the VMware vSphere Web Client and select the Inventory icon.
2. Select the VxRail Manager VM and click the Snapshots tab.
3. Go to the snapshot that you created in the snapshot tree and click Revert.
Account: VMware vCenter Server SSO user
Description: Users in the vsphere.local domain can change their VMware vCenter Server SSO passwords from the VMware
vSphere Web Client. The default user account name is administrator@vsphere.local for customer-managed and VxRail-managed
VMware vCenter Servers. You cannot change a password that has expired. If your password expired, contact the Administrator
group.
How to change: For the vsphere.local domain, use the VMware vSphere Web Client. For other domains, see Change the VMware
vCenter Server SSO password.
Account: VMware vCenter Server Appliance root user
Description: The default root password for the VMware vCenter Server instance is set during deployment. The default password
expires after 90 days.
How to change: See Change the VMware vCenter Server Appliance root password. You can change the expiry time for an
account by logging in as root to the VMware vCenter Server Bash shell and running chage -M number_of_days -W
warning_until_expiration user_name. To increase the expiration time of the root password to infinity, run the
chage -M -1 -E -1 root command.
Account: VMware ESXi host management user
Description: See ESXi Passwords and Account Lockout for more information.
How to change: See Change the password of the VMware ESXi host management user.
Account: VxRail management user
Description: You can change the default password of the VxRail management user. Follow the requirements for changing the
password.
How to change: See Change the password of the VxRail management user.
Account: VxRail root user
Description: Default user passwords are applied during installation and deployment.
How to change: Use the passwd command.
Account: VxRail mystic user
How to change: Use the passwd command.
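The following is a minimal sketch of the chage commands from the table above, run as root in the VMware vCenter Server Bash shell; the values 90 and 14 are placeholders, not recommendations:
# Expire the root password after 90 days and warn 14 days before expiration
chage -M 90 -W 14 root
# Remove the expiration of the root password entirely
chage -M -1 -E -1 root
# Display the current password aging settings for root
chage -l root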
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Click the Inventory icon.
3. Select the target VMware ESXi host and click the Configure tab.
4. Under VxRail, click iDRAC Configuration.
5. In iDRAC Settings, click Edit for users.
6. In the Edit Credentials wizard, enter the password information and click Apply.
Prerequisites
For more information, go to VMware Docs by Broadcom and search for ESXi Passwords and Account Lockout.
Steps
1. Log in to the VMware ESXi host client as root.
2. Select Host > Manage and then select Security & users > Users.
3. Select the VxRail management user and click Edit user.
4. In the Edit User window, enter a password in the Password window and click Save.
5. To apply the password changes, in the VMware vSphere Web Client, perform the following:
a. Under the Inventory icon, select the target cluster, and click the Configure tab.
Steps
1. For VxRail versions earlier than 8.0.210, to change and apply the VxRail management user password, perform the
following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Click Administration from the main menu.
c. Under Single Sign On, click Users and Groups.
d. From the Domain drop-down list, select vsphere.local.
e. Select the VxRail Management username and click EDIT.
f. In the Edit User window, enter and confirm the password and then click Save.
g. To apply the changes, select the target cluster, and click the Configure tab.
h. Under VxRail, click System.
i. Click Update passwords.
j. In the Update Passwords wizard, enter the new password and click SUBMIT, and then click FINISH.
2. For VxRail 8.0.210 and later, the VxRail password management UI is supported for a VxRail-managed VMware vCenter
Server.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the target cluster and click the Configure tab.
c. Under Single Sign On, click Users and Groups.
d. Under VxRail, click Security > Credentials.
e. Click EDIT.
f. In the Edit Credentials wizard, enter the new password and click Apply.
Prerequisites
Verify that two VMware vSAN clusters are deployed in the same VMware data center.
1. Log in to the VMware vSphere Web Client as an administrator.
2. From the Inventory icon, select the cluster and click the Configure tab.
3. Under Remote Datastores, verify that two VMware vSAN clusters are deployed in the same VMware data center.
Steps
1. If both clusters are on an L3 network, to ensure that the VMware vSAN override gateway is set for the server cluster nodes,
perform one of the following:
● If the server cluster is running VxRail 7.0.480 and later or VxRail 8.0.200 or later, go to step 2.
● If the server cluster is running an earlier version than VxRail 7.0.480 or VxRail 8.0.200, go to step 3.
2. From the Inventory icon, select the VMware VDS and click the Configure tab, and then select Topology.
a. Select the VMware vSAN traffic setting on each node and click the edit icon. On the Edit Settings window, check
Override default gateway for this adapter on IPv4 and click OK.
b. If the override gateway on the server cluster is not configured for each node, select the VMware vSAN port group.
c. Select the hosts and click Edit Settings to configure the VMkernel adapter.
d. Under IPv4 settings, click Use IPv4 settings and then enable and configure the default gateway.
3. For versions earlier than VxRail 7.0.480 or VxRail 8.0.200 only, to set a static route on the server cluster nodes to reach the
VMware vSAN network of the client cluster, perform the following:
a. Select the configured node.
b. Click the Configure tab and select System > Services.
c. Select SSH or ESXi Shell and click START.
If the SSH service is enabled, you can log in to the configured node CLI using the SSH client. If the VMware ESXi Shell
service is enabled, you can log in to the configured node CLI using DCUI with Alt and F1.
d. Log in to the configured node as root.
e. To check the IPv4 static route, enter: esxcli network ip route ipv4 list (an example sketch that also adds a route is provided after these steps)
4. On the Ready to complete page, click Finish.
5. To mount the remote VMware vSAN data store on another VMware vSAN cluster, perform the following:
a. Select a cluster, then click the Configure tab.
b. Select Remote Datastores and click MOUNT REMOTE DATASTORE.
c. On the Mount Remote Datastore window, select the data store and click NEXT.
d. On the Check compatibility window, click FINISH.
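The following is a minimal sketch for step 3, assuming the client cluster VMware vSAN network is 192.168.20.0/24 and the local gateway toward it is 192.168.10.1; both values are placeholders:
# List the current IPv4 static routes on the node
esxcli network ip route ipv4 list
# Add a static route to the client cluster vSAN network
esxcli network ip route ipv4 add --network 192.168.20.0/24 --gateway 192.168.10.1
# Confirm that the new route is present
esxcli network ip route ipv4 list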
Prerequisites
Before you convert a VMware VDS, perform the following:
● Verify that the configured node has a PCIe NIC with the same speed.
● Validate that all network VLAN and MTU configurations are properly set on the physical switches before making any network
profile changes.
● Confirm that the new uplinks from the newly configured ports comply with the existing VLAN and MTU configurations.
● Verify that the cluster is in a healthy state.
● Configure the remote VMware vSAN cluster connection before the VMware VDS configuration in a dynamic node cluster with
the VSAN_HCI_MESH storage type.
Table 8. Port group types, default names, and VMkernel port groups
Port group type    | Port group default name       | VMkernel port group
Management         | Management Network-xxxxxx     | vmk2
vSAN               | Virtual SAN-xxxxxxxx          | vmk3
vMotion            | vSphere vMotion-xxxxxxxxxxx   | vmk4
VxRail discovery   | VxRail Management-xxxxxx      | vmk0
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a node.
3. Select the Configure tab.
4. Click Networking > VMkernel adapters.
5. In the VMkernel adapters window, under Network Label, view the port group name.
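As an optional cross-check from the node CLI (assuming SSH is enabled on the node), the following command lists each VMkernel adapter together with the switch and port it is attached to, which you can compare with the port group names in the table above:
# Show every VMkernel adapter (vmk0, vmk2, vmk3, vmk4) and its network attachment
esxcli network ip interface list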
Steps
1. Use the following table to perform the first four tasks:
Steps
1. Use the following table to perform the first four tasks:
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Select the Networks tab.
4. Select Distributed Switch.
5. Under the Actions menu, select Distributed Switch > New Distributed Switch.
6. Enter the name and location and click Next.
7. Select the same version of the existing VMware VDS and click Next.
8. Set the number of uplinks to 2 and click Next.
9. Review settings and click FINISH.
10. From the left menu, select the new VMware VDS and click the Actions menu.
11. Select Settings > Edit Settings....
12. Go to the Uplinks tab and rename Uplink 1 to uplink1 and Uplink 2 to uplink2 to adhere to the unified naming rule,
and then click OK.
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. On the Select task page, select Manage host networking and click NEXT.
7. On the Select hosts page, select Attached hosts and choose the hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Click Assign port group after selecting VMware vSAN vmk3 on each host.
11. Select the newly created VMware vSAN port group and click OK.
12. Click Next twice and then click Finish.
Steps
1. From the VMware vSphere Web Client, log in as an administrator.
2. Under the Inventory icon in the top-left menu bar, select a data center.
3. Click the Networks tab.
4. Select Distributed Switches to view VDS1.
5. From Actions menu, select Add and Manage Hosts.
6. On the Select task page, select Manage host networking and click NEXT.
7. On the Select hosts page, select Attached hosts and choose the hosts that are linked to the distributed switch.
8. Click OK and then click Next.
9. Click Next without making any changes on the Manage physical adapters page.
10. Click Assign port group after selecting the VMware vSphere vMotion vmk4 on each host. Select the newly created vMotion
port group from Create port group for VMware vSphere vMotion in VDS2 and click OK.
11. Click Next twice and then click Finish.
Next steps
The summary of the Hosts and Clusters page displays alerts for Network uplink redundancy loss in the reconfigured nodes. Click
Reset to Green to clear the alert.
Prerequisites
● Do not involve the DPU NICs in the Day 1 bring up.
● Create the VMware VDS in Day 2.
● Use V670F, P670N, and E660F to build your VxRail cluster.
Steps
1. On the Physical adapters page, verify that the DPU Backed column is marked for the DPU adapters.
NOTE: The VxRail nodes should be integrated with VMware NSX to leverage any network offload functionality.
Prerequisites
● Verify that the nodes are the same type, family, and configuration in the VxRail vSAN ESA initial release.
● Obtain access to the management system from the user to communicate with VxRail.
● Ensure that the VxRail node that you add is compatible with VxRail version 8.0.010.
● Ensure that you have compatible DPUs to add a node.
● Ensure that the node that you add is identical to the existing nodes.
Steps
1. Log in to the VMware vSphere Web Client as administrator.
2. From the Inventory icon, select a VMware vSAN cluster.
3. From the Configure tab, select VxRail > Health Monitoring and verify that the Health Monitoring Status is set to
Enable.
4. Select VxRail > Hosts.
5. Click ADD.
● If the new node version matches the cluster version, select the host. To discover the VxRail hosts by Loudmouth mode,
configure the ToR switches and power on the hosts.
● If the new node version is lower than the cluster version and the node is compatible, add the new node to the cluster.
The new node is upgraded to the cluster level during the node addition.
● If the new node is not compatible, upgrade the corresponding subcomponent, or downgrade before you add the node to
the VxRail cluster.
● If no new hosts are found, and you want to add a node using the IP address and credentials, click ADD.
6. To add the node manually, in the Add Hosts screen, enter the ESXi IP Address and the ESXi Root Password.
7. Click VALIDATE.
8. Click ADD.
10. In the vCenter User Credentials window, enter the VMware vCenter Server user credentials. Click NEXT.
11. In the NIC Configuration window, select a configuration, and select NICs and VMNICs. Click NEXT.
Select the proper NIC configuration and define the NIC-mapping configuration plan for the new hosts.
The default NIC configuration is from the node that you configured first in the VxRail cluster. The default values of the
VMNIC for the new nodes must align with the selected NIC configuration.
Default values must satisfy the common configuration requirement.
NOTE: If the VxRail cluster uses an external DNS server, all the nodes in the cluster must have DNS hostname and IP
address lookup records.
12. In the Host Settings window, enter the ESXi Host Configuration settings for the hosts and click NEXT.
13. OPTIONAL: In the Host Location window, to customize the host location, enter the Rack Name, Rack Position, and click
NEXT.
14. In the Network Settings window, enter the VMware vSAN IPv4 Address and VMware vSphere vMotion IPv4 Address.
Click NEXT.
NOTE: A dynamic node cluster with a Fibre Channel array does not display the VMware vSAN fields.
15. In the Validate window, review the details and click VALIDATE. Click BACK to make any changes.
VxRail validates the configuration details and if the validation passes, a success message appears on the screen.
16. In the Validate window, select Yes to put the hosts in maintenance mode and click FINISH.
NOTE: You must select the Put Hosts in Maintenance Mode option to add the nodes to a VCF on VxRail environment.
17. Monitor the progress of each host that is added to the VxRail cluster.
18. Once the expansion is complete, a success message appears. If a supported lower version of the node is added, the
node is upgraded to the cluster level.
CAUTION: You cannot use this task to replace a node. Node removal does not destroy the VxRail cluster.
Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failure to Tolerate
(FTT) setting.
The following table lists the minimum number of VMware ESXi nodes in the VxRail cluster before node removal:
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms display, acknowledge them and click Reset to Green at the node and cluster levels before you remove the node.
Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute the percentage used (a worked example follows these steps):
CPU_used_% = Consumed_Cluster_CPU / (CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory / (Memory_capacity - Plan_to_Remove_Memory_sum)
4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.
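The following is a worked example of the CPU formula above with hypothetical values: 200 GHz of consumed cluster CPU, 400 GHz of cluster CPU capacity, and 80 GHz on the node that is planned for removal:
# CPU_used_% after removal = 200 / (400 - 80) = 0.625, that is, 62.5 percent
echo "scale=3; 200 / (400 - 80)" | bc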
Prerequisites
Before you remove the node, perform the following steps to place the node in to maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click the host that you want to remove and select Maintenance Mode > Enter Maintenance
Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful and click ENTER MAINTENANCE MODE and click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.
Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.
2. In the Remove Host from Cluster window, enter the VMware vCenter Server administrator and root account information.
3. After the account information is entered, click VERIFY CREDENTIALS.
4. When the validation is complete, click APPLY to create the Run Node Removal task.
5. After the precheck successfully completes, the host shuts down and is removed.
6. For L3 deployment: If you have removed all the nodes of a segment, select the unused port group on VMware VDS and click
Delete.
Next steps
To access the SSH, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left-menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you remove the node, or after you add the removed node back into the cluster with a new
IP address, update dnsmasq on the VMware vCenter Server.
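The following is a minimal sketch, assuming that updating dnsmasq means restarting its service on the VMware vCenter Server appliance; verify the exact command for your release before running it:
# Restart the dnsmasq service to refresh cached DNS records
systemctl restart dnsmasq
# Confirm that the service is active (running)
systemctl status dnsmasq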
Prerequisites
● Verify that you have a PowerEdge 15G or higher model with an Intel CPU configuration.
● Enable the cluster DRS.
● Obtain the API guide.
Steps
1. To get the cluster CPU drifts, use the GET method to invoke the REST API:
curl -k -X GET -u <username>:<password> https://localhost/rest/vxm/private/v1/cluster/i2e_config
2. If the driftConfiguration in the API response is empty, do not perform this procedure.
If driftConfiguration is not empty, view the CPU core count under desiredConfiguration in the response.
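The following is a minimal sketch of step 1, run locally on the VxRail Manager VM; the credentials are placeholders and jq is assumed to be available:
# Retrieve only the driftConfiguration section of the API response
curl -sk -X GET -u '<username>:<password>' https://localhost/rest/vxm/private/v1/cluster/i2e_config | jq '.driftConfiguration'
# An empty result means there is no CPU core count drift and this procedure is not needed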
Prerequisites
● Verify that you have a PowerEdge 15G or higher model with an Intel CPU configuration.
● Enable the cluster DRS.
3. In the right pane, view the VMware vSphere DRS configuration. If the cluster DRS is off or the Automation Level is not fully
automated, click EDIT.
a. In Edit Cluster Settings, enable vSphere DRS. For the Automation Level, use the drop-down menu to select Fully
Automated.
Steps
1. Verify that the desired CPU core count is set to the enabled core count value to prepare the rolling update API request body:
{
"desiredConfiguration": {
"cpu": {
"enabledCores": <enable core count>
}
}
}
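The following is a minimal sketch that saves the request body shown above to a file and checks that it parses as valid JSON before you submit it with the rolling update API described in the VxRail API guide; the core count of 16 is a placeholder:
# Save the request body to a temporary file
cat > /tmp/rolling_update_body.json << 'EOF'
{
  "desiredConfiguration": {
    "cpu": {
      "enabledCores": 16
    }
  }
}
EOF
# Print the body only if it is valid JSON
python -m json.tool /tmp/rolling_update_body.json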
Prerequisites
● To avoid data loss, take a file-based backup of each node before repointing.
● Be familiar with the UNIX or LINUX commands, and the VMware vSphere management interface.
-spa, --src-psc-admin: SSO administrator username for the source VMware vCenter Server. Do not append the @domain.
-dpf, --dest-psc-fqdn: The FQDN of the VMware vCenter Server to repoint.
-dpa, --dest-psc-admin: SSO administrator username for the destination VMware vCenter Server. Do not append @domain.
-ddn, --dest-domain-name: SSO domain name of the destination VMware vCenter Server.
-dpr, --dest-psc-rhttps: (Optional) HTTPS port for the destination VMware vCenter Server. If not set, the default port is 443.
-dvf, --dest-vc-fqdn: The FQDN of the VMware vCenter Server pointing to a destination VMware vCenter Server. The VMware
vCenter Server is used to check for component data conflicts in the precheck mode. If not provided, conflict checks are
skipped and the default resolution (COPY) is applied for any conflicts that are found during the import process. This argument
is optional only if the destination domain does not have a VMware vCenter Server.
-sea, --src-emb-admin: Administrator for the VMware vCenter Server with embedded VMware vCenter Server. Do not append
@domain to the administrator ID.
-rpf, --replication-partner-fqdn: (Optional) The FQDN of the replication partner node to which the VMware vCenter Server is
replicated.
-rpr, --replication-partner-rhttps: (Optional) The HTTPS port for the replication node. If not set, the default port is 443.
-dvr, --dest-vc-rhttps: (Optional) The HTTPS port for the VMware vCenter Server pointing to the destination VMware vCenter
Server. If not set, the default port is 443.
--ignore-snapshot: (Optional) Ignore the snapshot warning.
--no-check-certs: (Optional) Ignore the certification validation.
(Optional) Retrieves the command execution detail.
-h, --help: (Optional) Displays the help message for the cmsso-util domain repoint command.
Prerequisites
Power on both VMware vCenter Server nodes (A and B) before beginning the repointing process.
Steps
1. Using SSH, log in to the VMware vCenter Server as root.
2. To access the VMware vCenter Server A of domain 1, enter:
ssh root@<vcenter_a_ip_address>
WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User 'administrator@vsphere.local' will be assigned administrator role on the
source vCenter Server
The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation
MH2HL-2PH9N-08C70-19573
4. OPTIONAL: Review conflicts and apply the same resolution for all the conflicts, or apply a separate resolution for each
conflict.
The conflict resolutions are:
● Copy: Creates a copy of the data in the target domain.
● Skip: Skips copying the data in the target domain.
● Merge: Merges the conflict without creating duplicates.
Steps
1. Log in to the VMware vCenter Server as root.
2. Click Backup.
The table under Activity displays the latest backup version from the VMware vCenter Server.
3. Click Backup Now.
4. OPTIONAL: Click Use backup location and username from backup schedule and perform the following:
a. Enter the backup location details.
b. OPTIONAL: Enter an encryption password if you want to encrypt your backup file.
To encrypt the backup data, use the encryption password.
c. OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
d. OPTIONAL: In the Description field, enter a description for the backup.
e. Click Start.
Steps
1. To repoint the VMware vCenter Server A of domain 1 to domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-
fqdn <vcenterb_fqdn_domain2> --replication-partner-admin PSC_Admin_of_destination_node --
dest-domain-name destination_PSC_domain
WARNING: Global Permissions for the source vCenter Server system will be lost. The
administrator for the target domain must add global permissions manually.
Source domain users and groups will be lost after the Repoint operation.
User '[email protected]' will be assigned administrator role on the
source vCenter Server system.
The default resolution mode for Tags and Authorization conflicts is Copy,
unless overridden in the conflict files generated during pre-check.
Before running the Repoint operation, you should back up all nodes. You can use
file-based backups to restore in case of failure. By using the Repoint tool,
you agree to take the responsibility for creating backups. Otherwise, you should
cancel this operation.
The following license keys are being copied to the target Single Sign-On
domain. VMware recommends using each license key in only a single domain. See
"vCenter Server Domain Repoint License Considerations" in the vCenter Server
Installation and Setup documentation
MH2HL-2PH9N-08C70-0R80K-19573
Steps
1. Log in to the VMware vCenter Server as root.
2. Select Host > Configure > System > Certificate.
3. Click REFRESH CA CERTIFICATES and wait for the task to complete.
4. Repeat these steps on all the nodes in the VMware vCenter Server A.
Steps
1. Shut down the node (VMware vCenter Server A) that is repointed to domain 1 (moved to a different domain).
2. Decommission the VMware vCenter Server node that is repointed.
For example, to decommission the VMware vCenter Server A, log in to the VMware vCenter Server B (on the original
domain) and enter:
ssh root@<vcenter_ip_address>
cmsso-util unregister --node-pnid <vcentera_fqdn> --username
VC_B_sso_administrator@sso_domain.com --passwd
VC_B_sso_adminuser_password
To encrypt the backup data, you must use the encryption password.
● OPTIONAL: Select Stats, Events, and Tasks to back up additional historical data from the database.
● OPTIONAL: In the Description field, enter a description for the backup.
● Click Start.
5. To repoint the VMware vCenter Server A to new domain 2, enter:
cmsso-util domain-repoint -m execute --src-emb-admin administrator --dest-domain-name
destination_PSC_domain
6. Update the VMware vCenter Server A SSL certificates in VxRail Manager.
See Import VMware vSphere SSL certificates to VxRail Manager to update the certificates.
7. Refresh the node certificates in VMware vCenter Server A.
See Refresh node certificates in VMware vCenter Server A to refresh the node certificates.
For VMware documentation, see VMware docs.
Steps
1. For Dell Technologies employees, see KB 197636 for information related to install base updates for VxRail. For more information, see the Product Registration and Install Base Maintenance Job Aid.
2. For the partner product registration process, view the Dell Partner Product Registration Process video tutorial and see the Deployment Operations Guide.
Prerequisites
Bring up the VxRail cluster and verify that there are no critical alarms and that VMware vSAN is healthy.
Steps
1. Open the VMware vSphere Web Client and select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Select VxRail > Support.
4. Under VxRail HCI System Software SaaS multi-cluster management, a description of the information is displayed with
a link to a demo.
a. VDS
b. NIOC
Prerequisites
Before you configure the node:
● Go to the Day 1 public API to verify that the NIC profiles in the API are ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS.
● Verify that the node has enough spare PCIe NICs for configuration.
● Configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using a PCIe-only adapter, disable the NDC or OCP ports. To avoid network interruptions, use the DCUI through the
iDRAC console to configure the NDC or OCP ports.
Steps
1. Log in to the iDRAC console as root.
2. Press Alt-F1 to switch to the CLI mode.
3. To verify the VMNIC status and locate the VMNICs that are on the PCIe adapter, enter:
esxcfg-nics -l
Check the PCI column to identify different PCIe adapters.
4. To view the current NIC teaming policy of a vSwitch, enter:
esxcli network vswitch standard policy failover get -v vSwitch0
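To also confirm which uplinks are currently attached to vSwitch0, a related check (a sketch that is not part of the original step sequence) is:
esxcli network vswitch standard list -v vSwitch0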
5. Select one of the PCIe ports and add the PCIe VMNIC to the default VMware vSwitch0.
Only one port from the PCIe adapter is required to configure the VxRail node before deployment.
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a
vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network" -a
vmnic2
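If the selected PCIe VMNIC is not yet an uplink on vSwitch0, add it before applying the port group failover policies. A minimal sketch, using the same vmnic2 placeholder as above (the same command appears later in this chapter):
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0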
7. To add an additional PCIe NIC for VxRail networking as a standby uplink, enter:
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic2
8. After the nodes are configured, ping the VxRail management IP address. Perform one of the following to start the
deployment.
● For the VMware vCenter Server UI, perform the following:
○ In the VDS Settings step, select the Custom VDS configuration.
○ In the Uplink Definition checklist, select two PCIe adapter ports and complete the VxRail deployment.
● If you are using the API to perform the initialization, only ADVANCED_VXRAIL_SUPPLIED_VDS and
ADVANCED_CUSTOMER_SUPPLIED_VDS NIC profiles are supported.
9. To expand the VxRail cluster host, perform the following:
a. Complete all the procedures on the new node.
b. Perform the node expansion using the VMware vCenter Server UI or API.
10. To expand the VxRail satellite host, perform the following:
a. Ensure that there are two adjacent PCIe adapter ports with the same network speed of at least 1 Gbps.
b. Remove unused ports from the vSwitch0 and add the PCIe adapter ports. For example, to remove the VMNIC0 and
VMNIC1 from vSwitch0, enter:
esxcli network vswitch standard uplink remove -u vmnic0 -v vSwitch0
c. Verify that at least one PCIe adapter port is Active and the other is Standby. For example, to add VMNIC2 to vSwitch0
and configure it as an Active PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
For example, to add VMNIC3 to vSwitch0 and configure it as a Standby PCIe adapter port, enter:
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 -s vmnic3
d. Use the VMware vCenter Server wizard or API to expand the node.
NOTE: The VxRail physical view page does not display the PCIe adapter information.
See Configure Physical Network Adapters on a VMware VDS for more information.
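After completing the standby configuration in step 10c, the resulting active and standby uplink assignment on vSwitch0 can be confirmed with the same command that is shown earlier in this procedure:
esxcli network vswitch standard policy failover get -v vSwitch0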
Prerequisites
● Verify that the VxRail cluster is healthy and all nodes are running.
● On the Windows client, install the following:
○ PowerShell 5.1.14409.1005
○ Posh-SSH 2.0.2 for PowerShell
○ VMware.PowerCLI 12.2.0 build 17538434 for PowerShell
● Download the enablejumboframe_movevc_70100.ps1 script.
● When you enable the jumbo frames on the VMware VDS, uplinks are cycled up and down for approximately 20-40 seconds.
For critical applications, shut down and power on all the user VMs.
● The scripts power off and power on the VxRail Manager and the user VMs. If some VM services prevent the VM from
shutting down, manually shut down the VM. If the script fails after you power off the VMs, power on the VMs and retry.
● Do not power off the VxRail-managed VMware vCenter Server.
● If connectivity to the VMware vCenter Server fails due to a certificate error, enter:
C:\Users\stshell\Downloads>Set-PowerCLIConfiguration -InvalidCertificateAction Ignore
Perform operation?
Performing operation 'Update PowerCLI configuration.'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): Y
Steps
1. To enable jumbo frames for the VMware vCenter Server, perform the following:
a. Enter enablejumboframe_movevc_70100.ps1 with the following parameters:
vcUser <vcenter_username>
vxVDS <vds_name>
vxCluster <cluster_name>
MTU <size>
Optional: Enter the MTU size. The MTU value range is 1280–9000 bytes.
validIP <ip_address>
Use the IP address from the vmkping for the jumbo frame validation.
retryTimes <retry_times>
To retry the failed steps in the script, the minimum value is 3.
vxmIP <vxrail_mgr_ipaddr>
VxRail Manager VM skips the power off. When your cluster uses internal DNS, this
field is required.
For example:
● Internal VMware vCenter Server with external DNS (VMware vCenter Server is a VM in the VxRail Cluster):
.\enablejumboframe_movevc_70100.ps1 -MTU 9000 -vCenterServer 192.168.101.201
-vcUser "[email protected]" -vcPwd "Testvxrail123!" -vxVDS
"VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-Cluster-
d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -validIP 192.168.101.211 -retryTimes 5
vcUser <vcenter_username>
vxVDS <vds_name>
vxCluster <cluster_name>
hostMode <host_mode>
addHostName <name>
MTU <MTU_size>
Optional: the MTU value range is 1280–9000 bytes.
validIP <ip_address>
Use the IP address from the vmkping for the jumbo frame validation.
vxmIP <vxrail_mgr_ipaddr>
VxRail Manager VM skips the power off. When your cluster uses internal DNS, this
field is required.
For example:
.\enablejumboframe_movevc_70100.ps1 -skipValid -MTU 9000 -vCenterServer
192.168.101.201 -vcUser "[email protected]" -vcPwd "Testvxrail123!"
-vxVDS "VMware HCIA Distributed Switch" -vxCluster "VxRail-Virtual-SAN-
Cluster-d5fff3cd-49dc-4230-8aa1-071050aa4fc0" -vcNotInCluster -hostMode -addHostName
"engdell1-01.localdomain.local"
Prerequisites
Obtain access to the customer-managed VMware VDS and VxRail Manager.
Before you begin the conversion, take a snapshot of all the service VMs:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click VxRail Manager and select Snapshots > Take Snapshot.
4. Enter a name and click OK.
5. Repeat these steps for the remaining service VMs.
Steps
1. Using SSH, log in to VxRail Manager as mystic.
2. To connect to the database, enter:
psql -U postgres vxrail
3. To view the VMware VDS status in the database, enter:
select * from configuration.configuration where key='customer_supplied_vds';
Optional: If the above query returns null for the customer-managed VMware VDS, add a row. A minimal sketch, assuming the table accepts these three values in this order (adjust the column list to match your schema):
INSERT INTO configuration.configuration VALUES ('setting','customer_supplied_vds','true');
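To confirm that the flag is set after the insert, run the same query from step 3 again:
select * from configuration.configuration where key='customer_supplied_vds';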
Prerequisites
● Standard cluster deployment running VxRail 7.0.130 or later.
● NIC profiles in the API: ADVANCED_VXRAIL_SUPPLIED_VDS and ADVANCED_CUSTOMER_SUPPLIED_VDS.
● The new node must have enough spare PCIe NICs for configuration.
● Configure the required VLAN on the switch for the PCIe adapter ports that are planned for discovery and management.
● When using a PCIe-only adapter, the NDC ports should not be in a connected or active state. To avoid network interruption,
configure the NDC ports using the DCUI through the iDRAC console.
Steps
1. Log in to the node IDRAC interface and open the console.
2. Press Alt+F1 to switch to CLI mode.
3. Log in to the CLI as root.
4. To check the VMNIC status and locate the VMNICs that are on the PCIe adapter, enter: esxcfg-nics -l
Check the PCI column to identify the different PCIe adapters in the result.
5. Select one of the PCIe ports and add it to vSwitch0 as shown in the next step.
6. The following example uses a 2-port NDC and a 2-port PCIe adapter on a VxRail E560F node. VMNIC2 and VMNIC3 are
the PCIe adapter ports that are planned for use. Only one port from the PCIe adapter is required to be configured before
VxRail deployment. To configure and add the PCIe VMNIC into the default vSwitch0, perform the following:
a. Enter:
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a
vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network"
-a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private Management
Network" -s vmnic3
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -s
vmnic3
esxcli network vswitch standard portgroup policy failover set -p "Private VM Network"
-s vmnic3
9. Perform the node expansion using the UI wizard or API. Known issue: The VxRail Physical View page does not display PCIe
adapter information. See Configure Physical Network Adapters on a vSphere Distributed Switch for more information.
Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster, and click the Configure tab.
4. Expand VxRail and click System.
CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.
CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.
Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS is in a healthy state.
8. Disable the VMware VDS health service.
The health check can report incorrect network conditions after LAG is enabled.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network.
3. Select Settings > Edit Settings.
4. On the Edit Settings window, select the Uplinks tab.
5. Verify the uplinks and click OK.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the port group and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.
Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.
Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.
Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch
Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.
Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt
!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30
2. To view the configuration on a port channel, enter:
show running-configuration interface port-channel 100
3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network
loss during STP convergence. The command to set STP to portfast depends on the switch model and vendor. Contact your
physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>
3. To configure the switch interface and set the channel group to Active, enter:
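The exact command depends on the switch operating system. On a Dell switch, the configuration resembles the following example; the interface and channel-group numbers are placeholders taken from the example used later in this chapter:
configure terminal
interface ethernet 1/1/2
channel-group 100 mode active
exit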
4. Repeat these steps for each switch interface that is configured into the LACP policy.
Steps
1. Open a console session to the second ToR switch.
2. To confirm the VMNIC that is connected to LLDP, enter:
show lldp neighbors | grep <vmnic>
Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance
2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the
VxRail network.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.
Steps
1. To check the flag setting on the switch, enter:
show port-channel summary
Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT .
3. On the Select hosts page, select all the member hosts and click NEXT.
4. On the Manage physical adapters page, select one VMNIC to assign to the LAG on each host and click NEXT.
Steps
1. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink .
2. Select Distributed Port Group > Manage Distributed Port Groups.
3. On the Select port group policies page, select Teaming and failover, and then click Next.
4. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
5. On the Teaming and failover page, under Failover order section, use the UP and DOWN arrows to migrate between the
uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
6. On the Ready to complete page, review the changes, and click FINISH.
7. A warning message displays while migrating the physical adapters. Click OK to discard the issues and proceed or Cancel to
review your changes.
8. Verify that one of the ports is connected to the LAG. Yellow connections in the following example indicate that connections are
applied to all port groups.
10. Verify that IND and P display next to each of the ports.
Steps
1. Right-click the VMware VDS and select Add and Manage Hosts.
2. On the Select task window, select Manage host networking and click NEXT.
3. On the Select hosts window, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. On the Manage physical adapters window, perform the following:
● For uplinks that are transferred to the LAG, select the VMNIC that is associated with the uplink and assign it to lag1-1,
which carries all the port group traffic in the topology.
● Reassign vmnic2, which still uses the original uplink, to the unassigned LAG port.
5. Skip the remaining screens and click Finish.
6. To verify the switch status, enter: show port-channel summary
7. Verify that all connections are migrated to LAG.
VMNIC1 and VMNIC5 support the network that is targeted for link aggregation. They were unassigned from uplink2 and uplink4
and reassigned to the two ports that are attached to the LACP policy.
8. Skip the rest of the screens and click FINISH.
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
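In addition to the counters, the runtime LACP status of the LAG can be checked from the same console; a related command (verify its availability on your VMware ESXi release) is:
esxcli network vswitch dvs vmware lacp status get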
3. Repeat this procedure on the other VxRail nodes to validate the LACP status.
Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.
Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster and click the Configure tab.
4. Expand VxRail, and click System.
CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.
CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.
Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
b. In the Interval box, enter the interval for the VLAN and MTU health check. The default value is 1 minute.
c. Under Teaming and failover, from the State menu, select Enabled.
d. In the Interval box, enter the interval for the Teaming and failover health check. The default value is 1 minute.
e. Click OK.
6. Click the Monitor tab, and click Health.
7. Confirm that the VMware VDS is in a healthy state.
8. Disable the VMware VDS health service.
The health check can report incorrect network conditions after LAG is enabled.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS switch that supports the VxRail cluster network, and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the selected port group, and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.
Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
Identify the switch ports that are targeted for LAG using LLDP
The ToR switches must support LLDP discovery to identify the switch ports. Do not perform this task if the switch does not
support LLDP discovery.
Steps
1. Open a console session to the ToR switches that support the VxRail cluster.
2. To identify the VMNICs that are connected for each node, enter:
show lldp neighbors | grep <hostname>
● In this example, VMNIC0 and VMNIC4 are assigned to the VxRail network that is not targeted for LAG. The VMNIC1 and
VMNIC5 are assigned to the VxRail network that is targeted for LAG.
● The VMNIC1 and VMNIC2 are connected to separate switches.
● The MAC address for each pairing is different. This indicates that the source adapter for one NIC port is on the NDC and
the other NIC port is on a PCIe adapter card.
3. Use the VMNIC values captured from the switch topology view in the vSphere Client to identify the switch ports planned for link
aggregation.
4. Repeat the query for each VMware ESXi hostname to discover the NICs.
Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.
Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.
Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch
Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.
Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt
!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30
2. To view the configuration on a port channel, enter:
show running-configuration interface port-channel 100
3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network
loss during STP convergence. The command to set STP to portfast depends on the switch model and vendor. Contact your
physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
Steps
1. Open a console to the adjacent ToR switches.
2. To configure each switch port to peer with a pair of node ports for link aggregation, enter:
configure terminal
interface ethernet 1/1/2
channel-group 100 mode active
exit
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.
Steps
1. To view the port channels on each switch, enter:
show port-channel summary
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink .
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and failover, and then click Next.
5. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
6. On the Teaming and failover page, under Failover order, use the UP and DOWN arrows to migrate between the uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat these steps for all port groups.
7. On the Ready to complete page, review the changes, and click FINISH.
8. A warning message displays while migrating the physical adapters. Click OK to discard the issues and proceed or Cancel to
review your changes.
9. Verify that one of the ports is connected to LAG. Yellow connections in the following example indicate that connections are
applied to all port groups.
11. Verify that (IND) and (P) are displayed next to each of the ports.
Steps
1. Select the VMware VDS.
2. Right-click the port group where LAG is the standby and click Edit settings.
3. Select Teaming and failover.
4. In the Failover order, use the up and down arrows to move the LAG into the Active list and all stand-alone uplinks into the Unused
list, leave the Standby list empty, and click OK.
5. Repeat these steps for all the port groups that use the LAG.
Steps
1. Right-click the VMware VDS and select Add and Manage Hosts > Manage Host Networking.
2. On the next screen, select Attached hosts.
3. In the Select Member Hosts window, select all hosts in the VxRail cluster and click OK.
4. On the next screen, repeat the steps for the two NICs targeted for link aggregation.
a. Select one VMNIC on the first host.
b. Select Unassign adapter.
c. Check Apply this operation to all other hosts and click Unassign.
d. Select the same NIC under the On other switches/unclaimed list and click Assign uplink.
e. Select a port that is assigned to the LACP policy.
f. Check Apply uplink assignment to rest of the hosts and click OK.
g. Select the next NIC targeted for link aggregation and repeat the steps.
5. Review the uplink reassignment.
VMNIC1 and VMNIC5 support the network that is targeted for link aggregation. Both VMNICs were unassigned from uplink2
and uplink4 and reassigned to the two ports that are attached to the LACP policy.
6. Skip the rest of the screens and click Finish.
Steps
1. To verify the port channels on each switch, enter:
show port-channel summary
5. Repeat these steps on the other VxRail nodes to validate the LACP status.
Steps
1. Connect to the VMware vCenter Server instance that supports the VxRail cluster.
2. From the VMware vSphere Web Client, click the Inventory icon.
3. Select a VxRail cluster and click the Configure tab.
4. Expand VxRail, and click System.
CAUTION: If the VxRail cluster is not healthy, you cannot enable dynamic LAG.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Select a VxRail cluster, and click the Monitor tab.
3. Select VxRail > Physical View.
4. Verify that Cluster Health displays as Healthy.
CAUTION: You cannot enable LAG if the VxRail cluster is in an unhealthy state.
Steps
1. From the VMware vSphere Web Client, select the Networking icon.
2. Select the VMware VDS that supports the VxRail cluster network, and click the Configure tab.
3. Expand Settings, and click Health Check.
4. To enable or disable health check, click Edit.
5. In the Edit Health Check Settings window, do the following:
a. Under VLAN and MTU, from the State menu, select Enabled.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that supports the VxRail cluster network that is targeted for LAG.
3. Select Settings > Edit Settings.
4. Select Uplinks.
5. Verify that the number of uplinks that are assigned to the VMware VDS is sufficient to support LAG.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS and click the Networks tab.
3. Under the Distributed Port Groups tab, select the port group.
4. Right-click the port group and then click Edit Settings.
5. In the Distributed Port Group - Edit Settings page, click Teaming and failover.
6. Select the two uplinks that are assigned to the port group.
7. Open each port group that represents the management networks (management, VxRail management, and VMware vCenter
Server).
8. Verify that the two uplinks do not match the uplinks that are assigned to the port groups for LAG.
Do not continue with this procedure if the uplinks are not isolated.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Click the Configure tab.
3. Expand Settings and click Topology.
4. Expand the two uplinks that support the VxRail networks.
5. Locate the VMNICs that are assigned to each uplink.
Steps
1. Open a browser to the iDRAC console on one of the nodes.
2. Log in as root.
3. Open a virtual console session to the VxRail node.
4. Select Keyboard from the top toolbar, and click F2.
5. Log in to the VMware ESXi operating system as root.
6. Go to Troubleshooting Options, and select Enable ESXi Shell.
7. On the virtual keyboard, click Alt-F1.
8. Log in to the VMware ESXi console as root.
9. To obtain the MAC address and description for each VMNIC, enter:
esxcli network nic list
Identify the switch ports that are targeted for LAG using LLDP
The ToR switches must support LLDP discovery to identify the switch ports. Do not perform this task if the switch does not
support LLDP discovery.
Steps
1. Open a console session to the ToR switches that support the VxRail cluster.
2. To identify the VMNICs that are connected for each node, enter:
show lldp neighbors | grep <hostname>
● In this example, VMNIC0 and VMNIC4 are assigned to the VxRail network that is not targeted for LAG. The VMNIC1 and
VMNIC5 are assigned to the VxRail network that is targeted for LAG.
● The VMNIC1 and VMNIC2 are connected to separate switches.
● The MAC address for each pairing is different. This indicates that the source adapter for one NIC port is on the NDC and
the other NIC port is on a PCIe adapter card.
3. Use the VMNIC values captured from the switch topology view in the vSphere Client to identify the switch ports planned for link
aggregation.
4. Repeat the query for each VMware ESXi hostname to discover the NICs.
Identify the switch ports that are targeted for LAG using iDRAC
If the ToR switches do not support LLDP discovery, use iDRAC to identify the switch port connection.
Prerequisites
Verify that you have connectivity to the iDRAC on each VxRail node.
Steps
1. Log in to the iDRAC on a VxRail node as root.
2. Select the System view.
3. From the Overview tab, click Network Devices to view the NDC and PCIe adapter cards.
4. To view the switch port assignment for each NDC port and any of the unused PCIe based ports, perform the following:
a. Select Integrated NIC to view the NDC-OCP port properties.
b. Select NIC Slot to view the PCIe based port properties.
c. Select Summary.
The Switch Port Connection ID column identifies the switch port connection. The MAC address under Switch
Connection ID for each view differs, indicating that each port is connected to a different switch.
5. Repeat the iDRAC query for each VxRail node to discover the switch port connections.
Prerequisites
Verify that the ToR switches that support the VxRail cluster also support VLT.
Connect the Ethernet cables between one or two pairs of ports on each switch.
Steps
1. To view the configuration on each switch, enter:
show running-configuration vlt
!
vlt-domain 255
backup destination 172.17.186.204
discovery-interface ethernet 1/1/29-1/1/30
peer-routing
vlt-mac 59:9a:4c:da:5d:30
2. To view the configuration on a port channel, enter:
show running-configuration interface port-channel 100
3. (Optional) If STP is enabled in the network, set the port channel to STP portfast mode to avoid temporary network
loss during STP convergence. The command to set STP to portfast depends on the switch model and vendor. Contact your
physical switch vendor for detailed configuration information. For example:
Cisco switch:
● spanning-tree portfast (for an access port)
● spanning-tree portfast trunk (for a trunk port)
Dell switch:
● spanning-tree port type edge (for an access port)
● spanning-tree port type edge trunk (for a trunk port)
Steps
1. To view the load-balancing policies set on the switch, enter:
show load-balance
2. Verify that the load-balancing policy on the switches aligns with the load-balancing policy that is to be configured on the
VxRail network.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Select the VMware VDS to configure the LACP policy, and click the Configure tab.
3. Expand Settings and click LACP.
4. Under MIGRATING NETWORK TRAFFIC TO LAGS, click NEW.
5. In the New Link Aggregation Group window, enter the following:
● Name: <name>
● Number of ports: 2
● Mode
○ Active: Initiate negotiation with the remote ports by sending the LACP packets. If the LAGs on the physical switch
are in Active mode, set the LACP policy mode to either Active or Passive.
○ Passive: Responds to the LACP packet that it receives but does not initiate LACP negotiation. If the LAGs on the
physical switch are in Passive mode, set the LACP policy mode to Active.
● Load balancing mode: Select the load-balancing algorithm that aligns with the ToR switch settings, and click OK.
6. Click Topology.
7. Verify that the LACP policy is listed for the uplink selection.
Steps
1. From the VMware vSphere Web Client, click the Inventory icon.
2. Right-click the VMware VDS on which you want to migrate the LACP policy to the standby uplink .
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and failover, and then click Next.
5. On the Select port groups page, select a single port group or two port groups (VMware vSAN or VMware vSphere
vMotion) to assign for the LACP policy, and click Next.
6. On the Teaming and failover page, under Failover order, use the UP and DOWN arrows to migrate between the uplinks.
a. Migrate the LACP policy to Active uplinks.
b. Migrate the remaining uplinks to Unused uplinks.
c. Repeat these steps for all port groups.
7. On the Ready to complete page, review the changes, and click FINISH.
8. A warning message displays while migrating the physical adapters. Click OK to discard the issues and proceed or Cancel to
review your changes.
9. Verify that one of the ports is connected to LAG. Yellow connections in the following example indicate that connections are
applied to all port groups.
11. Verify that (IND) and (P) are displayed next to each of the ports.
Steps
1. Right-click the VMware VDS that is targeted for LAG, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT .
3. On the Select hosts page, select all the Member hosts and click NEXT.
4. On the Manage physical adapters page, select one VMNIC to assign an uplink on each host.
5. Repeat the process of assigning uplinks to all the hosts, and click Next.
Steps
1. Open a console to the ToR switches.
2. To confirm the switch port for the VMNIC connection using LLDP, enter:
show lldp neighbors | grep <vmnic>
3. To configure the switch interface and set the channel group to Active, enter:
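The exact command depends on the switch operating system. On a Dell switch, the configuration resembles the following example; the interface and channel-group numbers are placeholders taken from the example used earlier in this chapter:
configure terminal
interface ethernet 1/1/2
channel-group 100 mode active
exit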
4. Repeat these steps for each switch interface that is configured into the LACP policy.
Steps
1. To verify the port channels of the switch, enter:
show port-channel summary
---------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
---------------------------------------------------------------------------
101 port-channel101 (U) Eth DYNAMIC 1/1/9 (P)
102 port-channel102 (U) Eth DYNAMIC 1/1/12(P)
103 port-channel103 (U) Eth DYNAMIC 1/1/3 (P)
104 port-channel104 (U) Eth DYNAMIC 1/1/6 (P)
3. For a multi-chassis LAG, to verify the port channel status for both the VLT peers, enter:
show vlt <id> vlt-port-detail
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
3. Repeat these steps on the other VxRail nodes to validate the LACP status.
Steps
1. From the VMware vSphere Web Client, click the Networking icon.
2. Right-click the VMware VDS that is targeted for LAG.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. On the Select port group policies page, select Teaming and Failover, and then click Next.
5. On the Select port groups page, select the VMware vSAN or VMware vSphere vMotion distributed port groups and click
Next.
6. On the Teaming and failover page, click MOVE UP and MOVE DOWN to move the LACP policy to Active uplinks and all
the other uplinks to Unused uplinks, and then click Next.
7. On the Ready to complete page, review the changes, and click FINISH.
Prerequisites
Configure the LAG for the VMNIC on all the VxRail nodes.
Steps
1. From the VMware vSphere Web Client, select the VMware VDS that is targeted for LAG.
2. Select the Configure tab and click Topology.
3. Select the LAG and verify that the specified VMNIC is assigned to the uplink against the LAG.
Steps
1. Right-click the VMware VDS, and click Add and Manage Hosts.
2. On the Select task page, select Manage host networking and click NEXT.
3. On the Select hosts page, under Member hosts, select all the hosts in the VxRail cluster and click NEXT.
4. For the second port on the LAG, select the VMNIC associated with the uplink that is not used for VMware vSAN and
VMware vSphere vMotion.
5. Select the VMNIC on the first host.
6. Select Unassign adapter.
7. Enable Apply this operation to all other hosts.
8. Click UNASSIGN.
9. Select the same NIC under the On other switches/unclaimed list.
10. Select Assign uplink.
11. Assign the uplink to an available port on the LAG.
12. Select Apply uplink assignment to rest of the hosts and click OK.
13. Review the uplink assignment.
In this example, vmnic2 has been unassigned from uplink2 and reassigned to the second port that is attached to the LAG.
Steps
1. Open a console session to the second ToR switch.
2. To confirm the VMNIC that is connected to LLDP, enter:
show lldp neighbors | grep <vmnic>
Steps
1. To verify that the switch port channels are up and active, enter:
show port-channel summary
---------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
---------------------------------------------------------------------------
101 port-channel101 (U) Eth DYNAMIC 1/1/7 (P)
102 port-channel102 (U) Eth DYNAMIC 1/1/10(P)
103 port-channel103 (U) Eth DYNAMIC 1/1/1 (P)
104 port-channel104 (U) Eth DYNAMIC 1/1/5 (P)
3. For a multi-chassis LAG, to verify that the port channel status for both VLT peers is active, enter:
show vlt <id> vlt-port-detail
Steps
1. Open a VMware ESXi console session to a VxRail node.
2. To verify the LACP counters on the VMware ESXi console, enter:
esxcli network vswitch dvs vmware lacp stats get
3. Repeat this procedure on the other VxRail nodes to validate the LACP status.
Table 17. Example of four NDC ports to two NDC and two PCIe ports
Uplink | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 | Management | VMNIC0 (NDC) | Management | VMNIC0 (NDC)
uplink2 | Management | VMNIC1 (NDC) | Management | VMNIC4 (PCIe)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC) | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC)
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC) | VMware vSAN/VMware vSphere vMotion | VMNIC5 (PCIe)
The following table provides an example of two NDC ports to one NDC and one PCIe port:
Table 18. Example of two NDC ports to one NDC and one PCIe port
Uplink | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC) | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management/VMware vSAN/VMware vSphere vMotion | VMNIC1 (NDC) | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIe)
uplink3 | N/A | N/A | N/A | N/A
uplink4 | N/A | N/A | N/A | N/A
The following table provides an example of two NDC ports to two NDC and two PCIe ports:
The following table provides an example of four NDC ports to one NDC and one PCIe port:
Table 20. Example of four NDC ports to one NDC and one PCIe port
Uplink | Starting uplink configuration | Starting VMNIC assignment | Ending uplink configuration | Ending VMNIC assignment
uplink1 | Management | VMNIC0 (NDC) | Management/VMware vSAN/VMware vSphere vMotion | VMNIC0 (NDC)
uplink2 | Management | VMNIC1 (NDC) | Management/VMware vSAN/VMware vSphere vMotion | VMNIC4 (PCIe)
uplink3 | VMware vSAN/VMware vSphere vMotion | VMNIC2 (NDC) | N/A | N/A
uplink4 | VMware vSAN/VMware vSphere vMotion | VMNIC3 (NDC) | N/A | N/A
Populate the grid with the uplink names and VMNIC names.
Steps
1. Open the VMware vSphere Web Client and connect to the VMware vCenter Server instance that supports the VxRail
cluster.
2. Select Home > Hosts and Clusters.
3. Select the VxRail cluster to enable network redundancy.
4. Select Configure > VxRail > System.
5. Confirm that the VxRail version supports network redundancy.
Prerequisites
Verify access to the VMware vCenter Server that supports the VxRail cluster.
Steps
1. From the VMware vSphere Web Client, select the VxRail cluster in which you want to enable network redundancy.
2. Select the Monitor tab.
3. From the left-menu, select VxRail > Physical View.
4. Verify that the Health State is healthy.
Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. Select Home > Hosts and Clusters > VxRail Cluster.
3. From the VxRail clusters, select a node.
4. Select Configure > Networking > Physical adapters.
5. View the physical adapters serving as an uplink to the VMware VDS. In the following figure, VMNIC 0, VMNIC 1, VMNIC 2,
and VMNIC 3 are connected to a single VMware VDS at a connection speed of 10 Gbps. There are four NDC ports. If your
cluster has only two NDC ports, only two VMNICs are visible.
6. View the unused physical adapters. In the following figure, VMNIC 4 and VMNIC 5 are PCIe network ports. The connection
speed is 10 Gbps and is compatible with the NDC ports.
Prerequisites
Ensure that you have access to the adjacent ToR switches.
To discover the VxRail node connections, your switch operating system must support the LLDP neighbor functionality.
Steps
1. Open a console session to one of the Ethernet switches that supports the VxRail cluster.
2. To verify the ports that are connected to the VxRail nodes and VMNIC assignment, enter:
show lldp neighbors | grep vmnic
The following are sample outputs for two different switches:
interface ethernet1/1/3
description VxRail-NDC-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit
interface ethernet1/1/16
description VxRail-PCIe-Port
no shutdown
switchport mode trunk
switchport access vlan 1386
switchport trunk allowed vlan 100-103,3939
mtu 9216
flowcontrol receive on
flowcontrol transmit off
spanning-tree port type edge
exit
Verify active uplink on the VMware VDS port groups post migration
Verify that at least one uplink in each VMware VDS port group is active after the migration.
Prerequisites
Ensure that you have access to the planning grid table Enable network redundancy across NDC and PCIe ports.
Review the planning grid table that is populated with the starting and ending network configuration to identify any uplinks that
are disconnected as part of the uplink reassignment process.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA Distributed Switch.
3. Select Distributed Port Group > Manage Distributed Port Groups.
4. Select Teaming and Failover.
5. Select all the VMware VDS port groups.
6. Verify that at least one of the active uplinks in the failover order is not disconnected during the migration task.
7. If an uplink under Active uplinks gets disconnected during the migration, modify the failover order to move an uplink that
does not get disconnected during the migration to Active uplinks.
Prerequisites
Review the planning grid table populated in Enable network redundancy across NDC and PCIe ports.
Steps
1. To add the uplinks to the VMware VDS, perform the following:
a. From the VMware vSphere Web Client, select Networking inventory view.
b. Right-click the VMware HCIA Distributed Switch and select Settings > Edit Settings.
c. Click Uplinks to display the existing uplinks.
d. Click ADD to add the uplinks according to the planning grid table populated in Enable network redundancy across NDC
and PCIe ports and click OK.
2. Skip this task if you are removing or not changing the uplinks.
Prerequisites
Review the planning grid table in Enable network redundancy across NDC and PCIe ports.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. From the VxRail Datacenter menu, right-click VMware HCIA Distributed Switch.
3. Click Add and Manage Hosts... and click Manage host networking.
4. Select all the hosts in the VxRail cluster and click NEXT.
5. From the left-menu, select Manage physical adapters to review the existing VMNICs and uplinks mapping.
6. Use the planning grid table in Enable network redundancy across NDC and PCIe ports to set and update the VMNIC and
uplink mapping.
In the example below:
● VMNIC1 from an NDC-based adapter is unassigned from uplink2.
● VMNIC3 from an NDC-based adapter is unassigned from uplink4.
● VMNIC4 from a PCIe-based adapter is assigned to uplink2.
● VMNIC5 from a PCIe-based adapter is assigned to uplink4.
7. Click NEXT.
8. From the VMware HCIA Distributed Switch > Add and Manage Hosts menu, click Manage VMkernel adapters. Do not
migrate any network on the Manage VMkernel adapters window.
9. Click NEXT.
10. From the Migrate VM networking window, click NEXT > FINISH.
Monitor the network migration progress until it is complete.
Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the VMNICs that are assigned and unassigned to the
VMware VDS port groups. Identify the ending uplinks from the planning grid table and the VMware VDS port groups that are
assigned to each uplink.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware HCIA distributed switch.
3. Select a VMware VDS port group to modify for the network reconfiguration.
4. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
5. Move any uplinks that are unused or removed as part of the reconfiguration process to Unused uplinks, and click OK.
6. Select the next VMware VDS port group that you plan to modify for the network reconfiguration.
7. Move the uplinks up and down to align with the ending network configuration from the planning grid table.
8. Move any uplinks that are unused or removed as part of the reconfiguration process to Unused uplinks, and click OK.
Prerequisites
Go to Enable network redundancy across NDC and PCIe ports to identify the uplinks that are removed from the VMware VDS port
groups. Identify any uplinks that are listed in the starting network configuration column of the planning grid table but are not
listed in the ending network configuration.
Steps
1. From the VMware vSphere Web Client, select Networking.
2. Right-click the VMware VDS.
3. Select Settings > Edit Settings.
4. Click Uplinks.
5. Next to each uplink you want to remove, click REMOVE, and then click OK.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select vCenter > Hosts and Clusters.
3. Select the VxRail cluster to perform the network migration.
4. Select a host in the VxRail cluster and select the Summary view.
Prerequisites
● Complete the VxRail cluster Day 1 bring-up.
● Verify that there are no critical alarms in the cluster.
● Verify that the VMware vSAN is in a healthy state.
● Configure the DCB-capable switch. Verify that the RDMA-enabled physical NIC is configured for lossless traffic.
● To ensure a lossless SAN, configure the data center bridging (DCB) mode as IEEE.
○ Set the priority flow control (PFC) value to CoS priority 3, per VMware.
○ See the operation guide from the physical switch vendor to set up the outside network environment to match the data
center cluster network strategy and topology.
● Disable the VMware vSAN large-scale cluster support (LSCS) feature. VxRail enables VMware vSAN LSCS as a default
setting during the VxRail cluster setup. LSCS conflicts with the VMware vSAN RDMA and must be disabled to use the
VMware vSAN RDMA.
Steps
1. To place the host into maintenance mode and configure advanced settings, perform the following:
a. Enter System Setup and select Device Settings.
b. Select a device.
f. For VxRail releases 8.0.xxx and earlier, disable large cluster support at the cluster level.
/etc/init.d/port-lldpd disable
b. Place the host into maintenance mode and individually reboot each host.
For Mellanox NIC, see the vendor documentation on disabling the hardware DCBx from Mellanox for VMware.
5. To enable RDMA support in the VMware vSAN service, perform the following:
a. Select Configure > vSAN > Services.
b. Under the Network section, click EDIT and enable the RDMA support.
Verify that there are no critical alarms in the VxRail cluster. Verify that the VMware vSAN and RDMA configurations are
healthy.
c. To verify the VMware vSAN health and the RDMA configuration health status, select Monitor > vSAN > System
Health > RDMA Configuration Health.
d. Under RDMA Configuration Health, check the health status.
Prerequisites
● Complete the VxRail cluster Day 1 bring-up.
● Verify that there are no critical alarms in the cluster.
● Verify that the VMware vSAN is in a healthy state.
● Configure the DCB-capable switch. Verify that the RDMA-enabled physical NIC is configured for lossless traffic.
● To ensure a lossless SAN, configure the data center bridging (DCB) mode as IEEE.
○ Set the priority flow control (PFC) value to CoS priority 3, per VMware.
○ See the operation guide from the physical switch vendor to set up the outside network environment to match the data
center cluster network strategy and topology.
● Disable the VMware vSAN large-scale cluster support (LSCS) feature. VxRail enables VMware vSAN LSCS as a default
setting during the VxRail cluster setup. LSCS conflicts with the VMware vSAN RDMA and must be disabled to use the
VMware vSAN RDMA.
Steps
1. The VMware vSAN interface for LSCS is not present in the VMware vCenter Server. VMware has re-enabled the SDK
interface to allow configuration of the VMware vSAN LSCS feature when setting up VMware vSAN RDMA. See KB 2110081
and follow the SDK steps for large-scale configurations.
See the Set-VsanClusterConfiguration commands for more information.
2. To place the host into maintenance mode and configure advanced settings, enter:
esxcli system settings advanced set -o /VSAN/goto11 -i 0
a. To adjust the TCP/IP heap size, if needed, enter:
esxcli system settings advanced set -o /Net/TcpipHeapMax -i XXXX
/etc/init.d/port-lldpd disable
For a Mellanox NIC, see the Mellanox vendor documentation on disabling the hardware DCBX for VMware.
5. To enable RDMA support in the VMware vSAN service, perform the following:
a. Select Configure > vSAN > Services.
b. Under the Network section, click EDIT and enable the RDMA support.
Verify that there are no critical alarms in the VxRail cluster. Verify that the VMware vSAN and RDMA configurations are
healthy.
c. To verify the VMware vSAN health and the RDMA configuration health status, select Monitor > vSAN > System
Health > RDMA Configuration Health.
d. Under RDMA Configuration Health, check the health status.
Prerequisites
To set up the satellite node, you must:
● Verify that the VxRail management cluster is deployed.
● Verify that the satellite node is added into a folder that manages the VMware VDS.
Steps
1. Log in to VMware vSphere Web Client as an administrator.
2. From the left-menu, select Networking.
3. Select the Virtual switches tab and locate the VMware standard switch that supports the satellite node.
4. Click Edit Settings.
5. Identify and capture the MTU.
6. Identify and capture the VMNIC that is connected to the VMware VDS.
7. Identify and capture the NIC teaming policy.
b. Identify and capture any port groups and VLANs that are assigned for the guest networks or other management
networks with at least one active port.
9. Select the VMkernel NICs tab and capture the name of each VMkernel NIC and the name of the port group assignment.
10. Exit the VMware ESXi session.
Steps
1. Log in to the VMware vSphere Web Client of the management cluster as an administrator.
2. Select Networking.
3. From the vSphere Client menu, select Inventory.
4. Select the data center that contains the satellite node folder.
5. Right-click the data center and select Distributed Switch > New Distributed Switch.
6. Enter a name for the VMware VDS and click NEXT.
7. Select the latest version that is compatible with the VMware ESXi version on the satellite node and click NEXT.
8. Set the number of uplinks to match the number of uplinks on the satellite node VMware standard switch.
Steps
1. In the VMware vSphere Web Client, select the new VMware VDS.
2. Right-click the VMware VDS and select Settings > Edit Settings.
3. In the Edit Settings window, select Advanced.
4. Set the MTU to match the satellite node VMware standard switch and click OK.
Create the VMware VDS port groups for the satellite node
Create a VMware VDS port group on the VMware VDS that supports satellite node networking. Repeat these steps to add the
new port group to the VMware VDS.
Steps
1. Locate the first port group that was captured on the satellite node on the VMware standard switch.
2. In the VMware vSphere Web Client, select the new VMware VDS.
3. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.
4. Under Name and Location, perform the following:
a. Enter the distributed port group name. The name can be the same as, or correlate with, the port group name on the
satellite node VMware standard switch.
b. Click NEXT.
5. Under Configure Settings, to set the properties of the new port group, perform the following:
a. For the VLAN Type, select VLAN.
b. Enter the VLAN ID.
The VLAN ID must match with the port group VLAN ID on the satellite node VMware standard switch.
c. Select Customize default policies configuration.
6. From Teaming and Failover, set the policy that matches the settings that are captured on the satellite node VMware
standard switch.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select VxRail-Datacenter > VMware HCIA Distributed Switch.
3. Right-click the VMware VDS that supports the satellite node and select Add Hosts....
4. From the Add hosts wizard, enter host information and click ADD HOST.
5. Under Select hosts, select the satellite node.
6. Under Manage VMkernel adapters, to migrate the VMkernel from the satellite node VMware standard switch to the port
groups on the VMware vCenter Server VDS, perform the following:
a. Select the first VMkernel to assign to a port group.
b. Click ASSIGN PORT GROUP.
c. Select the port group from the drop-down.
d. Click ASSIGN.
e. Repeat these steps for the next VMkernel on the list.
7. Under Migrate VM Networking, to migrate the VMs to the new port group on the VMware VDS, perform the following:
a. Select the first VM.
b. Migrate the NIC from the source port group on the satellite node VMware standard switch to the new port group on the
VMware VDS.
c. Repeat these steps for the remaining VMs in the list.
8. Click FINISH.
Next steps
Verify the VMware VDS.
1. Connect to the VMware vSphere Web Client.
2. Select Home > Hosts and Clusters.
3. Select Configure > Virtual Switches.
4. Select the satellite node and verify the new VMware VDS.
Steps
1. To connect to the VMware VDS, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select Networking.
c. Select the VMware VDS that supports the VxRail cluster that you plan to modify.
2. To identify the number of uplinks that support the VMware VDS, perform the following:
a. From the Home screen, select the Actions drop-down menu.
b. Select Settings > Edit Settings and click the Uplinks tab to view the number of uplinks that are assigned to the
VMware VDS.
The options for the failover settings are based on the number of uplinks.
3. To configure the port group teaming and failover policy, perform the following:
a. From the VMware VDS, select the port group to modify.
b. Select Configure and from the left-menu, click Properties > EDIT.
4. From the left-menu, select Teaming and failover to view the existing port group policy.
5. Select the Load balancing policy that meets the requirements for the network traffic on the port group.
● Use explicit failover order: Uses the highest-order uplink that passes failover detection. There is no load balancing
based on the network traffic. Supported: Yes.
● Route based on source MAC hash: The uplink is selected based on the VM MAC address. There is no load balancing
based on the network traffic. Supported: Yes.
● Route based on physical NIC load: Monitors the network traffic and adjusts overloaded uplinks by moving network
traffic to another uplink. Supported: Yes.
● Route based on IP hash: Has a dependency on the logical link setting of the physical switch port adapters, which is
not supported in VxRail. Supported: No.
The following table lists the supported failover options for the VxRail port groups with four configured uplink ports:
You cannot configure the unused uplinks into the failover order setting.
7. To configure an active/active failover order, perform the following:
a. Select the uplink under Standby uplinks.
b. Use the UP arrow to move the uplink to Active uplinks.
8. To configure an active/standby failover order, perform the following:
a. Under the Active uplinks, select the uplink that is supported to be in standby mode per the supported failover order for
this port group.
b. Use the DOWN arrow to move the uplink to the Standby uplinks setting.
9. To complete the policy update, click OK.
Figure 56. Simple topology with a centrally shared VMware vCenter Server
Telemetry settings
The following table describes the data that is collected and the amount of daily traffic between VxRail Manager and the VMware
vCenter Server:
NOTE: Telemetry settings are different in the API, as shown in the table.
You can manage telemetry settings using the VxRail onboard API, client URL (curl) commands, or VxRail Manager. To
modify telemetry settings using the VxRail onboard API:
● Verify that you have access to the REST API.
● Verify the IP address for the VxRail Manager onboard API.
Prerequisites
Verify that you have the following:
● Username and password for the curl command
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application
Steps
1. To view the telemetry setting, enter:
curl -k -H "Content-Type: application/json" -X GET --user username:password https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier
{"level":"BASIC"}
2. To modify the telemetry level, using the POST request method, enter:
curl -k -X POST -H "Content-type: application/json" -d '{"level":"BASIC"}' --user management:tell1103@ https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier
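To confirm that the change was applied, you can rerun the GET request from step 1; the response should reflect the level that you set. For example:
curl -k -H "Content-Type: application/json" -X GET --user username:password https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier
{"level":"BASIC"}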
Prerequisites
Verify that you have the following:
● Four 14G R640 nodes with 4 x 10 GbE NICs
● VxRail cluster with VMware vCenter Server
● Twenty-five running workload VMs, without I/O
● 10+ alarms
● Remote support connectivity enabled
● One market application
Prerequisites
Before you change the IP address of the witness sled, perform the following:
● Do not update the witness sled DNS entry with the new IP address until instructed to in the steps.
● Verify the health status of the sled to avoid running in a degraded state.
● Verify that the DNS mapping is correct.
● Verify that the health monitoring status is disabled.
Steps
1. To shut down the witness VM, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. From the VxRail cluster left-menu, select VxRail data center and click Witness Folder Cluster. Select the witness sled
and click VMware vSAN Witness Appliance.
c. Click the shutdown icon at the upper right corner of the screen and click YES to confirm.
2. To remove the witness sled from the VMware vCenter Server, perform the following:
a. Right-click the sled and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the sled again and select Remove from Inventory.
3. To change the IP address for the witness sled, perform the following:
a. Log in to the VMware ESXi of the witness sled through the management IP address.
b. From the Networking left-menu, under the VMkernel NICs tab, click vmk2.
d. When the wizard opens, configure the new IP address and click Save.
NOTE: The new management IP address disconnects the session immediately when you click Save. To reconnect, use the
updated IP address, or change the IP address from the ESXi shell command line through the iDRAC remote console.
4. Determine how the DNS is managed before you update your DNS server with a new DNS mapping and perform one of the
following:
● For a customer-managed DNS server, add a DNS entry where the new witness sled IP address is mapped to the
original witness sled FQDN. Delete the old entry of the witness sled. Continue to step 5.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.
5. To clear the DNS cache on the VMware vCenter Server, perform the following:
a. Using SSH, log in to the VxRail vCenter Server as root.
b. To restart the DNS service, enter:
systemctl restart dnsmasq
c. To verify the FQDN mapping to the new witness sled IP address, enter:
dig <witness_sled_fqdn> +short
6. To add the witness sled to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host.
c. To add the witness sled, use the witness sled FQDN.
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock
[backend]
max_workers = 12
[restservice]
bind = 20.12.91.202---Original witness sled IP address in the platform.conf file
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-services.sock
[backend]
max_workers = 12
[restservice]
bind = 20.12.91.203---New witness sled IP address
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-linzhi.sock
[backend]
max_workers = 12
[restservice]
bind = 20.12.91.202---Original witness sled IP address in the platform.conf file
[general]
log_level = INFO
log_target = syslog
listener_type = unix
listener_address = /tmp/platform-linzhi.sock
[backend]
max_workers = 12
f. To restart the platform service for versions earlier than VxRail 8.0.300, enter:
/etc/init.d/vxrail-pservice restart
To restart the platform service for versions earlier than VxRail 8.0.300, enter:
esxcli daemon control restart -s platformsvc
You can also use the API to get the moid ID by entering:
curl -X POST --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock \
  -H "accept: application/json" -H "Content-Type: application/json" \
  -d '{"query":"{ multiVirtualmachines(name: \"VMware vSAN Witness Appliance\", hostname:\"<witness_sled_hostname>\", datacentername: \"<datacenter_name>\", clustername: \"<cluster_name>\", host:\"<vcenter_hostname>\", username: \"<vcenter_admin_username>\", password: \"<vcenter_admin_password>\") {moid config { name uuid }}}"}' \
  http://localhost/rest/vxm/internal/do/v1/vm/query
To use the API to get the id of the witness VM in the VxRail database:
Using the id from the output, to update the witness VM moid, enter:
Prerequisites
Verify that DNS is configured properly; otherwise, this task may fail.
Before you change the hostname of the witness sled, verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Health monitoring status is disabled.
Steps
1. Perform one of the following to determine how the DNS is managed:
● If the DNS server is customer-managed, add a DNS server entry where the new FQDN is mapped to the original witness
sled IP address. Continue to step 2.
● If the DNS server is VxRail managed, use SSH to log in to the VxRail Manager as mystic and su to root.
a. Use an editor to open the /etc/hosts file.
b. To add a DNS entry where the new FQDN is mapped with the original witness sled IP address, enter:
<sled_ipaddr> <new_sled_fqdn> <new_sled_host>
c. Click the Shutdown icon at the upper right corner of the screen. Click YES to confirm.
3. To remove the witness VMware ESXi host from the VMware vCenter Server, perform the following:
a. Right-click the witness sled and select Maintenance Mode > Enter Maintenance Mode .
b. Right-click the witness sled again and select Remove from Inventory.
4. To change the hostname for the witness sled, perform the following:
a. Log in to the VMware ESXi host client of the witness sled through the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
d. When the wizard opens, enter the new Host name and click Save.
10. Using the moid, to set the witness VM moid to VxRail, enter:
psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where
server_type='WITNESS';"
You can also use the API to get the id of the witness VM in the VxRail database:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm
where server_type='WITNESS';"
Using the id from the output, to update the witness VM moid, enter:
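A minimal sketch of one possible update-by-id command, assuming the system.system_vm columns shown in the query above (substitute the id and moid values from your output):
psql -U postgres vxrail -c "update system.system_vm set moref_id='<vm_moid>' where id='<id>';"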
Prerequisites
● Disable the stretched cluster.
● Remove the Witness VM.
NOTE: The VxRail-managed Witness VM is also known as the mapping host. The VMware ESXi operating system is
running on this VM. When the VxRail-managed Witness VM is added to the witness folder, it is displayed as a VMware
ESXI host. If the VxRail-managed Witness VM IP address is changed, the VMware ESXi host IP address is also changed.
The VMware ESXi host IP address must be removed and added back using the new IP address.
Steps
1. Log in to the VMware vSphere Web Client as an administrator and select the Inventory icon.
2. To verify the health status, select a cluster and select the Monitor tab. Select vSAN > Skyline Health.
3. To disable the stretched cluster, perform the following:
a. Select the VxRail cluster and click the Configure tab.
b. Select vSAN > Fault Domains.
c. From the Fault Domains window, click DISABLE STRETCHED CLUSTER and click REMOVE.
4. To remove the Witness VM mapping host, perform the following:
6. To add the Witness VM as a host with the new IP address, perform the following:
NOTE: There is no procedure to change the management IP of a physical node where the customer-managed Witness
VM is running.
c. Accept the default entries for Connection settings, Host summary, Assign License, and Lockdown mode and click
NEXT.
d. For VM location wizard, select the folder location and click NEXT.
e. From the Witness Folder cluster, right-click the witness host and select Maintenance Mode > Exit Maintenance
Mode.
7. To update the VxRail Manager database, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Connect to the database and enter:
psql -U postgres vxrail
c. To query the witness sled IP address, enter:
select * from configuration.configuration where key = 'witness_vm_host';
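If the stored value must then be updated to the new witness sled IP address, one possible form, a sketch assuming the key/value layout used elsewhere in this guide, is:
update configuration.configuration set value='<new_witness_sled_ipaddr>' where key='witness_vm_host';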
Name  IPv4 Address    IPv4 Netmask   IPv4 Broadcast   Address Type  Gateway     DHCP DNS
----  --------------  -------------  ---------------  ------------  ----------  --------
vmk0  20.12.91.112    255.255.255.0  20.12.91.255     STATIC        20.12.91.1  false
vmk1  192.168.101.33  255.255.255.0  192.168.101.255  STATIC        20.12.91.1  false
e. Select the rootFolder: datacenter and examine the following values:
● childEntity: datacenter-3
● guest: guest
Figure 78. VM
Prerequisites
Verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Health monitoring status is disabled.
Steps
1. To remove the witness VMware ESXi VM host from the VMware vCenter Server, perform the following:
a. Right-click the witness VM host and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness VM host again and select Remove from Inventory.
c. Go to step 4.
2. To change or add the hostname for the witness VM host, perform the following:
a. For the witness VM host, log in to the VMware ESXi host client using the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
c. On the Default TCP/IP stack window, click Edit Settings.
d. When the wizard opens, enter a hostname and click Save.
3. To add the witness VM host to the VMware vCenter Server, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Right-click the Witness Folder cluster and select Add Host....
c. Use the new witness VM FQDN to add the witness VM host.
d. For the host lifecycle options, select Compose a new image.
e. Follow the steps in the wizard to add the witness VM host.
f. From the Witness Folder cluster, right-click the host and select Maintenance Mode > Exit Maintenance Mode .
4. To verify the health status, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a cluster and select Monitor > vSAN > Skyline Health to retest and verify that the VxRail cluster is in a healthy
state.
5. After you change the hostname to update witness VM moid, perform the following:
a. Go to the Managed Object Browser (MOB) and check the VM moid on the witness sled host:
You can also use the API to get the id of the witness VM in the VxRail database:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm
where server_type='WITNESS';"
Using the id from the output, to update the witness VM moid, enter:
Prerequisites
Verify the following:
● DNS mapping is correct.
● The VxRail cluster is in a healthy state.
● Health monitoring status is disabled.
Steps
1. For a VxRail-managed DNS server, perform the following:
a. Use SSH to log in to the VxRail Manager as mystic and su to root.
b. Use an editor to open the /etc/hosts file.
c. To add a DNS entry where the new FQDN is mapped with the original witness VM IP address, enter:
<vm_ipaddr><new_vm_fqdn><new_vm_host>
You can also use the API to get the id of the witness VM in the VxRail database:
psql -U postgres vxrail -c "select id, uuid, server_type, moref_id from system.system_vm
where server_type='WITNESS';"
Using the id from the output, to update the witness VM moid, enter:
Steps
1. To remove the witness VMware ESXi host from the VMware vCenter Server, perform the following:
a. Right-click the witness host and select Maintenance Mode > Enter Maintenance Mode.
b. Right-click the witness host again and select Remove from Inventory.
2. To change or add the hostname for the witness VM host, perform the following:
a. Log in to the VMware ESXi host client of the witness VM host using the management IP address.
b. From the Networking left-menu, under the TCP/IP stacks tab, click Default TCP/IP stack.
Steps
1. To download the VxRail configuration .xml file from the current configuration report, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select the Inventory icon.
Prerequisites
● Provision a dedicated VLAN or subnet at each data site for witness traffic. The VLAN at each data site should be different.
For example, VLAN 19 at Site-1 on subnet 172.18.19.0/24 and VLAN 20 at Site-2 on subnet 172.18.20.0/24.
● For both sites, create the VLAN on the ToR switches and add to the trunk ports going to the nodes.
Steps
1. To create a port group on each data site, perform the following:
a. Log in to the VMware vCenter Web Client.
b. From the main menu, click the Networking icon.
c. Right-click the VMware VDS and select Distributed Port Group > New Distributed Port Group.
d. In the New Distributed Port Group wizard, enter the name for the port group. Click NEXT.
e. In Configure settings, enter or select the following:
● From the VLAN type drop-down menu, select VLAN.
● Enter the VLAN ID.
● Select Customize default policies configuration and click NEXT.
f. In the Teaming and Failover window, modify the Failover order of the uplinks to match the existing failover order of
the management traffic. Click NEXT.
g. For the remaining steps, accept the default settings by clicking NEXT.
h. In Ready to Complete, review the selections and click FINISH.
6. When using L3 switching for the witnessPg port group, add static routes so that the witness and the VMware ESXi hosts can
communicate. When witness traffic is separated, reset the static routes on each node and the witness host.
a. Enable SSH on the node.
b. To determine the existing static route on the node for the vSAN network (vmk3), enter:
esxcli network ip route ipv4 list
d. To add a static route on the node for the witness traffic network (vmk5), depending on which site the node is associated
with, enter:
● For Site-1, enter:
esxcli network ip route ipv4 add -n <witness_vsan_subnet>/24 -g
<site1_witness_traffic_subnet>
● For Site-2, enter:
esxcli network ip route ipv4 add -n <witness_vsan_subnet>/24 -g <site2_witness_traffic_subnet>
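After the routes are in place, you can optionally confirm that each node can reach the witness over the witness traffic VMkernel interface. A minimal sketch, assuming vmk5 carries the witness traffic and substituting your witness address:
vmkping -I vmk5 <witness_vsan_ipaddr>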
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
You can SSH to VxRail Manager or log in to the VMware vSphere Web Client and launch the VxRail Manager VM on the web
console.
2. To switch to root, enter:
su root
3. To generate the VxRail Manager log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -v
/mystic/generateLogBundle.py --vxm
dellvxm:~ # /mystic/generateLogBundle.py -v
Start to collect log bundle.
types: vxm
The request id for collecting log bundle is
a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa
Start looping a2a3f85e-408a-4de4-96dc-1c8dfd3e17fa until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_41_13.zip
5. For VxRail 8.0.210 and later, click the Configure tab and select VxRail > Support.
6. Under Support, click the Troubleshooting tab, and then click CREATE to select the log types.
7. When finished, select the generated log bundle and click Download.
The <witness> log type is only for a 2-node ROBO environment and does not work in a normal cluster configuration.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware vCenter Server log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -c
/mystic/generateLogBundle.py --vcenter
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -c
Start to collect log bundle.
types: vcenter
The request id for collecting log bundle is
0e9d3cb3-89c3-49d7-921c-b35fed410fe1
Start looping 0e9d3cb3-89c3-49d7-921c-b35fed410fe1 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_18_14_29_49.zip
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip
-rw-rw-rw- 1 root root 246578225 August 5 21:58 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_21_57_54.zip
Steps
1. Log in to the VxRail Manager CLI as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -e
/mystic/generateLogBundle.py --esxi
/mystic/generateLogBundle.py --types esxi
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
2807b8ca-5d84-4578-9409-d6eb5389ff8b
Start looping 2807b8ca-5d84-4578-9409-d6eb5389ff8b until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
-rw-rw-rw- 1 root root 3019014 August 5 22:27 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_05_22_15_46.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
vxm:~ # /mystic/generateLogBundle.py -e
Start to collect log bundle.
types: esxi
The request id for collecting log bundle is
7c260275-1921-4e2f-8408-95d6cef88a35
Start looping 7c260275-1921-4e2f-8408-95d6cef88a35 until request finished.
Failed to generate esxi log bundle on host esx-c.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-a.122-powerx.dell.com due to internal
error. See KB000200163.
Failed to generate esxi log bundle on host esx-b.122-powerx.dell.com due to internal
error. See KB000200163.
Steps
1. Log in to the iDRAC console.
2. To generate an iDRAC log bundle, enter any of the following commands:
/mystic/generateLogBundle.py --idrac
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the platform log bundle, enter any of the following commands:
/mystic/generateLogBundle.py -p
/mystic/generateLogBundle.py --platform
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -p
Start to collect log bundle.
types: platform
The request id for collecting log bundle is
50661fb1-d552-47ef-be8f-e42ffc08d07f
Start looping 50661fb1-d552-47ef-be8f-e42ffc08d07f until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_5291caa7-8938-
fa82-169c-8b010f5d1658_2022-10-08_12_53_48.zip
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VMware ESXi log bundle with node selection, enter any of the following commands:
/mystic/generateLogBundle.py -e 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --esxi --nodes 2C49DN2, 3F89DN2
/mystic/generateLogBundle.py --types esxi --nodes 2C49DN2, 3F89DN2
Wait for the command to finish. The following file path displays:
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
-rw-rw-rw- 1 root root 485734016 August 7 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_09_34_21.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the VxRail Manager and VMware vCenter Server log bundle, enter:
/mystic/generateLogBundle.py -v -c
Wait for the command to finish. The following file path displays:
dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521ffa8e-70f7-793e-
ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
-rw-rw-rw- 1 root root 691083648 August 20 09:34 /tmp/mystic/dc/
VxRail_Support_Bundle_521ffa8e-70f7-793e-ea1a-8ec8db0fb3a3_2022_08_20_05_04_09.zip
Steps
1. Log in to the VMware vSphere Web Client as an administrator or log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the full log bundle, enter any of the following commands:
/mystic/generateLogBundle.py
/mystic/generateLogBundle
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle
Start to collect log bundle.
types: vxm,vcenter,esxi,idrac,platform
The request id for collecting log bundle is
99419c45-3a75-4956-9470-255e94239175
Start looping 99419c45-3a75-4956-9470-255e94239175 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
dellvxm:~ # ls -l /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
-rw-rw-rw- 1 root root 991840517 August 7 14:17 /tmp/mystic/dc/
VxRail_Support_Bundle_525a98a0-8e7b-5849-4946-6bc80cb64731_2022_08_07_14_17_25.zip
If vSAN encryption is enabled, the following warning message displays during ESXi log collection: Failed
to generate esxi log bundle on host <hostname> due to internal error. See KB000200163.
vxm:~ # /mystic/generateLogBundle.py
Start to collect log bundle.
types: idrac,vcenter,platform,vxm,esxi
The request id for collecting log bundle is
8335787c-2641-48d3-9869-675f20489c38
Start looping 8335787c-2641-48d3-9869-675f20489c38 until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_52e92049-182f-40ab-
f117-4103dab9dc16_2023-04-06_22_08_37.zip
Warning
Failed to generate esxi log
bundle on host esx01.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log
bundle on host esx02.poda.powerx.dell.com due to internal error. See KB000200163.
Failed to generate esxi log
bundle on host esx03.poda.powerx.dell.com due to internal error. See KB000200163.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To generate the witness log bundle, enter:
/mystic/generateLogBundle.py -w
/mystic/generateLogBundle.py --witness
Wait for the command to finish. The following file path displays:
dellvxm:~ # /mystic/generateLogBundle.py -w
Start to collect log bundle.
types: witness
The request id for collecting log bundle is
5e4517fc-76f7-400a-85d1-64856a2aa46a
Start looping 5e4517fc-76f7-400a-85d1-64856a2aa46a until request finished.
Collect log bundle successfully.
Please find the generated log bundle here:
/tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-
ebe7df507143_2022_09_21_05_20_34.zip
dellvxm:~ # ls -l /tmp/mystic/dc/VxRail_Support_Bundle_521a4049-edb6-28ef-f7f1-
ebe7df507143_2022_09_21_05_20_34.zip
The <witness> log type is only for a 2-node ROBO environment and does not work in a normal cluster
configuration.
The witness log bundle is not in the full log bundle collection option. The witness log bundle collection must be performed
separately.
Prerequisites
Be familiar with UNIX and Linux commands and obtain root credentials.
log_destination='syslog'
syslog_facility='LOCAL0'
syslog_ident='postgres'
c. Save changes.
3. To reload the configuration file, as root, enter:
#systemctl reload postgresql
b. Enter:
c. Enter:
#cat /var/log/messages | grep postgres
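As an optional extra check, not part of the documented task, you can also confirm the active value directly from PostgreSQL:
#psql -U postgres -c "show log_destination;"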
Complete all tasks to ensure that the PostgreSQL log destination is the same as on the source VxRail Manager after you run the
vxm_backup_restore.py script.
Prerequisites
Be familiar with UNIX and Linux commands and obtain root credentials.
Steps
1. To create a certificate using an existing key, enter:
#cd /var/lib/pgsql
#DOMAIN=`hostname -d`
#SHORT_NAME=`hostname -s`
#openssl req -new -nodes -out new-server.csr -keyout new-server.key -subj "/CN=${SHORT_NAME}.${DOMAIN}/O=vxrail"
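If your process self-signs the request rather than submitting it to a certificate authority (an assumption; follow your organization's certificate process), one possible next command is:
#openssl x509 -req -in new-server.csr -signkey new-server.key -out new-server.crt -days 365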
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. For VxRail 8.0.210 and later, to import a VMware vCenter Server certificate into the VxRail Manager trust store using a
script, perform the following:
a. Use SSH to log in to VxRail Manager and switch to root.
b. To run the import script, enter:
c. From the Inventory icon, select the VxRail cluster and click the Configure tab.
d. Select VxRail > Security > Certificates.
e. Click ALL TRUST STORE CERTIFICATES.
2. For VxRail versions earlier than VxRail 8.0.210, to import a certificate, obtain the fingerprint list from the VxRail Manager
trust store, perform the following:
a. Log in to the VxRail onboard API documentation at https://<vxm_ipaddr>/rest/vxm/api-doc.html.
b. From the VxRail REST API left-menu, select Certificates > Get a list of fingerprints retrieved from....
c. Enter the username and password, and then click Send Request.
You can also place only the certificate content in the body.
For example:
"-----BEGIN CERTIFICATE-----\nMIIEHzCCAwegAwIBAgIJANx5901VXVVVMA0GCSqGS
Ib3DQEBCwUAMIGaMQswCQYD\nVQQDDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZA
EZ\nFgVsb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNV\nBAoME2M0LXZjLnJh
Y
2tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVl\ncmluZzAeFw0yMjAzMjcwNjA3NTVaFw0zMjAzM
j
QwNjA3NTVaMIGaMQswCQYDVQQD\nDAJDQTEXMBUGCgmSJomT8ixkARkWB3ZzcGhlcmUxFTATBgoJkiaJk/
IsZAE
ZFgVs\nb2NhbDELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExHDAaBgNVBAoM\nE2M0LXZjLnJhY
2
tMMDMubG9jYWwxGzAZBgNVBAsMElZNd2FyZSBFbmdpbmVlcmlu\nZzCCASIwDQYJKoZIhvcNAQEBBQADggEPAD
C
CAQoCggEBALSoNvUmgFYouBS6qjgp\nwb8NZdeT1Gv4r2/wbWNr332iP1A/ffv5Kq66AbaaNDu+0G6NSsdh/
IPD
I31YtaAP\n0VN7xvwuUJsYeCCwzldQE3tm/M4Xe0h/Tw//GodYRIkC/
5uYxKxm4hRCPu7Qvs8/\n2q1ypGclpzj
5U5lXOoxHy4JsmX9Argqee3F0mT9l0bHqGBlNu+cWtK0Hwh7eTaUj\nyhJ+pHVf8SHvQQnxIYSlo1e0o3lQnGv
+
TXcLctbKzmsHMPVjYOletqs/
9aCSsEgO\ncxhjSIxGwwgRI5BLGhakoLXHznyWsJ81vc0TBvMock2hPOV7VOhGp
NibBMB6Fz+j\nC3cCAwEAAaNmMGQwHQYDVR0OBBYEFCaeddsZQeRukQL/
pfUX2MbCFk30MB8GA1Ud\nEQQYMBaB
DmVtYWlsQGFjbWUuY29thwR/AAABMA4GA1UdDwEB/wQEAwIBBjASBgNV\nHRMBAf8ECDAGAQH/
AgEAMA0GCSqGS
Ib3DQEBCwUAA4IBAQBbbnY6I8d/qVCExT89\nthbae5n81hzFFtL0t36HzmSkcCLZnU/
w8cWuILabJCSYbJYREG
cGrvKkplF9Bfsp\nw/
u4Y1nwHrLWmfX1spNWgEWFGbSzE2qxFLIozNBKcMS1+CvZP6fIc1CfqjvMTEt2\nyNGbR
+gtBG5Are3K6VMZPihSCcWqu7XMsX9yCVdpOFCbV5m27JxYMwleOA220io6\nI3PJVAvCsRNoaBu7UiWEmjAsq
j
0m1v4+c3XG+2QquJ6CGHrfgoxGQDormUXGbxvp\neUq86TgxcbH76LzmLTywJzQ/
DFYm3bBHOgzCH2F0Ra7jz46
gnuuOPqWtJ4pU1Ghj\nm2rf\n-----END CERTIFICATE-----"
Prerequisites
● Verify that the VMware ESXi host network is available when you replace the VMware ESXi host certificates into VxRail
Manager.
● Obtain the root password.
Steps
1. Log in to the VxRail Manager as mystic.
2. To switch to root, enter:
su root
3. To replace certificates on a node, enter:
cd /mystic/ssl/
For versions earlier than VxRail 8.0.300, enter: python certificate_replacement.py -sn <node_sn1> <node_sn2>
For VxRail 8.0.300 and later, enter: mcp_python certificate_replacement.py -sn <node_sn1> <node_sn2>
The updated certificates are stored under the /var/lib/vmware-marvin/trust/host directory in VxRail Manager. If
a host fails, check the failed host network using the failed host's serial number.
When the update is complete, the following table shows the results:
4. To manually import the VMware vCenter Server SSL certificate on the VxRail Manager, see KB 000077894.
Prerequisites
Verify that the VMware ESXi host network is available during the replacement of the VMware ESXi host certificates into VxRail
Manager.
The VxRail-managed VMware vCenter Server manages the VxRail 8.0.xxx and later. See the VxRail 8.0.x Support Matrix for a
list of supported versions.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. For VxRail 8.0.210 and later, go to step 14.
2. For Enhanced Link Mode, to retrieve the new CA certificates from the VMware vCenter Server, perform the following:
a. Log in to the VMware vCenter Server.
b. Click Download trusted root CA certificate on the bottom-right corner or right-click the link and save it as a ZIP file.
c. A download.zip file that contains the CA certificates (.<digit> files) and the revocation lists (.r<digit> files) is
downloaded to your local machine.
NOTE: The revocation files are not used in this task.
3. Use FTP or SCP to transfer the download.zip to the VxRail Manager and select the target directory such as /tmp.
4. SSH to the root account in VxRail Manager. To extract the download, enter:
cd /tmp
unzip download.zip
cd certs
ls *
9. Select the VxRail cluster and click the Configure tab. Select VxRail > Health Monitoring and enable the health monitoring
status.
10. To restart the marvin and runjars services, enter:
service vmware-marvin restart
11. To change the permission on the new certificate file to -rw-r--r--, enter:
chmod 644 /var/lib/vmware-marvin/trust/lin/*
12. To restart the ms-day2 service, obtain the root credentials and switch to root by entering:
su root
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium scale deployment/ms-day2 --replicas=1
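To confirm that the ms-day2 deployment has scaled back up (an optional check, not part of the documented step), you can run:
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n helium get deployment ms-day2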
13. To update the VMware ESXi host certificates in VxRail Manager, enter:
cd /mystic/ssl/
For versions earlier than VxRail 8.0.300, enter: python certificate_replacement.py
If the default VMware vCenter Server management account does not have sufficient permissions to get the VMware ESXi host
certificates, provide another VMware vCenter Server account. For versions earlier than VxRail 8.0.300, enter:
python certificate_replacement.py -u. For VxRail 8.0.300 and later, enter: mcp_python certificate_replacement.py -u.
Next steps
For more information about replacing certificates, see KB 77894.
See Managing Certificates Using the vSphere Certificate Manager Utility.
Prerequisites
● Back up the VMware vCenter Server in the VMware SSO domain.
● Unregister products from the VMware vCenter Server. Reregister the products after the FQDN change is complete.
● Delete the VMware vCenter High Availability (vCHA) configuration and reconfigure after the FQDN change is complete.
● If you rename the VMware vCenter Server, rejoin it to the Microsoft AD.
● Verify that the FQDN or hostname resolves to the provided IP address (DNS A records).
● Do not unregister the VxRail Manager VMware vCenter Server plug-in.
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Steps
1. For internal DNS, to configure VxRail Manager to add the VMware vCenter Server Appliance FQDN DNS record, perform the
following:
172.16.10.211 vc.testfqdn.local vc
NOTE: If the new FQDN is in a different top-level domain, the new FQDN of the top-level domain and the IP address
CIDR on the auth-zone in /etc/dnsmasq.conf must be updated or added.
d. To locate and add a line for the new auth-zone with the new FQDN and IP CIDR, enter:
auth-server=127.0.0.1,eth0
auth-zone=vv003.local,172.16.0.0/16
auth-zone=vv003.local,fc00::20:18:0:0/96,fc00::20:19:0:0/96
g. Review the changes that are made to the VMware vCenter Server's FQDN and IP address settings.
h. Acknowledge that the VMware vCenter Server backup is performed.
Perform these additional steps after the FQDN of the VMware vCenter Server is changed. Do not unregister the VxRail
VMware vCenter Server plug-in.
3. Wait for the FQDN change procedure to complete.
After the changes are complete, an alert displays and the VAMI on port 5480 automatically redirects within
10 seconds. Click Redirect Now to redirect immediately instead of waiting.
4. Log in to the VMware vCenter Server as root on port 5480 and confirm that the configuration is complete.
5. To renew the node certificates in the VMware vSphere Web Client, perform the following:
a. From the Inventory icon, select a VxRail cluster, and then select a host within the cluster.
b. Click the Configure tab and select System > Certificate.
c. Under Certificate Management, click Renew.
NOTE: Each certificate must be updated manually on each node, including the witness node for certain clusters.
6. To restart the vxrail-platform-service or platformsvc for each node, perform the following:
a. From the Inventory icon, select a VxRail cluster, and then select a host within the cluster.
b. Click the Configure tab.
c. For versions earlier than VxRail 8.0.300, select System > Services and select vxrail-platform-service to restart
the service. For VxRail 8.0.300 and later, select System > Services and select platformsvc to restart the service.
7. (OPTIONAL) To update the VxRail Manager database for the TLD change, perform the following:
a. Connect to the database and enter:
psql -U postgres vxrail
b. To confirm your existing TLD, enter:
select * from configuration.configuration where key='system_tld';
c. To update your new FQDN value, enter:
update configuration.configuration set value='new_FQDN' where key='system_tld';
d. To verify your new FQDN, enter:
select * from configuration.configuration where key='system_tld';
8. To update the VMware vCenter Server Appliance FQDN information in the VxRail Manager using the root credentials,
perform the following:
a. To obtain the existing VMware vCenter Server host value, enter:
curl --location --request GET 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vcenter_host' \
  --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock
b. To update the VMware vCenter Server host value with the new FQDN, enter:
curl --location --request PUT 'http://127.0.0.1/rest/vxm/internal/configservice/v1/configuration/keys/vcenter_host' \
  --header 'Content-Type: application/json' --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock \
  --data-raw '{"value": "<New_VC_FQDN>"}'
{
"vc_info": {
"host": "<New_VC_FQDN>",
"username": "[email protected]",
"password": "<password>",
"port": 443
},
"auto_accept_vc_cert": true
}
EOF
{
"vc_info": {
"host": "vcnew1.testfqdn.local",
"username": "[email protected]",
"password": "password",
"port": 443
},
"auto_accept_vc_cert": true
}
EOF
11. Clear the cache to ensure that the VxRail Manager information is updated correctly.
12. To generate a base64 string for the username:password, enter:
# echo -n "[email protected]:password" | base64
# YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk
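The base64 string can be used for HTTP Basic authentication in subsequent API calls (an assumption; the exact call is not listed here). A sketch that reuses the telemetry endpoint shown earlier in this guide:
curl -k -H "Authorization: Basic YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsOnBhc3N3b3Jk" https://<vxrailmanager_ipaddr>/rest/vxm/v1/telemetry/tier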
NOTE: You can use this step to delete the old record for the IPv6 or dual-stack environment.
Next steps
For more information, see:
● KB 77894 to manually import the VMware vCenter Server SSL certificate on the VxRail Manager.
● Managing Certificates Using the vSphere Certificate Manager Utility
● Changing your vCenter Server's FQDN
This procedure is intended for Dell Technologies customers, employees, and partners who are authorized to work on a VxRail
cluster.
Prerequisites
● Disable the remote support connectivity, if enabled.
● Verify that the VxRail cluster is in a healthy state.
● Add new nodes into the cluster before running the node removal procedure to avoid any capacity or node limitations.
● Verify that the VxRail cluster has enough nodes remaining after the node removal to support the current Failure to Tolerate
(FTT) setting:
● The following table lists the minimum number of VMware ESXi nodes in the VxRail cluster before node removal:
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select a cluster and click the Monitor tab.
3. Select vSAN > Skyline Health.
4. If alarms display, acknowledge them and select Reset to Green at the node and cluster levels before you remove the node.
Steps
1. To view capacity for the cluster, log in to the VMware vSphere Web Client as administrator, and perform the following:
a. Under the Inventory icon, select the VMware vSAN cluster and click the Monitor tab.
b. Select vSAN > Capacity.
2. To check the impact of data migration on a node, perform the following:
a. Select vSAN > Data Migration Pre-check.
b. From the SELECT OBJECT drop-down, select the host.
c. From the vSAN data migration drop-down, select Full data migration and click PRE-CHECK.
3. To view disk capacity, perform the following:
a. Select the VMware vSAN cluster and click the Configure tab.
b. Select vSAN > Disk Management to view capacity.
Use the following formulas to compute percentage used:
CPU_used_% = Consumed_Cluster_CPU /(CPU_capacity - Plan_to_Remove_CPU_sum)
Memory_used_% = Consumed_Cluster_Memory /(Memory_capacity - Plan_to_Remove_Memory_sum)
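For illustration (hypothetical numbers; substitute the values from the Utilization view), if the cluster consumes 120 GHz of CPU, total capacity is 200 GHz, and the nodes you plan to remove provide 40 GHz, the projected usage is 120 / (200 - 40) = 75 percent. A minimal shell sketch of the same calculation:
# Hypothetical values; replace with figures from your cluster.
consumed_cpu=120      # GHz consumed by the cluster
cpu_capacity=200      # GHz total cluster CPU capacity
remove_cpu=40         # GHz provided by the node(s) planned for removal
echo "scale=1; 100 * $consumed_cpu / ($cpu_capacity - $remove_cpu)" | bc
# Output: 75.0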
4. To view the CPU and memory overview, perform the following:
a. Select the VMware vSAN cluster and click Monitor tab.
b. Select Resource Allocation > Utilization.
5. To check the CPU and memory resources on a node, perform the following:
a. Select the node and click the Summary tab.
b. View the Hardware window for CPU, memory, Virtual Flash Resource, Networking, and Storage.
Prerequisites
Before you remove the node, perform the following steps to place the node into maintenance mode:
1. Log in to the VMware vSphere Web Client as an administrator.
2. Under the Inventory icon, right-click host that you want to remove and select Maintenance Mode > Enter Maintenance
Mode.
3. In the Enter Maintenance Mode dialog, check Move powered-off and suspended virtual machines to other hosts in
the cluster.
4. Next to vSAN data migration, from the drop-down menu, select Full data migration and click GO-TO PRECHECK.
5. Verify that the test was successful and click ENTER MAINTENANCE MODE and click OK.
6. To monitor the VMware vSAN resyncing, click the cluster name and select Monitor > vSAN > Resyncing Objects.
Steps
1. To remove the host from the VxRail cluster, perform the following:
a. Select the cluster and click the Configure tab.
b. Select VxRail > Hosts.
c. Select the host and click REMOVE.
Next steps
To access the SSH, perform the following:
● Log in to the VMware vCenter Server Management console as root.
● From the left-menu, click Access.
● From the Access Settings page, click EDIT and enable SSH.
If a DNS resolution issue occurs after you remove the node, or after you add the same node back into the cluster with a new
IP address, update dnsmasq on the VMware vCenter Server by entering:
# service dnsmasq restart
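To confirm that the record now resolves as expected, you can query the restarted dnsmasq directly, for example:
dig <node_fqdn> +short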
Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select a VxRail host and click the Configure tab.
3. Select VxRail > Hosts.
4. From the Cluster Hosts window, check the hosts that you want to reboot and click REBOOT.
5. For Reboot Hosts, select Reboot Now and click Next.
6. On the Prechecks window, view the prechecks and click NEXT.
7. On the Summary window, click REBOOT NOW.
Prerequisites
To view when the nodes were last rebooted, perform the following:
1. Log in to the VMware vSphere Web Client as an administrator.
Steps
1. To perform an immediate reboot, select the VxRail cluster and click the Configure tab.
2. Under VxRail, select Hosts and view the information in the Last Reboot column.
3. Check the box to select the nodes and click REBOOT.
NOTE: A reboot may take up to 10 minutes.
Prerequisites
● Create a file-based backup.
● Verify that your system meets the minimum software and hardware requirements. See System Requirements for the vCenter
Server Appliance and Platform Services Controller Appliance.
● Download and mount the VMware vCenter Server Appliance Installer. See Download and Mount the vCenter Server Installer.
● To restore a VMware vCenter Server HA cluster, first power off the active, passive, and witness nodes.
● Verify that the target VMware ESXi host is in lockdown or maintenance mode and that it is not part of a fully automated
DRS cluster.
● Check if the DRS cluster of a VMware vCenter Server inventory has a VMware ESXi host that is not in lockdown or
maintenance mode.
● Configure the forward and reverse DNS records for the IP address before you assign a static IP address to the VMware
vCenter Server Appliance.
● Power off the backed-up VMware vCenter Server.
Steps
1. Log in to the VMware vSphere Web Client as an administrator.
2. Select the Inventory icon.
3. Right-click the VxRail cluster and select Deploy OVF Template to launch the wizard.
4. From Select an OVF template, select Local file and then click UPLOAD FILES.
5. Select the VMware vCenter Server OVA file and click NEXT.
6. Enter a VM name and click NEXT.
7. Select the node where the VMware vCenter Server is installed and then click NEXT.
8. Verify that all details are correct. Ignore certificate warnings and click NEXT.
9. Accept all license agreements and click NEXT.
10. Select the appropriate configuration for the VMware vCenter Server environment and then click NEXT.
11. Select the VxRail vSAN data store storage and then click NEXT.
12. Select the VMware vCenter Server Network as the Destination Network.
13. Enter the following network configurations in Customize template based on the network requirements of the end user:
14. Verify that the setup details are correct and then click FINISH.
15. Locate the host from the VMware vCenter Server Appliance window.
16. Log in to the VMware ESXi host that the initial VMware vCenter Server is running on and then click Shut down.
17. Access the new VMware vCenter ESXi and then click Power on.
18. Launch the VMware vCenter console and verify the network configurations.
NOTE: If the configuration information fails to deploy successfully, reconfigure it in the VMware vCenter Server
console.
Verify the VMware vCenter IP Address, Subnet Mask, and Default Gateway are correct. If not, update them.
Verify that the DNS configuration is correct. If incorrect, update the correct DNS and hostname.
Save the changes and exit from the VMware vCenter Server after you modify the IP address or DNS configurations.
19. Go to the newly deployed VMware vCenter Server at http://<FQDN>:5480 and click Restore.
b. Click content.
d. Click datacenter.
h. Locate VMware vCenter Server in one host and click VMware vCenter Server Appliance.
i. Click summary.
j. Click config.
m. Log in to the VMware vCenter Server and verify that VxRail is connected.
n. For a dual-stack environment, after you complete the process, log in to the VMware vCenter Server Appliance
management interface (VAMI) as root at https://<vCSA_ip_addr>:5480.
o. Go to Networking and verify that the DNS server contains at least one IPv4 and one IPv6 address. If a DNS server
is lost, log in to https://<vCSA_ip_addr>, select a VxRail cluster, click the Configure tab, and then select
VxRail > Settings > DNS server.
p. Apply the current DNS server to sync the DNS server on all the hosts and VMs.
Steps
1. To access the VxRail Manager bash shell, log in to the VMware vSphere Web Client as administrator and perform the
following:
a. From the Inventory icon, select the VxRail Manager VM.
b. On the Summary tab, click LAUNCH REMOTE CONSOLE.
c. Log in to the VxRail Manager as root or log in to the VxRail Manager VM as mystic and su to root.
2. You can create a backup with or without VxRail Manager logs. Select one of the following:
● To create a backup without the VxRail Manager logs, enter:
cd /mystic/vxm_backup_restore/
NOTE: You may not be able to access some of the VxRail features during the backup process because the script
includes restarting the services. Wait two to three minutes until the backup finishes and the services are ready to be
used.
3. To verify that the backup is complete and to list the backup copies, enter:
cd /mystic/vxm_backup_restore/
5. The following steps are only required following a first-run and following an upgrade. After the first backup, back
up the recoveryBundle.zip to the primary data store manually. For the upgraded VxRail, replace the old
recoveryBundle.zip with the new one.
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
6. To back up the recoveryBundle.zip, SSH in to the VxRail Manager VM, log in as mystic and su to root.
a. For the VMware vSAN cluster, enter:
#scp /data/store2/recovery/recoveryBundle.zip
root@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/VxRail_backup_folder/
vxrailmanagement@[hostIP]:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/
VxRail_backup_folder/
Steps
1. From the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Configure tab.
3. Under VxRail Integrated Backup, select the STATUS tab.
4. Click CREATE BACKUP.
Prerequisites
Before you schedule the backup, manually back up the recoveryBundle.zip file to the primary data store. This step is only
required following a first-run and an upgrade. For the upgraded VxRail, replace the old recoveryBundle.zip file with the
new one.
1. To back up the recoveryBundle.zip to the primary data store manually, perform the following:
a. Log in to the VMware vSphere Web Client as an administrator.
b. Select a host and click the Configure tab.
c. Select System > Services.
d. Select SSH and click START.
e. Select ESXi Shell and click START.
2. To back up the recoveryBundle.zip to the primary data store, enter:
SSH root@<host_ipaddr>
root@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/
VxRail_backup_folder/
vxrailmanagement@<host_ipaddr>:/vmfs/volumes/VxRail-Virtual-SAN-Datastore-******/
VxRail_backup_folder/
Next steps
To stop the automatic backup, enter:
cd /mystic/vxm_backup_restore/
For versions earlier than VxRail 8.0.300, enter: python vxm_backup_restore.py -c --period manual
For VxRail 8.0.300 and later, enter: mcp_python vxm_backup_restore.py -c --period manual
CAUTION: If SolVe Online for VxRail is not used to generate procedures, VxRail is at risk for potential data loss.
Firmware upgrades
You can upgrade firmware on VxRail models G560/G56F. For other VxRail models, contact Dell Support. You can also upgrade
firmware on the chassis.
Hardware upgrade/expansion
You can upgrade or expand the following components:
● Convert a 2-node cluster to a 3-node cluster
● Expand a compute node
● Expand a satellite node
● Expand a capacity drive (HDD/SSD)
● Add a disk group
● Upgrade an SSD
● Expand a manual disk
● Add an NVMe disk
● Upgrade system memory
● Upgrade a NIC
● Upgrade the GPU
● Upgrade from a TPM 1.2 to TPM 2.0 module
Software upgrade
Select your VxRail model to upgrade your software. When you perform a software upgrade, you download a bundle that includes
VxRail Manager, which performs the upgrade. VxRail Manager assesses the current software version running on your VxRail and
identifies the differences. Only the sections that are identified as different are upgraded to the new software version.
For connected clusters, VxRail Manager automatically retrieves the recommended upgrade bundle information with the latest
upgrade prechecks. Upgrade prechecks run every 24 hours against the wanted state of the target bundle. An upgrade readiness
LCM modes
VxRail has the following LCM modes which are abstracted by the VxRail API:
● Legacy LCM (ESXCLI) Mode: VxRail orchestrates life cycle management and the continually validated state using the
ESXCLI.
● vLCM Mode (Recommended): VxRail orchestrates lifecycle management and the continually validated state using the
VMware vSphere LCM (vLCM) API. This mode provides additional update capabilities, including Quick Boot and ESXi Live
Patch. If vLCM mode is enabled, you cannot revert to Legacy LCM (ESXCLI) Mode.
For VxRail 7.0.240 and later, there are limited cases where VMware vLCM provides benefits. For VxRail 8.0.210 and later, additional
use cases can leverage vLCM enablement for more capabilities.
Steps
1. From the VMware vSphere Web Client, under the Inventory icon, select your VxRail cluster.
2. Click the Configure tab, then under VxRail, click Updates.
3. Under Updates, click LOCAL UPDATES. Then under VxRail Upgrades, click Plan and Update (Recommended).
4. Under Installer Metadata File, select a metadata file and click UPLOAD. Click UPLOAD again on the confirmation window.
5. After the file is uploaded from the Create Update Advisor Report window, click CREATE.
6. When the report finishes generating, click NEXT, then select the LOCAL UPDATES tab and click CREATE REPORT.
Report Summary
The report summary is comprehensive and contains the following:
● Cluster update readiness status
● Report timestamp
● Cluster name, current and target states
● Update Type
● Cluster update duration estimates
● Insights from the last VxRail backup
● Link to release notes
VxRail Components
The following figure shows VxRail components:
The following figure shows VxRail components with group by component disabled: