
AOS 5.10

vSphere Administration Guide for Acropolis (using vSphere HTML 5 Client)

April 25, 2019
Contents

1.  vCenter Configuration............................................................................................ 3


Using an Existing vCenter Server....................................................................................................................... 3
Creating a Nutanix Cluster in vCenter..................................................................................................3
Adding a Nutanix Node to vCenter.......................................................................................................4
Configuring HA, DRS, and EVC in vCenter Server.......................................................................... 6
vSphere Cluster Settings (Review).................................................................................................................... 9
vSphere HA Admission Control Settings for Nutanix Environment.....................................................10
Disabling SIOC on a Container...........................................................................................................................12
Removing the Hidden Files......................................................................................................................13

2.  VM Management.................................................................................................... 15
VM Migration.............................................................................................................................................................. 15
Migrating a VM to Another Nutanix vSphere Cluster....................................................................15
vStorage APIs for Array Integration.................................................................................................................16
Cloning a VM................................................................................................................................................. 16

3.  Node Management................................................................................................ 17


Shutting Down a Node in a Cluster (vSphere Web Client).....................................................................17
Shutting Down a Node in a Cluster (vSphere command line)...............................................................18
Starting a Node in a Cluster (vSphere Client)............................................................................................. 18
Starting a Node in a Cluster (vSphere command line)........................................................................... 20
Restarting a Node....................................................................................................................................................21
Changing the ESXi Host Password.................................................................................................................. 22
Changing the ESXi Host Name..........................................................................................................................23
Changing CVM Memory Configuration (ESXi)............................................................................................ 23
Nonconfigurable ESXi Components................................................................................................................ 24

4.  vSphere Networking............................................................................................26


Changing a Host IP Address.............................................................................................................................. 26
Configuring Host Networking (ESXi)..................................................................................................26
Reconnecting an ESXi Host to vCenter.............................................................................................28
Networking Components..................................................................................................................................... 28
Selection of Management Interface in ESXi................................................................................................ 30
Selecting a New Management Interface in ESXi............................................................................. 31

5. ESXi Host Upgrade (Manual)........................................................................... 32


ESXi Host Upgrade Process................................................................................................................................32

Copyright.......................................................................................................................34
License......................................................................................................................................................................... 34
Conventions............................................................................................................................................................... 34
Default Cluster Credentials................................................................................................................................. 34
Version......................................................................................................................................................................... 35
1
VCENTER CONFIGURATION
VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix
cluster in vCenter must be configured according to Nutanix best practices.

Using an Existing vCenter Server


Procedure

1. Create a new cluster entity within the existing vCenter inventory and configure its settings
based on Nutanix best practices by following Creating a Nutanix Cluster in vCenter on
page 3.

2. Add the Nutanix hosts to this new cluster by following Adding a Nutanix Node to vCenter on
page 4.

3. Configure HA and DRS by following Configuring HA, DRS, and EVC in vCenter Server on
page 6.

Creating a Nutanix Cluster in vCenter

About this task

Procedure

1. Log on to vCenter with the vSphere Client.

2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click
Host and Clusters and right-click the vCenter and select New Datacenter.

3. Type a meaningful name for the datacenter, such as NTNX-DC and click OK.
You can also create the Nutanix cluster within an existing datacenter.

4. Right-click the datacenter node and select New Cluster.

5. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.

6. Select the Turn on vSphere HA check box.

7. Select the Enable admission control check box.

8. Accept the default values for VM monitoring.



9. If the cluster contains nodes with different processor classes, enable EVC with the lowest
feature set as the baseline. Otherwise, accept the default value and proceed to the next
step.

a. Under Enable EVC for Intel Hosts, select the lowest processor class that is present in
the cluster.
For an indication of the processor class of a node, see the Block Serial field on the
Diagram or Table view of the Hardware Dashboard in the Nutanix web console.

10. Review the settings and then click OK.

What to do next
1. To configure the admission control policy according to your availability configuration, go to
Configure > Services > vSphere Availability, click Edit, and select Admission Control.
Select Cluster resource Percentage from the Define host failover capacity by drop-down
menu. Enter the percentage appropriate for the number of Nutanix nodes in the cluster.
For more information about the percentage of cluster resources to reserve as failover
spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment on
page 10.
2. For a cluster, go to Configure > General and click Edit for the Swap file location to verify that
Virtual machine directory (Store the swapfile in the same directory as the virtual machine) is
selected.
3. Add all Nutanix nodes to the vCenter cluster inventory by following Adding a Nutanix Node
to vCenter on page 4.

Adding a Nutanix Node to vCenter

Before you begin


The cluster must be configured according to Nutanix specifications given in Creating a Nutanix
Cluster in vCenter on page 3 and vSphere Cluster Settings (Review) on page 9.

About this task

Tip: Refer to Default Cluster Credentials on page 34 for the default credentials of all cluster
components.

Procedure

1. Log on to vCenter with the vSphere Client.

2. Right-click the cluster and select Add Host.

3. Type the fully-qualified domain name or IP address of the ESXi host in the Host name or IP
address field.

4. Enter the ESXi host logon credentials in the Username and Password fields.

5. Click Next.
If a security or duplicate management alert appears, click Yes.

6. Review the Host Summary page and click Next.

7. Select a license to assign to the ESXi host and click Next.



8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.
Lockdown mode is not supported.

9. Click Finish.

10. Select the host and go to Configure > Networking > TCP/IP configuration.

11. Configure DNS servers by clicking the Pencil icon.

a. Select the Use manual settings check box.


b. Type the host name and domain name in the Host name and Domain fields.
c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields
and click OK.

12. Configure NTP servers.

a. Click Configure > System > Time Configuration.


b. Click the Edit button.
c. Select the Use Network Time Protocol (Enable NTP client) option.
d. Type the NTP server address in the NTP Servers text box.
e. From the NTP Service Startup Policy drop-down menu, select the Start and stop with host
option.
Add multiple NTP servers if required.
f. Click OK.

13. Click Configure > Storage and confirm that NFS datastores are mounted.

14. If HA is not enabled, set the Controller VM to start automatically when the ESXi host is
powered on.

Note: Automatic VM start and stop is disabled in clusters where HA is enabled.

a. Click the Configure > Virtual Machines tab.


b. Select VM Startup/Shutdown and click Edit.
c. Ensure that the Automatically start and stop the virtual machines with the system check
box is selected.
d. If the Controller VM is listed in Manual Startup, click the up arrow to move the Controller
VM into the Automatic Startup section.
e. Click OK.

What to do next
Configure HA and DRS settings by following Configuring HA, DRS, and EVC in vCenter Server
on page 6.



Configuring HA, DRS, and EVC in vCenter Server

Before you begin


Add the node or nodes to vCenter by following Adding a Nutanix Node to vCenter on
page 4.

About this task

Procedure

1. Log on to vCenter with the vSphere Client.

2. Select the Nutanix cluster and click Configure.

3. If vSphere HA and DRS are not enabled, you can enable them from the vSphere DRS and
vSphere Availability tabs.

Note: It is recommended to configure vSphere HA and DRS even if you do not plan to use
the features at this time. The settings are preserved within the vSphere cluster configuration,
so if you decide to enable a feature later, it is pre-configured based on Nutanix best
practices.

4. Configure vSphere HA by navigating to Configure > Services > vSphere Availability and click
Edit.



5. Click the vSphere HA toggle button to enable vSphere HA.

a. Configure the cluster wide host isolation response settings.


b. Select Failures and responses.
c. Select the Power off and restart VMs option from the Response for Host Isolation drop-down
menu.
d. Disable (if enabled) the VM component protection option by changing Datastore with
PDL and Datastore with APD options to Disabled.

Figure 1: Failures and Responses


e. Configure VM restart priority, host isolation response, and VM monitoring setting for all
the Controller VMs.
f. Go to Configure > Configuration > VM Overrides, select the CVM, and click Edit.

Note: If you do not have the Controller VMs listed, click the Add button to ensure that the
CVMs are added to the VM Overrides dialog box.

g. Select Disabled from the VM restart priority and VM Monitoring drop-down menus.



Figure 2: VM Monitoring

6. Configure datastore monitoring.

a. Go to vSphere Availability > Datastore for Heartbeating.


b. Select Use datastores only from the specified list and select the Nutanix datastore (NTNX-
NFS).
c. If the cluster has only one datastore as recommended, click Advanced Options, add an
Option named das.ignoreInsufficientHbDatastore with Value of true, and click OK.

Note: After configuring this setting, you need to clear the Turn on vSphere HA check box
and wait for all the hosts in the cluster to reconfigure HA, and then again enable HA by
selecting the Turn on vSphere HA check box.

7. Configure DRS by navigating to Configure > vSphere DRS and click Edit.

a. Select the check box to enable vSphere DRS.

From the DRS Automation drop-down menu, leave the migration threshold at the default
value of 3 in a fully automated configuration, as recommended for Nutanix deployments.
This configuration automatically manages data locality so that whenever VMs move,
writes always go to one of the replicas on the local node to maximize subsequent read
performance.
b. Click OK.
c. Configure automation level setting of all the Controller VMs.
d. Go to Configure > Configuration > VM Overrides and click Edit.

Note: If you do not have Controller VMs listed, you need to click the Add button to ensure
that the CVMs are added to the VM Overrides dialog box.

e. Change the Automation Level setting of all the Controller VMs to Disabled.
f. Click OK.
g. Go to Configure > vSphere DRS and click Edit.
h. Confirm that Off is selected as the default power management for the cluster.
i. Click OK to close the cluster settings window.

8. Enable EVC on a cluster.

a. Select the cluster in the inventory.


b. Shut down all the virtual machines on the hosts with feature sets greater than the EVC
mode.
Ensure that the cluster contains hosts with CPUs from only one vendor, either Intel or
AMD.
c. Click the Configure tab, select VMware EVC and click Edit.
d. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the cluster,
and click OK.
e. Start the virtual machines in the cluster to apply the EVC.
If you try to enable EVC on a cluster with mismatching host feature sets (mixed processor
clusters), the lowest common feature set (lowest processor class) is selected. Hence, if
VMs are already running on the new host and you need to enable EVC on the host, first
shut down the VMs and then enable EVC.

Note: Do not shut down more than one Controller VM at the same time.

The vCenter configuration is complete. To verify that you have configured all the settings
properly, see the checklist in vSphere Cluster Settings (Review) on page 9.

vSphere Cluster Settings (Review)


The following list provides an overview of the settings that you have already configured. Use
this list to review your configuration.

Note: It is recommended to configure vSphere HA and DRS even if the customer does not plan
to use the features. The settings are preserved within the vSphere cluster configuration, so if
the customer later decides to enable a feature, it is pre-configured based on Nutanix best
practices.

vSphere HA Settings

• Enable host monitoring



• Enable admission control and use the percentage-based policy with a value based on the
number of nodes in the cluster. For more information about the percentage of cluster
resources to reserve as failover spare capacity, see vSphere HA Admission Control Settings
for Nutanix Environment on page 10.
• Set the VM Restart Priority of all Controller VMs to Disabled.
• Set the Host Isolation Response of the cluster to Power Off.
• Set the Host Isolation Response of all Controller VMs to Disabled.
• Set the VM Monitoring for all Controller VMs to Disabled.
• Enable Datastore Heartbeating by clicking Select only from my preferred datastores and
choosing the Nutanix NFS datastore. If the cluster has only one datastore, add an advanced
option named das.ignoreInsufficientHbDatastore with Value of true.

vSphere DRS Settings

• Set the Automation Level on all Controller VMs to Disabled.


• Leave power management disabled (set to Off).

Other Cluster Settings

• Store VM swapfiles in the same directory as the virtual machine.


• Enable EVC in the cluster.

vSphere HA Admission Control Settings for Nutanix Environment


In the event of a node failure, you must ensure that sufficient compute resources are
available to restart all virtual machines that were previously running on the failed node.

Overview
If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA
admission control setting with the appropriate percentage of CPU/RAM to achieve at least N
+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control
with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.

N+2 Availability Configuration


The N+2 availability configuration can be achieved in the following two ways.

• Redundancy factor 2 and N+2 vSphere HA Admission Control Setting Configured


Because Nutanix Distributed File System recovers in the event of a node failure, it is possible
to have a second node failure without data being unavailable if the Nutanix cluster has fully
recovered before the subsequent failure. In this case, an N+2 vSphere HA Admission Control
setting is required to ensure sufficient compute resources are available to restart all virtual
machines.
• Redundancy factor 3 and N+2 vSphere HA Admission Control Setting Configured
If you want two concurrent node failures to be tolerated and the cluster has insufficient
blocks to use block awareness, redundancy factor 3 in a cluster of five or more nodes is
required. In either of these two options, the Nutanix storage pool must have sufficient free
capacity to restore the configured redundancy factor (2 or 3). The percentage of free space
required is the same as the required HA Admission Control percentage setting. In this case,
redundancy factor 3 needs to be configured at the storage container layer. An N+2 vSphere
HA Admission Control setting is also required to ensure sufficient compute resources are
available to restart all the virtual machines.

Note: For redundancy factor 3, a minimum of five nodes is required, which allows two
concurrent node failures while ensuring that data remains online. In this case, the same N+2
level of availability is required for the vSphere cluster to enable the VMs to restart following
a failure.

Table 1: Minimum Reservation Percentage for vSphere HA Admission Control Setting

For redundancy factor 2 deployments, the recommended minimum HA admission control
setting percentage is marked with a single asterisk (*) in the following table. For redundancy
factor 2 or redundancy factor 3 deployments configured to tolerate multiple non-concurrent
node failures, the minimum required HA admission control setting percentage is marked with
two asterisks (**) in the following table.

Nodes     Availability Level
          N+1      N+2      N+3      N+4

1         N/A      N/A      N/A      N/A
2         N/A      N/A      N/A      N/A
3         33*      N/A      N/A      N/A
4         25*      50       75       N/A
5         20*      40**     60       80
6         18*      33**     50       66
7         15*      29**     43       56
8         13*      25**     38       50
9         11*      23**     33       46
10        10*      20**     30       40
11        9*       18**     27       36
12        8*       17**     25       34
13        8*       15**     23       30
14        7*       14**     21       28
15        7*       13**     20       26
16        6*       13**     19       25
17        6        12*      18**     24
18        6        11*      17**     22
19        5        11*      16**     22
20        5        10*      15**     20
21        5        10*      14**     20
22        4        9*       14**     18
23        4        9*       13**     18
24        4        8*       13**     16
25        4        8*       12**     16
26        4        8*       12**     16
27        4        7*       11**     14
28        4        7*       11**     14
29        3        7*       10**     14
30        3        7*       10**     14
31        3        6*       10**     12
32        3        6*       9**      12

The table also represents the percentage of the Nutanix storage pool that should remain free
to ensure the cluster can fully restore the redundancy factor in the event of one or more node
failures, or even a block failure (where three or more blocks exist within a cluster).

Block Awareness
For deployments of at least three blocks, block awareness automatically ensures data
availability when an entire block of up to four nodes configured with redundancy factor 2
becomes unavailable.
If block awareness levels of availability are required, the vSphere HA Admission Control setting
needs to ensure sufficient compute resources are available to restart all virtual machines. In
addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2
to all data.
The vSphere HA minimum availability level should be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes.
In the event of a failure, a non-uniform node count might compromise block awareness or the
ability to restore the redundancy factor, or both.
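
To make the sizing guidance in this section easier to apply in scripts, the following Python sketch summarizes it: it returns the availability level (the "N+x") to target for the vSphere HA admission control setting, and you then look up the matching percentage in Table 1. The function name and inputs are assumptions made for this illustration; it is not part of any Nutanix tooling.

# Minimal sketch of the sizing guidance above (illustrative only).
def target_availability_level(nodes, redundancy_factor=2, nodes_per_block=None):
    """Return how many concurrent host failures the vSphere HA admission
    control setting should be sized for, per the guidance in this section."""
    if nodes_per_block:
        # Block awareness: the minimum availability level should equal the
        # number of nodes per block.
        return nodes_per_block
    if redundancy_factor == 3:
        # Redundancy factor 3 (five or more nodes) targets N+2 availability.
        return 2
    # Redundancy factor 2: N+1 for clusters of up to 16 nodes, N+2 for larger.
    return 1 if nodes <= 16 else 2

# Examples: a 12-node RF2 cluster is sized for N+1 (8 percent per Table 1);
# a 20-node RF2 cluster is sized for N+2 (10 percent per Table 1).
print(target_availability_level(12))                     # 1
print(target_availability_level(20))                     # 2
print(target_availability_level(8, nodes_per_block=4))   # 4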

Disabling SIOC on a Container


Storage I/O Control (SIOC) in statistics mode is enabled when you create containers on
a Nutanix cluster that is running the ESXi hypervisor.

About this task


It is recommended to disable SIOC on Nutanix because this setting can cause the following issues.

• If SIOC or SIOC in statistics mode is enabled, storage might become unavailable.



• If SIOC is enabled and you are using the Metro Availability feature, you might encounter
issues with activate and restore operations.
• If SIOC in statistics mode is enabled, all the hosts might repeatedly create and delete the
access and .lck-XXXXXXXX files in the .iorm.sf directory in the root directory of the
container.
If you are configuring metro availability on a Nutanix cluster, Nutanix recommends using an
empty container. However, if SIOC is enabled on a container (it is enabled by default), you
must disable SIOC and also delete all the SIOC-related files from the container.
The following error message is displayed if there are files related to SIOC in the container:
Container X on the Remote Site has existing Data.
If SIOC is enabled, the following files are present inside the container.

• .iorm.sf directory
• Two hidden files, namely, .lck-xxx and iormstats.sf.
To resolve this issue, you must first disable storage I/O statistics collection and then remove the
two hidden files.
Perform the following procedure to disable storage I/O statistics collection.

Procedure

1. Log into the vSphere Web Client.

2. Click Storage.

3. Navigate to the container for your cluster.

4. Right-click the container and select Configure Storage I/O Controller.


The properties for the container are displayed. The Disable Storage I/O statistics collection
option is unchecked, which means that SIOC is enabled by default.

5. Select the Disable Storage I/O statistics collection option to disable SIOC, and click OK.

6. Select the Exclude I/O Statistics from SDRS option, and click OK.

Removing the Hidden Files

About this task


Perform the following procedure to remove the hidden files.

Procedure

1. Log in to the ESXi host by using SSH.


You can log on to any host in the Nutanix cluster if the container is mounted on all the hosts,
or log on to the host where the container is mounted.

2. Go to the container by using the following command.


root@esx# cd /vmfs/volumes/container_name

Replace container_name with the name of the container.



3. Remove the contents of the container. For example, to remove the hidden files from ctr1, run
the following commands.
root@esx# cd /vmfs/volumes/ctr1
root@esx# rm -rf .iorm.sf
root@esx# rm -f iormstats.sf
root@esx# rm -f .lck-XXXXXXXX

Changes that you make on one host are applied to all the other hosts in the cluster. Hence,
performing this procedure on one host resolves this issue.
This disables SIOC and removes any data related to SIOC, making your container empty. You
can now use this empty container to configure metro availability.
2
VM MANAGEMENT
VM Migration
You can live migrate a VM to an ESXi host in a Nutanix cluster. Usually this is done in the
following cases:

• Migrating VMs from an existing storage platform to Nutanix.


• Keeping VMs running during a disruptive upgrade or other downtime of the Nutanix cluster.
When migrating VMs between vSphere clusters, the source host and NFS datastore are the ones
presently running the VM. The target host and NFS datastore are the ones where the VM runs
after migration. The target ESXi host and datastore must be part of a Nutanix cluster.
To accomplish this migration, you have to mount the NFS datastores from the target on the
source. After the migration is complete, you should unmount the datastores and block access.

Migrating a VM to Another Nutanix vSphere Cluster

Before you begin


Before migrating a VM to another Nutanix vSphere cluster, verify that you have provisioned the
target Nutanix environment.

About this task


The shared storage feature in vSphere allows you to move both compute and storage
resources from the source legacy environment to the target Nutanix environment at the same
time without disruption. This feature also removes the need to configure any filesystem
whitelists on Nutanix.
You can use the shared storage feature through the migration wizard in the vSphere Web
Client.

Procedure

1. Log on to vCenter with the vSphere Client.

2. Select the VM that you want to migrate.

3. Right-click the VM and select Migrate.

4. Under Select Migration Type, select Change both compute resource and storage.

5. Select Compute Resource and then Storage and click Next.


If necessary, change the disk format to the one that you want to use during the migration
process.

6. Select a destination network for all VM network adapters and click Next.



7. Click Finish.
Wait for the migration process to complete. The process performs the storage vMotion first,
and then creates a temporary storage network over vmk0 for the period of time when the
disk files are on Nutanix.

vStorage APIs for Array Integration


To improve the vSphere cloning process, Nutanix provides a vStorage APIs for Array
Integration (VAAI) plugin. This plugin is installed by default during the Nutanix factory process.
Without the Nutanix VAAI plugin, the process of creating a full clone takes a significant amount
of time because all the data that comprises a VM is duplicated. This duplication also results in
an increase in storage consumption.
The Nutanix VAAI plugin efficiently makes full clones without reserving space for the clone.
Read requests for blocks that are shared between parent and clone are sent to the original
vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file
system allocates storage for those blocks. This data management occurs completely at the
storage layer, so the ESXi host sees a single file with the full capacity that was allocated when
the clone was created.
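
The behavior described above can be illustrated with a small conceptual sketch in Python. This is not Nutanix code and the class is purely illustrative; it only shows the read/write pattern the paragraph describes: reads of unmodified blocks are served from the parent vDisk, and storage is allocated for the clone only as it writes.

# Conceptual sketch of the cloning behavior described above (illustrative only).
class VDisk:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}           # block index -> data written to this vDisk

    def write(self, index, data):
        self.blocks[index] = data  # storage is allocated only when written

    def read(self, index):
        if index in self.blocks:
            return self.blocks[index]
        return self.parent.read(index) if self.parent else None

parent = VDisk()
parent.write(0, "os-image")
clone = VDisk(parent=parent)       # full clone, no space reserved up front
print(clone.read(0))               # "os-image" - served from the parent vDisk
clone.write(1, "app-data")         # only this block is allocated for the clone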

Cloning a VM

Procedure

1. Log on to vCenter with the vSphere Client.

2. Right-click the VM and select Clone.

3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.

4. Select the datastore that contains the source VM and click Next.

Note: If you choose a datastore other than the one that contains the source VM, the clone
operation uses the VMware implementation and not the Nutanix VAAI plugin.

5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.

6. Click Finish.
3
NODE MANAGEMENT
Shutting Down a Node in a Cluster (vSphere Web Client)
About this task

CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2
cluster must be shut down, shut down the entire cluster instead.

Procedure

1. Log on to vCenter with the vSphere Client.

2. If DRS is not enabled, manually migrate all the VMs except the Controller VM to another host
in the cluster or shut down any VMs other than the Controller VM that you do not want to
migrate to another host.
If DRS is enabled on the cluster, you can skip this step.

3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.

4. In the Enter Maintenance Mode dialog box, click OK.


The host gets ready to go into maintenance mode, which prevents VMs from running on this
host. DRS automatically attempts to migrate all the VMs to another host in the cluster.

Note: If DRS is not enabled, you need to manually migrate or shut down all the VMs excluding
the Controller VM. If a VM is not migrated automatically even when DRS is enabled, the cause
might be a VM configuration option that is not available on the target host.

5. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now

Note: Do not reset or shut down the Controller VM in any way other than with the cvm_shutdown
command, which ensures that the cluster is aware that the Controller VM is unavailable.

6. After the Controller VM shuts down, wait for the host to go into maintenance mode.

7. Right-click the host and select Shut Down.


Wait until vCenter Server displays that the host is not responding, which may take several
minutes. If you are logged on to the ESXi host rather than to vCenter Server, the vSphere
Client disconnects when the host shuts down.



Shutting Down a Node in a Cluster (vSphere command line)
Before you begin
If DRS is not enabled, manually migrate all the VMs except the Controller VM to another host in
the cluster or shut down any VMs other than the Controller VM that you do not want to migrate
to another host. If DRS is enabled on the cluster, you can skip this prerequisite.

About this task

CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2
cluster must be shut down, shut down the entire cluster instead.

You can put the ESXi host into maintenance mode and shut it down from the command line or
by using the vSphere Web Client.

Procedure

1. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now

2. Log on to another Controller VM in the cluster with SSH.

3. Shut down the host.


nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr

If successful, this command returns no output. If it fails with a message like the following,
VMs are probably still running on the host.
CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/
maintenance_mode_enter failed with ret=-1

Ensure that all VMs are shut down or moved to another host and try again before
proceeding.
nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr

Replace cvm_ip_addr with the IP address of the Controller VM on the ESXi host.
Alternatively, you can put the ESXi host into maintenance mode and shut it down using the
vSphere Web Client.
If the host shuts down, a message like the following is displayed.
INFO esx-shutdown:67 Please verify if ESX was successfully shut down using
ping hypervisor_ip_addr

4. Confirm that the ESXi host has shut down.


nutanix@cvm$ ping hypervisor_ip_addr

Replace hypervisor_ip_addr with the IP address of the ESXi host.


If no ping packets are answered, the ESXi host is shut down.

Starting a Node in a Cluster (vSphere Client)


About this task



Procedure

1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise,
proceed to the next step.

2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere Client.

3. Right-click the ESXi host and select Exit Maintenance Mode.

4. Right-click the Controller VM and select Power > Power on.


Wait approximately 5 minutes for all services to start on the Controller VM.

5. Log on to another Controller VM in the cluster with SSH.

6. Confirm that cluster services are running on the Controller VM.


nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

Output similar to the following is displayed.


Name : 10.1.56.197
Status : Up
... ...
StatsAggregator : up
SysStatCollector : up

Every service listed should be up.

7. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that
all Nutanix datastores are available.

8. Verify that all services are up on all Controller VMs.


nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848,
10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176,
8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037,
9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886,
8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627,
4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947,
9976, 9977, 10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202,
10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
APLOS UP [10343, 10368, 10369, 10370,
10502, 10503]
Lazan UP [10377, 10402, 10403, 10404]
Orion UP [10409, 10449, 10450, 10474]
Delphi UP [10418, 10466, 10467, 10468]
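
When scripting this verification, the check that every service listed is up can be automated with a short sketch. The following Python example is a minimal illustration, not a Nutanix tool, and it assumes the service lines follow the format shown in the output above (service name, status, PID list).

# Minimal sketch: flag services that do not report UP in "cluster status" output.
import re
import sys

def services_not_up(cluster_status_output):
    """Return the names of services whose status line does not report UP."""
    down = []
    for line in cluster_status_output.splitlines():
        # Matches lines such as "Stargate UP [8742, 8771, ...]".
        m = re.match(r"\s*([A-Za-z]+)\s+(UP|DOWN)\b", line)
        if m and m.group(2) != "UP":
            down.append(m.group(1))
    return down

if __name__ == "__main__":
    output = sys.stdin.read()    # for example: cluster status | python3 check_services.py
    bad = services_not_up(output)
    print("All services UP" if not bad else "Not UP: " + ", ".join(bad))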

Starting a Node in a Cluster (vSphere command line)


About this task

Procedure

1. Log on to a running Controller VM in the cluster with SSH.

2. Start the Controller VM.


nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr

If successful, this command produces no output. If it fails, wait 5 minutes and try again.
nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr

Replace cvm_ip_addr with the IP address of the Controller VM.


If the Controller VM starts, a message like the following is displayed.
INFO esx-start-cvm:67 CVM started successfully. Please verify using
ping cvm_ip_addr

After starting, the Controller VM restarts once. Wait three to four minutes before you ping
the Controller VM.
Alternatively, you can take the ESXi host out of maintenance mode and start the Controller
VM using the vSphere Web Client.

3. Verify that all services are up on all Controller VMs.


nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848,
10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176,
8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037,
9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886,
8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627,
4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947,
9976, 9977, 10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202,
10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
APLOS UP [10343, 10368, 10369, 10370,
10502, 10503]
Lazan UP [10377, 10402, 10403, 10404]
Orion UP [10409, 10449, 10450, 10474]
Delphi UP [10418, 10466, 10467, 10468]

4. Verify storage.

a. Log on to the ESXi host with SSH.


b. Rescan for datastores.
root@esx# esxcli storage core adapter rescan --all

c. Confirm that cluster VMFS datastores, if any, are available.


root@esx# esxcfg-scsidevs -m | awk '{print $5}'

Restarting a Node
Before you begin
Shut down guest VMs, including vCenter, that are running on the node, or move them to other
nodes in the cluster.



About this task

Procedure

1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the
vSphere client.

2. Right-click the host and select Maintenance mode > Enter Maintenance Mode.
In the Confirm Maintenance Mode dialog box, click OK.
The host is placed in maintenance mode, which prevents VMs from running on the host.

3. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now

Note: Do not reset or shut down the Controller VM in any way other than with the cvm_shutdown
command, which ensures that the cluster is aware that the Controller VM is unavailable.

4. Right-click the node and select Power > Reboot.


Wait until vCenter shows that the host is not responding and then is responding again, which
may take several minutes.
If you are logged on to the ESXi host rather than to vCenter, the vSphere Web Client
disconnects when the host shuts down.

5. Right-click the ESXi host and select Exit Maintenance Mode.

6. Right-click the Controller VM and select Power > Power on.


Wait approximately 5 minutes for all services to start on the Controller VM.

7. Log on to the Controller VM with SSH.

8. Confirm that cluster services are running on the Controller VM.


nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

Output similar to the following is displayed.


Name : 10.1.56.197
Status : Up
... ...
StatsAggregator : up
SysStatCollector : up

Every service listed should be up.

9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that
all Nutanix datastores are available.

Changing the ESXi Host Password


About this task

Tip: Although it is not required for the root user to have the same password on all hosts, doing
so makes cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.

Perform these steps on every ESXi host in the cluster.



Procedure

1. Log on to the ESXi host with SSH.

2. Change the root password.


root@esx# passwd root

3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

The minimum password length is 8 characters.

Changing the ESXi Host Name


You can change the host name of the ESXi host by using the vSphere web client.

Before you begin


1. Put the host into maintenance mode.
2. Disconnect the host from the vCenter server.

About this task


Perform the following procedure to change the host name of the ESXi host.

Procedure

1. Connect directly to the host with the vSphere web client.

2. Click the ESXi host, go to Networking > TCP/IP configuration, and then click the edit (pencil)
icon.

3. In the Edit TCP/IP Stack Configuration dialog box, click DNS Configuration, and select Enter
settings manually.

4. In the Host name field, specify the new name for the host.

5. Click OK.

What to do next
Connect the host to the vCenter server and exit the maintenance mode.

Changing CVM Memory Configuration (ESXi)


The Controller VM memory is set to 16 GB by default. You might need to increase the Controller
VM memory if you are using storage-heavy nodes, or if the compression or deduplication
features are enabled.

Before you begin


Verify the cluster health. The Nutanix cluster can sustain the failure of one Controller VM at a
given time. Do not continue if you suspect that the cluster is not healthy or if another Controller
VM is down.



1. Verify that the Controller VM is running on each host in the cluster.
1. Log on to a Controller VM with SSH.
2. Determine the number of nodes in the cluster.
nutanix@cvm$ svmips | wc -w

3. Verify that the number of Controller VMs that are Up and Normal is the same as the
number of nodes in the cluster.
nutanix@cvm$ nodetool -h localhost ring | grep Normal | grep -c Up

2. Verify that all the services in the Controller VM are Up.


nutanix@cvm$ cluster status

About this task


Perform the following steps if you need to change the Controller VM memory allocation.

CAUTION: To avoid impacting cluster availability, shut down one Controller VM at a time. Wait
until cluster services are up before proceeding to the next Controller VM.

Procedure

1. Log on to a Controller VM with SSH.

2. Gracefully shut down the Controller VM.


nutanix@cvm$ cvm_shutdown -P now

3. Increase the memory of the Controller VM.

a. Log on to vCenter with the vSphere client.


b. Right-click the Controller VM and select Edit Settings.
c. On the Virtual Hardware tab, click Memory and, in the Memory Size field, specify the
memory size, and then click OK.
d. In the Reservation field, specify the same value you specified in step c, and then click OK.

4. Start the Controller VM by using the vSphere client.

5. Log on to the Controller VM.


Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.

6. Verify that all the services in the Controller VM are up.


nutanix@cvm$ cluster status

Nonconfigurable ESXi Components


The components listed here are configured by the Nutanix manufacturing and installation
processes. Do not modify any of these components except under the direction of Nutanix
Support.

Warning: Modifying any of the settings listed here may render your cluster inoperable.



In particular, do not, under any circumstances, use the Reset System Configuration
option of ESXi, delete the Nutanix Controller VM, or take a snapshot of the Controller
VM for backup.

Warning: You must not run any commands on a Controller VM that are not covered in the
Nutanix documentation.

Nutanix Software

• Local datastore name


• Settings and contents of any Controller VM, including the name and the virtual hardware
configuration (except memory when required to enable certain features)

ESXi Settings

Note: If you create vSphere resource pools, Nutanix Controller VMs must have the top share.

• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings
• vSwitchNutanix standard virtual switch
• vmk0 interface in port group "Management Network"
• SSH enabled
• Host firewall ports that are open
• Taking snapshots of the Controller VM



4
VSPHERE NETWORKING
vSphere Networking provides information about configuring networking for VMware vSphere
including information on IP addresses, vSwitches, and selection of management interface.

Note: Do not add any other device, including guest VMs, to the VLAN to which the Controller VM
and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Changing a Host IP Address


About this task
Perform these steps once for every hypervisor host in the cluster. Complete the entire
procedure on a host before proceeding to the next host.

CAUTION: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping
IP addresses between two hosts, temporarily change one host IP address to an interim unused
IP address. This step avoids having two hosts with identical IP addresses on the cluster. Then
complete the address change or swap on each host as described here.

Note: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor can
be multihomed provided that one interface is on the same subnet as the Controller VM.

Procedure

1. Configure networking on the node by following Configuring Host Networking (ESXi) on
page 26.

2. Update the ESXi host IP addresses in vCenter by following Reconnecting an ESXi Host to
vCenter on page 28.

3. Log on to every Controller VM in the cluster and restart genesis.


nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:


Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

Configuring Host Networking (ESXi)

About this task

Figure 3: Configure Management Network



You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to
the node.

Procedure

1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.

2. Press the down arrow key until Configure Management Network is highlighted and then
press Enter.

3. Select Network Adapters and press Enter.

4. Ensure that the connected network adapters are selected.


If they are not selected, press Space to select them and press Enter to return to the
previous screen.

Figure 4: Network Adapters

5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional)
and press Enter. In the dialog box, provide the VLAN ID and press Enter.

6. Select IP Configuration and press Enter.

7. If necessary, highlight the Set static IP address and network configuration option and press
Space to update the setting.

8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.

9. Select DNS Configuration and press Enter.

10. If necessary, highlight the Use the following DNS server addresses and hostname option
and press Space to update the setting.

11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.

12. Press Esc and then Y to apply all changes and restart the management network.



13. Select Test Management Network and press Enter.

14. Press Enter to start the network ping test.

15. Verify that the default gateway and DNS servers reported by the ping test match those
that you specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct
IP addresses are configured.

Figure 5: Test Management Network

Press Enter to close the test window.

16. Press Esc to log off.

Reconnecting an ESXi Host to vCenter

Procedure

1. Log on to vCenter with the vSphere Client.

2. Right-click the host with the changed IP address and select Disconnect.

3. Right-click the host again and select Remove from Inventory.

4. Right-click the cluster and select Add Host.

5. In the Host name or IP address text box, type the new IP Address of the host, and click Next.

6. Type the root user credentials for the host in the User name and Password text boxes, and
click Next.

7. View the summary information and click Next.

8. Review settings and click Finish.

Networking Components
IP Addresses
All Controller VMs and ESXi hosts have two network interfaces.



Interface                 IP Address       vSwitch

ESXi host vmk0            User-defined     vSwitch0
Controller VM eth0        User-defined     vSwitch0
ESXi host vmk1            192.168.5.1      vSwitchNutanix
Controller VM eth1        192.168.5.2      vSwitchNutanix
Controller VM eth1:1      192.168.5.254    vSwitchNutanix

Note: The ESXi and Controller VM interfaces on vSwitch0 cannot use IP addresses in any subnets
that overlap with subnet 192.168.5.0/24.
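
A quick way to sanity-check a planned vmk0 or Controller VM eth0 address against this restriction is shown below. This is a minimal Python sketch using the standard ipaddress module; the addresses and the /24 prefix length are placeholder assumptions for the example.

# Minimal sketch: verify that a planned management subnet does not overlap
# the internal 192.168.5.0/24 network reserved for vSwitchNutanix.
import ipaddress

RESERVED = ipaddress.ip_network("192.168.5.0/24")

def conflicts_with_internal_network(address, prefix_len=24):
    """Return True if the given host/CVM subnet overlaps 192.168.5.0/24."""
    subnet = ipaddress.ip_network(f"{address}/{prefix_len}", strict=False)
    return subnet.overlaps(RESERVED)

# Examples with placeholder addresses:
print(conflicts_with_internal_network("10.20.30.41"))    # False - acceptable
print(conflicts_with_internal_network("192.168.5.10"))   # True - not allowed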

vSwitches
A Nutanix node is configured with two vSwitches:

• vSwitch0 is used for all other communication. It has uplinks to the physical network
interfaces. vSwitch0 has two networks:

• Management Network is used for HA, vMotion, and vCenter communication.


• VM Network is used by all VMs.

CAUTION: You can enable jumbo frames by changing the maximum transmission unit (MTU)
on a vSwitch up to 9000 bytes. If you change the MTU setting, the attached uplinks (physical
NICs) are brought down and up again, which causes a short network outage for virtual
machines that are using the uplink.

Figure 6: vSwitch0 Configuration

• vSwitchNutanix is used for local communication between the Controller VM and the ESXi
host. It has no uplinks.

CAUTION: If you need to manage network traffic between VMs with greater control, create
additional port groups on vSwitch0. Do not modify vSwitchNutanix.



Figure 7: vSwitchNutanix Configuration

Selection of Management Interface in ESXi


Nutanix tracks the management IP address for each ESXi host and uses this IP address to open
an SSH session to the ESXi host to perform management activities. Activities that require
interaction with the hypervisor fail if the selected vmk interface cannot be accessed through
SSH from the Controller VMs.
If multiple vmk interfaces are present on an ESXi host, Nutanix uses the following rules to select
a management interface.
1. A weight is assigned to each vmk interface.

• If the vmk interface is configured for management traffic under the ESXi network settings,
the assigned weight is 4. Otherwise, the assigned weight is 0.
• If the IP address of the vmk interface belongs to the same IP subnet as the eth0 interface
of the Controller VM, 2 is added to its weight.
• If the IP address of the vmk interface belongs to the same IP subnet as the eth2 interface
of the Controller VM, 1 is added to its weight.
2. The vmk interface that has the highest weight is selected as the management interface.

Example of selection of management network


Consider an ESXi host with following configuration.

• vmk0 IP address and mask: 2.3.62.204, 255.255.255.0


• vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
• vmk2 IP address and mask: 2.3.63.24, 255.255.255.0
Consider a Controller VM with following configuration.

• eth0 inet address and mask: 2.3.63.31, 255.255.255.0


• eth2 inet address and mask: 2.3.62.12, 255.255.255.0
For vmk0, the Management and vMotion tags are displayed if you type the following
command.
root@esx# esxcli network ip interface tag get -i vmk0

Tags: Management, VMmotion


For the other two interfaces, no tags are displayed.
According to the rules, the following weights are assigned to the vmk interfaces.



• vmk0 = 4 + 0 + 1 = 5
• vmk1 = 0 + 0 + 0 = 0
• vmk2 = 0 + 2 + 0 = 2
Since vmk0 has the highest assigned weight, this interface is used as the management IP for this
ESXi host.
If you want any other interface to act as the management interface, you must enable management
traffic on that interface by following the procedure described in Selecting a New Management
Interface in ESXi on page 31.
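
The selection rules and the worked example above can be captured in a short sketch. The following Python example is purely illustrative and not part of any Nutanix software; the function name, the input structure, and the /24 prefix length are assumptions made for the example.

# Minimal sketch of the management-interface selection rules described above.
import ipaddress

def select_management_vmk(vmks, cvm_eth0, cvm_eth2, prefix_len=24):
    """vmks: dict of name -> (ip, has_management_tag). Returns the chosen vmk name."""
    def subnet(ip):
        return ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)

    def weight(ip, has_mgmt_tag):
        w = 4 if has_mgmt_tag else 0          # management tag set in ESXi
        if subnet(ip) == subnet(cvm_eth0):
            w += 2                            # same subnet as Controller VM eth0
        if subnet(ip) == subnet(cvm_eth2):
            w += 1                            # same subnet as Controller VM eth2
        return w

    return max(vmks, key=lambda name: weight(*vmks[name]))

# Values from the example above (all /24 masks):
vmks = {
    "vmk0": ("2.3.62.204", True),    # Management tag present
    "vmk1": ("192.168.5.1", False),
    "vmk2": ("2.3.63.24", False),
}
print(select_management_vmk(vmks, cvm_eth0="2.3.63.31", cvm_eth2="2.3.62.12"))  # vmk0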

Selecting a New Management Interface in ESXi


You can mark the vmk interface to be used as the management interface on an ESXi host by
using either of the following methods.

Procedure

1. Log in to the vSphere Client.

a. On the ESXi host:

• For vSphere releases prior to version 6.5, go to Manage > Networking > VMkernel
adapters.
• For vSphere 6.5 or later releases, go to Configure > Networking > VMkernel adapters.

b. Select the interface on which you want to enable the management traffic.
c. Click Edit settings of the port group to which the vmk interface belongs.
d. Select the Management check box under the Enabled services option to enable management
traffic on the vmk interface.

2. Open an SSH session to the ESXi host and enable the management traffic on the vmk
interface.
root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management

Replace vmkN with the vmk interface where you want to enable the management traffic.



5
ESXI HOST UPGRADE (MANUAL)
If you have not enabled DRS in your environment and want to upgrade the ESXi host, you need
to upgrade the ESXi host manually. This topic describes all the requirements that you must
meet before manually upgrading the ESXi host.

CAUTION: If you have enabled DRS and want to upgrade the ESXi host, use the one-click
upgrade procedure from the Prism Web console. Do not use the manual procedure. For more
information on the one-click upgrade procedure, see the Prism Web Console guide.

Nutanix supports patch upgrades of ESXi hosts to versions that are greater than or released
after the Nutanix-qualified version, but Nutanix might not have qualified those releases. See
the Nutanix hypervisor support statement in our Support FAQ.
Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading
ESXi does not require cluster downtime.

• If you want to avoid cluster interruption, you must complete upgrading a host and ensure
that the Controller VM is running before upgrading any other host. When two hosts in a
cluster are down at the same time, all the data becomes unavailable.
• If you want to minimize the duration of the upgrade activities and cluster downtime is
acceptable, you can stop the cluster and upgrade all hosts at the same time.

Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate
the failure of a single node or drive. Nutanix clusters with a configured option of redundancy
factor 3 allow the Nutanix cluster to withstand the failure of two nodes or drives in different
blocks.

• Never shut down or restart multiple Controller VMs or hosts simultaneously.


• Always run the cluster status command to verify that all Controller VMs are up
before performing a Controller VM or host shutdown or restart.

ESXi Host Upgrade Process


Follow this process to upgrade the ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your cluster.

• If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the
cluster with the cluster stop command.

CAUTION: There will be downtime if you decide to upgrade all the nodes in the cluster at
once. If you do not want downtime in your environment, you must ensure that only one
Controller VM is powered off at a time in a redundancy factor 2 configuration.



• If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are
up by logging on to a Controller VM and running the cluster status command. If any
nodes are not running, start them before proceeding with the upgrade. Shut down all guest
VMs on the node or migrate them to other nodes in the cluster.
• Disable email alerts in the web console under Email Alert Services or with the nCLI command
alerts update-alert-config enable=false.

• Disable automatic upgrades on the host.

Note: Perform this step only if you are going to upgrade a single host. Do not use this step if
you are going to perform a rolling upgrade with the cluster command.

nutanix@cvm$ cluster --host_upgrade disable_auto_install

• Place the host in maintenance mode by using the vSphere Web Client.

Upgrading ESXi Host

• See the vSphere Upgrade Guide at https://www.vmware.com/support/pubs/ for information
on the standard ESXi upgrade procedures.
• You can also use the vCenter Update Manager (VUM).

• If you upgrade using VUM and the host is part of an HA cluster, the upgrade may fail with the
following message.
Software or system configuration on host host_name is incompatible. Check
scan results
for details.

For instructions on resolving this known issue with ESXi, see VMware KB article 2034945
at https://kb.vmware.com
• Ensure that you do not import any third-party patches into VUM.
• Ensure that you uncheck the Remove 3rd party add-ins option in the VUM interface.
If a problem occurs with the upgrade process, an alert is raised in the Alert dashboard.

Post Upgrade
Run the complete NCC health check by using the following command.
nutanix@cvm$ ncc health_checks run_all



COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions
Convention                Description

variable_value            The action depends on a value that is unique to your environment.

ncli> command             The commands are executed in the Nutanix nCLI.

user@host$ command        The commands are executed as a non-privileged user (such as
                          nutanix) in the system shell.

root@host# command        The commands are executed as the root user in the vSphere or
                          Acropolis host shell.

> command                 The commands are executed in the Hyper-V host shell.

output                    The information is displayed as output from a command or in a
                          log file.

Default Cluster Credentials


Interface                  Target                        Username        Password

Nutanix web console        Nutanix Controller VM         admin           Nutanix/4u

vSphere Web Client         ESXi host                     root            nutanix/4u

vSphere client             ESXi host                     root            nutanix/4u

SSH client or console      ESXi host                     root            nutanix/4u

SSH client or console      AHV host                      root            nutanix/4u

SSH client or console      Hyper-V host                  Administrator   nutanix/4u

SSH client                 Nutanix Controller VM         nutanix         nutanix/4u

SSH client                 Nutanix Controller VM         admin           Nutanix/4u

SSH client or console      Acropolis OpenStack           root            admin
                           Services VM (Nutanix OVM)

Version
Last modified: April 25, 2019 (2019-04-25T13:01:52+05:30)
