vSphere Administration Guide for Acropolis (using vSphere HTML 5 Client)
April 25, 2019
Contents
2. VM Management.................................................................................................... 15
VM Migration.............................................................................................................................................................. 15
Migrating a VM to Another Nutanix vSphere Cluster....................................................................15
vStorage APIs for Array Integration.................................................................................................................16
Cloning a VM................................................................................................................................................. 16
Copyright.......................................................................................................................34
License......................................................................................................................................................................... 34
Conventions............................................................................................................................................................... 34
Default Cluster Credentials................................................................................................................................. 34
Version......................................................................................................................................................................... 35
1
VCENTER CONFIGURATION
VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix
cluster in vCenter must be configured according to Nutanix best practices.
1. Create a new cluster entity within the existing vCenter inventory and configure its settings
based on Nutanix best practices by following Creating a Nutanix Cluster in vCenter on
page 3.
2. Add the Nutanix hosts to this new cluster by following Adding a Nutanix Node to vCenter on
page 4.
3. Configure HA and DRS by following Configuring HA, DRS, and EVC in vCenter Server on
page 6.
Procedure
2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click
Hosts and Clusters, right-click the vCenter server, and select New Datacenter.
3. Type a meaningful name for the datacenter, such as NTNX-DC, and click OK.
You can also create the Nutanix cluster within an existing datacenter.
5. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.
a. From the Enable EVC for Intel Hosts list, select the lowest processor class that is present in
the cluster.
For an indication of the processor class of a node, see the Block Serial field on the
Diagram or Table view of the Hardware Dashboard in the Nutanix web console.
What to do next
1. To configure the admission control policy according to your availability configuration, go to
Configure > Services > vSphere Availability, click Edit, and select Admission Control.
Select Cluster resource Percentage from the Define host failover capacity by drop-down
menu. Enter the percentage appropriate for the number of Nutanix nodes in the cluster.
For more information about the percentage of cluster resources reserved as failover spare
capacity, see vSphere HA Admission Control Settings for Nutanix Environment on page 10.
2. For the cluster, go to Configure > General, click Edit for the Swap file location, and verify that
Virtual machine directory (Store the swapfile in the same directory as the virtual machine) is
selected.
3. Add all Nutanix nodes to the vCenter cluster inventory by following Adding a Nutanix Node
to vCenter on page 4.
Tip: Refer to Default Cluster Credentials on page 34 for the default credentials of all cluster
components.
Procedure
3. Type the fully-qualified domain name or IP address of the ESXi host in the Host name or IP
address field.
4. Enter the ESXi host logon credentials in the Username and Password fields.
5. Click Next.
If a security or duplicate management alert appears, click Yes.
9. Click Finish.
10. Select the host and go to Configure > Networking > TCP/IP configuration.
13. Click Configure > Storage and confirm that NFS datastores are mounted.
14. If HA is not enabled, set the Controller VM to start automatically when the ESXi host is
powered on.
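You can also confirm the NFS datastore mounts for step 13, and enable the host autostart
manager for step 14, from the ESXi shell. The following is a minimal sketch; the per-VM
autostart entry for the Controller VM still needs to be completed in the vSphere Client.
root@esx# esxcli storage nfs list
Confirm that the Nutanix containers are listed as mounted and accessible.
root@esx# vim-cmd hostsvc/autostartmanager/enable_autostart true
root@esx# vim-cmd vmsvc/getallvms
Note the VM ID of the Controller VM reported by getallvms and use it when you configure the
autostart entry for the Controller VM in the vSphere Client.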
What to do next
Configure HA and DRS settings by following Configuring HA, DRS, and EVC in vCenter Server
on page 6.
Procedure
3. If vSphere HA and DRS are not enabled, you can enable them from the vSphere DRS and
vSphere Availability tabs.
Note: It is recommended to configure vSphere HA and DRS even if you do not plan to use
the features at this time. The settings are preserved within the vSphere cluster configuration,
so if you decide to enable a feature later, it is already pre-configured based on Nutanix best
practices.
4. Configure vSphere HA by navigating to Configure > Services > vSphere Availability and
clicking Edit.
Note: If you do not have the Controller VMs listed, click the Add button to ensure that the
CVMs are added to the VM Overrides dialog box.
g. Select Disabled from the VM restart priority and VM Monitoring drop-down menus.
Note: After configuring this setting, clear the Turn on vSphere HA check box, wait for all the
hosts in the cluster to reconfigure HA, and then enable HA again by selecting the Turn on
vSphere HA check box.
7. Configure DRS by navigating to Configure > vSphere DRS and clicking Edit.
Note: If you do not have Controller VMs listed, you need to click the Add button to ensure
that the CVMs are added to the VM Overrides dialog box.
e. Change the Automation Level setting of all the Controller VMs to Disabled.
f. Click OK.
g. Go to Configure > vSphere DRS and click Edit.
h. Confirm that Off is selected as the default power management for the cluster.
i. Click OK to close the cluster settings window.
Note: Do not shut down more than one Controller VM at the same time.
The vCenter configuration is complete. To verify that you have configured all the settings
properly, see the checklist in vSphere Cluster Settings (Review) on page 9.
Note: It is recommended to configure vSphere HA and DRS even if you do not plan to use
the features. The settings are preserved within the vSphere cluster configuration, so if you
later decide to enable a feature, it is already pre-configured based on Nutanix best practices.
vSphere HA Settings
Overview
If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA
admission control setting with the appropriate percentage of CPU/RAM to achieve at least N
+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control
with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.
Note: For redundancy factor 3, a minimum of five nodes is required, which allows two nodes
to fail concurrently while data remains online. In this case, the same N+2 level of availability is
required for the vSphere cluster so that the VMs can restart following a failure.
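As a reading aid for the table that follows, each percentage is simply the reserved node count
divided by the cluster size: for example, in an eight-node cluster, N+1 availability reserves 1/8
of the cluster's resources (12.5 percent, rounded up to the 13 percent shown in the table), and
N+2 reserves 2/8, or 25 percent.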
Nodes in Cluster    N+1 (%)    N+2 (%)    N+3 (%)    N+4 (%)
4                   25*        50         75         N/A
5                   20*        40**       60         80
6                   18*        33**       50         66
7                   15*        29**       43         56
8                   13*        25**       38         50
9                   11*        23**       33         46
10                  10*        20**       30         40
11                  9*         18**       27         36
12                  8*         17**       25         34
13                  8*         15**       23         30
14                  7*         14**       21         28
15                  7*         13**       20         26
16                  6*         13**       19         25
17                  6          12*        18**       24
18                  6          11*        17**       22
19                  5          11*        16**       22
20                  5          10*        15**       20
21                  5          10*        14**       20
22                  4          9*         14**       18
23                  4          9*         13**       18
24                  4          8*         13**       16
25                  4          8*         12**       16
26                  4          8*         12**       16
27                  4          7*         11**       14
28                  4          7*         11**       14
29                  3          7*         10**       14
30                  3          7*         10**       14
31                  3          6*         10**       12
32                  3          6*         9**        12
The table also represents the percentage of the Nutanix storage pool that should remain free
so that the cluster can fully restore the redundancy factor after one or more node failures, or
even a block failure (where three or more blocks exist within a cluster).
Block Awareness
For deployments of at least three blocks, block awareness automatically ensures data
availability when an entire block of up to four nodes configured with redundancy factor 2
becomes unavailable.
If block awareness levels of availability are required, the vSphere HA Admission Control setting
needs to ensure that sufficient compute resources are available to restart all virtual machines.
In addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2
for all data.
The vSphere HA minimum availability level should be equal to the number of nodes per block.
Note: For block awareness, each block must be populated with a uniform number of nodes.
In the event of a failure, a non-uniform node count might compromise block awareness or the
ability to restore the redundancy factor, or both.
• If SIOC is enabled, or SIOC is enabled in statistics mode, storage might become unavailable.
• .iorm.sf directory
• Two hidden files, namely, .lck-xxx and iormstats.sf.
To resolve this issue, you must first disable storage I/O statistics collection and then remove the
two hidden files.
Perform the following procedure to disable storage I/O statistics collection.
Procedure
2. Click Storage.
5. Select the Disable Storage I/O statistics collection option to disable SIOC, and click OK.
6. Select the Exclude I/O Statistics from SDRS option, and click OK.
Procedure
Changes that you make on one host are applied to all the other hosts in the cluster. Hence,
performing this procedure on one host resolves this issue.
This disables SIOC and removes any data related to SIOC, making your container empty. You
can now use this empty container to configure metro availability.
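If the hidden .iorm.sf directory and the .lck-xxx and iormstats.sf files still remain on the
container after statistics collection is disabled, they can be removed from the ESXi shell. The
following is a minimal sketch; the container name NTNX-ctr1 is a placeholder for your own
container, and you should list and verify the files before deleting anything.
root@esx# ls -la /vmfs/volumes/NTNX-ctr1/
root@esx# rm -rf /vmfs/volumes/NTNX-ctr1/.iorm.sf
root@esx# rm /vmfs/volumes/NTNX-ctr1/.lck-* /vmfs/volumes/NTNX-ctr1/iormstats.sf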
2
VM MANAGEMENT
VM Migration
You can live migrate a VM to an ESXi host in a Nutanix cluster. Usually this is done in the
following cases:
Procedure
4. Under Select Migration Type, select Change both compute resource and storage.
6. Select a destination network for all VM network adapters and click Next.
Cloning a VM
Procedure
3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
Note: If you choose a datastore other than the one that contains the source VM, the clone
operation uses the VMware implementation and not the Nutanix VAAI plugin (see the check
after this procedure).
5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
6. Click Finish.
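To check whether the Nutanix VAAI plugin is installed on a host, you can list the installed VIBs
over SSH. This is a minimal sketch; the exact VIB name varies by release, so the grep pattern is
only an assumption.
root@esx# esxcli software vib list | grep -i vaai
If no matching VIB is listed, clone operations fall back to the VMware implementation.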
3
NODE MANAGEMENT
Shutting Down a Node in a Cluster (vSphere Web Client)
About this task
CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node at a time in each cluster. If more than one
node in an RF2 cluster must be shut down, shut down the entire cluster.
Procedure
2. If DRS is not enabled, manually migrate all the VMs except the Controller VM to another host
in the cluster or shut down any VMs other than the Controller VM that you do not want to
migrate to another host.
If DRS is enabled on the cluster, you can skip this step.
3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
Note: If DRS is not enabled, you need to manually migrate or shut down all the VMs except
the Controller VM. Even when DRS is enabled, some VMs might not be migrated automatically
because of a configuration option in the VM that is not available on the target host.
5. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
Note: Do not reset or shut down the Controller VM in any way other than with the cvm_shutdown
command, to ensure that the cluster is aware that the Controller VM is unavailable.
6. After the Controller VM shuts down, wait for the host to go into maintenance mode.
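You can confirm the maintenance-mode state of the host from the ESXi shell; a minimal sketch:
root@esx# esxcli system maintenanceMode get
The command returns Enabled once the host has entered maintenance mode.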
CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node at a time in each cluster. If more than one
node in an RF2 cluster must be shut down, shut down the entire cluster.
You can put the ESXi host into maintenance mode and shut it down from the command line or
by using the vSphere Web Client.
Procedure
1. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
If successful, this command returns no output. If it fails with a message like the following,
VMs are probably still running on the host.
CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter failed with ret=-1
Ensure that all VMs are shut down or moved to another host and try again before
proceeding.
nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr
Replace cvm_ip_addr with the IP address of the Controller VM on the ESXi host.
Alternatively, you can put the ESXi host into maintenance mode and shut it down using the
vSphere Web Client.
If the host shuts down, a message like the following is displayed.
INFO esx-shutdown:67 Please verify if ESX was successfully shut down using
ping hypervisor_ip_addr
1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise,
proceed to the next step.
2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere Client.
7. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that
all Nutanix datastores are available.
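After the datastores are available, you can verify the cluster state from any running Controller
VM with the cluster status command:
nutanix@cvm$ cluster status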
If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848,
10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176,
8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037,
9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886,
8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627,
4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Procedure
If successful, this command produces no output. If it fails, wait 5 minutes and try again.
nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr
After starting, the Controller VM restarts once. Wait three to four minutes before you ping
the Controller VM.
Alternatively, you can take the ESXi host out of maintenance mode and start the Controller
VM using the vSphere Web Client.
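Once the Controller VM is running, verify the cluster state from any Controller VM with the
cluster status command:
nutanix@cvm$ cluster status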
If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848,
10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
4. Verify storage.
Restarting a Node
Before you begin
Shut down guest VMs, including vCenter, that are running on the node, or move them to other
nodes in the cluster.
Procedure
1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the
vSphere client.
2. Right-click the host and select Maintenance mode > Enter Maintenance Mode.
In the Confirm Maintenance Mode dialog box, click OK.
The host is placed in maintenance mode, which prevents VMs from running on the host.
3. Log on to the Controller VM with SSH and shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
Note: Do not reset or shut down the Controller VM in any way other than with the cvm_shutdown
command, to ensure that the cluster is aware that the Controller VM is unavailable.
9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that
all Nutanix datastores are available.
Tip: Although it is not required for the root user to have the same password on all hosts, doing
so makes cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
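The root password is changed with the standard passwd utility from the ESXi shell; a minimal
sketch:
root@esx# passwd root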
3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
Procedure
2. Click the ESXi host, go to Networking > TCP/IP configuration, and then click the edit (pencil)
icon.
3. In the Edit TCP/IP Stack Configuration dialog box, click DNS Configuration, and select Enter
settings manually.
4. In the Host name field, specify the new name for the host.
5. Click OK.
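Alternatively, the host name can be set from the ESXi shell; a minimal sketch, where
new-host-name is a placeholder for the name that you want to assign:
root@esx# esxcli system hostname set --host=new-host-name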
What to do next
Connect the host to the vCenter server and exit the maintenance mode.
3. Verify that the number of Controller VMs that are Up and Normal is the same as the
number of nodes in the cluster.
nutanix@cvm$ nodetool -h localhost ring | grep Normal | grep -c Up
CAUTION: To avoid impacting cluster availability, shut down one Controller VM at a time. Wait
until cluster services are up before proceeding to the next Controller VM.
Procedure
Warning: Modifying any of the settings listed here may render your cluster inoperable.
Warning: You must not run any commands on a Controller VM that are not covered in the
Nutanix documentation.
Nutanix Software
ESXi Settings
Note: If you create vSphere resource pools, Nutanix Controller VMs must have the top share.
• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings
• vSwitchNutanix standard virtual switch
• vmk0 interface in port group "Management Network"
• SSH enabled
• Host firewall ports that are open
• Taking snapshots of the Controller VM
Note: Do not add any other device, including guest VMs, to the VLAN to which the Controller VM
and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
CAUTION: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping
IP addresses between two hosts, temporarily change one host IP address to an interim unused
IP address. This step avoids having two hosts with identical IP addresses on the cluster. Then
complete the address change or swap on each host as described here.
Note: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor can
be multihomed provided that one interface is on the same subnet as the Controller VM.
Procedure
2. Update the ESXi host IP addresses in vCenter by following Reconnecting an ESXi Host to
vCenter on page 28.
Procedure
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then
press Enter.
7. If necessary, highlight the Set static IP address and network configuration option and press
Space to update the setting.
8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option
and press Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
15. Verify that the default gateway and DNS servers reported by the ping test match those
that you specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct
IP addresses are configured.
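You can also run the same connectivity checks from the ESXi shell; a minimal sketch, where
gateway_ip_addr and dns_ip_addr are placeholders for the addresses you configured:
root@esx# vmkping gateway_ip_addr
root@esx# vmkping dns_ip_addr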
Procedure
2. Right-click the host with the changed IP address and select Disconnect.
5. In the Host name or IP address text box, type the new IP Address of the host, and click Next.
6. Type the root user credentials for the host in the User name and Password text boxes, and
click Next.
Networking Components
IP Addresses
All Controller VMs and ESXi hosts have two network interfaces.
Note: The ESXi and Controller VM interfaces on vSwitch0 cannot use IP addresses in any subnets
that overlap with subnet 192.168.5.0/24.
vSwitches
A Nutanix node is configured with two vSwitches:
• vSwitch0 is used for all other communication. It has uplinks to the physical network
interfaces. vSwitch0 has two networks:
CAUTION: You can enable jumbo frames by changing the size of the maximum transmission
unit (MTU) on a vSwitch to up to 9000 bytes. If you change the MTU setting, the attached
uplinks (physical NICs) are brought down and up again. This causes a short network outage for
virtual machines that are using the uplink. An example command for changing the MTU follows
this list.
• vSwitchNutanix is used for local communication between the Controller VM and the ESXi
host. It has no uplinks.
CAUTION: If you need to manage network traffic between VMs with greater control, create
additional port groups on vSwitch0. Do not modify vSwitchNutanix.
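If you do enable jumbo frames, the MTU of a standard vSwitch can be changed from the ESXi
shell. This is a minimal sketch for vSwitch0; remember that the change briefly brings the
uplinks down and up, and the physical switch ports must also support the larger MTU.
root@esx# esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000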
• If the vmk interface is configured for management traffic under the network settings of ESXi,
then the weight assigned is 4. Otherwise, the weight assigned is 0.
• If the IP address of the vmk interface belongs to the same IP subnet as the eth0 interface of
the Controller VM, then 2 is added to its weight.
• If the IP address of the vmk interface belongs to the same IP subnet as the eth2 interface of
the Controller VM, then 1 is added to its weight.
2. The vmk interface that has the highest weight is selected as the management interface.
Procedure
• For vSphere releases earlier than 6.5, go to Manage > Networking > VMkernel
adapters.
• For vSphere 6.5 or later releases, go to Configure > Networking > VMkernel adapters.
a. Select the interface on which you want to enable the management traffic.
b. Click Edit settings of the port group to which the vmk belongs.
c. Select the Management check box under Enabled services to enable management traffic
on the vmk interface.
2. Open an SSH session to the ESXi host and enable the management traffic on the vmk
interface.
root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management
Replace vmkN with the vmk interface where you want to enable the management traffic.
CAUTION: If you have enabled DRS and want to upgrade the ESXi host, use the one-click
upgrade procedure from the Prism Web console. Do not use the manual procedure. For more
information on the one-click upgrade procedure, see the Prism Web Console guide.
Nutanix supports patch upgrades of the ESXi hosts to versions that are later than, or released
after, the Nutanix-qualified version, but Nutanix might not have qualified those releases. See
the Nutanix hypervisor support statement in our Support FAQ.
Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading
ESXi does not require cluster downtime.
• If you want to avoid cluster interruption, you must finish upgrading a host and ensure that
its Controller VM is running before upgrading any other host. If two hosts in a cluster are
down at the same time, data becomes unavailable.
• If you want to minimize the duration of the upgrade activities and cluster downtime is
acceptable, you can stop the cluster and upgrade all hosts at the same time.
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate
the failure of a single node or drive. Nutanix clusters configured with redundancy factor 3 can
withstand the failure of two nodes or drives in different blocks.
Note: Use the following process only if you do not have DRS enabled in your cluster.
• If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the
cluster with the cluster stop command.
CAUTION: There will be downtime if you decide to upgrade all the nodes in the cluster at
once. If you do not want downtime in your environment, you must ensure that only one
Controller VM is powered off at a time in a redundancy factor 2 configuration.
Note: Perform this step if you are only going to upgrade a single host. Do not use this step if
you are going to perform a rolling upgrade with the cluster command.
• Place the host in the maintenance mode by using vSphere Web Client.
• If you upgrade using VUM and the host is part of an HA cluster, the upgrade may fail with this
message:
Software or system configuration on host host_name is incompatible. Check scan results
for details.
For instructions on resolving this known issue with ESXi, see VMware KB article 2034945
at https://kb.vmware.com.
• Ensure that you do not import any third-party patches into VUM.
• Ensure that you uncheck the Remove 3rd party add-ins option in the VUM interface.
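If you upgrade a host manually with an offline bundle instead of VUM, the patch can be applied
from the ESXi shell after the host is in maintenance mode. The following is a minimal sketch;
the bundle path is a placeholder for the offline bundle that you copied to a datastore, and the
host must be rebooted afterward.
root@esx# esxcli software vib update -d /vmfs/volumes/datastore_name/ESXi-offline-bundle.zip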
If some problem occurs with the upgrade process, an alert is raised in the Alert dashboard.
Post Upgrade
Run the complete NCC health check by using the following command.
nutanix@cvm$ ncc health_checks run_all
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention Description
root@host# command The commands are executed as the root user in the vSphere or
Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
Version
Last modified: April 25, 2019 (2019-04-25T13:01:52+05:30)