VxRail Appliance Administration - Lab Guide
Version 4.7.300
PARTICIPANT GUIDE
Dell Confidential and Proprietary
Copyright © 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
Lab Exercise 2: Explore VxRail Cluster with vSphere Client Using VxRail Plug-In
Lab 2: Part 1 – Log In to vCenter with vSphere Client
Lab 2: Part 2 – Explore VxRail Plug-In Functionality
Lab 2: Part 3 – Explore VxRail Cluster Configuration
Lab Exercise 5: Monitor Health and Performance of VxRail vSAN Cluster
Lab 5: Part 1 – Monitor vSAN Health, Capacity, Objects, and Disks
Lab 5: Part 2 – Monitor vSAN Performance
Lab Exercise 13: View VxRail Advisories and Knowledgebase Articles
Lab 13: Part 1 – View VxRail Related Advisories
Lab 13: Part 2 – View VxRail Related KB Articles
Lab Exercise 14: Generate VxRail Procedures with SolVe Online
Lab 14: Part 1 – Explore SolVe Online and Available VxRail Procedures
Lab 14: Part 2 – Power Control Procedures
Lab 14: Part 3 – Generate Capacity Drive Expansion Procedure
Lab 14: Part 4 – Generate Procedure to Change VLAN ID
Purpose
Tasks
Lab Steps
1. Lab resources for most of the lab exercises are located remotely and are
accessed using the Dell EMC Virtual Data Center (VDC).
To start the labs, connect to the VDC and log in to a jump server. Then, launch
the component interfaces required for the lab exercises from the jump server.
The instructor should have given you a Lab worksheet with the Dell EMC VDC
URL and login credentials:
Dell EMC VDC URL: ________________________
VDC User name: ________________
VDC Password: __________________
3. The VDC page shows the jump server. The jump server has all the required
tools.
In this lab environment, multiple students share the same VxRail Cluster. Each
student has their own jump host and their own node within the cluster. Each
student can do all the labs in this course, but some coordination is required.
For example, in the setups with four nodes, putting multiple nodes into
maintenance mode could cause issues.
When coordination is required, the lab guide tells you to coordinate with the
students using the same cluster. Record any needed contact information for
your cluster colleagues here.
1) __________________________________________________
2) __________________________________________________
3) __________________________________________________
4) __________________________________________________
You also require your VxRail number and your assigned node:
VxRail number: _______________________________________
Node: _______________________________________
Purpose
Find VxRail related documentation in the Dell EMC and VMware web sites.
Tasks
References
Lecture Module:
VxRail Overview
Lab Steps
1. This lab is best performed from a computer that is used in the day-to-day
administration of a VxRail environment.
The lab can be performed at your convenience. The lab requires Internet
connectivity to the Dell EMC Support site and a valid Dell EMC support
account.
2. Log in to the Dell EMC Support site and go to the Product Support page for the
VxRail Appliance Series.
The following URL takes you directly to the VxRail Appliance Series support
page:
https://support.emc.com/products/39970_VxRail-Appliance-Series
After a successful login, you should see the VxRail Appliance Series support page.
Click the link in the center panel to see a listing of available VxRail
documentation. To familiarize yourself with what is available, scroll
through the list of documents.
VxRail Appliance 4.7 Administration Guide – Describes the VxRail Appliance
4.7.xxx, how it works, and how to perform administrative tasks.
SolVe Online and SolVe Desktop can be used to generate VxRail procedures.
SolVe Desktop is a user installed Windows application, while SolVe Online is a
web-based application accessed using a web browser.
Scroll down in the Overview tab, and view the Helpful Resources section.
This section contains links to download the SolVe Desktop software and to a
YouTube video on installing SolVe Desktop.
Click the Content tab at the top of the web page. Here is where you find
Release Notes and other documentation about SolVe.
Scroll down and review the Site Readiness Support and Customer Training
section. The VxRail Networking Guide is an important document.
Purpose
Use vSphere Client to explore a recently deployed VxRail Cluster and become
familiar with the VxRail plug-in.
Tasks
References
Lecture modules:
VxRail Administration and Management Overview
VxRail Management
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
The instructor should have given you a Lab worksheet with detailed
information about your VxRail environment.
Refer to the Lab worksheet and record information about:
vCenter Server IP address: __________________
vCenter Server FQDN: ______________________
Example: vxrail##-vcenter.vsb.edu
Username: [email protected]
Password: VMw@r3!!123
Launch Chrome.
Connect to the vSphere Client URL: https://<vCenter Server FQDN>/ui/
Example: https://vxrail##-vcenter.vsb.edu/ui/
Use the FQDN instead of the IP address.
Ignore and bypass any privacy/security/certificate warnings.
Enter the vCenter Server username and password. Click Login.
You should be logged in to the vSphere Client.
The lab environment uses the evaluation licenses for vSphere and vSAN. So,
every time you log in to the vSphere Client, you will see a warning message
about expired or expiring licenses.
Lab Steps
1. Check the status of the VxRail vCenter plug-in.
Look for the VxRail client plug-in, and observe its version and state. The
VxRail plug-in should be enabled.
You may have to scroll down to see the VxRail plug-in. The Recent Tasks
panel may be obscuring the view. To minimize the Recent Tasks panel, click
the icon towards the bottom right of the page.
Contact your instructor if you do not see the VxRail plug-in or if its state is
disabled.
Navigate to the VxRail Dashboard - Menu > VxRail. You should see the
VxRail Dashboard.
4. Explore the VxRail plug-in functionality in the VxRail Cluster Monitor tab.
6. Explore the VxRail plug-in functionality in the VxRail Host Monitor tab.
The Host-level Monitor tab offers only one option: Physical View.
The Physical View shows the front and back views of the host. Clicking an
individual component in the Physical View depicts the details of the specific
component. The view also shows the health status of components.
This feature is explored further in a future lab exercise (Monitoring with VxRail
Plug-In).
7. Explore the VxRail plug-in functionality in the VxRail Cluster Configure tab.
You should see the System information in the display pane. The System page
displays information about VxRail, including the software version, and provides
links to the product documentation, privacy statement, and software updates.
The System page also displays the vCenter Mode that is used in the VxRail
deployment. The vCenter Mode can be converted from Embedded to an
External vCenter.
Click the Update link in the Helpful Information box. The Update link directs
you to the Updates option in the VxRail tree panel on the left.
Observe the service tag, appliance ID, model, operation status, management
IP address, and hostname. The Hosts page can be used to change the
management IP address or hostname if necessary.
DO NOT edit the IP addresses or hostnames in this lab environment.
This page is used to manage the Internet connection status of the VxRail
Manager VM and to configure proxy settings.
11. Explore the VxRail plug-in functionality in the VxRail Host Configure tab.
The only option here is iDRAC Configuration. Users can change the iDRAC
network settings and create iDRAC users on the selected Host.
Lab Steps
1. Explore the VxRail cluster in the Hosts and Clusters view.
Navigate to the Hosts and Clusters view – Menu > Hosts and Clusters.
Expand the VxRail vCenter.
Expand the VxRail Datacenter.
Expand the VxRail Cluster.
You should see something like the following graphic:
Observe the names of the vCenter Server, the datacenter, the cluster, and the
hosts.
The number of hosts and the host names should match the information that
was seen in Part 2 of this lab exercise.
Observe the names of the distributed switch and the uplink port group.
For example:
Distributed Switch name - VMware HCIA Distributed Switch
Uplink port group name - VMware HCIA Dist-DVUplinks-xx
The table shows examples of the other distributed port groups that you
should see.
Expand the Uplinks, VMkernel Ports, and Virtual Machines in each of the
port groups for more information.
For example, if you expand Uplink1 you see the vmnic from each of the nodes.
4. Determine the active uplinks that are used by the vSAN, vMotion, and
Management network port groups.
Repeat the same for the vMotion port group and the Management Network
port group.
vSAN
vMotion
Management Network
The Network Label corresponds to the distributed port group names that you
saw in a previous step. Observe that the vmk0 adapter has an IPv6 address,
while the rest of the adapters have IPv4 addresses. The vmk0 adapter is on
the VxRail Management private network that is used for node discovery.
Select one of the healthy disk groups, and observe the information about the
disks in the disk group:
You should see one service datastore per node and one VxRail vSAN
datastore that is shared by all the nodes in the cluster.
12. Examine the vSAN object placement for the storage that is used by the VxRail
Manager VM.
Observe the information about the storage that is used by the VM.
Select the Monitor tab in the central pane. Select Physical disk placement
under vSAN.
Observe the information about the vSAN Component and Witness placement
for one of the hard disks.
You should see that the Witness and each of the RAID1 components are on
different hosts.
Witness
RAID1 Component
RAID1 Component
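The placement rule above can be sketched in a few lines of Python. This is an illustration only, not vSAN code, and the host names are invented: a RAID-1 object with FTT=1 consists of two mirrored components plus a witness, and each piece must land on a different host.

```python
# Sketch (not vSAN code): place a RAID-1, FTT=1 object.
# An FTT=1 mirrored object needs 2 data components plus 1 witness,
# and each piece must live on a different host.

def place_raid1_ftt1(hosts):
    """Return (witness_host, component_hosts); raise if too few hosts."""
    pieces = 3  # 2 mirrored components + 1 witness
    if len(hosts) < pieces:
        raise ValueError("FTT=1 mirroring needs at least 3 hosts")
    witness, *components = hosts[:pieces]
    return witness, components

# Example with hypothetical host names:
hosts = ["esxi-01", "esxi-02", "esxi-03", "esxi-04"]
witness, comps = place_raid1_ftt1(hosts)
# The witness and both components are on three distinct hosts,
# which is why a single host failure never takes out two pieces.
```

This is why the lab shows the Witness and each RAID1 component on different hosts: losing any one host leaves a majority of the object's pieces intact.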
Purpose
Tasks
References:
Lecture module:
VxRail Administration and Management Overview
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
The instructor should have given you a Lab Worksheet with detailed
information about your VxRail environment. The instructor should have also
assigned you a specific VxRail node.
Refer to the Lab worksheet and record information for your node:
VxRail node name: _________________
iDRAC IP address: __________
iDRAC Username: root
iDRAC Password: calvin
3. Log in to iDRAC.
Lab Steps
1. You should be logged in to iDRAC.
Refer to your lab worksheet for the root password of the node.
Root password: VMw@r3!!123
You should be on the ESXi console.
Press <F2>
Enter the login name of root and the root password and press <Enter>.
Use the down arrow key to select Troubleshooting Options and press <Enter>.
Is ESXi Shell enabled? _______________ The status on the right states
whether the ESXi Shell is enabled. Ensure that it is enabled.
Press <Esc><Esc> to go back to the home screen of the console.
Press <Alt>+<F1> to switch to the ESXi Shell console.
Log in with the root credentials.
Username: root
Password: VMw@r3!!123
The information should match the information that was seen in Lab 2: Part 3.
Purpose
Tasks
Nodes
NIC
Power supplies
Disks
References:
Lecture module:
VxRail Management
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
https://vxrail##-vcenter.vsb.edu/ui/
User: [email protected]
Password: VMw@r3!!123
Lab Steps
1. You should be logged in to the vSphere Client viewing the Health Monitoring
page of the VxRail Cluster.
The Appliances view shows the physical view of all the VxRail chassis that
form the VxRail Cluster. This example shows a cluster with one G560 chassis
with four nodes.
The top of the display shows the Cluster ID, Last Timestamp, Number of
Chassis, Connected, Health State, and Operational State of the cluster.
To view information about one of the VxRail nodes, click on one of the nodes
in the chassis.
What node-specific information is shown?
Are there any triggered VxRail alerts specific to the selected node? The alert
listing can be filtered based on severity.
What maintenance activities are available when selecting the ACTIONS drop-down button?
Select a VxRail host from the Hosts and Clusters navigation pane.
Select the Monitor tab.
Select Physical View under VxRail.
The Physical View of a node shows the front and back view of the chassis.
The top of the display shows the Appliance ID, Service Tag, System Health,
iDRAC IP Address, Appliance PSNT, Model, ESXi IP address, and iDRAC
IP address.
What VxRail components are shown in the Front View?
What VxRail components are shown in the Back View?
The node information panel on the right has three sections, OVERVIEW,
BOOT DEVICE, and ALERTS. Observe each of these sections one by one to
answer the questions below.
Click one of the NIC ports in the middle of the Back View
and observe the information about the NIC. Select each NIC individually.
Select one of the Power Supplies in the Back View and observe the
information:
Serial Number: ______________ Slot: _____________
Part Number: _____________ Health: _______________
Name: ________________
To gather details about all the disks in the node, observe the Front View of
the node.
To gather details about a specific disk, select an individual disk and see the
Disk Information pane.
If the node has HDDs - Click one of the HDDs and observe the information:
Status LED: ______________
Manufacturer: _____________ Capacity: __________
Purpose
Use the vSphere Client to monitor the health and performance of a VxRail vSAN
cluster.
Tasks
References:
Lecture module:
Managing vSAN
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
https://vxrail##-vcenter.vsb.edu/ui/
User: [email protected]
Password: VMw@r3!!123
Select the Monitor tab in the center pane. Expand the vSAN entry in the left
navigation and select Health.
Expand each test group to see a listing of the tests in the group. For example,
the graphic below shows the Data and Capacity utilization tests.
Expand a group with a Warning result. Expand any other test group if you do
not see one with a Warning.
Select one of the individual tests. Review the information for the test.
Select the Info tab of the center pane and click the Ask VMware link in the
upper-right corner. It launches another browser tab with the VMware KB
related to the issue.
Review and close the VMware KB and return to the vSphere Client.
Select the vSAN Disk Balance test, and view the Overview information.
Select the Info tab of the vSAN Disk Balance test, click Ask VMware – View
the VMware KB related to vSAN disk balance.
Review and close the VMware KB and return to the vSphere Client.
Go back to the Overview tab of the vSAN Disk Balance health check.
Select Capacity under the expanded vSAN entry in the navigation pane.
What is the Effective free space with the policy - vSAN Default Storage
Policy? _________
Swap objects
Checksum overhead
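As a rough, illustrative sketch (not how vSAN computes it internally; the factors below ignore swap objects and checksum overhead, which the worksheet items above account for), the effective free space for a policy is approximately the raw free space divided by the policy's space overhead factor:

```python
# Rough sketch of how policy overhead turns raw free space into
# "effective" free space for that policy. Overhead factors:
# FTT=1 RAID-1 stores 2 copies (2.0x); FTT=1 RAID-5 stores 3 data + 1
# parity (4/3 x); FTT=2 RAID-1 stores 3 copies (3.0x); FTT=2 RAID-6
# stores 4 data + 2 parity (6/4 x). Swap/checksum overhead is ignored.

OVERHEAD = {
    ("RAID-1", 1): 2.0,
    ("RAID-5", 1): 4 / 3,
    ("RAID-1", 2): 3.0,
    ("RAID-6", 2): 6 / 4,
}

def effective_free(raw_free_gb, ftm="RAID-1", ftt=1):
    return raw_free_gb / OVERHEAD[(ftm, ftt)]

# Illustrative numbers only (not from the lab environment):
print(effective_free(1000))                      # 500.0 (FTT=1 mirroring)
print(round(effective_free(1000, "RAID-5"), 1))  # 750.0
```

This is why the Capacity page reports a different effective free space per selected policy: the same raw free space stretches further under erasure coding than under mirroring.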
Select Virtual Objects below the vSAN entry in the navigation pane.
Expand the VxRail Manager VM from the View Placement Details section.
You should be on the Monitor > vSAN > Virtual Objects view. In the View
Placement Details section:
Select the checkbox for the Performance management object, and click View
Placement Details at the top of the table.
Observe that the Witness and the Components are each hosted on different
nodes.
Witness
RAID1 Component
RAID1 Component
Select Physical Disks below the vSAN entry in the navigation pane.
To see the disks, expand each node in the center pane. View the State and
vSAN Health Status of each disk.
Note: You may have to scroll to the right to see the State and vSAN Health
Status columns.
Are all the disks Mounted? ________ Are all the disks Healthy? ________
Select one of the nodes, and view the Objects on Host in the lower portion of
the pane.
Select one of the HDDs, and view the Objects on Disk in the lower portion of
the pane.
Lab Steps
1. You should still be logged in to the vSphere Client looking at the Monitor >
vSAN view.
The purpose of this lab exercise is to see what types of performance graphs
and metrics are available for a vSAN cluster. In this lab environment, there is
no active IO, so the data that is observed in the graphs is not immediately
useful.
The vSAN Performance view should have the VM tab selected by default.
Set the Time Range to Last 1 Hour. Click the SHOW RESULTS button.
Hover over the line graphs to view Read and Write statistics. View the maximum
observed:
Set the Time Range to Last 1 Hour. Click the SHOW RESULTS button.
Select the Monitor tab in the central pane. Locate the vSAN entry in the
navigation pane and select Performance below it.
Observe the categories for which vSAN performance graphs are available.
They are listed in tabs at the top of the pane:
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
6. Explore the vSAN - Disk Group performance graphs for the node.
In the node Performance view, select the DISKS tab at the top of the pane.
Locate the Disk Group drop-down menu at the top of the pane. Select one
disk group.
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
________________________________________________________
You should be viewing the node Performance view with the DISKS tab
selected at the top of the pane.
To change from the default whole group, use the drop-down menu to the right
of Disk Group, and select the first disk in the group. The first disk is a flash
disk and is the Cache disk for the vSAN disk group.
________________________________________________
________________________________________________
________________________________________________
Select the second disk - this disk could be a flash disk or a hard drive depending on
the type of node you have. Observe the available graphs:
_______________________________________________
_______________________________________________
_______________________________________________
_______________________________________________
_______________________________________________
Is there a difference in the graphs available for the first disk compared to the
second disk? _________________________________
Hint: The first disk is the Cache disk for the vSAN disk group, the second disk is a
Capacity disk.
8. Explore the vSAN - Physical Adapters performance graphs for the node.
_________________________________________________________
9. Explore the vSAN - Host Network performance graphs for the node.
_________________________________________________________
10. Navigate to the Monitor Performance view of the VxRail Manager VM.
Select the VxRail Manager VM in the Hosts and Clusters navigation pane.
Confirm that the view is on the Monitor tab with the Performance option
selected under the vSAN entry in the navigation pane.
Observe the categories for which vSAN performance graphs are available:
_____________________________
_____________________________
11. Explore the vSAN – VM performance graphs for the VxRail Manager VM.
__________________________________________________________
__________________________________________________________
12. Explore the vSAN – VIRTUAL DISKS performance graphs for the VxRail
Manager VM.
Select the VIRTUAL DISKS tab at the top of the Performance pane.
To determine how many virtual disks this VM contains, use the Virtual Disk
drop-down menu.
__________________________________________________________
__________________________________________________________
__________________________________________________________
Purpose
To understand the implications of using fault domains and different FTT and FTM
settings in the storage policy.
Tasks
References:
Lecture module:
Managing vSAN
Lab Steps
1. VMware vSAN provides a VxRail Cluster with self-healing, software-defined
storage.
Three variables define the data protection in a standard VxRail vSAN cluster:
Failures to Tolerate (FTT), which specifies how many failures the configuration
can tolerate, without rebuilding, before data is lost.
Failure Tolerance Method (FTM), which specifies whether Mirroring or Erasure
Coding is used to tolerate the failures.
Fault Domains, which specify areas of the cluster that are expected to fail
together (for example, nodes in the same chassis or rack).
The table lists the minimum number of nodes that are required for various FTT
and FTM settings:

FTT=1, Mirroring: 3 nodes (copies=2, witness=1)
FTT=1, Erasure Coding: 4 nodes (3+1 RAID group)
FTT=2, Mirroring: 5 nodes (copies=3, witness=2)
FTT=2, Erasure Coding: 6 nodes (4+2 RAID group)
FTT=3, Mirroring: 7 nodes (copies=4, witness=3)
At these minimum node counts, all the nodes are used, so there is no spare
node for rebuilds. Any node outage is tolerated until the node is returned to
service. Remember, outages include a node being down for maintenance.
For example, a 3-node cluster would have to use FTT=1 and FTM=Mirroring. If
a node goes down, either for maintenance or due to a failure, that would be
tolerated. However, the cluster would not be able to rebuild. Any additional
failure could lead to data loss or data unavailability.
Each of the questions can have more than one correct answer.
________________________________________________________
4. You have a cluster workload that would work best with Erasure Coding and is
required to tolerate one failure.
The cluster is also required to tolerate one failure when one node is down for
maintenance.
________________________________________________________
6. You have a cluster workload that would work best with mirroring and is
required to tolerate two failures.
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
________________________________________________________
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
________________________________________________________
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
________________________________________________________
Lab Steps
1. You have a hybrid 6-node VxRail cluster.
2. You have a cluster workload that would work best with Erasure Coding and is
required to tolerate one failure.
The cluster is also required to tolerate one failure when one node is down for
maintenance.
FTT = 1 and FTM = mirroring requires three nodes, so that would work as
there are extra nodes to rebuild.
FTT = 1 and FTM = erasure coding requires four nodes, so that would work as
there are extra nodes to rebuild.
FTT = 2 and FTM = mirroring requires five nodes, so that would work as there
are extra nodes to rebuild.
FTT = 2 and FTM = erasure coding requires six nodes, so that would not work
as there are no extra nodes for rebuild.
FTT = 3 and FTM = mirroring requires seven nodes, so that would not work.
4. You have a cluster workload that would work best with mirroring and is
required to tolerate two failures.
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
FTT = 1 and FTM = mirroring requires three nodes, so that would work. But
there are no extra nodes so it would not have room to rebuild.
No other FTT/FTM combinations would work.
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
FTT = 1 and FTM = mirroring requires three nodes, so that would work as
there are extra nodes to rebuild.
FTT = 1 and FTM = erasure coding requires four nodes, so that would work.
But there are no extra nodes to rebuild.
No other FTT/FTM combinations would work.
Which FTT and FTM settings could be configured on it? Would any of those
configurations be able to self-heal before the node was restored?
FTT = 1 and FTM = mirroring requires three nodes, so that would work as
there are extra nodes to rebuild.
FTT = 1 and FTM = erasure coding requires four nodes, so that would work as
there are extra nodes to rebuild.
FTT = 2 and FTM = mirroring requires five nodes, so that would work. But
there are no extra nodes to rebuild.
No other FTT/FTM combinations would work.
Lab Steps
1. Fault domains enable you to protect against nodes that are likely to fail in
groups.
When used correctly, fault domains can improve overall availability. When used
incorrectly, they can reduce system availability.
A fault domain consists of one or more vSAN nodes that are grouped
according to their physical location in the data center. Each fault domain only
takes one piece of a vSAN object (mirror, witness, or RAID element). If the
nodes fail together, the system is still protected.
Each fault domain should be a group of nodes that may fail together. You
must have sufficient fault domains to meet the compliance requirements
of the storage policy. If you do not have sufficient fault domains, the policy is
noncompliant and prevents the creation of VMs.
Example 1: A 6-node cluster is spread across three racks, with two nodes per
rack. Each rack could be set up as a fault domain. With FTT=1 and
FTM=Mirroring, you need three fault domains for compliance. In this
circumstance, setting up fault domains would improve availability.
Example 2: A 6-node cluster is spread across three racks, with two nodes per
rack. Each rack could be set up as a fault domain. With FTT=1 and
FTM=Erasure coding, you need four fault domains for compliance. The policy
would be noncompliant, and better availability would be achieved without fault
domains.
For extra protection, the considerations from the previous lab about having an
extra node for rebuilds also apply to fault domains. If a node fails, a different
node in the same fault domain could be used to rebuild. If the entire fault
domain fails, an extra fault domain would be required.
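The two examples above can be checked with a short sketch (illustrative Python, not vSAN code): a policy is compliant only when there are at least as many fault domains as there are object pieces to place.

```python
# Sketch: a storage policy is compliant only if there are at least as many
# fault domains as object pieces. Mirroring with FTT=f needs f+1 copies
# plus f witnesses -> 2f+1 fault domains; erasure coding needs 4 fault
# domains for FTT=1 (3+1) or 6 for FTT=2 (4+2).

def required_fault_domains(ftt, ftm):
    if ftm == "mirroring":
        return 2 * ftt + 1
    return {1: 4, 2: 6}[ftt]  # erasure coding

def compliant(num_fault_domains, ftt, ftm):
    return num_fault_domains >= required_fault_domains(ftt, ftm)

# Example 1 above: three racks as fault domains, FTT=1 mirroring.
assert compliant(3, 1, "mirroring")    # compliant
# Example 2 above: three fault domains, FTT=1 erasure coding.
assert not compliant(3, 1, "erasure")  # noncompliant
```

The same check drives the worksheet answers below: count the candidate fault domains (racks or rows), compare against the requirement for each FTT/FTM pair, and fall back to per-node fault domains when the count is too low.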
For each question, specify whether fault domains would improve availability,
and if so how the fault domains should be configured.
Each question can have more than one correct answer.
Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________

Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________

Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________

Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________

Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________

Fault domains? ____________
Fault domain configuration: ____________
FTT: ____________
FTM: ____________
Lab Steps
1. A 20-node cluster is spread across two racks in one row.
No - With only two racks there is no way to have the storage components in
different fault domains. The default setup with each node being its own fault
domain is best.
Yes - Each rack can be a fault domain. With four racks, FTT = 1 for both
erasure coding and mirroring would enable the storage components to each
be stored in different fault domains.
Yes - Each rack can be a fault domain. Any combination of FTT/FTM would
work with 10 fault domains.
Yes - In this case, fault domains could be created with racks or rows.
Using rows, five fault domains can be created.
FTT of 1, FTM could be Mirroring or Erasure Coding.
FTT of 2 and FTM of mirroring is also possible.
FTT of 2 and FTM of Erasure coding is not possible.
Yes - Using the three racks as fault domains. FTT = 1 with mirroring works.
Purpose
Tasks
References:
Lecture module:
Managing vSAN
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
https://vxrail##-vcenter.vsb.edu/ui/
User: [email protected]
Password: VMw@r3!!123
Select the Configure tab in the center pane. Select Storage Providers under
More in the Configure tab.
You will see one VMware vSAN Storage Provider that is internally managed within
vCenter.
Select the VMware vSAN Storage Provider in the center pane and view its
details in the window directly below the list of providers.
5. The VxRail vSAN cluster is automatically configured during the initial setup of
VxRail. Enabling vSAN automatically configures and registers a vSAN storage
provider for each host in the cluster.
The initial setup of VxRail creates a VxRail vSAN storage policy and the
default vSAN storage policy. You may have to scroll to the bottom of the pane
to see all the storage policies.
Select the VM Compliance tab. Observe the VMs associated with the
selected policy. What VMs use this policy?
_______________________________
Based on the rule set information and the VMs that this policy applies to, what
can you conclude? _________________________________
View the compliance status column. Are any of the components out of
compliance? ____________________________
Select the vSAN Default Storage Policy in the center pane. You should see
information about the policy in the lower section of the window.
Lab Steps
1. You should be logged in to the vSphere Client looking at the details of the
vSAN Default Storage Policy.
Click NEXT.
Policy structure – Select the box next to Enable rules for "vSAN" storage
Click NEXT
Use the drop-down to add Failures to tolerate – Set FTT to 1 failure - RAID-1 (Mirroring)
Click NEXT
Click NEXT
Click NEXT.
Policy structure – Select the box next to Enable rules for "vSAN" storage
Click NEXT
Click NEXT
Click BACK to go back to vSAN settings. Click the Advanced Policy Rules
near the top of the window.
Locate the Force provisioning option and enable it. Click NEXT
Click NEXT.
Click NEXT.
Policy structure – Select the box next to Enable rules for "vSAN" storage
Click NEXT
Use the drop-down to add Failures to tolerate – Set FTT to 1 failure - RAID-5 (Erasure Coding)
Click NEXT.
Only All-Flash systems with at least 4 nodes can support FTT=1 and
FTM=Erasure Coding.
Click NEXT
6. The VM Storage Policies listing should list the three policies that you created.
<your name>-RAID1-FTT1
<your name>-RAID1-FTT1
<your name>-RAID5-FTT1
Storage policies can be edited to make any needed changes. To edit a storage
policy, select the policy and click Edit Settings.
Purpose
Deploy a new VM on a VxRail cluster and apply the preferred vSAN storage policy.
Tasks
Lab Steps
1. Log in to the vSphere Client:
https://vxrail##-vcenter.vsb.edu/ui/
User: [email protected]
Password: VMw@r3!!123
While this lab is designed for use by multiple students simultaneously, it does
use shared resources.
You can see the work that other students do, and they can see your work.
Following the naming convention is important so that you can see what is
yours.
b. Right-click your node in the Navigator pane and select Deploy OVF
Template.
a. Select the Local file option and then click the Choose Files button.
c. Click NEXT.
The name enables you to identify which VMs are yours since they are visible
throughout the cluster. If you had multiple VxRail clusters using the same
external vCenter, you would choose which VxRail the VM was going to run on.
b. Click NEXT.
Since you right clicked on a node to deploy this VM, that node is selected as a
compute resource. You could select the cluster or another node. If you select
the cluster, vSphere decides where to deploy the VM. HA, DRS, and vMotion
may move the VM to other nodes.
b. Click NEXT. It may take a short amount of time to validate the requested
configuration.
d. Click NEXT.
6. Select storage
The Select Storage pane is the only difference between deploying a VM using
traditional storage and deploying a VM using vSAN storage on a VxRail. Use
the vSAN storage policy that you created earlier.
a. In the Select virtual disk format drop-down menu, select Thin Provision
from the available options.
b. For the VM Storage Policy, select the policy that is called <Your Name>-
RAID1-FTT1 that you created earlier. Notice that only one datastore is in the
Compatible list. The other datastores are Incompatible with the selected
policy. Also observe that the select virtual disk format automatically
changes to As defined in the VM storage policy.
d. Click NEXT.
7. Select networks
a. Two columns are listed: Source Network and Destination Network. Select
the entry under Destination Network to reveal a drop-down menu of the
available networks in the environment.
c. Click NEXT.
The VM deployment takes a few minutes. You can monitor the progress in the
Recent Tasks panel to see when the task completes.
a. Select the VMs and Templates icon above the Navigator pane.
e. Click the Launch Web Console link that is located in the central pane.
g. Log in to the VM
Username: user
Password: user1234
h. Run the command ip addr show eth0 and verify that eth0 has an IP
address of 192.168.1.1.
i. Type exit.
j. Close out of the web console and return to the vSphere Client.
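Step h checks the address by eye; the same check can be scripted by parsing the output of ip addr show eth0. The sample output below is representative of the command's format, not captured from the lab VM:

```python
import re

# Representative `ip addr show eth0` output (illustrative values).
SAMPLE_OUTPUT = """\
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    link/ether 00:0c:29:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0
"""

def ipv4_address(ip_addr_output):
    """Extract the first IPv4 address from `ip addr show` output."""
    match = re.search(r"inet (\d+\.\d+\.\d+\.\d+)/", ip_addr_output)
    return match.group(1) if match else None
```

Running ipv4_address(SAMPLE_OUTPUT) returns "192.168.1.1", matching the address the lab expects on the first VM.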
9. A second VM must now be deployed to communicate with the first. The steps
are similar to the deployment of the first VM.
b. Right-click your node in the Navigator pane and select Deploy OVF
Template.
e. Click NEXT.
b. Click NEXT.
11. Here you select where the VM is deployed. It is possible to select the cluster or
an individual node.
a. Select VxRail-VirtualSAN-Cluster
b. Click NEXT.
d. Click NEXT.
12. In the Select Storage section, use the vSAN storage policy that you created
earlier.
a. For the VM Storage Policy, look for the RAID5 policy <Your Name>-
RAID5-FTT1 that you created earlier.
You may notice that the <Your Name>-RAID5-FTT1 policy you are looking
for is unavailable. Erasure coding requires all-flash storage.
c. Click NEXT.
b. Click NEXT.
The VM deployment takes a few minutes. You can monitor the progress in the
Recent Tasks panel to see when the task completes.
c. Select your VM2. Right-click the VM, and select Power > Power On.
e. Click the Launch Web Console link that is located in the central pane.
g. Log in to the VM
Username: user
Password: user1234
h. Run the command ip addr show eth0 and verify that eth0 has an IP
address of 192.168.1.2.
i. Type exit.
j. Close out of the web console and return to the vSphere Client.
Lab Steps
1. VM templates can be created in two different ways: by converting a VM to a
template, or by cloning a VM to a template.
Once again, the only difference from a traditional vSphere environment is that
you use a vSAN policy and the vSAN datastore.
It is also worth mentioning that you may see other VMs and templates in this
lab from other students doing their labs. All your VMs, Templates, and Profiles
should start with your name.
A running VM can be cloned. The following steps create a clone and then
convert the clone to a template:
a. Right-click the <Your name>-VM2 virtual machine, and select Clone in the
pop-up menu. You should see Clone to Virtual Machine, Clone to
Template and Clone as Template to Library. Libraries can be used to
share images across multiple vSphere clusters that share the SSO domain.
b. Multiple clusters do not exist in this SSO domain since an internal vCenter is
being used for this VxRail. Select Clone to Template.
3. The wizard is similar to the create VM wizard you saw in the previous lab.
c. Click NEXT
e. Click NEXT
b. Click FINISH
The process takes a minute while the VM is cloned and the clone is converted
to a template. When the task is completed, you should see a new template.
a. Right-click <Your Name>-VM2 and, in the pop-up menu, select Power >
Power Off.
The process completes more quickly than the clone to template since the
data does not need to be rewritten. Notice that <Your-name>-VM2 no
longer has the VM icon and now has the template icon.
7. The purpose of this lab is not to teach you how to manage VMs in a vSphere
environment.
Purpose
Use the vSphere Client to add a new distributed port group to an existing VxRail
VDS.
Tasks
References:
Lecture module:
Managing Virtualization
Lab Steps
1. Log in to the vSphere Client: https://vxrail##-vcenter.vsb.edu/ui
User: [email protected]
Password: VMw@r3!!123
You can see the work that other students do, and they can see your work.
Following the naming convention is important so that you can identify what is
yours.
2. To create a distributed port group, find the distributed virtual switch. The
distributed switch is created as part of the installation process of a VxRail.
Hence, it uses the VxRail naming convention.
Under the distributed switch, you see the uplinks and the distributed port
groups that have already been created. Most of the existing port groups are
the default VxRail port groups; however, there may be some distributed port
groups created by other students.
3. Right-click the VMware HCIA Distributed Switch, and select Distributed Port
Group > New Distributed Port Group.
For Name, use <Your Name>-DPortGroup so that your Distributed Port
Group can be identified.
Click NEXT.
Set Port binding to Static Binding, the recommended setting for general use.
It means that each VM in the group has a port that is reserved for it.
By contrast, Ephemeral - no binding enables the ports to be configured from
the individual hosts when vCenter is not available, which is useful for
recovery purposes.
Leave Network resource pool set to (default). Network resource pools are
used to set up QoS and priority for the virtual networks connecting through
the physical uplinks on each node.
For VLAN type, select VLAN. Selecting VLAN enables you to specify the
VLAN ID for this distributed port group.
For the VLAN ID, concatenate your VxRail number with your node number.
For example, if you are on VxRail 22 and node 1, then use 221.
Click NEXT.
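The VLAN ID rule above (concatenate your VxRail number with your node number) can be sketched as a small helper; the function name is illustrative:

```python
def student_vlan_id(vxrail_number, node_number):
    """Concatenate the VxRail number and the node number, e.g.
    VxRail 22, node 1 -> VLAN ID 221. Valid VLAN IDs are 1-4094."""
    vlan_id = int(f"{vxrail_number}{node_number}")
    if not 1 <= vlan_id <= 4094:
        raise ValueError(f"{vlan_id} is not a valid VLAN ID")
    return vlan_id
```

The range check matters because the 802.1Q VLAN ID field only allows values 1 through 4094.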
6. Complete the Security section. Accept the default settings and click NEXT.
7. Complete the Traffic shaping section. Accept the default settings and click
NEXT.
Customize the uplink settings to match the existing virtual machine port
groups. Configure uplink2 as a standby uplink.
In the Failover order section, click uplink2 and then click the down arrow
button to move it under Standby uplinks. If uplink3 and uplink4 are present,
select each uplink and then use the down arrow button to move them under
Unused uplinks. Your uplink configuration should look like the illustration
below. Uplink1 should be active, and uplink2 should be standby. Any other
uplink should be under unused.
Leave all the other settings at the default values. Click NEXT.
9. Complete the Monitoring section. Accept the default settings and click NEXT.
10. Complete the Miscellaneous section. Accept the default settings and click
NEXT.
11. Complete the Ready to complete section. Verify the new port group
configuration. If everything is correct, click FINISH to create the virtual port
group.
In the work pane, select the Configure tab and then select Settings >
Topology to see the new distributed port group. The new distributed port
group is also displayed under the VMware HCIA Distributed Switch in the left
Navigation Pane.
12. Examine the newly created distributed port group. In the left Navigation pane,
select the newly created distributed port group. In the work pane, select the
Configure tab and then select Settings > Policies.
Purpose
Tasks
Lab Steps
1. VMware High Availability is one of the simplest and most effective ways to
improve the availability of the services that virtual machines provide.
https://vcenter##.vsb.edu/ui
User: [email protected]
Password: VMw@r3!!123
3. Check the default HA setting for VxRail. Click Menu and select Hosts and
Clusters. Then open the vCenter and Datacenter in the navigation pane if
they are not already open.
c. Expand the vSphere HA section toward the bottom of the work pane.
Here you can see that this cluster is mostly idle. You can also see the CPU
and Memory that is reserved for failover. The graphic shows the default
configuration. Proactive HA moves VMs when a host is degraded. Host
monitoring means that the ESXi hosts are monitored, and if one of them fails,
the VMs that were running on it are restarted. VM Monitoring captures the
heartbeat information from VMware tools on the individual VMs, and enables a
failed VM to be restarted automatically. In the lab, VMware tools are not
installed so leave it disabled.
4. HA failover is initiated by powering off one of the VxRail nodes in the cluster.
Ensure that vCenter is not running on the node that is going to be powered
off. The failover would still work; however, monitoring would be much more
difficult.
b. Select Migrate
d. Select one of the VxRail nodes that is not used by the vCenter Server
Appliance.
h. Click FINISH
There is nothing special about this vMotion. As on any other VMware cluster,
the VM stays up while it is migrated from one host to another. Since the
system has little utilization, the migration should complete quickly.
6. While the vMotion is running, check the configuration of vSphere HA. To check
and modify further vSphere HA configuration items:
_____________________________________________________
Host Isolation is when a host is not able to communicate with the rest of the
vSphere cluster. What is Host Isolation set to?
______________________________________________________
8. To show VMware HA restarting a VM, reboot the node that is running the VM.
First check the nodes in the cluster. Do any of them have a red icon? If they
do, someone else in your cluster is doing the lab.
You can wait to do the lab with your own VM or watch their VM restart.
a. In the navigation pane, select the node (ESXi host) your VM was running on.
c. In the Enter a Reason pop-up window, type Lab HA test and click OK.
The host that is rebooting gets a red icon and is labeled not responding.
The VM loses the green triangle that indicates that it is running and is
labeled disconnected.
The VM gets the green triangle back and is now running on a new host.
Purpose
Tasks
References:
Lecture module:
VxRail Availability Management
Lab Steps
1. VxRail components do not perform optimally at high levels of utilization. The
details of how performance degrades depend on which resource is overused
and by how much.
The best practice for VxRail is to keep utilization below 80%; beyond that
point, performance issues begin. If utilization ever exceeds 100%, outages
should be expected.
Utilization calculations should be done separately for compute capacity,
memory utilization, network bandwidth, storage capacity (GB), and storage
performance (IOPS).
For simplicity, all the calculations in this lab are based on memory
utilization.
What is the utilization with one node down for maintenance? _________
Will the loss of a node result in over utilization? _________
What is the utilization if an extra node fails while the first node is still down for
maintenance? ____________
Will that lead to over utilization? _____________
How many nodes should be ordered to bring the utilization down to less than
50%? ________________________
At that point how many nodes need to be out of service for utilization to again
exceed 80%? ____________________
Utilization is growing at the rate of 5% a month. Utilization will be 55% the next
month, and then 60%, and so on. How long until utilization climbs above 80%?
____________________
At that point how many nodes should be added to return the cluster to under
50% utilization? _______________________
After six months, the cluster is at 90% utilization. How many nodes should be
added to bring down the utilization to less than 50%?
_____________________
Lab Steps
1. In general, these questions are answered with the formula:
New utilization = Current utilization × (Current node count ÷ Nodes remaining in service)
What is the utilization with one node down for maintenance? _________
Will the loss of a node result in over utilization? _________
What is the utilization if an extra node fails while the first node is still down
for maintenance? 87%
Will that lead to over utilization? _____________
How many nodes should be ordered to bring the utilization down to less than
50%? For a 10-node VxRail cluster at 90% utilization that is experiencing
performance problems, 8 nodes must be added, for a total of 18 nodes.
At that point, how many nodes need to be out of service for utilization to
again exceed 80%? 7 nodes down would lead to 82% utilization.
Utilization is growing at the rate of 5% a month. Utilization will be 55% the next
month, and then 60%, and so on. How long until utilization climbs above 80%?
____________________
At that point, how many nodes need to be added to return the cluster to
under 50% utilization? 8 nodes, for a total of 20 nodes (round up).
After six months, the cluster is at 90% utilization. How many nodes need to be
added to bring down the utilization to less than 50%? For the 4-node cluster
now at 90% utilization, 4 nodes must be added, for a total of 8 nodes.
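The worked answers above all follow from the same idea: the cluster's load redistributes across the nodes remaining in service. A short sketch that reproduces the answer-key numbers:

```python
def utilization_after(current_pct, current_nodes, nodes_in_service):
    """Utilization after nodes fail, are removed, or are added:
    the same total load spreads across the nodes now in service."""
    return current_pct * current_nodes / nodes_in_service
```

For the 10-node cluster at 90%, adding 8 nodes gives utilization_after(90, 10, 18) = 50%; losing 7 of those 18 nodes gives utilization_after(50, 18, 11), about 82%; and growing the 4-node cluster at 90% to 8 nodes gives 45%.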
Purpose
Create a log bundle using VxRail Plug-In. Dell EMC support may request a log
bundle to help diagnose issues.
Export system logs using the vSphere Client. Dell EMC support or VMware support
may request diagnostic information that is related to vCenter Server or the ESXi
nodes to diagnose issues.
Tasks
References:
Lecture module:
Maintenance and Troubleshooting
Lab Steps
1. You should be logged in to your jump server. See Lab 0 for instructions.
All the lab steps are run from the jump server.
https://vxrail##-vcenter.vsb.edu/ui/
Username: [email protected]
Password: VMw@r3!!123
Go to the Hosts and Clusters view – Menu > Hosts and Clusters.
Expand VxRail-Datacenter.
Click VxRail-Virtual-SAN-Cluster-######
The VxRail plug-in functionality is under VxRail in the tree panel of the
Configure tab.
Click CREATE. The CREATE button is located in the upper right corner of the
Log Collection pane.
In the Create Log Bundle dialog, you can select logs for VxRail Manager,
vCenter, ESXi, iDRAC, and PTAgent.
ESXi, iDRAC, and PTAgent logs require selecting specific hosts.
In this lab, you create a log bundle for VxRail Manager, iDRAC, and PTAgent.
Select the boxes to include VxRail Manager, iDRAC, and PTAgent.
Click GENERATE.
Monitor the status in the Log Collection window. The Status column shows a
value of In Progress while the log bundle is being generated. The process can
take several minutes, please be patient. The status changes to Completed
with a green check mark after the log bundle has been successfully generated.
Chrome automatically saves the file to the Downloads folder. Firefox and
Internet Explorer give you the option to either open or save the file to the
Downloads folder. Ensure that you save the file. The filename starts with
VxRail_Support_Bundle. The name also includes the date and time.
You may see a "Failed - Bad Certificate" error message when trying to
download the log bundle. To resolve the bad certificate issue:
a) Open a Firefox session to https://vxrail##-vcenter.vsb.edu.
b) Click Download trusted root CA certificates. Save the Zip file to a
known location.
c) Extract the contents of the downloaded file. Go to the certs/win folder.
d) Right-click each of the files in the win folder and install them.
e) Open a new browser session to the vSphere client and try to download
the VxRail log bundle again.
Use the Windows File Explorer to locate the log bundle in the
C:\Users\Administrator\Downloads folder.
Open the VxRail_Support_Bundle* folder. You should see three files, one
each for iDRAC, PTAgent, and VxRail.
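If several bundles accumulate in the Downloads folder over time, a short script can locate the most recent one. The filename prefix follows the VxRail_Support_Bundle convention named above; the function name is illustrative:

```python
from pathlib import Path

def newest_support_bundle(downloads):
    """Return the most recently modified VxRail support bundle in the
    given folder, or None if no bundle has been downloaded yet."""
    bundles = sorted(Path(downloads).glob("VxRail_Support_Bundle*"),
                     key=lambda p: p.stat().st_mtime)
    return bundles[-1] if bundles else None
```

On the jump server this would be called with the C:\Users\Administrator\Downloads path from the step above.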
Lab Steps
1. Log in to the vSphere Client:
https://vxrail##-vcenter.vsb.edu/ui/
User: [email protected]
Password: VMw@r3!!123
Go to the Hosts and Clusters view – Menu > Hosts and Clusters.
Right-click the VxRail vCenter instance, and select Export System Logs...
Select the option to Include vCenter Server and vSphere UI Client logs.
Click NEXT.
Accept the default selections. Feel free to expand the different sections to see
all the available categories for which logs can be collected. The performance
Chrome opens a new browser tab and saves the file to the Downloads folder.
Firefox and Internet Explorer give you the option to either open or save the file
to the Downloads folder. Ensure that you save the file.
The log bundle can be large, and the export process can take some time
to complete. You can follow the progress of the log bundle download in
the Recent Tasks panel. In this lab environment, the log export process
took about 10 minutes – one node and vCenter Server logs with the
default selections. The support bundle was about 1.2 GB.
Use the Windows File Explorer to locate the log bundle in the
C:\Users\Administrator\Downloads folder. The name of the zipped logfile
starts with VMware-vCenter-support and includes the date and time.
Double-click the logfile. You should see log bundles for the node and vCenter.
The bundles have the .tgz extension. Extracted logfiles should be similar to the
files shown in the graphic. TGZ bundles can be opened with tools like WinZip
or 7-Zip.
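The .tgz bundles can also be inspected without WinZip or 7-Zip: Python's standard tarfile module reads gzip-compressed tar archives directly. A minimal sketch (the bundle path passed in is whatever file you extracted; the function name is illustrative):

```python
import tarfile

def list_bundle_members(tgz_path):
    """List the file names inside a .tgz log bundle."""
    with tarfile.open(tgz_path, "r:gz") as bundle:
        return bundle.getnames()
```

This is handy for confirming that a node or vCenter bundle actually contains the expected log files before sending it to support.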
4. Optional step
5. Optional step
Purpose
Review VxRail related advisories and knowledgebase (KB) articles on the Dell
EMC Support site.
Tasks
References:
Lecture module:
Maintenance and Troubleshooting
Lab Steps
1. The lab is best performed from your own computer that is used for day-to-day
administration of a VxRail environment.
This lab can be performed at your convenience. The lab requires Internet
connectivity to the Dell EMC Support site and a valid Dell EMC support
account.
2. Log in to the Dell EMC Support site and go to the Product Support page for the
VxRail Appliance Series.
The following URL takes you directly to the VxRail Appliance Series support
page:
https://support.emc.com/products/39970_VxRail-Appliance-Series
After a successful login you should see:
Click the link in the center panel and see a listing of the relevant
advisories.
These advisories are the Dell EMC Technical Advisories (DTA) and Dell EMC
Security Advisories (DSA) for the VxRail Appliance.
You can sign up for advisory alerts by clicking the link in the
upper right corner of the Technical and Security Advisories page.
Lab Steps
1. You should be logged in to the Dell EMC Support site looking at the VxRail
Appliance Series support page.
Click the link in the center panel and see a listing of all the
VxRail KB articles.
You can filter the list of articles by various criteria.
Click How To, under Dell EMC Article Type, in the panel on the left.
Here are some useful articles that you should see in the listing:
Article KB Number
Click each of the articles, and view the details. Each article opens in a new
browser tab. Close the browser tab after reviewing the article.
Clear the How To filter to go back to the full listing of VxRail KB articles.
Click Break Fix under Dell EMC Article Type in the panel on the left.
Here are some useful articles that you should see in the listing.
Article KB Number
VxRail: When upgrading Dell nodes to 4.5.x or 4.7.x, the vSAN
Health service item "Controller firmware is VMware certified"
may be in warning status (KB 528230)
Click each of the articles, and view the details. Each article opens in a new
browser tab. Close the browser tab after reviewing the article.
Purpose
Tasks
References:
Lecture module:
Maintenance and Troubleshooting
Lab Steps
1. The lab is best performed from your own computer that is used for day-to-day
administration of the VxRail environment.
The instructions in this lab are for Dell EMC SolVe Online.
VxRail procedures can also be generated using Dell EMC SolVe Desktop. In
that case, SolVe Desktop must already be installed and authorized, and the
VxRail Appliance generator downloaded.
You should see a listing of the top service topics that are related to VxRail.
View the details of one of the topics. Clicking the topic opens a new browser
tab. Return to the SolVe Online browser tab after you have reviewed the
article.
The list of available procedures depends on your access level. The graphic
shows the Customer view.
Select VxRail E560/E560F. Scroll down and view the components that can be
replaced for this node type.
______________________________________________________________
________________________________________________________
Select VxRail G560/G560F. View the components that can be replaced for
this node type.
______________________________________________________________
________________________________________________________
Click CANCEL.
Click CANCEL.
Is the system running VMware Cloud Foundation on VxRail? Select No. Click
NEXT.
Click CANCEL.
Lab Steps
1. SolVe Online should be open on the VxRail Appliance procedures listing.
Select Your Power Control Activity - Select Power Down a Running VxRail
Cluster.
The VxRail Plug-In Shut Down Cluster feature provides a graceful shutdown
of the entire cluster with a few clicks. During the shutdown procedure,
VxRail Manager provides detailed error messages, with links to the
appropriate knowledge base articles, if there are any problems.
Although VxRail Manager shuts down all VMs in the cluster, users are
responsible for properly shutting down their client VMs: Dell EMC
recommends the graceful shutdown of all client VMs before performing this
procedure.
Adequate planning should be done before performing this procedure.
Lab Steps
1. SolVe Online should be open on the VxRail Appliance procedures listing.
2. Generate the capacity drive expansion procedure for VxRail E series nodes.
Review the section on Handling FRUs. Why is use of an ESD kit important?
__________________________________________
Review the section on Materials needed. What is a requirement for the new
drives? __________________________________
What software interface is used to run the disk expansion procedure?
______________________________________________
Lab Steps
1. SolVe Online should be open on the VxRail Appliance procedures listing.