Lenovo Config VMware
◦ iSCSI
◦ SAS
2. Creating logical volumes on the storage array.
• Online help — Describes how to use ThinkSystem System Manager to complete configuration
and storage management tasks. It is available within the product.
• Lenovo ThinkSystem Storage Documentation Center DE Series (a database of articles) —
Provides troubleshooting information, FAQs, and instructions for a wide range of Lenovo
products and technologies.
• VMware Configuration Maximums — Describes how to configure virtual and physical storage to
stay within the allowed maximums that ESX/ESXi supports.
Chapter 2. Assumptions
The VMware express method is based on the following assumptions:
Hardware
• You have used the Installation and Setup Instructions included with the controller shelves to install the hardware.
• You have connected cables between the optional drive shelves and the controllers.
• You have applied power to the storage system.

Host
• You have made a connection between the storage system and the data host.
• You have installed the host operating system.
• You are not configuring the data (I/O attached) host to boot from SAN.

Protocol: FC
• You have made all host-side FC connections and activated switch zoning.
• You are using Lenovo-supported FC HBAs and switches.
• You are using FC HBA driver and firmware versions as listed in the Lenovo Storage Interoperation Center (LSIC).

Protocol: NVMe over Fibre Channel
• You have made all host-side FC connections and activated switch zoning.
• You are using Lenovo-supported FC HBAs and switches.
• You are using FC HBA driver and firmware versions as listed in the Lenovo Storage Interoperation Center (LSIC).

Protocol: iSCSI
• You are using Ethernet switches capable of transporting iSCSI traffic.
• You have configured the Ethernet switches according to the vendor’s recommendation for iSCSI.

Protocol: SAS
• You are using SAS HBA driver and firmware versions as listed in the Lenovo Storage Interoperation Center (LSIC).
Chapter 3. Understand the VMware workflow
This workflow guides you through the "express method" for configuring your
storage array and ThinkSystem System Manager to make storage available to a
VMware host.
Chapter 4. Verify the VMware configuration is supported
To ensure reliable operation, you create an implementation plan and then use
the Lenovo Storage Interoperation Center (LSIC) to verify that the entire
configuration is supported.
Steps
2. Choose your storage model, firmware, protocol, HBA, and operating system, and then click here for guidance on how to use the LSIC to search for supported product configurations.
3. As necessary, make the updates for your operating system and protocol as listed in the table.
Operating system updates
• You might need to install out-of-box drivers to ensure proper functionality and supportability. You can install HBA drivers using the ESXi shell or a remote SSH connection to the ESXi host. To access the host using either of those methods, you must enable the ESXi shell and SSH access. For more information about the ESXi shell, refer to the VMware Knowledge Base article regarding using the ESXi shell in ESXi. For installation commands, refer to the instructions that accompany the HBA drivers.
• Each HBA vendor has specific methods for updating boot code and firmware. Some of these methods could include the use of a vCenter plug-in or the installation of a CIM provider on the ESXi host. vCenter plug-ins can be used to obtain information about a vendor’s specific HBA. Refer to the support section of the vendor’s website to obtain the instructions and software necessary to update the HBA boot code or firmware. Refer to the VMware Compatibility Guide or the HBA vendor’s website to obtain the correct boot code or firmware.

Protocol-related updates
• FC: Host bus adapter (HBA) driver, firmware, and bootcode
• iSCSI: Network interface card (NIC) driver, firmware, and bootcode
• SAS: Host bus adapter (HBA) driver, firmware, and bootcode
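Before applying any updates, it can be useful to record the driver and firmware versions currently on the ESXi host. The following ESXi shell commands are a minimal sketch of that check; vmnic0 is a placeholder NIC name for iSCSI configurations:

# List the storage adapters (HBAs) and the drivers that claimed them
esxcli storage core adapter list
# Show driver and firmware details for a specific NIC (relevant to iSCSI)
esxcli network nic get -n vmnic0
# List installed VIBs to confirm driver package versions
esxcli software vib list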
Chapter 5. Configure IP addresses using DHCP
To configure communications between the management station and the storage
array, use Dynamic Host Configuration Protocol (DHCP) to provide IP
addresses.
Each storage array has two controllers (duplex), and each controller has two storage management ports; however, only one management port on each controller is usable by the customer.
You have installed and configured a DHCP server on the same subnet as the storage management
ports.
The following instructions refer to a storage array with two controllers (a duplex configuration).
Steps
1. If you have not already done so, connect an Ethernet cable to the management station and to
management port 1 on each controller (A and B).
Do not use management port 2 on either controller. Port 2 is reserved for use
by Lenovo technical personnel.
If you disconnect and reconnect the Ethernet cable, or if the storage array is power-cycled, DHCP assigns IP addresses again. This process occurs until static IP addresses are configured. It is recommended that you avoid disconnecting the cable or power-cycling the array.
If the storage array cannot get DHCP-assigned IP addresses within 30 seconds, the following
default IP addresses are set:
Your network administrator needs the MAC addresses to determine the IP address for each
controller. You will need the IP addresses to connect to your storage system through your
browser.
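After your network administrator identifies the DHCP-assigned addresses from the MAC addresses, you can confirm that the management station can reach each controller; 192.168.1.101 below is a placeholder for the actual management IP address of controller A:

# From the management station, verify basic reachability of management port 1 on controller A
ping 192.168.1.101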
Chapter 6. Configure the multipath software
Multipath software provides a redundant path to the storage array in case one of
the physical paths is disrupted.
The multipath software presents the operating system with a single virtual device that represents the active physical paths to the storage, and it manages the failover process that updates the virtual device. For VMware, the NVMe over Fibre Channel protocol uses the High Performance Plug-in (HPP).
For the FC, iSCSI, and SAS protocols, VMware provides plug-ins, known as Storage Array Type Plug-ins (SATPs), to handle the failover implementations of specific vendors' storage arrays. The SATP you should use is VMW_SATP_ALUA.
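After the host is set up, you can confirm which plug-in has claimed the DE Series devices. This is a minimal sketch using standard esxcli commands; the HPP listing assumes an ESXi release (7.0 or later) that includes the High Performance Plug-in:

# FC, iSCSI, and SAS devices: list NMP devices and the SATP/PSP that claimed them
esxcli storage nmp device list
# NVMe over Fibre Channel devices: list devices claimed by the High Performance Plug-in
esxcli storage hpp device list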
Chapter 7. Access ThinkSystem System Manager and use
the Setup wizard
You use the Setup wizard in ThinkSystem System Manager to configure your
storage array.
• You have ensured that the device from which you will access ThinkSystem System Manager contains one of the following browsers:
  ◦ Mozilla Firefox 31
  ◦ Safari 9
If you are an iSCSI user, make sure you have closed the Setup wizard while configuring iSCSI.
The wizard automatically relaunches when you open System Manager or refresh your browser and
at least one of the following conditions is met:
If the Setup wizard does not automatically appear, contact technical support.
Steps
The first time ThinkSystem System Manager is opened on an array that has not been configured, the Set Administrator Password prompt appears. Role-based access management configures four local roles: admin, support, security, and monitor. The latter three roles have random passwords that cannot be guessed. After you set a password for the admin role, you can change all of the passwords using the admin credentials. See the ThinkSystem System Manager online help for more information on the four local user roles.
2. Enter the System Manager password for the admin role in the Set Administrator Password and
Confirm Password fields, and then click Set Password.
When you open System Manager and no pools, volume groups, workloads, or notifications
have been configured, the Setup wizard launches.
◦ Verify hardware (controllers and drives) — Verify the number of controllers and drives in
the storage array. Assign a name to the array.
◦ Verify hosts and operating systems — Verify the host and operating system types that the
storage array can access.
◦ Accept pools — Accept the recommended pool configuration for the express installation
method. A pool is a logical group of drives.
◦ Enable AutoSupport — Automatically monitor the health of your storage array and have
dispatches sent to technical support.
4. If you have not already created a volume, create one by going to Storage > Volumes > Create
> Volume.
Chapter 8. Perform FC-specific tasks
For the Fibre Channel protocol, you configure the switches and determine the
host port identifiers.
• You must have used your HBA utility to discover the WWPN of each host initiator port and of
each controller target port connected to the switch.
A vendor’s HBA utility can be used to upgrade and obtain specific information
about the HBA. Refer to the support section of the vendor’s website for
instructions on how to obtain the HBA utility.
For details about zoning your switches, see the switch vendor’s documentation.
Each initiator port must be in a separate zone with all of its corresponding target ports.
Steps
1. Log in to the FC switch administration program, and then select the zoning configuration
option.
2. Create a new zone that includes the first host initiator port and that also includes all of the
target ports that connect to the same FC switch as the initiator.
3. Create additional zones for each FC host initiator port in the switch.
4. Save the zones, and then activate the new zoning configuration.
Steps
esxcfg-scsidevs -a
3. Record the initiator identifiers. The output will be similar to this example:
The illustration shows a host connected to a DE Series storage array in two zones. One zone is
indicated by the blue line; the other zone is indicated by the red line. Each zone contains one
initiator port and all target ports.
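If you prefer esxcli to esxcfg-scsidevs for gathering the host WWPNs, the following is a minimal sketch; for Fibre Channel HBAs, the adapter UID in the output is reported in the form fc.<WWNN>:<WWPN>:

# List all storage adapters; record the WWPN portion of each FC adapter UID
esxcli storage core adapter list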
Host (initiator) port connections (record the WWPN for each):
• Callout No. 1: Host (WWPN: not applicable)

Host OS type:
Chapter 9. Perform NVMe over FC-specific tasks
For the NVMe over Fibre Channel protocol, you configure the switches and
determine the host port identifiers.
• You must have used your HBA utility to discover the WWPN of each host initiator port and of
each controller target port connected to the switch.
A vendor’s HBA utility can be used to upgrade and obtain specific information
about the HBA. Refer to the support section of the vendor’s website for
instructions on how to obtain the HBA utility.
For details about zoning your switches, see the switch vendor’s documentation.
Each initiator port must be in a separate zone with all of its corresponding target ports.
Steps
1. Log in to the FC switch administration program, and then select the zoning configuration
option.
2. Create a new zone that includes the first host initiator port and that also includes all of the
target ports that connect to the same FC switch as the initiator.
3. Create additional zones for each FC host initiator port in the switch.
4. Save the zones, and then activate the new zoning configuration.
Steps
esxcfg-scsidevs -a
3. Record the initiator identifiers. The output will be similar to this example:
The illustration shows a host connected to a DE Series storage array in two zones. One zone is
indicated by the blue line; the other zone is indicated by the red line. Each zone contains one
initiator port and all target ports.
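For NVMe over Fibre Channel, the host is also identified by its NVMe Qualified Name (NQN). The following commands are a minimal sketch for collecting that information, assuming an ESXi release (7.0 or later) that provides the esxcli nvme namespace of commands:

# Display NVMe host information, including the host NQN
esxcli nvme info get
# List the NVMe controllers the host has connected to
esxcli nvme controller list
# List the NVMe namespaces (volumes) visible to the host
esxcli nvme namespace list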
9.4.1. Host identifiers
Array controller (target) port connections (record the WWPN for each):
• Callout No. 8: Controller B, port 2 to FC switch 2

Host OS type:
Chapter 10. Perform iSCSI-specific tasks
For the iSCSI protocol, you configure the switches and configure networking on
the array side and the host side. Then you verify the IP network connections.
• You have two separate networks for high availability. Make sure that you isolate your iSCSI
traffic to separate network segments.
• You have enabled send and receive hardware flow control end to end.
Port channels/LACP is not supported on the controller’s switch ports. Host-side
LACP is not recommended; multipathing provides the same benefits or better.
Consult your network administrator for tips on selecting the best configuration for your
environment.
While planning your iSCSI networking, remember that the VMware Configuration Maximums guides state that the maximum number of supported iSCSI storage paths is 8. You must consider this requirement to avoid configuring too many paths.
By default, the VMware iSCSI software initiator creates a single session per iSCSI target when you
are not using iSCSI port binding.
VMware iSCSI port binding is a feature that forces all bound VMkernel ports to
log into all target ports that are accessible on the configured network segments. It
is meant to be used with arrays that present a single network address for the
iSCSI target. Lenovo recommends that iSCSI port binding not be used. For
additional information, see the VMware Knowledge Base for the article regarding
considerations for using software iSCSI port binding in ESX/ESXi. If the ESXi host is attached to another vendor’s storage, Lenovo recommends that you use separate iSCSI VMkernel ports to avoid any conflict with port binding.
As a best practice, do not use port binding with DE Series storage arrays.
To ensure a good multipathing configuration, use multiple network segments for the iSCSI network.
Place at least one host-side port and at least one port from each array controller on one network
segment, and an identical group of host-side and array-side ports on another network segment.
Where possible, use multiple Ethernet switches to provide additional redundancy.
You must enable send and receive hardware flow control end to end. You must disable priority
flow control.
If you are using jumbo frames within the IP SAN for performance reasons, make sure to configure
the array, switches, and hosts to use jumbo frames. Consult your operating system and switch
documentation for information on how to enable jumbo frames on the hosts and on the switches.
To enable jumbo frames on the array, complete the steps in Configuring array-side networking—
iSCSI.
Many network switches have to be configured above 9,000 bytes for IP overhead.
Consult your switch documentation for more information.
• You must know the IP address or domain name for one of the storage array controllers.
• You or your system administrator must have set up a password for the System Manager GUI, or you must have configured Role-Based Access Control (RBAC) or LDAP and a directory service for
the appropriate security access to the storage array. See the ThinkSystem System Manager
online help for more information about Access Management.
This task describes how to access the iSCSI port configuration from the Hardware page. You can
also access the configuration from System › Settings › Configure iSCSI ports.
For additional information on how to set up the array-side networking on your
VMware configuration, see the VMware Configuration Guide for ThinkSystem DE-
Series SAN OS iSCSI Integration with ESXi 6.X Technical Report.
Steps
The first time ThinkSystem System Manager is opened on an array that has not been
configured, the Set Administrator Password prompt appears. Role-based access management
configures four local roles: admin, support, security, and monitor. The latter three roles have
random passwords that cannot be guessed. After you set a password for the admin role, you can change all of the passwords using the admin credentials. See the ThinkSystem System
Manager online help for more information on the four local user roles.
2. Enter the System Manager password for the admin role in the Set Administrator Password and
Confirm Password fields, and then click Set Password.
When you open System Manager and no pools, volume groups, workloads, or notifications
have been configured, the Setup wizard launches.
You will use the wizard later to complete additional setup tasks.
4. Select Hardware.
6. Click the controller with the iSCSI ports you want to configure.
8. In the drop-down list, select the port you want to configure, and then click Next.
To see all port settings, click the Show more port settings link on the right of the dialog box.
Port settings:
• Configured ethernet port speed: Select the desired speed. The options that appear in the drop-down list depend on the maximum speed that your network can support (for example, 10 Gbps).
• Enable IPv4 / Enable IPv6: Select one or both options to enable support for IPv4 and IPv6 networks.
• TCP listening port (available by clicking Show more port settings): If necessary, enter a new port number. The listening port is the TCP port number that the controller uses to listen for iSCSI logins from host iSCSI initiators. The default listening port is 3260. You must enter 3260 or a value between 49152 and 65535.
• MTU size (available by clicking Show more port settings): If necessary, enter a new size in bytes for the Maximum Transmission Unit (MTU).
• Enable ICMP PING responses: Select this option to enable the Internet Control Message Protocol (ICMP). The operating systems of networked computers use this protocol to send messages. These ICMP messages determine whether a host is reachable and how long it takes to get packets to and from that host.
If you selected Enable IPv4, a dialog box opens for selecting IPv4 settings after you click Next.
If you selected Enable IPv6, a dialog box opens for selecting IPv6 settings after you click Next.
If you selected both options, the dialog box for IPv4 settings opens first, and then after you
click Next, the dialog box for IPv6 settings opens.
10. Configure the IPv4 and/or IPv6 settings, either automatically or manually. To see all port
settings, click the Show more settings link on the right of the dialog box.
• Manually specify static configuration: Select this option, and then enter a static address in the fields. For IPv4, include the network subnet mask and gateway. For IPv6, include the routable IP address and router IP address.
In this express method for configuring iSCSI networking on the host side, you allow the ESXi host
to carry iSCSI traffic over four redundant paths to the storage.
After you complete this task, the host is configured with a single vSwitch containing both VMkernel
ports and both VMNICs.
For additional information on configuring iSCSI networking for VMware, see the vSphere
Documentation Center for your version of vSphere.
Steps
1. Configure the switches that will be used to carry iSCSI storage traffic.
6. Use either the vSphere Client or the vSphere Web Client to perform the host-side configuration.
The two interfaces differ in functionality, and the exact workflow will differ; a command-line sketch of the equivalent host-side steps follows this list.
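As mentioned above, the following ESXi shell commands are a minimal sketch of the software-iSCSI portion of the host-side setup; the adapter name vmhba65 and the address 192.168.130.101 are placeholders, and the vSwitch and VMkernel port creation is still done in the vSphere client:

# Enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true
# Identify the software iSCSI adapter name (for example, vmhba65)
esxcli iscsi adapter list
# Add a send-targets discovery address for one of the array iSCSI ports
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.130.101
# Rescan the adapter to log in to the discovered targets
esxcli storage core adapter rescan --adapter=vmhba65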
10.5. Step 5: Verify IP network connections—iSCSI, VMware
You verify Internet Protocol (IP) network connections by using ping tests to
ensure the host and array are able to communicate.
Steps
1. On the host, run one of the following commands, depending on whether jumbo frames are
enabled:
◦ If jumbo frames are not enabled, run this command:
vmkping <iSCSI_target_IP_address>
◦ If jumbo frames are enabled, run the ping command with a payload size of 8,972 bytes. The IP and ICMP combined headers are 28 bytes, which, when added to the payload, equals 9,000 bytes. The -s switch sets the packet size, and the -d switch sets the DF (Don’t Fragment) bit on the IPv4 packet. These options allow jumbo frames of 9,000 bytes to be successfully transmitted between the iSCSI initiator and the target (see the example command after these steps).
3. Issue a vmkping command from each host’s initiator address (the IP address of the host
Ethernet port used for iSCSI) to each controller iSCSI port. Perform this action from each host
server in the configuration, changing the IP addresses as necessary.
If the command fails with the message sendto() failed (Message too long),
verify the MTU size (jumbo frame support) for the Ethernet interfaces on the
host server, storage controller, and switch ports.
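Based on the switches described in step 1, a jumbo-frame test command takes a form like the following sketch; vmk1 and the target address are placeholders for your VMkernel port and the controller iSCSI port IP address:

# Send an 8,972-byte, do-not-fragment ping from VMkernel port vmk1 to one array iSCSI port
vmkping -I vmk1 -d -s 8972 192.168.130.101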
10.6. Step 6: Record iSCSI-specific information for VMware
Select the iSCSI worksheet to record your protocol-specific storage
configuration information. You need this information to perform provisioning
tasks.
You can use this worksheet to record iSCSI storage configuration information.
You need this information to perform provisioning tasks.
Recommended configuration
Recommended configurations consist of two initiator ports and four target ports with one or more
VLANs.
Target IQN
Host OS type
Chapter 11. Perform SAS-specific tasks
For the SAS protocol, you determine host port addresses and make the settings recommended in the Notes column of the Lenovo Storage Interoperation Center (LSIC).
• Host I/O ports might automatically register if the host context agent is installed.
Steps
1. Download the HBA utility from your HBA vendor’s web site.
3. Use the HBA BIOS to select the appropriate settings for your configuration.
See the Notes column of the Lenovo Storage Interoperation Center (LSIC) for
recommendations.
You can use this worksheet to record SAS storage configuration information.
You need this information to perform provisioning tasks.
Host identifiers
Target identifiers
Mappings host
Host OS type
Chapter 12. Discover storage on the host
After assigning volumes to the host, you perform a rescan so that the host
detects and configures the volumes for multipathing.
By default, an ESXi host automatically performs a rescan every five minutes. A volume that you create and assign to a host might therefore appear to the host between these automatic rescans, before you perform a manual rescan. Regardless, you can perform a manual rescan to ensure all volumes are configured properly.
Steps
1. Create one or more volumes and assign them to the ESXi host.
3. Use the vSphere Client or the vSphere Web Client to connect directly to the vCenter Server or
to the ESXi host.
4. For instructions on how to perform a rescan of the storage on an ESXi host, search for the
VMware Knowledge Base article on this topic.
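If you prefer the command line to the client, the following is a minimal sketch of a manual rescan from the ESXi shell:

# Rescan all storage adapters for new devices and VMFS volumes
esxcli storage core adapter rescan --all
# Confirm that the newly assigned volumes are visible to the host
esxcli storage core device list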
Chapter 13. Configure storage on the host
You can use the storage assigned to an ESXi host as either a Virtual Machine
File System (VMFS) datastore or a raw device mapping (RDM). RDMs are not
supported on the NVMe over Fibre Channel protocol.
The volumes mapped to the ESXi host must have been discovered properly.
Steps
• For instructions on creating VMFS datastores using either the vSphere Client or the vSphere
Web Client, see the VMware pubs webpage (https://www.vmware.com/support/pubs/) for
documentation on this topic.
• For instructions on using volumes as RDMs using either the vSphere Client or the vSphere Web
Client, see the VMware pubs webpage (https://www.vmware.com/support/pubs/) for
documentation on this topic.
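After creating a VMFS datastore or RDM in the client, you can confirm the result from the ESXi shell. This is a minimal sketch using standard esxcli commands:

# List mounted file systems, including VMFS datastores
esxcli storage filesystem list
# Show which storage device (volume) backs each VMFS datastore
esxcli storage vmfs extent list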
Chapter 14. Verify storage access on the host
Before you use a volume, you verify that the host can write data to the volume
and read it back.
Steps
1. Verify that the volume has been used as a Virtual Machine File System (VMFS) datastore or has
been mapped directly to a VM for use as a raw device mapping (RDM).
Chapter 15. Appendix
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a
service provider authorized by Lenovo to provide warranty service, go to
https://datacentersupport.lenovo.com/serviceprovider and use the filter to search by country. For Lenovo support telephone numbers, see https://datacentersupport.lenovo.com/supportphonelist for your region’s support details.
15.2. Notices
Lenovo may not offer the products, services, or features discussed in this
document in all countries. Consult your local Lenovo representative for
information on the products and services currently available in your area.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only
that Lenovo product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any Lenovo intellectual property right may be used
instead. However, it is the user’s responsibility to evaluate and verify the operation of any other
product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document is not an offer and does not provide a license under any
patents or patent applications. You can send inquiries in writing to the following:
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions,
therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are
periodically made to the information herein; these changes will be incorporated in new editions of
the publication. Lenovo may make improvements and/or changes in the product(s) and/or the
program(s) described in this publication at any time without notice.
The products described in this document are not intended for use in implantation or other life
support applications where malfunction may result in injury or death to persons. The information
contained in this document does not affect or change Lenovo product specifications or warranties.
Nothing in this document shall operate as an express or implied license or indemnity under the
intellectual property rights of Lenovo or third parties. All information contained in this document
was obtained in specific environments and is presented as an illustration. The result obtained in
other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate
without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and
do not in any manner serve as an endorsement of those Web sites. The materials at those Web
sites are not part of the materials for this Lenovo product, and use of those Web sites is at your
own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the
result obtained in other operating environments may vary significantly. Some measurements may
have been made on development-level systems and there is no guarantee that these
measurements will be the same on generally available systems. Furthermore, some measurements
may have been estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
15.3. Trademarks
LENOVO, LENOVO logo, and THINKSYSTEM are trademarks of Lenovo. All
other trademarks are the property of their respective owners. © 2022 Lenovo.