iSCSI SAN Configuration Guide
This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
EN-000288-01
You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product updates. If you have comments about this documentation, submit your feedback to: [email protected]
Copyright 2009, 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Contents
ESX/ESXi iSCSI SAN Requirements 29
ESX/ESXi iSCSI SAN Restrictions 30
Setting LUN Allocations 30
Network Configuration and Authentication 30
Setting Up Independent Hardware iSCSI Adapters 31
Setting Up and Configuring Dependent Hardware iSCSI Adapters 32
Setting Up and Configuring Software iSCSI Adapter 34
Networking Configuration for Software iSCSI and Dependent Hardware iSCSI 36
Bind iSCSI Ports to iSCSI Adapters 40
Using Jumbo Frames with iSCSI 41
Enabling Jumbo Frames for Software and Dependent Hardware iSCSI 42
Configuring Discovery Addresses for iSCSI Initiators 43
Configuring CHAP Parameters for iSCSI Adapters 44
Configuring Additional Parameters for iSCSI 48
iSCSI Session Management 49
Add iSCSI Storage 51
Testing ESX/ESXi SAN Configurations 53
General Considerations for iSCSI SAN Storage Systems 54
EMC CLARiiON Storage Systems 54
EMC Symmetrix Storage Systems 55
Enable HP StorageWorks MSA1510i to Communicate with ESX/ESXi 55
HP StorageWorks EVA Storage Systems 56
NetApp Storage Systems 57
EqualLogic Storage Systems 59
LeftHand Networks SAN/iQ Storage Systems 59
Dell PowerVault MD3000i Storage Systems 59
iSCSI Targets in vApps 59
General Boot from iSCSI SAN Recommendations 62
Prepare the iSCSI SAN 62
Configure ESX Hosts to Boot from iSCSI SAN 63
iBFT iSCSI Boot Overview 64
Collecting Diagnostic Information for ESXi Hosts 69
Index 115
Updated Information
This iSCSI SAN Configuration Guide is updated with each release of the product or when necessary. This table provides the update history of the iSCSI SAN Configuration Guide.
Revision      Description
EN-000288-01  ESX/ESXi iSCSI SAN Restrictions, on page 30 has been updated to clarify multipathing support for different types of iSCSI adapters.
EN-000288-00  Initial release.
The iSCSI SAN Configuration Guide explains how to use VMware ESX and VMware ESXi systems with an iSCSI storage area network (SAN). The manual includes conceptual background information and installation requirements for ESX, ESXi, and VMware vCenter Server.
Intended Audience
This manual is written for experienced Windows or Linux system administrators who are familiar with virtual machine technology and datacenter operations.
Document Feedback
VMware welcomes your suggestions for improving our documentation. If you have comments, send your feedback to [email protected].
You can use ESX/ESXi in conjunction with a storage area network (SAN), a specialized high-speed network that connects computer systems to high-performance storage subsystems. Using ESX/ESXi together with a SAN provides storage consolidation, improves reliability, and helps with disaster recovery. To use ESX/ESXi effectively with a SAN, you must have a working knowledge of ESX/ESXi systems and SAN concepts. Also, when you set up ESX/ESXi hosts to use Internet SCSI (iSCSI) SAN storage systems, you must be aware of certain special considerations that exist. This chapter includes the following topics:
- Understanding Virtualization, on page 9
- iSCSI SAN Concepts, on page 11
- Overview of Using ESX/ESXi with a SAN, on page 16
- Specifics of Using SAN Storage with ESX/ESXi, on page 17
- Understanding VMFS Datastores, on page 18
- Making LUN Decisions, on page 19
- How Virtual Machines Access Data on a SAN, on page 21
- Understanding Multipathing and Failover, on page 22
- Choosing Virtual Machine Locations, on page 27
- Designing for Server Failure, on page 27
- LUN Display and Rescan, on page 28
Understanding Virtualization
The VMware virtualization layer is common across VMware desktop products (such as VMware Workstation) and server products (such as VMware ESX/ESXi). This layer provides a consistent platform for development, testing, delivery, and support of application workloads. The virtualization layer is organized as follows:
- Each virtual machine runs its own operating system (the guest operating system) and applications.
- The virtualization layer provides the virtual devices that map to shares of specific physical devices. These devices include virtualized CPU, memory, I/O buses, network interfaces, storage adapters and devices, human interface devices, and BIOS.
Network Virtualization
The virtualization layer guarantees that each virtual machine is isolated from other virtual machines. Virtual machines can talk to each other only through networking mechanisms similar to those used to connect separate physical machines. The isolation allows administrators to build internal firewalls or other network isolation environments so that some virtual machines can connect to the outside, while others are connected only through virtual networks to other virtual machines.
Storage Virtualization
ESX/ESXi provides host-level storage virtualization, which logically abstracts the physical storage layer from virtual machines. An ESX/ESXi virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up as easily as any other file. You can configure virtual machines with multiple virtual disks.

To access virtual disks, a virtual machine uses virtual SCSI controllers. These virtual controllers include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual. These controllers are the only types of SCSI controllers that a virtual machine can see and access.

Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides on a VMware Virtual Machine File System (VMFS) datastore, an NFS-based datastore, or on a raw disk. From the standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller. Whether the actual physical disk device is being accessed through parallel SCSI, iSCSI, network, or Fibre Channel adapters on the host is transparent to the guest operating system and to applications running on the virtual machine.

Figure 1-1 gives an overview of storage virtualization. The diagram illustrates storage that uses VMFS and storage that uses raw device mapping. The diagram also shows how iSCSI storage is accessed through either iSCSI HBAs or by using a general-purpose NIC that uses iSCSI initiator software.
[Figure 1-1: virtual machines use virtual SCSI controllers and virtual disks; storage is accessed through VMFS or as a raw device mapping, with LUN2 accessed through an RDM and LUN5 also shown]
iSCSI Name
The iSCSI qualified name (IQN) takes the form iqn.yyyy-mm.naming-authority:unique-name, where:
- yyyy-mm is the year and month when the naming authority was established.
- naming-authority is usually the reverse syntax of the Internet domain name of the naming authority. For example, the iscsi.vmware.com naming authority could have the iSCSI qualified name form of iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was registered in January of 1998, and iscsi is a subdomain maintained by vmware.com.
- unique-name is any name you want to use, for example, the name of your host. The naming authority must make sure that any names assigned following the colon are unique, such as:
  - iqn.1998-01.com.vmware.iscsi:name1
  - iqn.1998-01.com.vmware.iscsi:name2
  - iqn.1998-01.com.vmware.iscsi:name999
An iSCSI name can also use the IEEE EUI format, which is eui. followed by 16 hexadecimal digits, for example eui.0123456789ABCDEF. The 16 hexadecimal digits are text representations of a 64-bit number in the IEEE EUI (extended unique identifier) format. The top 24 bits are a company ID that IEEE registers with a particular company. The lower 40 bits are assigned by the entity holding that company ID and must be unique.
iSCSI Initiators
To access iSCSI targets, your host uses iSCSI initiators. The initiators transport SCSI requests and responses, encapsulated into the iSCSI protocol, between the host and the iSCSI target. VMware supports different types of initiators.
[Figure: two example storage system configurations, one presenting a single target with three LUNs and one presenting three targets with one LUN each]
Three LUNs are available in each of these configurations. In the first case, ESX/ESXi detects one target, but that target has three LUNs that can be used. Each of the LUNs represents an individual storage volume. In the second case, ESX/ESXi detects three different targets, each having one LUN. ESX/ESXi-based iSCSI initiators establish connections to each target. Storage systems with a single target containing multiple LUNs have traffic to all the LUNs on a single connection. With a system that has three targets with one LUN each, a host uses separate connections to the three LUNs. This information is useful when you are trying to aggregate storage traffic on multiple connections from the ESX/ESXi host with multiple iSCSI HBAs, where traffic for one target can be set to a particular HBA, while traffic for another target can use a different HBA.
Discovery
A discovery session is part of the iSCSI protocol, and it returns the set of targets you can access on an iSCSI storage system. The two types of discovery available on ESX/ESXi are dynamic and static. Dynamic discovery obtains a list of accessible targets from the iSCSI storage system, while static discovery can only try to access one particular target by target name.
Authentication
iSCSI storage systems authenticate an initiator by a name and key pair. ESX/ESXi supports the CHAP protocol, which VMware recommends for your SAN implementation. To use CHAP authentication, the ESX/ESXi host and the iSCSI storage system must have CHAP enabled and have common credentials.
Access Control
Access control is a policy set up on the iSCSI storage system. Most implementations support one or more of three types of access control:
- By initiator name
- By IP address
- By the CHAP protocol

Only initiators that meet all rules can access the iSCSI volume.
Error Correction
To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction methods known as header digests and data digests. Both parameters are disabled by default, but you can enable them. These digests pertain to, respectively, the header and SCSI data being transferred between iSCSI initiators and targets, in both directions.

Header and data digests check the end-to-end, noncryptographic data integrity beyond the integrity checks that other networking layers provide, such as TCP and Ethernet. They check the entire communication path, including all elements that can change the network-level traffic, such as routers, switches, and proxies. The existence and type of the digests are negotiated when an iSCSI connection is established. When the initiator and target agree on a digest configuration, this digest must be used for all traffic between them.

Enabling header and data digests requires additional processing for both the initiator and the target and can affect throughput and CPU performance.

NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations, thus reducing the impact on performance.
- You can store data securely and configure multiple paths to your storage, eliminating a single point of failure.
- Using a SAN with ESX/ESXi systems extends failure resistance to the server. When you use SAN storage, all applications can instantly be restarted on another host after the failure of the original host.
- You can perform live migration of virtual machines using VMware vMotion.
- Use VMware High Availability (HA) in conjunction with a SAN to restart virtual machines in their last known state on a different server if their host fails.
- Use VMware Fault Tolerance (FT) to replicate protected virtual machines on two different hosts. Virtual machines continue to function without interruption on the secondary host if the primary one fails.
- Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host to another for load balancing. Because storage is on a shared SAN array, applications continue running seamlessly.
- If you use VMware DRS clusters, put an ESX/ESXi host into maintenance mode to have the system migrate all running virtual machines to other ESX/ESXi hosts. You can then perform upgrades or other maintenance operations on the original host.
The portability and encapsulation of VMware virtual machines complements the shared nature of this storage. When virtual machines are located on SAN-based storage, you can quickly shut down a virtual machine on one server and power it up on another server, or suspend it on one server and resume operation on another server on the same network. This ability allows you to migrate computing resources while maintaining consistent shared access.
Disaster recovery. Having all data stored on a SAN facilitates the remote storage of data backups. You can restart virtual machines on remote ESX/ESXi hosts for recovery if one site is compromised.

When you purchase new storage systems or arrays, use Storage vMotion to perform live automated migration of virtual machine disk files from existing storage to their new destination without interruptions to the users of the virtual machines.
Use your storage array vendor's documentation for most setup questions. Your storage array vendor might also offer documentation on using the storage array in an ESX/ESXi environment. Other sources of information include:
- The VMware Documentation Web site.
- The Fibre Channel SAN Configuration Guide, which discusses the use of ESX/ESXi with Fibre Channel storage area networks.
- The VMware I/O Compatibility Guide, which lists the currently approved HBAs, HBA drivers, and driver versions.
- The VMware Storage/SAN Compatibility Guide, which lists currently approved storage arrays.
- The VMware Release Notes, which give information about known issues and workarounds.
- The VMware Knowledge Bases, which have information on common issues and workarounds.
When you use SAN storage with ESX/ESXi, keep in mind the following:
- You cannot directly access the virtual machine operating system that uses the storage. With traditional tools, you can monitor only the VMware ESX/ESXi operating system. You use the vSphere Client to monitor virtual machines.
- The HBA visible to the SAN administration tools is part of the ESX/ESXi system, not part of the virtual machine.
- Your ESX/ESXi system performs multipathing for you.
SAN management software typically handles tasks such as the following:
- Storage array management, including LUN creation, array cache management, LUN mapping, and LUN security.
- Setting up replication, check points, snapshots, or mirroring.
If you decide to run the SAN management software on a virtual machine, you gain the benefits of running a virtual machine, including failover using vMotion and VMware HA. Because of the additional level of indirection, however, the management software might not be able to see the SAN. In this case, you can use an RDM. NOTE Whether a virtual machine can run management software successfully depends on the particular storage system.
Because virtual machines share a common VMFS datastore, it might be difficult to characterize peak-access periods or to optimize performance. You must plan virtual machine storage access for peak periods, but different applications might have different peak-access periods. VMware recommends that you load balance virtual machines over servers, CPU, and storage. Run a mix of virtual machines on each server so that not all experience high demand in the same area at the same time.
Metadata Updates
A VMFS datastore holds virtual machine files, directories, symbolic links, RDM descriptor files, and so on. The datastore also maintains a consistent view of all the mapping information for these objects. This mapping information is called metadata. Metadata is updated each time the attributes of a virtual machine file are accessed or modified when, for example, you perform one of the following operations:
- Creating, growing, or locking a virtual machine file
- Changing a file's attributes
- Powering a virtual machine on or off
When you make your LUN decisions, keep in mind the following considerations:
- Each LUN should have the correct RAID level and storage characteristic for the applications running in virtual machines that use the LUN.
- One LUN must contain only one VMFS datastore.
- If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual machines.

You might want fewer, larger LUNs for the following reasons:
- More flexibility to create virtual machines without asking the storage administrator for more space.
- More flexibility for resizing virtual disks, doing snapshots, and so on.
- Fewer VMFS datastores to manage.

You might want more, smaller LUNs for the following reasons:
- Less wasted storage space.
- Different applications might need different RAID characteristics.
- More flexibility, as the multipathing policy and disk shares are set per LUN.
- Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
- Better performance because there is less contention for a single volume.
When the storage characterization for a virtual machine is not available, there is often no simple method to determine the number and size of LUNs to provision. You can experiment using either a predictive or adaptive scheme.
If performance is acceptable, you can place additional virtual disks on the VMFS. If performance is not acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use migration so that you do not lose virtual machine data when you recreate the LUN.
Double-click the Shares column for the disk to modify and select the required value from the drop-down menu. Shares is a value that represents the relative metric for controlling disk bandwidth to all virtual machines. The values Low, Normal, High, and Custom are compared to the sum of all shares of all virtual machines on the server and, on an ESX host, the service console. The symbolic share values are converted internally into numeric values.
NOTE Disk shares are relevant only within a given ESX/ESXi host. The shares assigned to virtual machines on one host have no effect on virtual machines on other hosts.
When a virtual machine reads from or writes to its virtual disk on a SAN, the VMkernel performs the following tasks:
- Locates the file in the VMFS volume that corresponds to the guest virtual machine disk.
- Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
- Sends the modified I/O request from the device driver in the VMkernel to the iSCSI initiator (hardware or software).

If the iSCSI initiator is a hardware iSCSI adapter (either independent or dependent), the adapter performs the following tasks:
- Encapsulates I/O requests into iSCSI Protocol Data Units (PDUs).
- Encapsulates iSCSI PDUs into TCP/IP packets.
- Sends IP packets over Ethernet to the iSCSI storage system.

If the iSCSI initiator is a software iSCSI adapter, the following takes place:
- The iSCSI initiator encapsulates I/O requests into iSCSI PDUs.
- The initiator sends iSCSI PDUs through TCP/IP connections.
- The VMkernel TCP/IP stack relays TCP/IP packets to a physical NIC.
- The physical NIC sends IP packets over Ethernet to the iSCSI storage system.
Depending on which port the iSCSI initiator uses to connect to the network, Ethernet switches and routers carry the request to the storage device that the host wants to access.
The multipathing layer of the VMkernel performs the following tasks:
- Loads and unloads multipathing plug-ins.
- Hides virtual machine specifics from a particular plug-in.
- Routes I/O requests for a specific logical device to the MPP managing that device.
- Handles I/O queuing to the logical devices.
- Implements logical device bandwidth sharing between virtual machines.
- Handles I/O queueing to the physical storage HBAs.
- Handles physical path discovery and removal.
- Provides logical device and physical path I/O statistics.
As Figure 1-4 illustrates, multiple third-party MPPs can run in parallel with the VMware NMP. When installed, the third-party MPPs replace the behavior of the NMP and take complete control of the path failover and the load-balancing operations for specified storage devices.
The multipathing plug-ins perform the following operations:
- Manage physical path claiming and unclaiming.
- Manage creation, registration, and deregistration of logical devices.
- Associate physical paths with logical devices.
- Support path failure detection and remediation.
- Process I/O requests to logical devices:
  - Select an optimal physical path for the request.
  - Depending on a storage device, perform specific actions necessary to handle path failures and I/O command retries.
After the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following:
- Monitors the health of each physical path.
- Reports changes in the state of each physical path.
- Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.
VMware PSPs

Path Selection Plug-Ins (PSPs) run with the VMware NMP and are responsible for choosing a physical path for I/O requests. The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device. You can override the default PSP. By default, the VMware NMP supports the following PSPs:

- Most Recently Used (VMW_PSP_MRU): Selects the path the ESX/ESXi host used most recently to access the given device. If this path becomes unavailable, the host switches to an alternative path and continues to use the new path while it is available. MRU is the default path policy for active-passive arrays.
- Fixed (VMW_PSP_FIXED): Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the host cannot use the preferred path, it selects a random alternative available path. The host reverts back to the preferred path as soon as that path becomes available. Fixed is the default path policy for active-active arrays. CAUTION If used with active-passive arrays, the Fixed path policy might cause path thrashing.
- VMW_PSP_FIXED_AP: Extends the Fixed functionality to active-passive and ALUA mode arrays.
- Round Robin (VMW_PSP_RR): Uses a path selection algorithm that rotates through all available active paths, enabling load balancing across the paths.

VMware NMP Flow of I/O

When a virtual machine issues an I/O request to a storage device managed by the NMP, the following process takes place.
1 The NMP calls the PSP assigned to this storage device.
2 The PSP selects an appropriate physical path on which to issue the I/O.
3 The NMP issues the I/O request on the path selected by the PSP.
4 If the I/O operation is successful, the NMP reports its completion.
5 If the I/O operation reports an error, the NMP calls the appropriate SATP.
6 The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.
7 The PSP is called to select a new path on which to issue the I/O.
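If you want to inspect or change the PSP assigned to a device from the command line, the esxcli nmp namespace in vSphere 4.x can be used. The sketch below is an assumption based on that release's general esxcli syntax rather than a procedure from this guide, and the device identifier naa.xxxxxxxx is a placeholder:

# List devices claimed by the NMP together with their current SATP and PSP
esxcli nmp device list
# Assign the Round Robin PSP to one device (placeholder device identifier)
esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR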
Array-Based Failover
Some iSCSI storage systems manage path use of their ports automatically and transparently to ESX/ESXi. When using one of these storage systems, ESX/ESXi does not see multiple ports on the storage and cannot choose the storage port it connects to. These systems have a single virtual port address that ESX/ESXi uses to initially communicate. During this initial communication, the storage system can redirect ESX/ESXi to communicate with another port on the storage system. The iSCSI initiators in ESX/ESXi obey this reconnection request and connect with a different port on the system. The storage system uses this technique to spread the load across available ports.
If ESX/ESXi loses connection to one of these ports, it automatically attempts to reconnect with the virtual port of the storage system, and should be redirected to an active, usable port. This reconnection and redirection happens quickly and generally does not disrupt running virtual machines. These storage systems can also request that iSCSI initiators reconnect to the system, to change which storage port they are connected to. This allows the most effective use of the multiple ports. Figure 1-6 shows an example of port redirection. ESX/ESXi attempts to connect to the 10.0.0.1 virtual port. The storage system redirects this request to 10.0.0.2. ESX/ESXi connects with 10.0.0.2 and uses this port for I/O communication. NOTE The storage system does not always redirect connections. The port at 10.0.0.1 could be used for traffic, also. Figure 1-6. Port Redirection
If the port on the storage system that is acting as the virtual port becomes unavailable, the storage system reassigns the address of the virtual port to another port on the system. Figure 1-7 shows an example of this type of port reassignment. In this case, the virtual port 10.0.0.1 becomes unavailable and the storage system reassigns the virtual port IP address to a different port. The second port responds to both addresses. Figure 1-7. Port Reassignment
With array-based failover, you can have multiple paths to the storage only if you use multiple ports on the ESX/ESXi host. These paths are active-active. For additional information, see iSCSI Session Management, on page 49.
Storage arrays are typically classified into the following tiers:
- High Tier. Offers high performance and high availability. Might offer built-in snapshots to facilitate backups and point-in-time (PiT) restorations. Supports replication, full SP redundancy, and SAS drives. Uses high-cost spindles.
- Mid Tier. Offers mid-range performance, lower availability, some SP redundancy, and SCSI or SAS drives. Might offer snapshots. Uses medium-cost spindles.
- Lower Tier. Offers low performance and little internal storage redundancy. Uses low-end SCSI drives or SATA (serial low-cost spindles).
Not all applications need to be on the highest-performance, most-available storage, at least not throughout their entire life cycle.

NOTE If you need some of the functionality of the high tier, such as snapshots, but do not want to pay for it, you might be able to achieve some of the high-performance characteristics in software. For example, you can create snapshots in software.

When you decide where to place a virtual machine, ask yourself these questions:
- How critical is the virtual machine?
- What are its performance and availability requirements?
- What are its PiT restoration requirements?
- What are its backup requirements?
- What are its replication requirements?
A virtual machine might change tiers throughout its life cycle because of changes in criticality or changes in technology that push higher-tier features to a lower tier. Criticality is relative and might change for a variety of reasons, including changes in the organization, operational processes, regulatory requirements, disaster planning, and so on.
Using VMware HA
One of the failover options ESX/ESXi provides is VMware High Availability (HA). VMware HA allows you to organize virtual machines into failover groups. When a host fails, all its virtual machines are immediately started on different hosts. When a virtual machine is restored on a different host, it loses its memory state, but its disk state is exactly as it was when the host failed (crash-consistent failover). Shared storage (such as a SAN) is required for HA. NOTE You must be licensed to use VMware HA.
Approaches to server failover work only if each server has access to the same storage. Because multiple servers require a lot of disk space, and because failover for the storage system complements failover for the server, SANs are usually employed in conjunction with server failover. When you design a SAN to work in conjunction with server failover, all ESX/ESXi hosts must see all datastores that the clustered virtual machines use. Although a datastore is accessible to a host, all virtual machines on that host do not necessarily have access to all data on that datastore. A virtual machine can access only the virtual disks for which it was configured. In case of a configuration error, virtual disks are locked when the virtual machine boots so no corruption occurs.
NOTE As a rule, when you boot from a SAN, each boot volume should be seen only by the host that is booting from that volume. An exception is when you try to recover from a failure by pointing a second host to the same volume. In this case, the SAN volume in question is not really for booting from a SAN. No host is booting from it because it is corrupted. The SAN volume is a regular non-boot volume that is made visible to a host.
Perform a rescan each time one of the following changes occurs:
- New LUNs created on the iSCSI storage
- Changes to LUN access control
- Changes in connectivity
Before ESX/ESXi can work with a SAN, you must set up your iSCSI initiators and storage. To do this, you must first observe certain basic requirements and then follow best practices for installing and setting up hardware or software iSCSI initiators to access the SAN. This chapter includes the following topics:
- ESX/ESXi iSCSI SAN Requirements, on page 29
- ESX/ESXi iSCSI SAN Restrictions, on page 30
- Setting LUN Allocations, on page 30
- Network Configuration and Authentication, on page 30
- Setting Up Independent Hardware iSCSI Adapters, on page 31
- Setting Up and Configuring Dependent Hardware iSCSI Adapters, on page 32
- Setting Up and Configuring Software iSCSI Adapter, on page 34
- Networking Configuration for Software iSCSI and Dependent Hardware iSCSI, on page 36
- Bind iSCSI Ports to iSCSI Adapters, on page 40
- Using Jumbo Frames with iSCSI, on page 41
- Enabling Jumbo Frames for Software and Dependent Hardware iSCSI, on page 42
- Configuring Discovery Addresses for iSCSI Initiators, on page 43
- Configuring CHAP Parameters for iSCSI Adapters, on page 44
- Configuring Additional Parameters for iSCSI, on page 48
- iSCSI Session Management, on page 49
- Add iSCSI Storage, on page 51
- Verify that your SAN storage hardware and firmware combinations are supported in conjunction with ESX/ESXi systems. For an up-to-date list, see the Storage/SAN section of the online Hardware Compatibility Guide.
- Configure your system to have only one VMFS datastore for each LUN. In VMFS-3, you do not need to set accessibility.
- Unless you are using diskless servers (booting from a SAN), do not set up the diagnostic partition on a SAN LUN. In the case of diskless servers that boot from a SAN, a shared diagnostic partition is appropriate.
- Use RDMs for access to any raw disk.
- Set the SCSI controller driver in the guest operating system to a large enough queue. You can set the queue depth for the physical HBA during system setup.
- On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue parameter to allow Windows to better tolerate delayed I/O resulting from path failover (see the example after this list).
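The Windows setting is a registry value under the Disk service key. The sketch below uses a 60-second timeout, which is a commonly used value rather than one specified in this guide, so confirm the value appropriate for your environment:

REM Inside the Windows guest: raise the disk I/O timeout (60 seconds is an assumed value, not from this guide)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeoutValue /t REG_DWORD /d 60 /f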
- ESX/ESXi does not support iSCSI-connected tape devices.
- You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical LUN.
- ESX/ESXi does not support multipathing when you combine an independent hardware adapter with either a software iSCSI adapter or a dependent hardware iSCSI adapter.
- Storage Provisioning. To ensure that the ESX/ESXi host recognizes LUNs at startup time, configure all iSCSI storage targets so that your host can access them and use them. Also, configure your host so that it can discover all available iSCSI targets.
- vMotion and VMware DRS. When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all ESX/ESXi hosts. This configuration provides the greatest freedom in moving virtual machines.
- Active-active versus active-passive arrays. When you use vMotion or DRS with an active-passive SAN storage device, make sure that all ESX/ESXi systems have consistent paths to all storage processors. Not doing so can cause path thrashing when a vMotion migration occurs. For active-passive storage arrays not listed in the Storage/SAN section of the online VMware Compatibility Guide, VMware does not support storage-port failover. You must connect the server to the active port on the storage system. This configuration ensures that the LUNs are presented to the ESX/ESXi host.
- For software iSCSI and dependent hardware iSCSI, networking for the VMkernel must be configured. You can verify the network configuration by using the vmkping utility.
- For hardware iSCSI, network parameters, such as IP address, subnet mask, and default gateway, must be configured on the HBA.
- Check and change the default initiator name if necessary.
- The discovery address of the storage system must be set and should be pingable using vmkping.
- For CHAP authentication, enable it on the initiator and the storage system side. After authentication is enabled, it applies to all of the targets that are not yet discovered, but does not apply to targets that are already discovered. After the discovery address is set, the new targets discovered are exposed and can be used at that point.
For details on how to use the vmkping command, search the VMware Knowledge Base.
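For example, a quick reachability check from the host might look like the following, where 10.10.10.100 stands in for your storage system's discovery address (a placeholder, not an address from this guide):

# Verify that the iSCSI discovery address is reachable through the VMkernel network stack
vmkping 10.10.10.100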
Select the initiator to view. The default details for the initiator appear, including the model, iSCSI name, iSCSI alias, IP address, and target and paths information.
Click Properties. The iSCSI Initiator Properties dialog box appears. The General tab displays additional characteristics of the initiator.
You can now configure your hardware initiator or change its default characteristics.
If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, new settings are not used until logout and re-login.
The entire setup and configuration process for the dependent hardware iSCSI adapters involves these steps: 1 View the dependent hardware adapters. See View Dependent Hardware iSCSI Adapters, on page 33. If your dependent hardware adapters do not appear on the list of storage adapters, check whether they need to be licensed. See your vendor documentation. 2 Determine the association between the dependent hardware adapters and physical NICs. See Determine Association Between Dependent Hardware iSCSI and Physical Network Adapters, on page 34 Make sure to note the names of the corresponding physical NICs. For example, the vmhba33 adapter corresponds to vmnic1 and vmhba34 corresponds to vmnic2. 3 Configure the iSCSI networking by creating ports for the iSCSI traffic. See Networking Configuration for Software iSCSI and Dependent Hardware iSCSI, on page 36. Open a port for each NIC. For example, create the vmk1 port for the vmnic1 NIC and the vmk2 port for vmnic2. 4 Bind the iSCSI ports to corresponding dependent hardware iSCSI adapters. This step is necessary no matter whether you have multiple adapters or just one. See Bind iSCSI Ports to iSCSI Adapters, on page 40. In this example, you bind port vmk1 to vmhba33 and port vmk2 to vmhba34. 5 Configure discovery addresses. See Configuring Discovery Addresses for iSCSI Initiators, on page 43. 6 Configure CHAP parameters. See Configuring CHAP Parameters for iSCSI Adapters, on page 44.
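To make step 4 concrete, the port binding for the example names used above (vmk1, vmk2, vmhba33, and vmhba34) might look like the following sketch, which uses the esxcli command described later in this chapter:

# Bind each iSCSI port to its corresponding dependent hardware iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba34
# Verify each binding
esxcli swiscsi nic list -d vmhba33
esxcli swiscsi nic list -d vmhba34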
When you use any dependent hardware iSCSI adapter, performance reporting for a NIC associated with the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs because the iSCSI traffic bypasses the regular networking stack. The Broadcom iSCSI adapter performs data reassembly in hardware, which has a limited buffer space. When you use the Broadcom iSCSI adapter in a congested network or under load, enable flow control to avoid performance degradation. Flow control manages the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. For best results, enable flow control at the end points of the I/O path, at the hosts and iSCSI storage systems.
Procedure 1 2 Log in to the vSphere Client, and select a host from the Inventory panel. Click the Configuration tab and click Storage Adapters in the Hardware panel. If installed, the dependent hardware iSCSI adapter should appear on the list of storage adapters. 3 Select the adapter to view and click Properties. The iSCSI Initiator Properties dialog box displays the default details for the adapter, including the iSCSI name and iSCSI alias.
Determine Association Between Dependent Hardware iSCSI and Physical Network Adapters
You need to determine the name of the physical NIC with which the dependent hardware iSCSI adapter is associated. You need to know the association to be able to perform the port binding correctly.
Procedure
1 Use the vSphere CLI command to determine the name of the physical NIC with which the iSCSI adapter is associated.
esxcli swiscsi vmnic list -d vmhba#
vmhba# is the name of the iSCSI adapter.
2 In the output, find the vmnic name: vmnic# line. vmnic# is the name of the network adapter that corresponds to the iSCSI adapter.
What to do next
After you have determined the name of the NIC, create an iSCSI port on a vSwitch connected to the NIC. You then bind this port to the dependent hardware iSCSI adapter, so that your host can direct the iSCSI traffic through the NIC.
Configure discovery addresses. See Configuring Discovery Addresses for iSCSI Initiators, on page 43.
Configure CHAP parameters. See Configuring CHAP Parameters for iSCSI Adapters, on page 44.
After you enable the initiator, the host assigns the default iSCSI name to it. You can change the default name if needed.
If you have a single physical NIC, create one iSCSI port on a vSwitch connected to the NIC. VMware recommends that you designate a separate network adapter for iSCSI. Do not use iSCSI on 100Mbps or slower adapters. If you have two or more physical NICs for iSCSI, create a separate iSCSI port for each physical NIC and use the NICs for iSCSI multipathing. See Figure 2-1. Figure 2-1. Networking with iSCSI
[Figure 2-1: Host1 connects two physical NICs (vmnic1 and vmnic2) through iSCSI ports vmk1 and vmk2 on a vSwitch to the software iSCSI adapter (vmhba#); Host2 connects two physical NICs with iSCSI offload capabilities through vmk1 and vmk2 to the dependent hardware iSCSI adapters vmhba33 and vmhba34; both hosts reach iSCSI storage]
NOTE When you use a dependent hardware iSCSI adapter, performance reporting for a NIC associated with the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs because the iSCSI traffic bypasses the regular networking stack.
Create iSCSI Port for a Single NIC on page 37 Use this task to connect the VMkernel, which runs services for iSCSI storage, to a physical NIC. If you have just one physical network adapter to be used for iSCSI traffic, this is the only procedure you must perform to set up your iSCSI networking.
Using Multiple NICs for Software and Dependent Hardware iSCSI on page 37 If your host has more than one physical NIC for iSCSI, for each physical NIC, create a separate iSCSI port using 1:1 mapping.
Create Additional iSCSI Ports for Multiple NICs on page 38 Use this task if you have two or more NICs you can designate for iSCSI and you want to connect all of your iSCSI NICs to a single vSwitch. In this task, you associate VMkernel iSCSI ports with the network adapters using 1:1 mapping.
An alternative is to add all NIC and iSCSI port pairs to a single vSwitch. See Figure 2-3. You must override the default setup and make sure that each port maps to only one corresponding active NIC. Figure 2-3. iSCSI Ports and NICs on a Single vSwitch
For information about adding the NIC and VMkernel port pairs to a vSwitch, see Create Additional iSCSI Ports for Multiple NICs, on page 38. After you map iSCSI ports to network adapters, use the esxcli command to bind the ports to the iSCSI adapters. With dependent hardware iSCSI adapters, perform port binding, whether you use one NIC or multiple NICs. For information, see Bind iSCSI Ports to iSCSI Adapters, on page 40.
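If you prefer to create the vSwitch, uplinks, and iSCSI ports from the ESX service console instead of the vSphere Client, a rough equivalent is sketched below. The esxcfg commands, names, and addresses are assumptions for illustration, not part of the procedure this guide documents, and the 1:1 active-NIC mapping described above still has to be set on each port group afterward:

# Create a vSwitch for iSCSI and attach two physical NICs (placeholder names)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
# Add one port group per NIC and create a VMkernel iSCSI port on each (placeholder labels and addresses)
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2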
3 Select the vSwitch that you use for iSCSI and click Properties.
4 Connect additional network adapters to the vSwitch.
a In the vSwitch Properties dialog box, click the Network Adapters tab and click Add.
b Select one or more NICs from the list and click Next. With dependent hardware iSCSI adapters, make sure to select only those NICs that have a corresponding iSCSI component.
c Review the information on the Adapter Summary page, and click Finish. The list of network adapters reappears, showing the network adapters that the vSwitch now claims.
5 Create iSCSI ports for all NICs that you connected. The number of iSCSI ports must correspond to the number of NICs on the vSwitch.
a In the vSwitch Properties dialog box, click the Ports tab and click Add.
b Select VMkernel and click Next.
c Under Port Group Properties, enter a network label, for example iSCSI, and click Next.
d Specify the IP settings and click Next. When you enter the subnet mask, make sure that the NIC is set to the subnet of the storage system it connects to.
e Review the information and click Finish.
CAUTION If the NIC you use with your iSCSI adapter, either software or dependent hardware, is not in the same subnet as your iSCSI target, your host is not able to establish sessions from this network adapter to the target.
6 Map each iSCSI port to just one active NIC. By default, for each iSCSI port on the vSwitch, all network adapters appear as active. You must override this setup, so that each port maps to only one corresponding active NIC. For example, iSCSI port vmk1 maps to vmnic1, port vmk2 maps to vmnic2, and so on.
a On the Ports tab, select an iSCSI port and click Edit.
b Click the NIC Teaming tab and select Override vSwitch failover order.
c Designate only one adapter as active and move all remaining adapters to the Unused Adapters category.
7 Repeat the last step for each iSCSI port on the vSwitch.
What to do next After performing this task, use the esxcli command to bind the iSCSI ports to the software iSCSI or dependent hardware iSCSI adapters.
- For dependent hardware iSCSI adapters, have the correct association between the physical NICs and iSCSI adapters. See View Dependent Hardware iSCSI Adapters, on page 33.
- Set up networking for the iSCSI traffic. See Networking Configuration for Software iSCSI and Dependent Hardware iSCSI, on page 36.
- To use the software iSCSI adapter, enable it. See Enable the Software iSCSI Adapter, on page 35.
Procedure
1 Identify the name of the iSCSI port assigned to the physical NIC. The vSphere Client displays the port's name below the network label. In the example used in this section, the ports' names are vmk1 and vmk2.
2 Use the vSphere CLI command to bind the iSCSI port to the iSCSI adapter.
esxcli swiscsi nic add -n port_name -d vmhba
IMPORTANT For software iSCSI, repeat this command for each iSCSI port, connecting all ports with the software iSCSI adapter. With dependent hardware iSCSI, make sure to bind each port to an appropriate corresponding adapter.
3 Verify that the port was added to the iSCSI adapter.
esxcli swiscsi nic list -d vmhba
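For example, on a host where the software iSCSI adapter is vmhba36 and the iSCSI ports are vmk1 and vmk2 (hypothetical names used only for illustration), binding both ports might look like this:

# Bind both iSCSI ports to the software iSCSI adapter (placeholder names)
esxcli swiscsi nic add -n vmk1 -d vmhba36
esxcli swiscsi nic add -n vmk2 -d vmhba36
# Confirm that both ports are now listed for the adapter
esxcli swiscsi nic list -d vmhba36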
Verify that the port was disconnected from the iSCSI adapter.
esxcli swiscsi nic list -d vmhba
2 Run the vicfg-vmknic -l command to display a list of VMkernel interfaces and check that the configuration of the Jumbo Frame-enabled interface is correct.
3 Check that the VMkernel interface is connected to a vSwitch with Jumbo Frames enabled.
4 Configure all physical switches and any physical or virtual machines to which this VMkernel interface connects to support Jumbo Frames.
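As a quick check for step 2, the listing might look like the following; the expectation that the Jumbo Frame-enabled interface reports an MTU of 9000 is an assumption about a typical Jumbo Frame configuration, and the usual vSphere CLI connection options (such as --server) apply:

# List VMkernel interfaces; the Jumbo Frame-enabled interface should show an MTU of 9000
vicfg-vmknic -l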
Static Discovery
What to do next After configuring Static Discovery for your iSCSI adapter, rescan the adapter.
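You can rescan from the vSphere Client or, as a sketch, from the service console; vmhba36 is a placeholder adapter name:

# Rescan a single iSCSI adapter so that newly added static targets are discovered
esxcfg-rescan vmhba36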
ESX/ESXi supports the following CHAP authentication methods:
- One-way CHAP: In one-way CHAP authentication, also called unidirectional, the target authenticates the initiator, but the initiator does not authenticate the target.
- Mutual CHAP: In mutual CHAP authentication, also called bidirectional, an additional level of security enables the initiator to authenticate the target. VMware supports this method for software and dependent hardware iSCSI adapters only.
For software and dependent hardware iSCSI adapters, you can set one-way CHAP and mutual CHAP for each initiator or at the target level. Hardware iSCSI supports CHAP only at the initiator level. When you set the CHAP parameters, specify a security level for CHAP.
NOTE When you specify the CHAP security level, how the storage array responds depends on the array's CHAP implementation and is vendor specific. For example, when you select Use CHAP unless prohibited by target, some storage arrays use CHAP in response, while others do not. For information on CHAP authentication behavior in different initiator and target configurations, consult the array documentation.
Table 2-2. CHAP Security Level
- Do not use CHAP: The host does not use CHAP authentication. Select this option to disable authentication if it is currently enabled. Supported on software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
- Do not use CHAP unless required by target: The host prefers a non-CHAP connection, but can use a CHAP connection if required by the target. Supported on software iSCSI and dependent hardware iSCSI.
- Use CHAP unless prohibited by target: The host prefers CHAP, but can use non-CHAP connections if the target does not support CHAP. Supported on software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
- Use CHAP: The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. Supported on software iSCSI and dependent hardware iSCSI.
In one-way CHAP, the target authenticates the initiator. In mutual CHAP, both the target and initiator authenticate each other. Make sure to use different secrets for CHAP and mutual CHAP.
When configuring CHAP parameters, make sure that they match the parameters on the storage side. The CHAP name should not exceed 511 alphanumeric characters and the CHAP secret should not exceed 255 alphanumeric characters. Some adapters, for example the QLogic adapter, might have lower limits, 255 for the CHAP name and 100 for the CHAP secret.
Procedure
1 Access the iSCSI Initiator Properties dialog box.
2 On the General tab, click CHAP.
3 To configure one-way CHAP, under CHAP specify the following:
a Select the CHAP security level.
- Do not use CHAP unless required by target (software and dependent hardware iSCSI only)
- Use CHAP unless prohibited by target
- Use CHAP (software and dependent hardware iSCSI only). To be able to configure mutual CHAP, you must select this option.
b Specify the CHAP name. Make sure that the name you specify matches the name configured on the storage side.
- To set the CHAP name to the iSCSI initiator name, select Use initiator name.
- To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and enter a name in the Name field.
c Enter a one-way CHAP secret to be used as part of authentication. Make sure to use the same secret that you enter on the storage side.
4 To configure mutual CHAP, first configure one-way CHAP by following directions in Step 3. Make sure to select Use CHAP as an option for one-way CHAP. Then, specify the following under Mutual CHAP:
a Select Use CHAP.
b Specify the mutual CHAP name.
c Enter the mutual CHAP secret. Make sure to use different secrets for the one-way CHAP and mutual CHAP.
If you change the CHAP or mutual CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and log in again.
In one-way CHAP, the target authenticates the initiator. In mutual CHAP, both the target and initiator authenticate each other. Make sure to use different secrets for CHAP and mutual CHAP.
Procedure
1 Access the iSCSI Initiator Properties dialog box.
2 Select either the Dynamic Discovery tab or the Static Discovery tab.
3 From the list of available targets, select a target you want to configure and click Settings > CHAP.
4 Configure one-way CHAP in the CHAP area.
a Deselect Inherit from parent.
b Select one of the following options:
- Do not use CHAP unless required by target
- Use CHAP unless prohibited by target
- Use CHAP. To be able to configure mutual CHAP, you must select this option.
c Specify the CHAP name. Make sure that the name you specify matches the name configured on the storage side.
- To set the CHAP name to the iSCSI initiator name, select Use initiator name.
- To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and enter a name in the Name field.
d Enter a one-way CHAP secret to be used as part of authentication. Make sure to use the same secret that you enter on the storage side.
5 To configure mutual CHAP, first configure one-way CHAP by following directions in Step 4. Make sure to select Use CHAP as an option for one-way CHAP. Then, specify the following in the Mutual CHAP area:
a Deselect Inherit from parent.
b Select Use CHAP.
c Specify the mutual CHAP name.
d Enter the mutual CHAP secret. Make sure to use different secrets for the one-way CHAP and mutual CHAP.
If you change the CHAP or mutual CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and log in again.
Disable CHAP
You can disable CHAP if your storage system does not require it. If you disable CHAP on a system that requires CHAP authentication, existing iSCSI sessions remain active until you reboot your ESX/ESXi host or the storage system forces a logout. After the session ends, you can no longer connect to targets that require CHAP. Procedure 1 2 Open the CHAP Credentials dialog box. For software and dependent hardware iSCSI adapters, to disable just the mutual CHAP and leave the oneway CHAP, select Do not use CHAP in the Mutual CHAP area.
To disable one-way CHAP, select Do not use CHAP in the CHAP area. The mutual CHAP, if set up, automatically turns to Do not use CHAP when you disable the one-way CHAP.
Click OK.
The advanced iSCSI parameters that you can adjust include Data Digest, Maximum Burst Length, Maximum Receive Data Segment Length, Session Recovery Timeout, No-Op Interval, No-Op Timeout, and Delayed ACK. Depending on the parameter, it is configurable on software iSCSI and dependent hardware iSCSI adapters, and in some cases on independent hardware iSCSI adapters.
Enter any required values for the advanced parameters you want to modify and click OK to save your changes.
You can also establish a session to a specific target port. This can be useful if your host connects to a singleport storage system that, by default, presents only one target port to your initiator, but can redirect additional sessions to a different target port. Establishing a new session between your iSCSI initiator and another target port creates an additional path to the storage system. CAUTION Some storage systems do not support multiple sessions from the same initiator name or endpoint. Attempts to create multiple sessions to such targets can result in unpredictable behavior of your iSCSI environment.
To duplicate a session, use esxcli swiscsi session add -d vmhbaXX -t iqnXX -s session_isid. The command takes these options:
- -d -- The name of your iSCSI adapter, such as vmhbaXX.
- -t -- The name of the target to log in, such as iqnXX.
- -s -- The ISID of the session to duplicate, such as session_isid. You can find it by listing all sessions.

To remove a session, use esxcli swiscsi session remove. The command takes these options:
- -d -- The name of your iSCSI adapter, such as vmhbaXX.
- -t -- The name of the target to log in, such as iqnXX.
- -s -- The ISID of the session to remove, such as session_isid. You can find it by listing all sessions.
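As a sketch, duplicating a session might look like the following; the adapter name, target name, and ISID value are placeholders, and the session list subcommand used to look up the ISID is an assumption about this esxcli namespace:

# List current sessions to find the ISID of the session you want to duplicate (assumed subcommand)
esxcli swiscsi session list -d vmhba36
# Add a duplicate session to the same target, creating another path to the storage system
esxcli swiscsi session add -d vmhba36 -t iqn.xxxx-xx.com.example:target1 -s session_isid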
5 Select the iSCSI device to use for your datastore and click Next.
6 Review the current disk layout and click Next.
7 Enter a datastore name and click Next. The datastore name appears in the vSphere Client, and the label must be unique within the current VMware vSphere instance.
8 If needed, adjust the file system values and capacity you use for the datastore. By default, the entire free space available on the storage device is offered to you.
After you configure your iSCSI initiators and storage, you might need to modify your storage system to ensure that it works properly with your ESX/ESXi implementation. This section discusses many of the iSCSI storage systems supported in conjunction with VMware ESX/ESXi. For each device, it lists major known potential issues, points to vendor-specific information (if available), or includes information from VMware knowledge base articles. NOTE Information in this section is updated only with each release. New information might already be available. Also, other iSCSI storage systems are supported but are not covered in this chapter. Consult the most recent Storage/SAN Compatibility Guide, check with your storage vendor, and explore the VMware knowledge base articles. This chapter includes the following topics:
- Testing ESX/ESXi SAN Configurations, on page 53
- General Considerations for iSCSI SAN Storage Systems, on page 54
- EMC CLARiiON Storage Systems, on page 54
- EMC Symmetrix Storage Systems, on page 55
- Enable HP StorageWorks MSA1510i to Communicate with ESX/ESXi, on page 55
- HP StorageWorks EVA Storage Systems, on page 56
- NetApp Storage Systems, on page 57
- EqualLogic Storage Systems, on page 59
- LeftHand Networks SAN/iQ Storage Systems, on page 59
- Dell PowerVault MD3000i Storage Systems, on page 59
- iSCSI Targets in vApps, on page 59
VMware tests ESX/ESXi with storage systems in the following configurations:
- Basic Connectivity: Tests whether ESX/ESXi can recognize and operate with the storage system. This configuration does not allow for multipathing or any type of failover.
- iSCSI Failover: The server is equipped with multiple iSCSI HBAs or NICs. The server is robust to HBA or NIC failure.
- Storage Port Failover: The server is attached to multiple storage ports and is robust to storage port failures and switch failures.
- Booting from a SAN (with ESX hosts only): The ESX host boots from a LUN configured on the SAN rather than from the server itself.
- LUNs must be presented to each HBA of each host with the same LUN ID number. If different numbers are used, the ESX/ESXi hosts do not recognize different paths to the same LUN. Because instructions on how to configure identical SAN LUN IDs are vendor-specific, consult your storage documentation for more information.
- Unless specified for individual storage systems discussed in this chapter, set the host type for LUNs presented to ESX/ESXi to Linux or Linux Cluster, if applicable to your storage system. The method ESX/ESXi uses to access the storage system is most compatible with Linux access; however, this can vary depending on the storage system you are using.
- If you are using vMotion, DRS, or HA, make sure that source and target hosts for virtual machines can see the same LUNs with identical LUN IDs. SAN administrators might find it counterintuitive to have multiple hosts see the same LUNs because they might be concerned about data corruption. However, VMFS prevents multiple virtual machines from writing to the same file at the same time, so provisioning the LUNs to all required ESX/ESXi systems is appropriate.
- If you do not have CHAP authentication set up on the LUNs that are being accessed, you must also disable CHAP on the ESX/ESXi host. Otherwise, authentication of the storage system fails, although the LUNs have no CHAP requirement.
- To avoid the possibility of path thrashing, the default multipathing policy is Most Recently Used, not Fixed. The ESX/ESXi system sets the default policy when it identifies the storage system.
- To boot from a SAN, choose the active storage processor for the boot LUN's target in the HBA BIOS.
- On EMC CLARiiON AX100i and AX150i systems, RDMs are supported only if you use the Navisphere Management Suite for SAN administration. Navisphere Express is not guaranteed to configure them properly. To use RDMs successfully, a given LUN must be presented with the same LUN ID to every ESX/ESXi host in the cluster. The AX100i and AX150i do not do this by default.
When you use an AX100i or AX150i storage system, no host agent periodically checks the host configuration and pushes changes to the storage system. The axnaviserverutil CLI utility is used to update the changes. This is a manual operation that you should perform as needed.
Port binding support on EMC CLARiiON storage systems requires initiators in different subnets. See vendor documentation for additional details.
For ESX/ESXi to support EMC CLARiiON with ALUA, check the HCLs to make sure that you use the correct firmware version on the storage array. For additional information, contact your storage vendor.
Common serial number (C)
Auto negotiation (EAN) enabled
SCSI 3 (SC3) set (enabled)
Unique world wide name (UWN)
SPC-2 (Decal) (SPC2) SPC-2 flag is required
NOTE The ESX/ESXi host considers any LUNs from a Symmetrix storage system that have a capacity of 50MB or less as management LUNs. These LUNs are also known as pseudo or gatekeeper LUNs. These LUNs appear in the EMC Symmetrix Management Interface and should not be used to hold data.
Record the management port IP address that appears in Basic MSA1510i information.
From the server or a workstation on the MSA1510i LAN segment, open a Web browser and enter the address obtained in Step 2. When prompted, enter the default access permissions.
a  Select a data port.
b  Assign an IP address to the data port.
c  VLANs are set up on the switch and are used as one method of controlling access to the storage. If you are using VLANs, enter the VLAN ID to use (0 = not used).
d  The wizard suggests a default iSCSI Target Name and iSCSI Target Alias. Accept the default or enter user-defined values.
NOTE To configure the remaining data ports, complete the Initial System Configuration Wizard process, and then use tasks available on the Configure tab.
Enter login settings.
Enter management settings.
NOTE Wizards are available for basic configuration tasks only. Use the Manage and Configure tabs to view and change your configuration.
What to do next
After initial setup, perform the following tasks to complete the configuration:
Create an array.
Create a logical drive.
Create a target.
Create a portal group.
Associate or assign the portals created using the wizard with the portal group created.
Map logical drives to the target.
Add initiators (initiator IQN name and alias).
Update the ACLs of the logical drives to provide access to initiators (select the list of initiators to access the logical drive).
For HP EVAgl 3000/5000 (active-passive), use the 000000002200282E host mode type.
For HP EVAgl firmware 4.001 (active-active firmware for GL series) and above, use the VMware host mode type.
For EVA4000/6000/8000 active-active arrays with firmware earlier than 5.031, use the 000000202200083E host mode type.
For EVA4000/6000/8000 active-active arrays with firmware 5.031 and later, use the VMware host mode type.
Otherwise, EVA systems do not require special configuration changes to work with an ESX/ESXi system.
Create LUNs.
a  Select LUNs and click Add.
b  Enter the following:
   Path: Enter a path, for example, /vol/vol1/lun1.
   LUN Protocol Type: VMware.
   Description: A brief description.
   Size and Unit: Enter a size, for example, 10GB and select Space Reserved.
Create an initiator group.
a  Select LUNs > Initiator Group and click Add.
b  Enter the following:
   Group Name: Enter a group name.
   Type: Choose iSCSI.
   Operating System: Enter VMware.
   Initiators: Enter fully qualified initiator names. If there is more than one initiator, each initiator must be separated with a carriage return.
c  Click Add.
Map the LUN to the initiator group.
a  Select LUNs and click Manage.
   A LUNs list appears.
b  From this list, click the label on the Maps row for the specific LUNs.
c  Click Add Groups to Map.
d  Select the initiator group and click Add.
e  When prompted, enter the LUN ID (any number from 0 to 255) and click Apply.
Create a LUN.
lun create -s size -t vmware path
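For example, to create a 10GB VMware-type LUN matching the values used in the earlier steps, the command might look like the following; the path and size shown are illustrative values, not requirements, so adjust them for your volume layout:
lun create -s 10g -t vmware /vol/vol1/lun1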
Multipathing. No special setup is needed because EqualLogic storage systems support storage-processor failover that is transparent to iSCSI. Multiple iSCSI HBAs or NICs can connect to the same target or LUN on the storage side.
Creating iSCSI LUNs. From the EqualLogic web portal, right-click Volumes, and then select Create Volume.
Enable ARP redirection for ESX/ESXi hardware iSCSI HBAs.
EqualLogic storage systems impose a maximum limit of 512 iSCSI connections per storage pool and 2048 connections per storage group.
For more information about configuring and using EqualLogic storage systems, see the vendor's documentation.
As a best practice, configure virtual IP load balancing in SAN/iQ for all ESX/ESXi authentication groups.
On the MD3000i storage system, mutual CHAP configuration requires only a CHAP secret. On the ESX/ESXi host, mutual CHAP configuration requires both the name and CHAP secret. When configuring mutual CHAP on the ESX/ESXi host, enter the IQN name of the target as the mutual CHAP name. Make sure the CHAP secret matches the one set on the array.
When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk.
You can use boot from the SAN if you do not want to handle maintenance of local storage or have diskless hardware configurations, such as blade systems.
ESX and ESXi hosts support different methods of booting from the SAN.
Table 4-1. Boot from iSCSI SAN support
Type of Host   Independent Hardware iSCSI                                   Software iSCSI and Dependent Hardware iSCSI
ESX Host       Supported. An iSCSI HBA is required to boot from the SAN.    Not supported.
ESXi Host      Not supported.                                               Supported. The network adapter must support the iBFT.
Cheaper servers. Servers can be more dense and run cooler without internal storage.
Easier server replacement. You can replace servers and have the new server point to the old boot location.
Less wasted space.
Easier backup processes. The system boot images in the SAN can be backed up as part of the overall SAN backup procedures.
Improved management. Creating and managing the operating system image is easier and more efficient.
General Boot from iSCSI SAN Recommendations, on page 62
Prepare the iSCSI SAN, on page 62
Configure ESX Hosts to Boot from iSCSI SAN, on page 63
iBFT iSCSI Boot Overview, on page 64
Collecting Diagnostic Information for ESXi Hosts, on page 69
Review any vendor recommendations for the hardware you use in your boot configuration.
For installation prerequisites and requirements, review an appropriate ESX/ESXi Installation Guide.
Use static IP addresses to reduce the chances of DHCP conflicts.
Use different LUNs for VMFS datastores and boot partitions.
Configure proper ACLs on your storage system.
   The boot LUN should be visible only to the host that uses the LUN. No other host on the SAN should be permitted to see that boot LUN.
   If a LUN is used for a VMFS datastore, it can be shared by multiple hosts. ACLs on the storage systems can allow you to do this.
With independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN. If you configure the diagnostic partition in the boot LUN, this LUN cannot be shared across multiple hosts. If a separate LUN is used for the diagnostic partition, it can be shared by multiple hosts. If you boot an ESXi host from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN. Instead, you use the vSphere Management Assistant (vMA) to collect diagnostic information from your host and store it for analysis. See Collecting Diagnostic Information for ESXi Hosts, on page 69.
Procedure
1  Connect network cables, referring to any cabling guide that applies to your setup.
2  Ensure IP connectivity between your storage system and server.
   This includes proper configuration of any routers or switches on your storage network. Storage systems must be able to ping the iSCSI adapters in your hosts.
3  Configure the storage system.
   a  Create a volume (or LUN) on the storage system for your host to boot from.
   b  Configure the storage system so that your host has access to the assigned LUN.
      This could involve updating ACLs with the IP addresses, iSCSI names, and the CHAP authentication parameter you use on your host. On some storage systems, in addition to providing access information for the ESX/ESXi host, you must also explicitly associate the assigned LUN with the host.
   c  Ensure that the LUN is presented to the host correctly.
   d  Ensure that no other system has access to the configured LUN.
   e  Record the iSCSI name and IP addresses of the targets assigned to the host.
      You must have this information to configure your iSCSI adapters.
Configure iSCSI settings. See Configure iSCSI Boot Settings, on page 64.
c  From the iSCSI Boot Settings menu, select the primary boot device.
   An auto rescan of the HBA is made to find new target LUNs.
4  Select the iSCSI target.
   NOTE If more than one LUN exists within the target, you can choose a specific LUN ID by pressing Enter after you locate the iSCSI device.
5  Return to the Primary Boot Device Setting menu.
   After the rescan, the Boot LUN and iSCSI Name fields are populated. Change the value of Boot LUN to the desired LUN ID.
5  The VMkernel starts loading and takes over the boot operation.
6  Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
7  After the iSCSI connection is established, the system boots.
NOTE Update your NIC's boot code and iBFT firmware using vendor-supplied tools before trying to install and boot the VMware ESXi 4.1 release. Consult vendor documentation and the VMware HCL guide for supported boot code and iBFT firmware versions for VMware ESXi 4.1 iBFT boot. The boot code and iBFT firmware released by vendors prior to the ESXi 4.1 release might not work.
After you set up your host to boot from iBFT iSCSI, the following restrictions apply:
You cannot disable the software iSCSI adapter. If the iBFT configuration is present in the BIOS, the host re-enables the software iSCSI adapter during each reboot.
You cannot remove the iBFT iSCSI boot target using the vSphere Client. The target appears on the list of adapter static targets.
Procedure
On the network adapter that you use for the boot from iSCSI, specify networking and iSCSI parameters. Because configuring the network adapter is vendor specific, review your vendor documentation for instructions.
iSCSI
DVD-ROM
Because changing the boot sequence in the BIOS is vendor specific, refer to vendor documentation for instructions. The following sample procedure explains how to change the boot sequence on a Dell host with a Broadcom network adapter.
Procedure
1  Turn on the host.
2  During Power-On Self-Test (POST), press F2 to enter the BIOS Setup.
3  In the BIOS Setup, select Boot Sequence and press Enter.
4  In the Boot Sequence menu, arrange the bootable items so that iSCSI precedes the DVD-ROM.
5  Press Esc to exit the Boot Sequence menu.
6  Press Esc to exit the BIOS Setup.
7  Select Save Changes and click Exit to exit the BIOS Setup menu.
Configure iSCSI boot firmware on your boot NIC to point to the target LUN that you want to use as the boot LUN.
Change the boot sequence in the BIOS so that iSCSI precedes the DVD-ROM.
If you use Broadcom adapters, set Boot to iSCSI target to Disabled.
Procedure
1  Insert the installation media in the DVD-ROM drive and restart the host.
2  When the installer starts, follow the typical installation procedure.
3  When prompted, select the iSCSI LUN as the installation target.
   The installer copies the ESXi boot image to the iSCSI LUN.
4  After the system restarts, remove the installation DVD.
Configure the iSCSI boot firmware on your boot NIC to point to the boot LUN.
Change the boot sequence in the BIOS so that iSCSI precedes the boot device.
If you use Broadcom adapters, set Boot to iSCSI target to Enabled.
Procedure
1  Restart the host.
   The host boots from the iSCSI LUN using iBFT data. During the first boot, the iSCSI initialization script sets up default networking. The network setup is persistent after subsequent reboots.
2  (Optional) Adjust networking configuration using the vSphere Client.
Your isolated networks must be on different subnets. If you use VLANs to isolate the networks, they must have different subnets to ensure that routing tables are properly set up. VMware recommends that you configure the iSCSI adapter and target to be on the same subnet. If you set up the iSCSI adapter and target on different subnets, the following restrictions apply:
The default VMkernel gateway must be able to route both the management and iSCSI traffic.
After you boot your ESXi host, you can use the iBFT-enabled network adapter only for iBFT. You cannot use the adapter for iSCSI traffic.
Use the first physical network adapter for the management network.
Use the second physical network adapter for the iSCSI network. Make sure to configure the iBFT.
After the host boots, you can add secondary network adapters to both the management and iSCSI networks.
Solution
1  Use the vSphere Client to connect to the ESXi host.
2  Re-configure the iSCSI and networking on the host to match the iBFT parameters.
3  Perform a rescan.
Do not install vMA on the same physical host where you set up the net dump client. If you have multiple ESXi hosts that require the net dump configuration, configure each host separately. One vMA instance is sufficient for collecting the core dump files from multiple ESXi hosts.
(Optional) Enter the IP address of the gateway to reach the net dump server.
# esxcfg-advcfg -s IP_address_gateway /Net/NetdumpServerGateway
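To confirm that the option was written, you can read the value back with the -g (get) flag of the same utility; this is a usage sketch and assumes the get syntax shown here is available on your host:
# esxcfg-advcfg -g /Net/NetdumpServerGateway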
This section helps you manage your ESX/ESXi system, use SAN storage effectively, and perform troubleshooting. It also explains how to find information about storage devices, adapters, multipathing, and so on. This chapter includes the following topics:
Viewing Storage Adapter Information, on page 71
Viewing Storage Device Information, on page 72
Viewing Datastore Information, on page 74
Resolving Storage Display Issues, on page 75
Path Scanning and Claiming, on page 79
Sharing Diagnostic Partitions, on page 84
Avoiding and Resolving SAN Problems, on page 84
Optimizing SAN Storage Performance, on page 85
Resolving Performance Issues, on page 88
SAN Storage Backup Considerations, on page 91
Managing Duplicate VMFS Datastores, on page 93
Storage Hardware Acceleration, on page 96
Runtime Name
vmhba# is the name of the storage adapter. The name refers to the physical adapter on the host, not to the SCSI controller used by the virtual machines.
C# is the storage channel number. Software iSCSI initiators use the channel number to show multiple paths to the same target.
T# is the target number. Target numbering is decided by the host and might change if there is a change in the mappings of targets visible to the host. Targets that are shared by different hosts might not have the same target number.
L# is the LUN number that shows the position of the LUN within the target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage adapter vmhba1 and channel 0.
Created on an available storage device.
Discovered when a host is added to the inventory. When you add a host to the inventory, the vSphere Client displays any datastores available to the host.
If your vSphere Client is connected to a vCenter Server system, you can see datastore information in the Datastores view. This view displays all datastores in the inventory, arranged by a datacenter. Through this view, you can organize datastores into folder hierarchies, create new datastores, edit their properties, or remove existing datastores.
This view is comprehensive and shows all information for your datastores including hosts and virtual machines using the datastores, storage reporting information, permissions, alarms, tasks and events, storage topology, and storage reports. Configuration details for each datastore on all hosts connected to this datastore are provided on the Configuration tab of the Datastores view. NOTE The Datastores view is not available when the vSphere client connects directly to your host. In this case, review datastore information through the host storage configuration tab. The following table describes the datastore details that you can see when you review datastores. Table 5-3. Datastore Information
Datastore Information     Description
Identification            Editable friendly name that you assign to the datastore.
Device                    Storage device, on which the datastore is deployed. If the datastore spans over multiple storage devices, only the first storage device is shown.
Capacity                  Total formatted capacity of the datastore.
Free                      Available space.
Type                      File system that the datastore uses, either VMFS or NFS.
Storage I/O Control       Allows cluster-wide storage I/O prioritization. See the vSphere Resource Management Guide.
Hardware Acceleration     Information on whether the datastore assists the host with various virtual machine management operations. The status can be Supported, Not Supported, or Unknown.
Location                  A path to the datastore in the /vmfs/volumes/ directory.
Extents                   Individual extents that the datastore spans and their capacity (VMFS datastores only).
Path Selection            Path selection policy the host uses to access storage (VMFS datastores only).
Paths                     Number of paths used to access storage and their status (VMFS datastores only).
For software iSCSI, check network configuration. Rescan your iSCSI initiator.
Perform the manual rescan each time you make one of the following changes.
Create new LUNs on a SAN.
Change the path masking on a host.
Reconnect a cable.
Change CHAP settings.
Add or remove discovery or static addresses.
Add a single host to the vCenter Server after you have edited or removed from the vCenter Server a datastore shared by the vCenter Server hosts and the single host.
IMPORTANT If you rescan when a path is unavailable, the host removes the path from the list of paths to the device. The path reappears on the list as soon as it becomes available and starts working again.
Procedure
1  In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
2  Select Disk.
3  Scroll down to Disk.MaxLUN.
4  Change the existing value to the value of your choice, and click OK.
   The value you enter specifies the LUN after the last one you want to discover. For example, to discover LUNs from 0 through 31, set Disk.MaxLUN to 32.
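If you prefer to make the same change from the service console instead of the vSphere Client, the esxcfg-advcfg utility can set and read the equivalent advanced option; the /Disk/MaxLUN option path shown below is an assumption based on the usual mapping of advanced setting names, so verify it on your host before relying on it:
# esxcfg-advcfg -s 32 /Disk/MaxLUN
# esxcfg-advcfg -g /Disk/MaxLUN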
In the Value text box, type False for the specified key.
5  Click Add.
6  Click OK.
   You are not required to restart the vCenter Server system.
RDM Filter                          config.vpxd.filter.rdmFilter
Same Host and Transports Filter     config.vpxd.filter.SameHostAndTransportsFilter
Host Rescan Filter                  config.vpxd.filter.hostRescanFilter
The claim rules are numbered. For each physical path, the host runs through the claim rules starting with the lowest number first. The attributes of the physical path are compared to the path specification in the claim rule. If there is a match, the host assigns the MPP specified in the claim rule to manage the physical path. This continues until all physical paths are claimed by corresponding MPPs, either third-party multipathing plug-ins or the native multipathing plug-in (NMP).
For general information on multipathing plug-ins, see Managing Multiple Paths, on page 22.
For the paths managed by the NMP module, a second set of claim rules is applied. These rules determine which Storage Array Type Plug-In (SATP) should be used to manage the paths for a specific array type, and which Path Selection Plug-In (PSP) is to be used for each storage device. For example, for a storage device that belongs to the EMC CLARiiON CX storage family and is not configured as an ALUA device, the default SATP is VMW_SATP_CX and the default PSP is Most Recently Used.
Use the vSphere Client to view which SATP and PSP the host is using for a specific storage device and the status of all available paths for this storage device. If needed, you can change the default VMware PSP using the vSphere Client. To change the default SATP, you need to modify claim rules using the vSphere CLI. You can find some information about modifying claim rules in Managing Storage Paths and Multipathing Plug-Ins, on page 103.
For detailed descriptions of the commands available to manage PSA, see the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere Command-Line Interface Reference.
If you are using the Fixed path policy, you can see which path is the preferred path. The preferred path is marked with an asterisk (*) in the Preferred column.
From the list of configured datastores, select the datastore whose paths you want to view or configure. The Details panel shows the total number of paths being used to access the device and whether any of them are broken or disabled.
Click Properties > Manage Paths to open the Manage Paths dialog box. You can use the Manage Paths dialog box to enable or disable your paths, set multipathing policy, and specify the preferred path.
No fail back.
For ALUA arrays, VMkernel picks the path set to be the preferred path. For both A/A and A/P and ALUA arrays, VMkernel resumes using the preferred path, but only if the path-thrashing avoidance algorithm allows the fail-back.
Fixed (VMW_PSP_FIXED)
Fixed AP (VMW_PSP_FIXED_AP)
Most Recently Used (VMW_PSP_MRU)
Round Robin (VMW_PSP_RR)
3  For the fixed policy, specify the preferred path by right-clicking the path you want to assign as the preferred path, and selecting Preferred.
4  Click OK to save your settings and exit the dialog box.
Disable Paths
You can temporarily disable paths for maintenance or other reasons. You can do so using the vSphere Client.
Procedure
1  Open the Manage Paths dialog box either from the Datastores or Devices view.
2  In the Paths panel, right-click the path to disable, and select Disable.
3  Click OK to save your settings and exit the dialog box.
You can also disable a path from the adapter's Paths view by right-clicking the path in the list and selecting Disable.
For LUN 1: HBA1-SP1-LUN1
For LUN 2: HBA2-SP1-LUN2
For LUN 3: HBA1-SP2-LUN3
For LUN 4: HBA2-SP2-LUN4
With active-passive arrays, you can perform load balancing if the array supports two active paths and the HBA ports can access both SPs in an array. You can use the VMW_PSP_FIXED_AP path selection policy to do static path load balancing on active-passive arrays.
Place only one VMFS datastore on each LUN. Multiple VMFS datastores on one LUN are not recommended.
Do not change the path policy the system sets for you unless you understand the implications of making such a change.
Document everything. Include information about configuration, access control, storage, switch, server and iSCSI HBA configuration, software and firmware versions, and storage cable plan.
Plan for failure:
   Make several copies of your topology maps. For each element, consider what happens to your SAN if the element fails.
   Cross off different links, switches, HBAs and other elements to ensure you did not miss a critical failure point in your design.
Ensure that the iSCSI HBAs are installed in the correct slots in the ESX/ESXi host, based on slot and bus speed. Balance PCI bus load among the available busses in the server.
Become familiar with the various monitor points in your storage network, at all visibility points, including ESX/ESXi performance charts, Ethernet switch statistics, and storage performance statistics.
Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESX/ESXi host. If you change the ID, virtual machines running on the VMFS datastore will fail.
   If there are no running virtual machines on the VMFS datastore, after you change the ID of the LUN, you must use rescan to reset the ID on your host. For information on using rescan, see Perform Storage Rescan, on page 77.
Server Performance
You must consider several factors to ensure optimal server performance. Each server application must have access to its designated storage with the following conditions:
High I/O rate (number of I/O operations per second)
High throughput (megabytes per second)
Minimal latency (response times)
Because each application has different requirements, you can meet these goals by choosing an appropriate RAID group on the storage system. To achieve performance goals, perform the following tasks:
Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the activities and resource utilization of other LUNs in the assigned RAID group. A high-performance RAID group that has too many applications doing I/O to it might not meet performance goals required by an application running on the ESX/ESXi host.
Provide each server with a sufficient number of network adapters or iSCSI hardware adapters to allow maximum throughput for all the applications hosted on the server for the peak period. I/O spread across multiple ports provides higher throughput and less latency for each application.
To provide redundancy for software iSCSI, make sure the initiator is connected to all network adapters used for iSCSI connectivity.
When allocating LUNs or RAID groups for ESX/ESXi systems, multiple operating systems use and share that resource. As a result, the performance required from each LUN in the storage subsystem can be much higher if you are working with ESX/ESXi systems than if you are using physical machines. For example, if you expect to run four I/O intensive applications, allocate four times the performance capacity for the ESX/ESXi LUNs.
When using multiple ESX/ESXi systems in conjunction with vCenter Server, the performance needed from the storage subsystem increases correspondingly.
The number of outstanding I/Os needed by applications running on an ESX/ESXi system should match the number of I/Os the SAN can handle.
Network Performance
A typical SAN consists of a collection of computers connected to a collection of storage systems through a network of switches. Several computers often access the same storage. Figure 5-2 shows several computer systems connected to a storage system through an Ethernet switch. In this configuration, each system is connected through a single Ethernet link to the switch, which is also connected to the storage system through a single Ethernet link. In most configurations, with modern switches and typical traffic, this is not a problem. Figure 5-2. Single Ethernet Link Connection to Storage
When systems read data from storage, the maximum response from the storage is to send enough data to fill the link between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual machine gets full use of the network speed, but this situation can be expected when many systems share one storage device. When writing data to storage, multiple systems or virtual machines might attempt to fill their links. As Figure 5-3 shows, when this happens, the switch between the systems and the storage system has to drop data. This happens because, while it has a single connection to the storage device, it has more traffic to send to the storage system than a single link can carry. In this case, the switch drops network packets because the amount of data it can transmit is limited by the speed of the link between it and the storage system. Figure 5-3. Dropped Packets
Recovering from dropped network packets results in large performance degradation. In addition to time spent determining that data was dropped, the retransmission uses network bandwidth that could otherwise be used for current transactions. iSCSI traffic is carried on the network by the Transmission Control Protocol (TCP). TCP is a reliable transmission protocol that ensures that dropped packets are retried and eventually reach their destination. TCP is designed to recover from dropped packets and retransmits them quickly and seamlessly. However, when the switch discards packets with any regularity, network throughput suffers significantly. The network becomes congested with requests to resend data and with the resent packets, and less data is actually transferred than in a network without congestion. Most Ethernet switches can buffer, or store, data and give every device attempting to send data an equal chance to get to the destination. This ability to buffer some transmissions, combined with many systems limiting the number of outstanding commands, allows small bursts from several systems to be sent to a storage system in turn. If the transactions are large and multiple servers are trying to send data through a single switch port, a switch's ability to buffer one request while another is transmitted can be exceeded. In this case, the switch drops the data it cannot send, and the storage system must request retransmission of the dropped packet. For example, if an Ethernet switch can buffer 32KB on an input port, but the server connected to it thinks it can send 256KB to the storage device, some of the data is dropped. Most managed switches provide information on dropped packets, similar to the following:
*: interface is up
IHQ: pkts in input hold queue       IQD: pkts dropped from input queue
OHQ: pkts in output hold queue      OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec)            RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec)            TXPS: tx rate (pkts/sec)
TRTL: throttle count
In this example from a Cisco switch, the bandwidth used is 476303000 bits/second, which is less than half of wire speed. In spite of this, the port is buffering incoming packets and has dropped quite a few packets. The final line of this interface summary indicates that this port has already dropped almost 10,000 inbound packets in the IQD column. Configuration changes to avoid this problem involve making sure several input Ethernet links are not funneled into one output link, resulting in an oversubscribed link. When a number of links transmitting near capacity are switched to a smaller number of links, oversubscription is a possibility. Generally, applications or systems that write a lot of data to storage, such as data acquisition or transaction logging systems, should not share Ethernet links to a storage device. These types of applications perform best with multiple connections to storage devices. Figure 5-4 shows multiple connections from the switch to the storage. Figure 5-4. Multiple Connections from Switch to Storage
Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in shared configurations. VLANs and other virtual partitioning of a network provide a way of logically designing a network, but do not change the physical capabilities of links and trunks between switches. When storage traffic and other network traffic end up sharing physical connections, as they would with a VPN, the possibility for oversubscription and lost packets exists. The same is true of VLANs that share interswitch trunks. Performance design for a SAN must take into account the physical limitations of the network, not logical allocations.
You are working with an active-passive array. Path thrashing only occurs on active-passive arrays. For active-active arrays or arrays that provide transparent failover, path thrashing does not occur.
Two hosts access the same LUN using different storage processors (SPs). This can happen in two ways.
   For example, the LUN is configured to use the Fixed PSP. On Host A, the preferred path to the LUN is set to use a path through SP A. On Host B, the preferred path to the LUN is configured to use a path through SP B.
   Path thrashing can also occur if Host A can access the LUN only with paths through SP A, while Host B can access the LUN only with paths through SP B.
This problem can also occur on a direct connect array (such as AX100) with HBA failover on one or more nodes. Path thrashing is a problem that you typically do not experience with other operating systems. No other common operating system uses shared LUNs for more than two servers. That setup is typically reserved for clustering. If only one server is issuing I/Os to the LUN at a time, path thrashing does not become a problem. In contrast, multiple ESX/ESXi systems might issue I/O to the same LUN concurrently.
On an active-active array, the ESX/ESXi system starts sending I/O down the new path.
On an active-passive array, the ESX/ESXi system checks all standby paths. The SP of the path that is currently under consideration sends information to the system on whether it currently owns the LUN.
   If the ESX/ESXi system finds an SP that owns the LUN, that path is selected and I/O is sent down that path.
   If the ESX/ESXi host cannot find such a path, the ESX/ESXi host picks one of the standby paths and sends the SP of that path a command to move the LUN ownership to the SP.
Path thrashing can occur as a result of the following path choice: If server A can reach a LUN only through one SP, and server B can reach the same LUN only through a different SP, they both continually cause the ownership of the LUN to move between the two SPs, effectively ping-ponging the ownership of the LUN. Because the system moves the ownership quickly, the storage array cannot process any I/O (or can process only very little). As a result, any servers that depend on the LUN will experience low throughput due to the long time it takes to complete each I/O request.
This change can impact disk bandwidth scheduling, but experiments have shown improvements for disk-intensive workloads.
What to do next
If you adjust this value in the VMkernel, you might also want to adjust the queue depth in your storage adapter.
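For reference, the VMkernel value discussed above is presumably the Disk.SchedNumReqOutstanding advanced setting; assuming that is the case, a hedged way to change it from the service console is shown below, with the option path inferred from the usual naming of advanced settings and 64 used only as an illustrative value:
# esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding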
The iscsivmk_LunQDepth parameter sets the maximum number of outstanding commands, or queue depth, for each LUN accessed through the software iSCSI adapter. The default value is 128.
2  Reboot your system.
CAUTION Setting the queue depth to a value higher than the default can decrease the total number of LUNs supported.
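The command that actually changes the parameter is not shown above. A minimal sketch, assuming the vSphere CLI vicfg-module utility and the iscsi_vmk module name apply to your release, and using 64 only as an example value, might look like this (run it before the reboot in step 2):
vicfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk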
Virtual machine power on.
vMotion.
Virtual machines running with virtual disk snapshots.
File operations that require opening files or doing metadata updates.
NOTE ESX/ESXi uses the SCSI reservations mechanism only when a LUN is not VAAI capable. If a LUN is VAAI capable and supports Hardware Acceleration, ESX/ESXi uses the atomic test and set (ATS) algorithm to lock the LUN.
Performance degradation can occur if such operations occur frequently on multiple servers accessing the same VMFS. For instance, VMware recommends that you do not run many virtual machines from multiple servers that are using virtual disk snapshots on the same VMFS. Limit the number of VMFS file operations when many virtual machines run on the VMFS.
Identification of critical applications that require more frequent backup cycles within a given period of time.
Recovery point and recovery time goals. Consider how precise your recovery point needs to be, and how long you are willing to wait for it.
The rate of change (RoC) associated with the data. For example, if you are using synchronous/asynchronous replication, the RoC affects the amount of bandwidth required between the primary and secondary storage devices.
Overall impact on SAN environment, storage performance (while backing up), and other applications.
Identification of peak traffic periods on the SAN (backups scheduled during those peak periods can slow the applications and the backup process).
Time to schedule all backups within the datacenter.
Time it takes to back up an individual application.
Resource availability for archiving data; usually offline media access (tape).
Include a recovery-time objective for each application when you design your backup strategy. That is, consider the time and resources necessary to reprovision the data. For example, if a scheduled backup stores so much data that recovery requires a considerable amount of time, examine the scheduled backup. Perform the backup more frequently, so that less data is backed up at a time and the recovery time decreases.
If a particular application requires recovery within a certain time frame, the backup process needs to provide a time schedule and specific data processing to meet this requirement. Fast recovery can require the use of recovery volumes that reside on online storage to minimize or eliminate the need to access slow offline media for missing data components.
Snapshot Software
Snapshot software allows an administrator to make an instantaneous copy of any single virtual disk defined within the disk subsystem. Snapshot software is available at different levels:
ESX/ESXi hosts allow you to create snapshots of virtual machines. This software is included in the basic ESX/ESXi package.
Third-party backup software might allow for more comprehensive backup procedures and might contain more sophisticated configuration options.
Backup
Disaster recovery
Availability of multiple configurations, versions, or both
Forensics (looking at a snapshot to find the cause of problems while your system is running)
Data mining (looking at a copy of your data to reduce load on production systems)
Some vendors support snapshots for both VMFS and RDMs. If both are supported, you can make either a snapshot of the whole virtual machine file system for a host, or snapshots for the individual virtual machines (one per disk). Some vendors support snapshots only for a setup using RDM. If only RDM is supported, you can make snapshots of individual virtual machines.
See your storage vendor's documentation.
NOTE ESX/ESXi systems also include a Consolidated Backup component.
Layered Applications
SAN administrators customarily use specialized array-based software for backup, disaster recovery, data mining, forensics, and configuration testing. Storage providers typically supply two types of advanced services for their LUNs: snapshotting and replication. When you use an ESX/ESXi system in conjunction with a SAN, you must decide whether array-based or host-based tools are more suitable for your particular situation.
Array-based solutions usually result in more comprehensive statistics. With RDM, data always takes the same path, which results in easier performance management.
Security is more transparent to the storage administrator when you use RDM and an array-based solution because with RDM, virtual machines more closely resemble physical machines.
If you use an array-based solution, physical compatibility RDMs are often used for the storage of virtual machines. If you do not intend to use RDM, check the storage vendor documentation to see if operations on LUNs with VMFS volumes are supported. If you use array operations on VMFS LUNs, carefully read the section on resignaturing.
Using VMware tools and VMFS is better for provisioning. One large LUN is allocated and multiple .vmdk files can be placed on that LUN. With RDM, a new LUN is required for each virtual machine.
Snapshotting is included with your ESX/ESXi host at no extra cost. The file-based solution is therefore more cost-effective than the array-based solution.
Using VMFS is easier for ESX/ESXi administrators.
ESX/ESXi administrators who use the file-based solution are more independent from the SAN administrator.
When you mount the VMFS datastore, ESX/ESXi allows both reads and writes to the datastore residing on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid across system reboots. Because ESX/ESXi does not allow you to resignature the mounted datastore, unmount the datastore before resignaturing.
What to do next If you later want to resignature the mounted datastore, you must unmount it first.
Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. It continues to appear on other hosts, where it remains mounted. You can unmount only the following types of datastores:
You cannot unmount an active mounted datastore.
Procedure
1  Display the datastores.
2  Right-click the datastore to unmount and select Unmount.
If the datastore is shared, specify which hosts should no longer access the datastore.
a  If needed, deselect the hosts where you want to keep the datastore mounted.
   By default, all hosts are selected.
b  Click Next.
c  Review the list of hosts from which to unmount the datastore, and click Finish.
Datastore resignaturing is irreversible.
The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
A spanned datastore can be resignatured only if all its extents are online.
The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
If the resignatured datastore contains virtual machines, update references to the original VMFS datastore in the virtual machine files, including .vmx, .vmdk, .vmsd, and .vmsn.
To power on virtual machines, register them with vCenter Server.
ESX/ESXi version 4.1 or later.
Storage arrays that support storage-based hardware acceleration. ESX/ESXi version 4.1 does not support hardware acceleration with NAS storage devices.
On your host, the hardware acceleration is enabled by default. To enable the hardware acceleration on the storage side, check with your storage vendor. Certain storage arrays require that you explicitly activate the hardware acceleration support on the storage side. When the hardware acceleration functionality is supported, the host can get hardware assistance and perform the following operations faster and more efficiently:
Migration of virtual machines with Storage vMotion
Deployment of virtual machines from templates
Cloning of virtual machines or templates
VMFS clustered locking and metadata operations for virtual machine files
Writes to thin provisioned and thick virtual disks
Creation of fault-tolerant virtual machines
DataMover.HardwareAcceleratedMove
DataMover.HardwareAcceleratedInit
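These two advanced settings control the hardware-accelerated move and initialization primitives. As a hedged example, they can be inspected or set to 0 (disabled) from the service console with esxcfg-advcfg; the /DataMover/... option paths are assumptions based on the usual mapping of advanced setting names, so confirm them on your host:
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
Setting the value back to 1 re-enables the primitive.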
This topic provides a checklist of special setup requirements for different storage systems and ESX/ESXi hosts. Table A-1. iSCSI SAN Configuration Requirements
Component                 Comments
All storage systems       Write cache must be disabled if not battery backed.
Topology                  No single failure should cause HBA and SP failover, especially with active-passive storage arrays.
EMC Symmetrix             Enable the SPC2 and SC3 settings. Contact EMC for the latest settings.
EMC Clariion              Set the EMC Clariion failover mode to 1 or 4. Contact EMC for details.
HP MSA                    No specific requirements.
HP EVA                    For EVA3000/5000 firmware 4.001 and later, and EVA4000/6000/8000 firmware 5.031 and later, set the host type to VMware. Otherwise, set the host mode type to Custom. The value is 000000002200282E for EVAgl 3000/5000 and 000000202200083E for EVA4000/6000/8000.
NetApp                    If any of your iSCSI initiators are a part of an initiator group (igroup), disable ALUA on the NetApp array.
Dell EqualLogic           Make sure ARP Redirect is enabled on hardware iSCSI adapters.
HP StorageWorks SAN/iQ    Make sure ARP Redirect is enabled on hardware iSCSI adapters.
ESX/ESXi Configuration    Set the following Advanced Settings for the ESX/ESXi host: set Disk.UseLunReset to 1 and Disk.UseDeviceReset to 0. A multipathing policy of Most Recently Used must be set for all LUNs hosting clustered disks for active-passive arrays. A multipathing policy of Most Recently Used or Fixed may be set for LUNs on active-active arrays. Allow ARP redirection if the storage system supports transparent failover.
In most cases, the vSphere Client is well-suited for monitoring an ESX/ESXi host connected to SAN storage. Advanced users might, at times, want to use some VMware vSphere Command-Line Interface (vSphere CLI) commands for additional details. For more information, see vSphere Command-Line Interface Installation and Scripting Guide. This appendix includes the following topics:
resxtop Command, on page 101
vicfg-iscsi Command, on page 101
vicfg-mpath Command, on page 101
esxcli corestorage claimrule Command, on page 102
vmkping Command, on page 102
resxtop Command
The resxtop command provides a detailed look at ESX/ESXi resource use in real time. For detailed information about resxtop, see the Resource Management Guide and vSphere Command-Line Interface Installation and Scripting Guide.
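As a usage sketch, you can point resxtop at a host and then switch to the storage views with its interactive keys; the connection values are illustrative and the key bindings listed are assumptions drawn from common esxtop/resxtop usage, so check the built-in help (h) on your version:
resxtop --server esx01.example.com --username root
Once connected, pressing d typically shows the storage adapter view, u the storage device view, and v the per-virtual-machine storage view.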
vicfg-iscsi Command
The vicfg-iscsi command allows you to configure software or hardware iSCSI on ESX/ESXi hosts, set up CHAP parameters, and set up iSCSI networking. For details, see the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere CommandLine Interface Reference.
vicfg-mpath Command
Use the vicfg-mpath command to view information about storage devices, paths, and multipathing plug-ins. For details, see the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere CommandLine Interface Reference.
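A hedged usage sketch follows; the host name is illustrative and the -l (long listing) option is an assumption, so confirm the available options with vicfg-mpath --help on your vSphere CLI installation:
vicfg-mpath --server esx01.example.com --username root -l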
vmkping Command
The vmkping command allows you to verify the VMkernel networking configuration. Usage example:
vmkping [options] [host|IP address]
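For example, to check basic reachability of an iSCSI target portal from the VMkernel interface, and to verify that jumbo frames survive end to end, you might run the following; the target address is illustrative, and the -d (do not fragment) and -s (packet size) options are common vmkping options rather than values taken from this guide:
vmkping 10.10.10.100
vmkping -d -s 8972 10.10.10.100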
Use the vSphere CLI to manage the Pluggable Storage Architecture (PSA) multipathing plug-ins and Hardware Acceleration plug-ins. This appendix includes the following topics:
Managing Storage Paths and Multipathing Plug-Ins, on page 103
Managing Hardware Acceleration Filter and Plug-Ins, on page 110
esxcli corestorage claimrule Options, on page 113
Vendor/model strings
Transportation, such as SATA, IDE, Fibre Channel, and so on
Adapter, target, or LUN location
Device driver, for example, Mega-RAID
Procedure
Use the esxcli corestorage claimrule list --claimrule-class=MP command to list the multipathing claim rules. Example: Sample Output of the esxcli corestorage claimrule list Command, on page 104 shows the output of the command.
The NMP claims all paths connected to storage devices that use the USB, SATA, IDE, and Block SCSI transportation.
The MASK_PATH module by default claims all paths returning SCSI inquiry data with a vendor string of DELL and a model string of Universal Xport. The MASK_PATH module is used to mask paths from your host.
The MPP_1 module claims all paths connected to any model of the NewVend storage array.
The MPP_3 module claims the paths to storage devices controlled by the Mega-RAID device driver.
Any paths not described in the previous rules are claimed by NMP.
The Rule Class column in the output describes the category of a claim rule. It can be MP (multipathing plug-in), Filter, or VAAI.
The Class column shows which rules are defined and which are loaded. The file parameter in the Class column indicates that the rule is defined. The runtime parameter indicates that the rule has been loaded into your system. For a user-defined claim rule to be active, two lines with the same rule number should exist, one line for the rule with the file parameter and another line with runtime. Several low numbered rules have only one line with the Class of runtime. These are system-defined claim rules that you cannot modify.
At a minimum, this command returns the NMP and the MASK_PATH modules. If any third-party MPPs have been loaded, they are listed as well.
For each SATP, the command displays information that shows the type of storage array or system this SATP supports and the default PSP for any LUNs using this SATP. Keep in mind the following:
If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, there is no claim rule match for this device. In this case, the device is claimed by the default SATP based on the device's transport type.
The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
Procedure 1 To define a new claim rule, on the vSphere CLI, run the following command:
esxcli corestorage claimrule add
For information on the options that the command requires, see esxcli corestorage claimrule Options, on page 113. 2 To load the new claim rule into your system, run the following command:
esxcli corestorage claimrule load
This command loads all newly created multipathing claim rules from your system's configuration file.
Add rule # 500 to claim all paths with the NewMod model string and the NewVend vendor string for the NMP plug-in.
# esxcli corestorage claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NMP
After you load the claim rule and run the esxcli corestorage claimrule list command, you can see the new claim rule appearing on the list. NOTE The two lines for the claim rule, one with the Class of runtime and another with the Class of file, indicate that the new claim rule has been loaded into the system and is active.
Rule Class   Rule   Class     Type        Plugin      Matches
MP           0      runtime   transport   NMP         transport=usb
MP           1      runtime   transport   NMP         transport=sata
MP           2      runtime   transport   NMP         transport=ide
MP           3      runtime   transport   NMP         transport=block
MP           4      runtime   transport   NMP         transport=unknown
MP           101    runtime   vendor      MASK_PATH   vendor=DELL model=Universal Xport
MP           101    file      vendor      MASK_PATH   vendor=DELL model=Universal Xport
MP           500    runtime   vendor      NMP         vendor=NewVend model=NewMod
MP           500    file      vendor      NMP         vendor=NewVend model=NewMod
Add rule # 321 to claim the path on adapter vmhba0, channel 0, target 0, LUN 0 for the NMP plug-in.
# esxcli corestorage claimrule add -r 321 -t location -A vmhba0 -C 0 -T 0 -L 0 -P NMP
Add rule # 1015 to claim all paths provided by Fibre Channel adapters for the NMP plug-in.
# esxcli corestorage claimrule add -r 1015 -t transport -R fc -P NMP
Add a rule with a system assigned rule id to claim all paths provided by Fibre Channel type adapters for the NMP plug-in.
# esxcli corestorage claimrule add --autoassign -t transport -R fc -P NMP
For information on the options that the command takes, see esxcli corestorage claimrule Options, on page 113.
NOTE By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices.
2  Remove the claim rule from the ESX/ESXi system.
esxcli corestorage claimrule load
Mask Paths
You can prevent the ESX/ESXi host from accessing storage devices or LUNs or from using individual paths to a LUN. Use the vSphere CLI commands to mask the paths. When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths. Procedure 1 Check what the next available rule ID is.
esxcli corestorage claimrule list
The claim rules that you use to mask paths should have rule IDs in the range of 101 to 200. If this command shows that rule 101 and 102 already exist, you can specify 103 for the rule to add.
esxcli corestorage claimrule add -P MASK_PATH
For information on command-line options, see esxcli corestorage claimrule Options, on page 113. 3 Load the MASK_PATH claim rule into your system.
esxcli corestorage claimrule load
If a claim rule for the masked path exists, remove the rule.
esxcli corestorage claiming unclaim
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer maintained by the host. As a result, commands that display the masked path's information might show the path state as dead.
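In the example that follows, steps 3 through 6 reload the claim rules, verify them, unclaim the affected paths, and run the rules. The commands for steps 1 and 2 (listing the existing rules and adding the MASK_PATH claim rules for the vmhba2 and vmhba3 paths) are not shown in this guide; a hedged sketch of what they might look like follows, with the rule numbers 109 and 110 chosen as assumptions from the 101-200 range and the options taken from those demonstrated above:
1  #esxcli corestorage claimrule list
2  #esxcli corestorage claimrule add -P MASK_PATH -r 109 -t location -A vmhba2
   #esxcli corestorage claimrule add -P MASK_PATH -r 110 -t location -A vmhba3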
3  #esxcli corestorage claimrule load
4  #esxcli corestorage claimrule list
5  #esxcli corestorage claiming unclaim -t location -A vmhba2
   #esxcli corestorage claiming unclaim -t location -A vmhba3
6  # esxcli corestorage claimrule run
Unmask Paths
When you need the host to access the masked storage device, unmask the paths to the device. Procedure 1 Delete the MASK_PATH claim rule.
esxcli conn_options corestorage claimrule delete -r rule#
Reload the path claiming rules from the configuration file into the VMkernel.
esxcli conn_options corestorage claimrule load
Run the esxcli corestorage claiming unclaim command for each path to the masked storage device. For example:
esxcli conn_options corestorage claiming unclaim -t location -A vmhba0 -C 0 -T 0 -L 149
Your host can now access the previously masked storage device.
Procedure
1  To add a claim rule for a specific SATP, run the esxcli nmp satp addrule command. The command takes the following options.
Option               Description
-c|--claim-option    Set the claim option string when adding a SATP claim rule. This string is passed to the SATP when the SATP claims a path. The contents of this string, and how the SATP behaves as a result, are unique to each SATP. For example, some SATPs support the claim option strings tpgs_on and tpgs_off. If tpgs_on is specified, the SATP will claim the path only if the ALUA Target Port Group support is enabled on the storage device.
-e|--description     Set the claim rule description when adding a SATP claim rule.
-d|--device          Set the device when adding SATP claim rules. Device rules are mutually exclusive with vendor/model and driver rules.
-D|--driver          Set the driver string when adding a SATP claim rule. Driver rules are mutually exclusive with vendor/model rules.
-f|--force           Force claim rules to ignore validity checks and install the rule anyway.
-h|--help            Show the help message.
-M|--model           Set the model string when adding a SATP claim rule. Vendor/Model rules are mutually exclusive with driver rules.
-o|--option          Set the option string when adding a SATP claim rule.
-P|--psp             Set the default PSP for the SATP claim rule.
-O|--psp-option      Set the PSP options for the SATP claim rule.
-s|--satp            The SATP for which a new rule will be added.
-R|--transport       Set the claim transport type string when adding a SATP claim rule.
-V|--vendor          Set the vendor string when adding SATP claim rules. Vendor/Model rules are mutually exclusive with driver rules.
NOTE When searching the SATP rules to locate a SATP for a given device, the NMP searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules. If there is still no match, NMP selects a default SATP for the device.
2  To delete a rule from the list of claim rules for the specified SATP, run the following command.
   You can run this command with the same options you used for addrule.
esxcli nmp satp deleterule
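For example, to associate a hypothetical NewVend/NewMod array with the VMW_SATP_INV SATP, a rule could be added as sketched below; the vendor and model strings are placeholders, and the option combination simply follows the -V, -M, and -s options described in the table above:
# esxcli nmp satp addrule -V NewVend -M NewMod -s VMW_SATP_INV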
If you run the esxcli nmp satp listrules -s VMW_SATP_INV command, you can see the new rule added to the list of VMW_SATP_INV rules.
The output shows the hardware acceleration, or VAAI, status that can be unknown, supported, or unsupported. If the device supports the hardware acceleration, the output also lists the VAAI filter attached to the device.
# esxcli corestorage device list -d naa.60a98000572d43595a4a52644473374c
naa.60a98000572d43595a4a52644473374c
    Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d43595a4a52644473374c)
    Size: 20480
    Device Type: Direct-Access
    Multipath Plugin: NMP
    Devfs Path: /vmfs/devices/disks/naa.60a98000572d43595a4a52644473374c
    Vendor: NETAPP
    Model: LUN
    Revision: 8000
    SCSI Level: 4
    Is Pseudo: false
    Status: on
    Is RDM Capable: true
    Is Local: false
    Is Removable: false
    Attached Filters: VAAI_FILTER
    VAAI Status: supported
    Other UIDs: vml.020003000060a98000572d43595a4a52644473374c4c554e202020
Run the esxcli vaai device list -d device_ID command. For example:
# esxcli vaai device list -d naa.6090a028d00086b5d0a4c44ac672a233
naa.6090a028d00086b5d0a4c44ac672a233
    Device Display Name: EQLOGIC iSCSI Disk (naa.6090a028d00086b5d0a4c44ac672a233)
    VAAI Plugin Name: VMW_VAAIP_EQL
To list the VAAI plug-in claim rules, run the esxcli corestorage claimrule list --claimrule-class=VAAI command. In this example, the VAAI claim rules specify devices that should be claimed by a particular VAAI plug-in.
esxcli corestorage claimrule list --claimrule-class=VAAI
Rule Class  Rule   Class    Type    Plugin          Matches
VAAI        65430  runtime  vendor  VMW_VAAIP_SYMM  vendor=EMC
VAAI        65430  file     vendor  VMW_VAAIP_SYMM  vendor=EMC
VAAI        65431  runtime  vendor  VMW_VAAIP_CX    vendor=DGC
VAAI        65431  file     vendor  VMW_VAAIP_CX    vendor=DGC
Run the VAAI filter claim rules by using the esxcli corestorage claimrule run --claimrule-class=Filter command.
NOTE Only the Filter-class rules need to be run. When the VAAI filter claims a device, it automatically finds the proper VAAI plug-in to attach.
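As a sketch of the complete sequence for a new array, the following commands define a matching Filter-class and VAAI-class claim rule, load both rule classes, and then run only the Filter-class rules. The rule number 321, the vendor string MYVENDOR, and the plug-in name VMW_VAAIP_EXAMPLE are hypothetical placeholders, not values for a particular storage system; use the rule number, vendor string, and VAAI plug-in that apply to your array.

# Rule number, vendor string, and VAAI plug-in name below are placeholders
esxcli <conn_options> corestorage claimrule add --claimrule-class=Filter --rule=321 --type=vendor --vendor=MYVENDOR --plugin=VAAI_FILTER
esxcli <conn_options> corestorage claimrule add --claimrule-class=VAAI --rule=321 --type=vendor --vendor=MYVENDOR --plugin=VMW_VAAIP_EXAMPLE
esxcli <conn_options> corestorage claimrule load --claimrule-class=Filter
esxcli <conn_options> corestorage claimrule load --claimrule-class=VAAI
esxcli <conn_options> corestorage claimrule run --claimrule-class=Filter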
Index
Symbols
* next to path 80
A
access control 15 active-active disk arrays, managing paths 83 active-passive disk arrays managing paths 83 path thrashing 89 adaptive scheme 20 advanced settings Disk.MaxLUN 77 Disk.SchedNumReqOutstanding 90 Disk.SupportSparseLUN 78 allocations, LUN 30 applications, layered 92 array-based solution 93 asterisk next to path 80 authentication 15, 44, 75
B
backups considerations 91 third-party backup package 92 boot from iSCSI SAN configuring HBAs 63 configuring iSCSI settings 64 guidelines 62 hardware iSCSI 63 iBFT 64 preparing SAN 62 software iSCSI 64
C
CHAP disabling 47 for discovery targets 46 for iSCSI initiators 45 for static targets 46 mutual 44 one-way 44 CHAP authentication 15, 44, 75 CHAP authentication methods 44 checklist 99 claim rules 79
commands esxcli corestorage claimrule 102 resxtop 101 vicfg-iscsi 101 vicfg-mpath 101 vmkping 102 configuring dynamic discovery 43 iSCSI storage 51 static discovery 44 current multipathing state 80
D
data digests 15 datastore copies, mounting 93 datastores creating on iSCSI storage 51 displaying 74 managing duplicate 93 mounting 94 paths 80 refreshing 76 reviewing properties 75 unmounting 94 Dell PowerVault MD3000i storage systems 59 dependent hardware iSCSI and associated NICs 34 configuration workflow 32 considerations 33 reviewing adapters 33 diagnostic partitions, sharing 84 disabling paths 82 disaster recovery 16 discovery address 43 dynamic 43 static 44 disk access, equalizing 90 disk arrays active-active 30, 81 active-passive 30, 81, 89 disk shares 20 disk timeout 84 Disk.MaxLUN 77 Disk.SchedNumReqOutstanding 90
Disk.SupportSparseLUN 78 dump partitions, sharing 84 dynamic discovery, configuring 43 dynamic discovery addresses 43
E
educational support 7 EMC CLARiiON 54 EMC Symmetrix, pseudo LUNs 55 equalizing disk access 90 EqualLogic, storage systems 59 ESX/ESXi hosts and iSCSI SAN 71 sharing VMFS 18 esxcli corestorage claimrule command 102 esxcli corestorage command, options 113 ESXi, configuring for net dump 69 EUI 12 EVA (HP StorageWorks) 56
F
failover I/O delay 25 transparent 14 failover paths, status 80 failure, server 27 file-based (VMFS) solution 93 FilerView 57 finding information 17 Fixed path policy, path thrashing 89
H
hardware acceleration about 96 benefits 96 deleting claim rules 112 disabling 97 requirements 96 status 97 hardware acceleration plug-ins 103 hardware iSCSI, and failover 24 hardware iSCSI adapters dependent 13 independent 13 hardware iSCSI initiators changing iSCSI name 32 configuring 31 installing 31 setting up discovery addresses 43 setting up naming parameters 32 viewing 31 header digests 15 high-tier storage 27
I
I/O delay 25, 29 iBFT 64 iBFT iSCSI boot booting an ESXi host 67 changing boot sequence 66 installing an ESXi host 66 limitations 65 networking best practices 67 setting up ESXi 65 troubleshooting 68 IP address 12 IQN 12 iSCSI, with multiple NICs 37 iSCSI adapters about 29 hardware 13 software 13 iSCSI alias 12 iSCSI boot, iBFT 64 iSCSI Boot Firmware Table, See iBFT iSCSI boot parameters, configuring 65 iSCSI HBA, alias 32 iSCSI initiators advanced parameters 48 configuring advanced parameters 49 configuring CHAP 45 hardware 31 setting up CHAP parameters 44 viewing in vSphere Client 71 iSCSI names, conventions 12 iSCSI networking, creating a VMkernel port 37 iSCSI ports 12 iSCSI SAN boot 61 concepts 11 iSCSI sessions adding for a target 50 displaying 50 duplicating 51 managing 49 removing 51 iSCSI storage, adding 51 iSCSI storage systems, working with ESX/ESXi 53 issues performance 88 visibility 75
J
jumbo frames enabling for dependent hardware iSCSI 42 enabling for software iSCSI 42 using with iSCSI 41
L
layered applications 92 LeftHand Networks SAN/iQ storage systems 59 Linux Cluster host type 54 Linux host type 54 load balancing, manual 83 locations of virtual machines 27 loss of network connection, troubleshooting 68 lower-tier storage 27 LUN decisions adaptive scheme 20 predictive scheme 20 LUN discovery, VMkernel 28 LUN not visible, SP visibility 75 LUNs allocations 30 changing number scanned 77 creating and rescan 75, 77 decisions 19 display and rescan 28 making changes and rescan 76 masking 107 multipathing policy 81 one VMFS volume per 29 setting multipathing policy 81 sparse 78
M
maintenance 16 manual load balancing 83 masking LUNs 107 metadata updates 19 mid-tier storage 27 Most Recently Used path policy, path thrashing 89 mounting VMFS datastores 93 MPPs displaying 104 See also multipathing plug-ins MRU path policy 81 MSA (HP StorageWorks) 55 MTU 42 multipathing activating for software iSCSI 40 active paths 80 broken paths 80
disabled paths 80 standby paths 80 viewing the current state of 80 multipathing claim rules adding 105 deleting 107 multipathing plug-ins, path claiming 79 multipathing policy 81 multipathing state 80 mutual CHAP 44
N
NAA 12 Native Multipathing Plug-In 22, 23 net dump configuring ESXi 69 configuring vMA 69 NetApp provisioning storage on CLI 58 provisioning storage on FilerView 57 NetApp storage system 57 network adapters, configuring for iBFT iSCSI boot 65 network performance 86 network virtualization 10 networking, configuring 30 NFS datastores, unmounting 94 NICs, mapping to ports 38 NMP I/O flow 24 path claiming 79 See also Native Multipathing Plug-In
O
one-way CHAP 44 outstanding disk requests 90
P
passive disk arrays, path thrashing 89 path claiming 79 path failover array-based 25 host-based 24 path failure rescan 76, 77 path management 22, 83 path policies changing defaults 82 Fixed 24, 25, 81 Most Recently Used 24, 81 MRU 81 Round Robin 24, 81 Path Selection Plug-Ins 24 path thrashing, resolving 90
paths disabling 82 masking 107 preferred 80 unmasking 108 performance checking Ethernet switch statistics 88 issues 88 network 86 optimizing 85 SCSI reservations 18 storage system 85 plug-ins hardware acceleration 103 multipathing 103 Pluggable Storage Architecture 22 port binding, examples 41 port binding, removing 41 port redirection 25 predictive scheme 20 preferred path 80 prioritizing virtual machines 20 problems performance 88 visibility 75 PSA, See Pluggable Storage Architecture PSPs, See Path Selection Plug-Ins
Q
queue depth 29, 90
R
rescan LUN creation 75, 77 LUN display 28 LUN masking 75 path masking 76, 77 when path is down 76, 77 reservations, reducing SCSI reservations 91 resxtop command 101 Round Robin path policy 24, 81
S
SAN accessing 21 backup considerations 91 benefits 16 server failover 28 specifics 17 troubleshooting 84 SAN management software 17 SAN restrictions, when working with ESX/ESXi 30
SAN storage performance, optimizing 85 SATPs adding rules 108 displaying 105 See also Storage Array Type Plug-Ins scanning, changing number 77 SCSI controllers 10 SCSI reservations, reducing 91 server failover 28 server failure 27 server performance 86 sharing diagnostic partitions 84 sharing VMFS across servers 18 snapshot software 92 software iSCSI and failover 24 networking 36 software iSCSI adapters, queue depth 90 software iSCSI boot, changing settings 68 software iSCSI initiators configuring 34 enabling 35 setting up discovery addresses 43 SP visibility, LUN not visible 75 sparse LUN support 78 static discovery, configuring 44 static discovery addresses 43 storage adapters copying names to clipboard 72 displaying in vSphere Client 72 viewing in vSphere Client 71 storage area network 9 Storage Array Type Plug-Ins 23 storage devices accessible through adapters 74 available to hosts 74 displaying 105 hardware acceleration status 110 identifiers 74 naming 73 paths 81 viewing information 72 storage filters disabling 78 host rescan 79 RDM 79 same host and transports 79 VMFS 79 storage systems Dell PowerVault MD3000i 59 EMC CLARiiON 54 EMC Symmetrix 55 EqualLogic 59
T
targets 13 targets vs. LUNs 13 technical support 7 testing, storage systems 53 third-party backup package 92 third-party management applications 17 TimeoutValue parameter 29 troubleshooting changing iSCSI boot parameters 68 loss of network connection 68
U
use cases 16
V
VAAI claim rules defining 112 deleting 112 VAAI filter 111 VAAI plug-in 111 VAAI filter, displaying 110 VAAI plug-ins displaying 110 listing for devices 111 vicfg-iscsi command 101 vicfg-module 90 vicfg-mpath command 101 virtual machines accessing SAN 21 equalizing disk access 90 I/O delay 25 locations 27 prioritizing 20 virtualization 9 visibility issues 75 vMA, collecting net dump 69 vMA, configuring for net dump 69 VMFS one volume per LUN 29 sharing across ESX/ESXi hosts 18 volume resignaturing 93 VMFS datastores changing signatures 95 resignaturing copies 95 unmounting 94
VMFS volume resignaturing 93 VMkernel, LUN discovery 28 VMkernel interface, with Jumbo Frames enabled 42 VMkernel ports 38 vmkping command 102 vMotion 16, 30, 54 VMware DRS, using with vMotion 30 VMware HA 16, 27, 54 VMware NMP I/O flow 24 See also Native Multipathing Plug-In VMware PSPs, See Path Selection Plug-Ins VMware SATPs, See Storage Array Type Plug-Ins volume resignaturing 93, 95 vSphere CLI 40 vSphere Client 101 vSwitch, with Jumbo Frames enabled 42
W
Windows GOS timeout 84