iSCSI Express Configuration For ESXi Using VSC
Contents
Deciding whether to use this guide
iSCSI configuration workflow
   Verifying that the iSCSI configuration is supported
   Completing the iSCSI configuration worksheet
   Installing Virtual Storage Console
   Adding the storage cluster or SVM to VSC for VMware vSphere
   Configuring your network for best performance
   Configuring host iSCSI ports and vSwitches
   Enabling the iSCSI software adapter
   Binding iSCSI ports to the iSCSI software adapter
   Configuring the ESXi host best practice settings
   Creating an aggregate
   Deciding where to provision the volume
      Verifying that the iSCSI service is running on an existing SVM
      Configuring iSCSI on an existing SVM
      Creating a new SVM
   Testing iSCSI paths from the host to the storage cluster
   Provisioning a datastore and creating its containing LUN and volume
   Verifying that the host can write to and read from the LUN
Where to find additional information
Copyright
Trademark
How to send comments about documentation and receive update notifications
Deciding whether to use this guide
This guide is based on the following assumptions:
• You want to use best practices, not explore every available option.
• You want to use OnCommand System Manager, not the ONTAP command-line interface or an
automated scripting tool.
Cluster management using System Manager
• You are using the native ESX iSCSI software initiator on ESXi 5.x.
• You are using a supported version of Virtual Storage Console for VMware vSphere to configure
storage settings for your ESX host.
• You want to assign addresses to logical interfaces using any of the following methods:
• You have at least two high-speed Ethernet ports (1 GbE minimum, 10 GbE recommended)
available on each node in the cluster.
Onboard UTA2 (also called “CNA”) ports are configurable. You configure those ports in the
ONTAP CLI; that process is not covered in this guide.
See the ONTAP 9 Network Management Guide for information about using the CLI to configure
Ethernet port flow control.
Network and LIF management
• You are not configuring iSCSI SAN boot.
• You are providing storage to VMs through the ESX hypervisor and not running an iSCSI initiator
within the VM.
If these assumptions are not correct for your situation, you should see the following resources:
• SAN administration
• SAN configuration
• Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vSphere
Administration Guide for the 7.2 release
• VMware vSphere Storage for your version of ESX 5 (available from VMware)
• NetApp Documentation: OnCommand Workflow Automation (current releases)
OnCommand Workflow Automation enables you to run prepackaged workflows that automate
management tasks such as the workflows described in Express Guides.
Verifying that the iSCSI configuration is supported
Steps
1. Go to the Interoperability Matrix to verify that you have a supported combination of the following
components:
• ONTAP software
Completing the iSCSI configuration worksheet
Storage configuration
If the aggregate and SVM are already created, record their names here; otherwise, you can create
them as required:
LUN information
LUN size
LUN name (optional)
LUN description (optional)
SVM information
If you are not using an existing SVM, you require the following information to create a new one:
SVM name
SVM IPspace
Aggregate for SVM root volume
SVM user name (optional)
SVM password (optional)
Installing Virtual Storage Console
• Virtual Storage Console is installed as a virtual appliance that includes Virtual Storage Console,
vStorage APIs for Storage Awareness (VASA) Provider, and Storage Replication Adapter (SRA)
for VMware vSphere capabilities.
Steps
1. Download the version of Virtual Storage Console that is supported for your configuration, as
shown in the Interoperability Matrix tool.
NetApp Support
2. Deploy the virtual appliance and configure it following the steps in the Deployment and Setup
Guide.
Adding the storage cluster or SVM to VSC for VMware vSphere
Steps
4. In the Add Storage System dialog box, enter the host name and administrator credentials for the
storage cluster or SVM and then click OK.
Configuring your network for best performance
Steps
2. Select the highest speed ports available, and dedicate them to iSCSI.
10 GbE ports are best. 1 GbE ports are the minimum.
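If you prefer to check port speeds from the ESXi shell rather than the vSphere Client, the following optional command lists the physical NICs and their link speeds (shell access to the host is assumed):

   esxcli network nic list

Choose two NICs of the same, highest available speed and reserve them for iSCSI traffic.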
Configuring host iSCSI ports and vSwitches
Steps
1. Log in to the vSphere Client, and then select the ESXi host from the inventory pane.
3. Click Add Networking, and then select VMkernel and Create a vSphere standard switch to
create the VMkernel port and vSwitch.
4. If you are using jumbo frames, configure the vSwitch with an MTU size of 9000.
5. Repeat the previous step to create a second VMkernel port and vSwitch.
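The same configuration can also be sketched from the ESXi shell with esxcli if you prefer the command line to the vSphere Client wizard. The names vSwitch1, iSCSI-A, vmnic2, vmk1, and the IP address below are placeholders for your environment, and the jumbo-frame commands apply only if your network supports an MTU of 9000 end to end:

   # Create the vSwitch, attach a dedicated uplink, and add a port group for iSCSI
   esxcli network vswitch standard add --vswitch-name=vSwitch1
   esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
   esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A

   # Create the VMkernel port and assign its IP address
   esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
   esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.101 --netmask=255.255.255.0 --type=static

   # Optional jumbo frames: set an MTU of 9000 on both the vSwitch and the VMkernel port
   esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
   esxcli network ip interface set --interface-name=vmk1 --mtu=9000

Repeat with a second vSwitch, uplink, port group, and VMkernel port (for example vSwitch2, vmnic3, iSCSI-B, and vmk2) so that you have two iSCSI paths.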
Enabling the iSCSI software adapter
Steps
4. Select the iSCSI software adapter and click Properties > Configure.
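If you want to confirm or change the setting from the ESXi shell instead of the vSphere Client, the following optional commands enable the software initiator and show the adapter name (typically vmhba3x; the exact name varies by host):

   # Enable the iSCSI software initiator and verify the result
   esxcli iscsi software set --enabled=true
   esxcli iscsi software get

   # List iSCSI adapters to find the software adapter name (for example vmhba33)
   esxcli iscsi adapter list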
Binding iSCSI ports to the iSCSI software adapter
Steps
1. Bind the first iSCSI port to the iSCSI software adapter by using the Network Port Binding tab of
the iSCSI software adapter Adapter Details dialog box in the vSphere Client.
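The same binding, including the second iSCSI port, can be done from the ESXi shell. In this sketch, vmhba33 is a placeholder for your iSCSI software adapter name, and vmk1 and vmk2 are the VMkernel ports created earlier:

   # Bind each iSCSI VMkernel port to the software adapter
   esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
   esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

   # Confirm the bindings
   esxcli iscsi networkportal list --adapter=vmhba33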
Configuring the ESXi host best practice settings
Steps
1. From the VMware vSphere Web Client Home page, click vCenter > Hosts.
2. Right-click the host, and then select Actions > NetApp VSC > Set Recommended Values.
3. In the NetApp Recommended Settings dialog box, ensure that all of the options are selected,
and then click OK.
The vCenter Web Client displays the task progress.
Creating an aggregate
If you do not want to use an existing aggregate, you can create a new aggregate to provide physical
storage to the volume that you are provisioning.
Steps
3. Click Create.
4. Follow the instructions on the screen to create the aggregate using the default RAID-DP
configuration, and then click Create.
Result
The aggregate is created with the specified configuration and added to the list of aggregates in the
Aggregates window.
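If you manage the cluster from the ONTAP CLI instead of System Manager, an equivalent aggregate can be sketched as follows. The aggregate name, node name, and disk count are placeholders, and RAID-DP is the default RAID type:

   cluster1::> storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 13 -raidtype raid_dp
   cluster1::> storage aggregate show -aggregate aggr1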
Deciding where to provision the volume
Choices
• If you want to provision volumes on an SVM that is already configured for iSCSI, you must
verify that the iSCSI service is running.
Verifying that the iSCSI service is running on an existing SVM
• If you want to provision volumes on an existing SVM that has iSCSI enabled but not configured,
configure iSCSI on the existing SVM.
Configuring iSCSI on an existing SVM
This is the case when you followed another Express Guide to create the SVM while configuring a
different protocol.
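If you simply want to confirm from the ONTAP CLI whether iSCSI is already created and running on an existing SVM before choosing an option, the following optional command shows the target name and service status (svm1 is a placeholder for your SVM name):

   cluster1::> vserver iscsi show -vserver svm1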
Configuring iSCSI on an existing SVM
Steps
3. In the SVM Details pane, verify that iSCSI is displayed with a gray background, which indicates
that the protocol is enabled but not fully configured.
If iSCSI is displayed with a green background, the SVM is already configured.
5. Configure the iSCSI service and LIFs from the Configure iSCSI protocol page:
c. Assign IP addresses for the LIFs, either by using a subnet or without using a subnet.
d. Ignore the optional Provision a LUN for iSCSI storage area, because the LUN is provisioned
by Virtual Storage Console for VMware vSphere in a later step.
6. Review the Summary page, record the LIF information, and then click OK.
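For reference, the objects created by the Configure iSCSI protocol page correspond roughly to the following ONTAP CLI commands. The SVM, LIF, node, port, and address values are placeholders, and you create two LIFs per node as noted in the worksheet:

   cluster1::> vserver iscsi create -vserver svm1
   cluster1::> network interface create -vserver svm1 -lif svm1_iscsi_lif1 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0c -address 192.168.1.10 -netmask 255.255.255.0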
Creating a new SVM
• You must have enough network addresses available to create two LIFs for each node.
Steps
2. Click Create.
3. In the Storage Virtual Machine (SVM) Setup window, create the SVM:
d. Select all of the protocols that you have licenses for and that you might use on the SVM, even
if you do not want to configure all of the protocols immediately.
Selecting both NFS and CIFS when you create the SVM enables these two protocols to share
the same LIFs. Adding these protocols later does not allow them to share LIFs.
If CIFS is one of the protocols you selected, then the security style is set to NTFS. Otherwise,
the security style is set to UNIX.
e. Keep the default language setting C.UTF-8.
f. Select the desired root aggregate to contain the SVM root volume.
The aggregate for the data volume is selected separately in a later step.
4. If the Configure CIFS/NFS protocol page appears because you enabled CIFS or NFS, click
Skip and then configure CIFS or NFS later.
5. Configure the iSCSI service and create LIFs from the Configure iSCSI protocol page:
b. Assign IP addresses for the LIFs, either by using a subnet or without using a subnet.
d. Skip the optional Provision a LUN for iSCSI storage area because the LUN is provisioned
by Virtual Storage Console for VMware vSphere in a later step.
6. If the Configure FC/FCoE protocol page appears because you enabled FC, click Skip and then
configure FC later.
7. When the SVM Administration page appears, configure or defer configuring a separate
administrator for this SVM:
• Enter the requested information, and then click Submit & Continue.
8. Review the Summary page, record the LIF information, and then click OK.
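As a rough ONTAP CLI equivalent of the SVM Setup wizard, the core of a new iSCSI-serving SVM looks like the sketch below. All names, the aggregate, and the security style are placeholders (the wizard sets NTFS automatically when CIFS is selected), and the iSCSI service and LIFs are created as shown in the previous section:

   cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix -language C.UTF-8
   cluster1::> vserver iscsi create -vserver svm1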
Testing iSCSI paths from the host to the storage cluster
• By default, only paths from the host to the node containing the storage virtual machine (SVM)
where the LUN was created, and paths to the HA partner of that node, are visible to the host.
• You still must create and test paths from the host to every node in the cluster, but the host can
access only those paths on the owning node and its HA partner.
• You should use the default LUN mapping behavior.
Only add nodes in other HA pairs to the LUN map in preparation for moving the LUN to a
different node.
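This default behavior is called Selective LUN Map. If you want to see which nodes currently report paths for a LUN, you can check from the ONTAP CLI; the SVM and LUN path below are placeholders, and reporting nodes can later be adjusted with the lun mapping add-reporting-nodes and remove-reporting-nodes commands:

   cluster1::> lun mapping show -vserver svm1 -path /vol/datastore1_vol/lun1 -fields reporting-nodes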
Steps
1. From the ESX host, use the ping command to verify the path to the first LIF.
The ping command is available from the ESX service console.
2. Repeat the ping command to verify connectivity to each iSCSI LIF on each node in the cluster.
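For example, you can run the test from the ESXi shell with vmkping, which sends the ping through a specific VMkernel port. The VMkernel port name and LIF address are placeholders; the second command, with -d (do not fragment) and an 8972-byte payload, verifies the jumbo-frame path and applies only if you configured an MTU of 9000:

   # Basic reachability test to one iSCSI LIF
   vmkping -I vmk1 192.168.1.10

   # Jumbo-frame test (only if MTU 9000 is configured end to end)
   vmkping -I vmk1 -d -s 8972 192.168.1.10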
Related information
VMware KB article 1003486: Testing network connectivity with the ping command
Provisioning a datastore and creating its containing LUN and volume
Steps
1. From the vSphere Web Client Home page, click Hosts and Clusters.
2. In the navigation pane, expand the datacenter where you want to provision the datastore.
3. Right-click the ESXi host, and then select NetApp VSC > Provision Datastore.
Alternatively, you can right-click the cluster when provisioning to make the datastore available to
all hosts in the cluster.
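Although VSC creates the volume, LUN, igroup, and mapping for you, the underlying ONTAP objects correspond roughly to the CLI sketch below. This is for reference only; the names, sizes, and initiator IQN are placeholders:

   cluster1::> volume create -vserver svm1 -volume datastore1_vol -aggregate aggr1 -size 500g
   cluster1::> lun create -vserver svm1 -path /vol/datastore1_vol/lun1 -size 450g -ostype vmware
   cluster1::> lun igroup create -vserver svm1 -igroup esxi_hosts -protocol iscsi -ostype vmware -initiator iqn.1998-01.com.vmware:esxi01
   cluster1::> lun mapping create -vserver svm1 -path /vol/datastore1_vol/lun1 -igroup esxi_hosts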
Verifying that the host can write to and read from the LUN
Before using the LUN, you should verify that the host can write data to the LUN and read it back.
Steps
1. On the vSphere Web Client Home page, click Hosts and Clusters.
5. Create a new folder in the datastore and upload a file to the new folder.
You might need to install the Client Integration Plug-in.
6. Verify that you can access the file you just wrote.
7. Optional: Fail over the cluster node containing the LUN and verify that you can still write and
read a file.
If any of the tests fail, verify that the iSCSI service is running on the storage cluster and check the
iSCSI paths to the LUN.
8. Optional: If you failed over the cluster node, be sure to give back the node and return all LIFs to
their home ports.
9. For an ESXi cluster, view the datastore from each ESXi host in the cluster and verify that the file
you uploaded is displayed.
Related information
High-availability configuration
Where to find additional information
• SAN configuration
Describes supported FC, iSCSI, and FCoE topologies for connecting host computers to storage
controllers in clusters.
• SAN administration
Describes how to configure and manage the iSCSI, FCoE, and FC protocols for clustered SAN
environments, including configuration of LUNs, igroups, and targets.
VMware documentation
Documentation about iSCSI for ESXi servers is available directly from VMware.
VMware
• vSphere Storage
This VMware guide describes FC and iSCSI configuration for ESXi 5.x.
Copyright
Copyright © 2019 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to
NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of the
U.S. Government contract under which the Data was delivered. Except as provided herein, the Data
may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written
approval of NetApp, Inc. United States Government license rights for the Department of Defense are
limited to those rights identified in DFARS clause 252.227-7015(b).
Trademark
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks of
NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx