Data ONTAP 8.0 7-Mode Block Access Management Guide for iSCSI and FC
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: [email protected]
Information Web: http://www.netapp.com
Part number: 210-04730_A0
February 2010
Contents

Copyright information ... 11
Trademark information ... 13
About this guide ... 15
  Accessing Data ONTAP man pages ... 15
  Terminology ... 16
  Where to enter commands ... 17
  Keyboard and formatting conventions ... 18
  Special messages ... 19
  How to send your comments ... 19
Enabling and disabling space reservations for LUNs ... 67
Removing LUNs ... 67
Accessing LUNs with NAS protocols ... 68
Checking LUN, igroup, and FC settings ... 68
Displaying LUN serial numbers ... 70
Displaying LUN statistics ... 70
Displaying LUN mapping information ... 71
Displaying detailed LUN information ... 72
Starting the iSCSI service ... 88
Stopping the iSCSI service ... 88
Displaying the target node name ... 89
Changing the target node name ... 89
Displaying the target alias ... 90
Adding or changing the target alias ... 90
iSCSI service management on storage system interfaces ... 91
  Displaying iSCSI interface status ... 91
  Enabling iSCSI on a storage system interface ... 91
  Disabling iSCSI on a storage system interface ... 92
  Displaying the storage system's target IP addresses ... 92
  iSCSI interface access management ... 93
iSNS server registration ... 95
  What an iSNS server does ... 95
  How the storage system interacts with an iSNS server ... 95
  About iSNS service version incompatibility ... 95
  Setting the iSNS service revision ... 96
  Registering the storage system with an iSNS server ... 96
  Immediately updating the iSNS server ... 97
  Disabling iSNS ... 97
  Setting up vFiler units with the iSNS service ... 98
Displaying initiators connected to the storage system ... 98
iSCSI initiator security management ... 99
  How iSCSI authentication works ... 100
  Guidelines for using CHAP authentication ... 100
  Defining an authentication method for an initiator ... 101
  Defining a default authentication method for initiators ... 102
  Displaying initiator authentication methods ... 103
  Removing authentication settings for an initiator ... 103
  iSCSI RADIUS configuration ... 103
Target portal group management ... 109
  Range of values for target portal group tags ... 110
  Important cautions for using target portal groups ... 110
  Displaying target portal groups ... 111
  Creating target portal groups ... 111
  Destroying target portal groups ... 112
  Adding interfaces to target portal groups ... 112
  Removing interfaces from target portal groups ... 113
  Configuring iSCSI target portal groups ... 113
Displaying iSCSI statistics ... 114
  Definitions for iSCSI statistics ... 116
Displaying iSCSI session information ... 118
Displaying iSCSI connection information ... 119
Guidelines for using iSCSI with HA pairs ... 120
  Simple HA pairs with iSCSI ... 120
  Complex HA pairs with iSCSI ... 122
iSCSI problem resolution ... 122
  LUNs not visible on the host ... 122
  System cannot register with iSNS server ... 124
  No multi-connection session ... 124
  Sessions constantly connecting and disconnecting during takeover ... 124
  Resolving iSCSI error messages on the storage system ... 125
Starting and stopping the FC service ... 137
Taking target expansion adapters offline and bringing them online ... 138
Changing the adapter speed ... 138
How WWPN assignments work with FC target expansion adapters ... 140
Changing the system's WWNN ... 143
WWPN aliases ... 143
Managing systems with onboard Fibre Channel adapters ... 145
  Configuring onboard adapters for target mode ... 146
  Configuring onboard adapters for initiator mode ... 148
  Reconfiguring onboard FC adapters ... 149
  Configuring onboard adapters on the FAS270 for target mode ... 150
  Configuring onboard adapters on the FAS270 for initiator mode ... 151
  Commands for displaying adapter information ... 152
Backing up SAN systems to tape ... 187
Using volume copy to copy LUNs ... 190
Copyright information
Copyright 1994-2010 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
All applicable trademark attribution is listed here.

NetApp, the Network Appliance logo, the bolt design, NetApp-the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries.

gFiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries.

The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; Data Motion; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management; LockVault; NOW; ONTAPI; OpenKey; RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; SecureView; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vFiler; VFM (Virtual File Manager); VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries.

NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.
Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks. NetCache is certified RealSystem compatible.
systems. The "7-Mode" in the Data ONTAP 8.0 7-Mode product name means that this release has the features and functionality you are used to if you have been using the Data ONTAP 7.0, 7.1, 7.2, or 7.3 release families. If you are a Data ONTAP 8.0 Cluster-Mode user, use the Data ONTAP 8.0 Cluster-Mode guides plus any Data ONTAP 8.0 7-Mode guides for functionality you might want to access with 7-Mode commands through the nodeshell.
Next topics
Accessing Data ONTAP man pages on page 15
Terminology on page 16
Where to enter commands on page 17
Keyboard and formatting conventions on page 18
Special messages on page 19
How to send your comments on page 19
Data ONTAP manual pages are available for the following types of information. They are grouped into sections according to standard UNIX naming conventions.

Types of information              Man page section
Commands                          1
Special files                     4
File formats and conventions      5
1. View man pages in one of the following ways:

   - Enter the following command at the console command line:

     man command_or_file_name

   - Click the manual pages button on the main Data ONTAP navigational page in the FilerView user interface.
Note: All Data ONTAP 8.0 7-Mode man pages are stored on the system in files whose names are prefixed with the string "na_" to distinguish them from other man pages. The prefixed names sometimes appear in the NAME field of the man page, but the prefixes are not part of the command, file, or service.
Terminology
To understand the concepts in this document, you might need to know how certain terms are used.

Storage terms

array LUN: Refers to storage that third-party storage arrays provide to storage systems running Data ONTAP software. One array LUN is the equivalent of one disk on a native disk shelf.

LUN (logical unit number): Refers to a logical unit of storage identified by a number.

native disk: Refers to a disk that is sold as local storage for storage systems that run Data ONTAP software.

native disk shelf: Refers to a disk shelf that is sold as local storage for storage systems that run Data ONTAP software.

storage controller: Refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

storage system: Refers to the hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, V-Series systems, or systems.

third-party storage: Refers to the back-end storage arrays, such as IBM, Hitachi Data Systems, and HP, that provide storage for storage systems running Data ONTAP.
Cluster and high-availability terms

cluster: In Data ONTAP 8.0 Cluster-Mode, refers to a group of connected nodes (storage systems) that share a global namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits. In the Data ONTAP 7.1 release family and earlier releases, refers to an entirely different functionality: a pair of storage systems (sometimes called nodes) configured to serve data for each other if one of the two systems stops functioning.

HA (high availability): In Data ONTAP 8.0, refers to the recovery capability provided by a pair of nodes (storage systems), called an HA pair, that are configured to serve data for each other if one of the two nodes stops functioning.

HA pair: In Data ONTAP 8.0, refers to a pair of nodes (storage systems) configured to serve data for each other if one of the two nodes stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an active/active configuration.
Where to enter commands

- You can enter commands either at the system console or from any client computer that can obtain access to the storage system using a Telnet or Secure Shell (SSH) session. In examples that illustrate command execution, the command syntax and output shown might differ from what you enter or see displayed, depending on your version of the operating system.
- You can use the FilerView graphical user interface. For information about accessing your system with FilerView, see the Data ONTAP 8.0 7-Mode System Administration Guide.
- You can enter Windows, ESX, HP-UX, AIX, Linux, and Solaris commands at the applicable client console. In examples that illustrate command execution, the command syntax and output shown might differ from what you enter or see displayed, depending on your version of the operating system.
- You can use the client graphical user interface. Your product documentation provides details about how to use the graphical user interface.
- You can enter commands either at the switch console or from any client that can obtain access to the switch using a Telnet session. In examples that illustrate command execution, the command syntax and output shown might differ from what you enter or see displayed, depending on your version of the operating system.
Keyboard conventions

Enter, enter:
- Used to refer to the key that generates a carriage return; the key is named Return on some keyboards.
- Used to mean pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and then typing information into the field.

hyphen (-): Used to separate individual keys. For example, Ctrl-D means holding down the Ctrl key while pressing the D key.

type: Used to mean pressing one or more keys on the keyboard.
Formatting conventions

Italic font:
- Words or characters that require special attention.
- Placeholders for information you must supply. For example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host.
- Book titles in cross-references.

Monospaced font:
- Command names, option names, keywords, and daemon names.
- Information displayed on the system console or other computer monitors.
- Contents of files.
- File, path, and directory names.

Bold monospaced font: Words or characters you type. What you type is always shown in lowercase letters, unless your program is case-sensitive and uppercase letters are necessary for it to work properly.
Special messages
This document might contain the following types of messages to alert you to conditions that you need to be aware of.
Note: A note contains important information that helps you install or operate the system efficiently.
Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.
How hosts connect to storage systems on page 21
How Data ONTAP implements an iSCSI network on page 23
How Data ONTAP implements a Fibre Channel SAN on page 29
What Host Utilities are on page 21
What ALUA is on page 22
About SnapDrive for Windows and UNIX on page 22
Related information
The documentation included with the Host Utilities describes how to install and use the Host Utilities software. It includes instructions for using the commands and features specific to your host operating system. Use the Host Utilities documentation along with this guide to set up and manage your iSCSI or FC network.
Related information
What ALUA is
Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.
Attention: Make sure your host supports ALUA before enabling it. Enabling ALUA for a host that does not support it can cause host failures during cluster failover.
Related tasks
What iSCSI is on page 23
What iSCSI nodes are on page 24
Supported configurations on page 24
How iSCSI nodes are identified on page 25
How the storage system checks initiator node names on page 26
Default port for iSCSI on page 26
What target portal groups are on page 26
What iSNS is on page 27
What CHAP authentication is on page 27
How iSCSI communication sessions work on page 28
How iSCSI works with HA pairs on page 28
Setting up the iSCSI protocol on a host and storage system on page 28
What iSCSI is
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720.

In an iSCSI network, storage systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI protocol is implemented over the storage system's standard gigabit Ethernet interfaces using a software driver.

The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or it can be your regular public network. The storage system listens for iSCSI connections on TCP port 3260.
Related information
Supported configurations
Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and switched configurations use Ethernet cable and a TCP/IP network for connectivity.
Next topics
How iSCSI is implemented on the host on page 24
How iSCSI target nodes connect to the network on page 24
Related information
NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/
Fibre Channel and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/docs.cgi
How iSCSI is implemented on the host

iSCSI can be implemented on the host in hardware or software. You can implement iSCSI in one of the following ways:

- Initiator software that uses the host's standard Ethernet interfaces.
- An iSCSI host bus adapter (HBA). An iSCSI HBA appears to the host operating system as a SCSI disk adapter with local disks.
- A TCP Offload Engine (TOE) adapter that offloads TCP/IP processing. The iSCSI protocol processing is still performed by host software.
How iSCSI target nodes connect to the network

You can implement iSCSI on the storage system using software or hardware solutions, depending on the model. Target nodes can connect to the network in the following ways:

- Over the system's Ethernet interfaces using software that is integrated into Data ONTAP. iSCSI can be implemented over multiple system interfaces, and an interface used for iSCSI can also transmit traffic for other protocols, such as CIFS and NFS.
- On the FAS2000 series, FAS30xx, and FAS60xx systems, using an iSCSI target expansion adapter, to which some of the iSCSI protocol processing is offloaded.
- Using a Fibre Channel over Ethernet (FCoE) target expansion adapter.

You can implement both hardware-based and software-based methods on the same system.
iqn-type designator on page 25
Storage system node name on page 26
eui-type designator on page 26
iqn-type designator

The iqn-type designator is a logical name that is not linked to an IP address. It is based on the following components:

- The type designator itself, iqn, followed by a period (.)
- The date when the naming authority acquired the domain name, followed by a period
- The name of the naming authority, optionally followed by a colon (:)
- A unique device name
Note: Some initiators might provide variations on the preceding format. Also, even though some hosts do support dashes in the file name, they are not supported on NetApp systems. For detailed information about the default initiator-supplied node name, see the documentation provided with your iSCSI Host Utilities.

The format is: iqn.yyyy-mm.backward-naming-authority:unique-device-name
yyyy-mm is the year and month in which the naming authority acquired the domain name.
backward-naming-authority is the reverse domain name of the entity responsible for naming this device. An example reverse domain name is com.microsoft.
unique-device-name is a free-format unique name for this device assigned by the naming authority.

The following example shows the iSCSI node name for an initiator that is an application server:

iqn.1987-06.com.initvendor1:123abc
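The components of the iqn format can be assembled mechanically. The following Python sketch builds a node name from a reversed domain and an acquisition date; the vendor and device names are the illustrative examples from the text, not real identifiers:

```python
import re
from datetime import date

def make_iqn(acquired: date, domain: str, device: str) -> str:
    """Build an iqn-type node name: iqn.yyyy-mm.<reversed-domain>:<device>."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    return f"iqn.{acquired:%Y-%m}.{reversed_domain}:{device}"

# "initvendor1" and "123abc" come from the example above.
name = make_iqn(date(1987, 6, 1), "initvendor1.com", "123abc")
print(name)  # iqn.1987-06.com.initvendor1:123abc

# A loose structural check of the iqn format described above.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")
assert IQN_RE.match(name)
```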
26 | Data ONTAP 8.0 7-Mode Block Access Management Guide for iSCSI and FC
Storage system node name

Each storage system has a default node name based on a reverse domain name and the serial number of the storage system's non-volatile RAM (NVRAM) card. The node name is displayed in the following format:

iqn.1992-08.com.netapp:sn.serial-number

The following example shows the default node name for a storage system with the serial number 12345678:

iqn.1992-08.com.netapp:sn.12345678

eui-type designator

The eui-type designator is based on the type designator, eui, followed by a period, followed by sixteen hexadecimal digits. The format is:

eui.0123456789abcdef
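Both formats are regular enough to check programmatically. A minimal Python sketch, reusing the serial number and eui example shown above:

```python
import re

def default_node_name(serial: str) -> str:
    """Default target node name, per the format shown above."""
    return f"iqn.1992-08.com.netapp:sn.{serial}"

print(default_node_name("12345678"))  # iqn.1992-08.com.netapp:sn.12345678

# eui-type designator: "eui." followed by exactly sixteen hexadecimal digits.
EUI_RE = re.compile(r"^eui\.[0-9a-f]{16}$")
assert EUI_RE.match("eui.0123456789abcdef")
assert not EUI_RE.match("eui.12345")  # too short to be a valid eui name
```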
All connections within an iSCSI session must use target portals that belong to the same portal group.
By default, Data ONTAP maps each Ethernet interface on the storage system to its own default portal group. You can create new portal groups that contain multiple interfaces. You can have only one session between an initiator and target using a given portal group. To support some multipath I/O (MPIO) solutions, you need to have separate portal groups for each path. Other initiators, including the Microsoft iSCSI initiator version 2.0, support MPIO to a single target portal group by using different initiator session IDs (ISIDs) with a single initiator node name.
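The default one-group-per-interface mapping, and the ability to combine interfaces into a user-defined group, can be sketched as a toy Python model (the interface and group names are invented for illustration, not taken from any particular system):

```python
# Each Ethernet interface starts in its own default portal group.
portal_groups = {iface: {iface} for iface in ("e0a", "e0b", "e4a")}
assert portal_groups["e0a"] == {"e0a"}

# A user-defined group can contain multiple interfaces, so an initiator
# treats them as one path group for session purposes.
portal_groups["tpg_mpio"] = portal_groups.pop("e0a") | portal_groups.pop("e0b")
assert portal_groups["tpg_mpio"] == {"e0a", "e0b"}
```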
Note: Although this configuration is supported, it is not recommended for NetApp storage systems. For more information, see the Technical Report on iSCSI Multipathing.

Related information
What iSNS is
The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the iSNS server when the iSNS service is started. The iSCSI initiator can query the iSNS server to discover the storage system as a target device. If you do not have an iSNS server on your network, you must manually configure each target to be visible to the host. Currently available iSNS servers support different versions of the iSNS specification. Depending on which iSNS server you are using, you may have to set a configuration parameter in the storage system.
Setting up the iSCSI protocol on a host and storage system

You must alternate between setting up the host and the storage system in the order shown below.
Steps
1. Install the initiator HBA and driver or software initiator on the host and record or change the host's iSCSI node name.
   It is recommended that you use the host name as part of the initiator node name to make it easier to associate the node name with the host.
2. Configure the storage system, including:
   - Licensing and starting the iSCSI service
   - Optionally configuring CHAP
   - Creating LUNs, creating an igroup that contains the host's iSCSI node name, and mapping the LUNs to that igroup
Note: If you are using SnapDrive, do not manually configure LUNs. Configure them using SnapDrive after it is installed.
3. Configure the initiator on the host, including:
   - Setting initiator parameters, including the IP address of the target on the storage system
   - Optionally configuring CHAP
   - Starting the iSCSI service
4. Access the LUNs from the host, including:
   - Creating file systems on the LUNs and mounting them, or configuring the LUNs as raw devices
   - Creating persistent mappings of LUNs to file systems
What FC is on page 29
What FC nodes are on page 30
How FC target nodes connect to the network on page 30
How FC nodes are identified on page 30
Related concepts
What FC is
FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over a Fibre Channel fabric.
Related concepts
How WWPNs are used on page 30
How storage systems are identified on page 31
About system serial numbers on page 31
How hosts are identified on page 31
How switches are identified on page 32
How WWPNs are used WWPNs identify each port on an adapter. WWPNs are used for the following purposes: Creating an initiator group The WWPNs of the hosts HBAs are used to create an initiator group (igroup). An igroup is used to control host access to specific LUNs. You create an igroup by specifying a collection of WWPNs of initiators in an FC network. When you map a LUN on a storage system to an igroup, you grant all the initiators in that group access to that LUN. If a hosts WWPN is not in an igroup that is mapped to a LUN, that host does not have access to the LUN. This means that the LUNs do not appear as disks on that host. You can also create port sets to make a LUN visible only on specific target ports. A port set consists of a group of FC target ports. You bind a port set to an igroup. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. Uniquely identifying a storage systems HBA target ports The storage systems WWPNs uniquely identify each target port on the system. The host operating system uses the combination of the WWNN and WWPN to identify storage system
adapters and host target IDs. Some operating systems require persistent binding to ensure that the LUN appears at the same target ID on the host.
Related concepts
Required information for mapping a LUN to an igroup on page 54
How to make LUNs available on specific FC target ports on page 56
How storage systems are identified
When the FCP service is first initialized, it assigns a WWNN to a storage system based on the serial number of its NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a unique WWPN. Both the WWNN and the WWPNs are 64-bit addresses represented in the following format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value.
You can use commands such as fcp show adapter, fcp config, sysconfig -v, fcp nodename, or FilerView to see the system's WWNN as FC Nodename or nodename, or the system's WWPN as FC portname or portname.
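As an illustration of the nn:nn:nn:nn:nn:nn:nn:nn format described above, the following sketch validates such an address and converts it to its 64-bit integer value. This is a hypothetical helper for illustration only, not part of Data ONTAP:

```python
import re

# A WWNN/WWPN is a 64-bit address written as eight colon-separated
# hexadecimal octets, for example 20:00:00:60:69:51:06:b4.
WWN_PATTERN = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

def wwn_to_int(wwn: str) -> int:
    """Validate a WWNN/WWPN string and return it as a 64-bit integer."""
    if not WWN_PATTERN.match(wwn):
        raise ValueError(f"not a valid WWN: {wwn!r}")
    return int(wwn.replace(":", ""), 16)

print(hex(wwn_to_int("20:00:00:60:69:51:06:b4")))
```

A tool comparing WWPNs reported by `fcp show adapter` against an igroup definition could normalize both sides this way before matching.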
Attention: The target WWPNs might change if you add or remove adapters from the storage system.

About system serial numbers
The storage system also has a unique system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some multipathing software products use the system serial number together with the LUN serial number to identify a LUN.

How hosts are identified
You use the fcp show initiator command to see all of the WWPNs, and any associated aliases, of the FC initiators that have logged on to the storage system. Data ONTAP displays the WWPN as Portname.
To know which WWPNs are associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications, and for UNIX hosts, use the sanlun command.
How switches are identified
Fibre Channel switches have one WWNN for the device itself, and one WWPN for each of its ports. For example, the following diagram shows how the WWPNs are assigned to each of the ports on a 16-port Brocade switch. For details about how the ports are numbered for a particular switch, see the vendor-supplied documentation for that switch.
Port 0, WWPN 20:00:00:60:69:51:06:b4
Port 1, WWPN 20:01:00:60:69:51:06:b4
Port 14, WWPN 20:0e:00:60:69:51:06:b4
Port 15, WWPN 20:0f:00:60:69:51:06:b4
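In the Brocade example above, the port number is encoded in the second octet of the port WWPN (20:0e corresponds to port 14). A minimal sketch of extracting it, assuming this vendor-specific convention holds for the switch in question (check your switch documentation before relying on it):

```python
def switch_port_from_wwpn(wwpn: str) -> int:
    """Extract the port number from a switch-port WWPN of the form
    20:NN:<switch-specific suffix>, as in the Brocade example above."""
    octets = wwpn.lower().split(":")
    if len(octets) != 8:
        raise ValueError(f"not a valid WWPN: {wwpn!r}")
    return int(octets[1], 16)  # second octet carries the port number

for wwpn in ("20:00:00:60:69:51:06:b4", "20:0e:00:60:69:51:06:b4"):
    print(wwpn, "-> port", switch_port_from_wwpn(wwpn))
```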
Storage provisioning
When you create a volume, you must estimate the amount of space you need for LUNs and Snapshot copies. You must also determine the amount of space you want to reserve so that applications can continue to write data to the LUNs in the volume.
Next topics
Storage units for managing disk space on page 33
What autodelete is on page 34
What space reservation is on page 35
What fractional reserve is on page 35
Guidelines for provisioning storage in a SAN environment on page 37
About LUNs, igroups, and LUN maps on page 47
Ways to create LUNs, create igroups, and map LUNs to igroups on page 57
Creating LUNs on vFiler units for MultiStore on page 59
Storage units for managing disk space
The aggregate is the physical layer of storage that consists of the disks within the Redundant Array of Independent Disks (RAID) groups and the plexes that contain the RAID groups. A plex is a collection of one or more RAID groups that together provide the storage for one or more Write Anywhere File Layout (WAFL) file system volumes.
Data ONTAP uses plexes as the unit of RAID-level mirroring when the SyncMirror software is enabled. An aggregate is a collection of one or two plexes, depending on whether you want to take advantage of RAID-level mirroring. If the aggregate is unmirrored, it contains a single plex. Aggregates provide the underlying physical storage for traditional and FlexVol volumes.
A traditional volume is directly tied to the underlying aggregate and its properties. When you create a traditional volume, Data ONTAP creates the underlying aggregate based on the properties you assign with the vol create command, such as the disks assigned to the RAID group and RAID-level protection.
A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate. Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the underlying physical storage. You do not have to manipulate the aggregate frequently.
You use either traditional or FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs.
You create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The default root volume is /vol/vol0. See the Storage Management Guide for more information.
Related information
What autodelete is
Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based on a definable threshold. You can set that threshold, or trigger, to automatically delete Snapshot copies when:
- The volume is nearly full
- The snap reserve space is nearly full
- The overwrite reserved space is full
Using autodelete is recommended in most SAN configurations. See the Data ONTAP Data Protection Online Backup and Recovery Guide for more information on using autodelete to automatically delete Snapshot copies. Also see the Technical Report on thin provisioning below for additional details.
Related tasks
Configuring volumes and LUNs when using autodelete on page 42
Estimating how large a volume needs to be when using autodelete on page 38
Related information
Technical Report: Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment - http://media.netapp.com/documents/tr3483.pdf
For example, if you create a 100-GB space-reserved LUN in a 500-GB volume, that 100 GB of space is immediately allocated, leaving 400 GB remaining in the volume. In contrast, if space reservation is disabled on the LUN, all 500 GB in the volume remain available until writes are made to the LUN. Space reservation is an attribute of the LUN; it is persistent across storage system reboots, takeovers, and givebacks. Space reservation is enabled for new LUNs by default, but you can create a LUN with space reservations disabled or enabled. After you create the LUN, you can change the space reservation attribute by using the lun set reservation command. When a volume contains one or more LUNs with space reservation enabled, operations that require free space, such as the creation of Snapshot copies, are prevented from using the reserved space. If these operations do not have sufficient unreserved free space, they fail. However, writes to the LUNs with space reservation enabled will continue to succeed.
Related tasks
volume is set to none, then fractional reserve for that volume can be set to the desired value. For the vast majority of configurations, you should set fractional reserve to zero when the guarantee option is set to none because it greatly simplifies space management.
If fractional reserve is set to 100%, when you create space-reserved LUNs, you can be sure that writes to those LUNs will always succeed without deleting Snapshot copies, even if all of the space-reserved LUNs are completely overwritten.
Setting fractional reserve to less than 100 percent causes the space reservation held for all space-reserved LUNs in that volume to be reduced to that percentage. Writes to the space-reserved LUNs in that volume are no longer unequivocally guaranteed, which is why you should use snap autodelete or vol autogrow for these volumes.
Fractional reserve is generally used for volumes that hold LUNs with a small percentage of data overwrite.
Note: If you are using fractional reserve in environments in which write errors due to lack of available space are unexpected, you must monitor your free space and take corrective action to avoid write errors. Data ONTAP provides tools for monitoring available space in your volumes.

Note: Reducing the space reserved for overwrites (by using fractional reserve) does not affect the size of the space-reserved LUN. You can write data to the entire size of the LUN. The space reserved for overwrites is used only when the original data is overwritten.

Example
If you create a 500-GB space-reserved LUN, then Data ONTAP ensures that 500 GB of free space always remains available for that LUN to handle writes to the LUN. If you then set fractional reserve to 50 for the LUN's containing volume, then Data ONTAP reserves 250 GB, or half of the space it was previously reserving for overwrites with fractional reserve set to 100. If more than half of the LUN is overwritten, then subsequent writes to the LUN could fail due to insufficient free space in the volume.
Note: When more than one LUN in the same volume has space reservations enabled, and fractional reserve for that volume is set to less than 100 percent, Data ONTAP does not limit any space-reserved LUN to its percentage of the reserved space. In other words, if you have two 100-GB LUNs in the same volume with fractional reserve set to 30, one of the LUNs could use up the entire 60 GB of reserved space for that volume.
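The note above can be made concrete with a little arithmetic. This is an illustrative sketch of the accounting Data ONTAP performs internally; the function name is hypothetical:

```python
def overwrite_reserve_gb(total_reserved_lun_gb: float,
                         fractional_reserve_pct: float) -> float:
    """Space held for overwrites in a volume. Fractional reserve applies
    to the combined size of all space-reserved LUNs in the volume, not
    to each LUN individually."""
    return total_reserved_lun_gb * fractional_reserve_pct / 100

# Two 100-GB LUNs with fractional reserve set to 30: 60 GB is reserved
# for the volume as a whole, and one busy LUN may consume all of it.
print(overwrite_reserve_gb(100 + 100, 30))   # -> 60.0

# The 500-GB LUN example above, with fractional reserve set to 50.
print(overwrite_reserve_gb(500, 50))         # -> 250.0
```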
See the Technical Report on thin provisioning for detailed information on using fractional reserve.
Related tasks
Configuring volumes and LUNs when using autodelete on page 42
Estimating how large a volume needs to be when using fractional reserve on page 39
Related information
Technical Report: Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment - http://media.netapp.com/documents/tr3483.pdf
In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN environment, it usually makes more sense to use autodelete (and sometimes autosize). In addition, this method is far simpler than using fractional reserve. When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage other than 100%, then the calculation becomes more complex. In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is required for data, you need not worry about running out of space for data. For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional reserve is unnecessary.
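The X + X + Delta versus X + Delta comparison above can be sketched as simple arithmetic (illustrative helpers, not Data ONTAP commands):

```python
def volume_size_fractional_reserve(lun_gb: float, snapshot_gb: float,
                                   fractional_reserve_pct: float = 100) -> float:
    """X + (fractional reserve share of X) + Delta."""
    return lun_gb + lun_gb * fractional_reserve_pct / 100 + snapshot_gb

def volume_size_autodelete(lun_gb: float, snapshot_gb: float) -> float:
    """X + Delta: older Snapshot copies are deleted automatically when
    space runs low, so no separate overwrite reserve is needed."""
    return lun_gb + snapshot_gb

# The 50-GB LUN example from the text, with 50 GB of snapshot data:
print(volume_size_fractional_reserve(50, 50))  # -> 150.0
print(volume_size_autodelete(50, 50))          # -> 100
```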
Note: Refer to the Technical Report on thin provisioning below for detailed guidelines on using fractional reserve.
Finally, you should follow these important guidelines when creating traditional or FlexVol volumes that contain LUNs, regardless of which provisioning method you choose:
- Do not create any LUNs in the system's root volume. Data ONTAP uses this volume to administer the storage system. The default root volume is /vol/vol0.
- Ensure that no other files or directories exist in a volume that contains a LUN. If this is not possible and you are storing LUNs and files in the same volume, use a separate qtree to contain the LUNs.
- If multiple hosts share the same volume, create a qtree on the volume to store all LUNs for the same host. This is a recommended best practice that simplifies LUN administration and tracking.
- Ensure that the volume option create_ucode is set to on.
- Make the required changes to the Snapshot copy default settings. Change the snap reserve setting for the volume to 0, set the snap schedule so that no controller-based Snapshot copies are taken, and delete all Snapshot copies after you create the volume.
- To simplify management, use naming conventions for LUNs and volumes that reflect their ownership or the way that they are used.
See the Data ONTAP Storage Management Guide for more information on creating volumes.
Next topics
Estimating how large a volume needs to be when using autodelete on page 38
Estimating how large a volume needs to be when using fractional reserve on page 39
Configuring volumes and LUNs when using autodelete on page 42
Related information
Data ONTAP documentation on NOW - http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Technical Report: Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment - http://media.netapp.com/documents/tr3483.pdf
1. Calculate the Rate of Change (ROC) of your data per day.
This value depends on how often you overwrite data. It is expressed as GB per day.
2. Calculate the amount of space you need for Snapshot copies by multiplying your ROC by the number of Snapshot copies you intend to keep.
Space required for Snapshot copies = ROC × number of Snapshot copies
Example
You need a 200-GB LUN, and you estimate that your data changes at a rate of about 10 percent, or 20 GB each day. You want to take one Snapshot copy each day and want to keep three weeks' worth of Snapshot copies, for a total of 21 Snapshot copies. The amount of space you need for Snapshot copies is 21 × 20 GB, or 420 GB.
3. Calculate the required volume size by adding together the total data size and the space required for Snapshot copies.
Volume size calculation example
The following example shows how to calculate the size of a volume based on the following information:
- You need to create two 200-GB LUNs. The total LUN size is 400 GB.
- Your data changes at a rate of 10 percent of the total LUN size each day. Your ROC is 40 GB per day (10 percent of 400 GB).
- You take one Snapshot copy each day and you want to keep the Snapshot copies for 10 days. You need 400 GB of space for Snapshot copies (40 GB ROC × 10 Snapshot copies).
- You want to ensure that you can continue to write to the LUNs through the weekend, even after you take the last Snapshot copy and you have no more free space.
You would calculate the size of your volume as follows: Volume size = Total data size + Space required for Snapshot copies. The size of the volume in this example is 800 GB (400 GB + 400 GB).
See the Data Protection Online Backup and Recovery Guide for more information about the autodelete function, and refer to the Storage Management Guide for more information about working with traditional and FlexVol volumes.
Related information
1. Estimate the size of the data that will be contained in the volume as described in "Calculating the total data size."
2. Compute the estimated volume size using the appropriate procedure depending on whether you are using Snapshot copies.
If you are using Snapshot copies, use this procedure: Determining the volume size and fractional reserve setting when you need Snapshot copies.
If you are not using Snapshot copies, use this procedure: Determining the volume size when you do not need Snapshot copies.

Next topics
Calculating the total data size on page 40
Determining the volume size and fractional reserve setting when you need Snapshot copies on page 40
Determining the volume size when you do not need Snapshot copies on page 42
Calculating the total data size
Determining the total data size (the sum of the sizes of all of the space-reserved LUNs in the volume) helps you estimate how large a volume needs to be.
Steps
1. Calculate the total size of all of the space-reserved LUNs you need to create.
Example
If you know your database needs two 20-GB disks, you must create two 20-GB space-reserved LUNs. The total LUN size in this example is 40 GB.
2. Add in whatever amount of space you want to allocate for the non-space-reserved LUNs.
Note: This amount can vary, depending on the amount of space you have available and how
Determining the volume size and fractional reserve setting when you need Snapshot copies
The required volume size for a volume when you need Snapshot copies depends on several factors, including how much your data changes, how long you need to keep Snapshot copies, and how much data the volume is required to hold.
Steps
1. Calculate the Rate of Change (ROC) of your data per day.
This value depends on how often you overwrite data. It is expressed as GB per day.
2. Calculate the amount of space you need for Snapshot copies by multiplying your ROC by the number of days you want to keep Snapshot copies.
Space required for Snapshot copies = ROC × number of days the Snapshot copies will be kept
Example
You need a 20-GB LUN, and you estimate that your data changes at a rate of about 10 percent, or 2 GB each day. You want to take one Snapshot copy each day and want to keep three weeks' worth of Snapshot copies, for a total of 21 Snapshot copies. The amount of space you need for Snapshot copies is 21 × 2 GB, or 42 GB.
3. Determine how much space you need for overwrites by multiplying your ROC by the amount of time, in days, you want to keep Snapshot copies before deleting.
Space required for overwrites = ROC × number of days you want to keep Snapshot copies before deleting
Example
You have a 20-GB LUN and your data changes at a rate of 2 GB each day. You want to ensure that write operations to the LUNs do not fail for three days after you take the last Snapshot copy. You need 2 GB × 3, or 6 GB of space reserved for overwrites to the LUNs.
4. Calculate the required volume size by adding together the total data size, the space required for Snapshot copies, and the space required for overwrites.
Volume size = Total data size + space required for Snapshot copies + space required for overwrites
5. Calculate the fractional reserve value you must use for this volume by dividing the size of the space required for overwrites by the total size of the space-reserved LUNs in the volume.
Fractional reserve = space required for overwrites ÷ total data size
Example
You have a 20-GB LUN. You require 6 GB for overwrites. Six GB is 30 percent of the total LUN size, so you must set your fractional reserve to 30.
Volume size calculation example
The following example shows how to calculate the size of a volume based on the following information:
- You need to create two 50-GB LUNs. The total LUN size is 100 GB.
- Your data changes at a rate of 10 percent of the total LUN size each day. Your ROC is 10 GB per day (10 percent of 100 GB).
- You take one Snapshot copy each day and you want to keep the Snapshot copies for 10 days. You need 100 GB of space for Snapshot copies (10 GB ROC × 10 Snapshot copies).
- You want to ensure that you can continue to write to the LUNs through the weekend, even after you take the last Snapshot copy and you have no more free space. You need 20 GB of space reserved for overwrites (10 GB per day ROC × 2 days). This means you must set fractional reserve to 20 percent (20 GB = 20 percent of 100 GB).
You would calculate the size of your volume as follows: Volume size = Total data size + Space required for Snapshot copies + Space required for overwrites. The size of the volume in this example is 220 GB (100 GB + 100 GB + 20 GB).
Note: This volume size requires that you set the fractional reserve setting for the new volume to 20. If you leave fractional reserve at 100 to ensure that writes could never fail, then you need to increase the volume size by 80 GB to accommodate the extra space needed for overwrites (100 GB rather than 20 GB).
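Steps 1 through 5 above can be sketched as a small sizing function. This is an illustrative planning aid under the assumptions in the example (one Snapshot copy per day, constant ROC); the function name is hypothetical:

```python
def plan_volume(total_lun_gb: float, roc_gb_per_day: float,
                snapshots_kept: int, overwrite_days: int):
    """Return (volume size in GB, fractional reserve %) for a volume
    that needs Snapshot copies, following steps 1-5 above."""
    snapshot_gb = roc_gb_per_day * snapshots_kept          # step 2
    overwrite_gb = roc_gb_per_day * overwrite_days         # step 3
    volume_gb = total_lun_gb + snapshot_gb + overwrite_gb  # step 4
    fractional_reserve_pct = 100 * overwrite_gb / total_lun_gb  # step 5
    return volume_gb, fractional_reserve_pct

# Two 50-GB LUNs, 10% daily change, 10 Snapshot copies, 2 days of overwrites:
print(plan_volume(100, 10, 10, 2))  # -> (220, 20.0)
```

Running it on the single 20-GB LUN example (2 GB/day, 21 copies, 3 days of overwrites) gives a 68-GB volume with fractional reserve 30, matching the calculation in the text.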
Determining the volume size when you do not need Snapshot copies
If you are not using Snapshot copies, the size of your volume depends on the size of the LUNs and whether you are using traditional or FlexVol volumes.
Before you determine that you do not need Snapshot copies, verify the method for protecting data in your configuration. Most data protection methods, such as SnapRestore, SnapMirror, SnapManager for Microsoft Exchange or Microsoft SQL Server, SyncMirror, dump and restore, and ndmpcopy methods rely on Snapshot copies. If you are using any of these methods, you cannot use this procedure to estimate volume size.
Note: Host-based backup methods do not require Snapshot copies.

Step
1. Use the following method to determine the required size of your volume, depending on your volume type.
If you are estimating a FlexVol volume, the volume should be at least as large as the size of the data to be contained by the volume.
If you are estimating a traditional volume, the volume should contain enough disks to hold the size of the data to be contained by the volume.
Example
If you need a traditional volume to contain two 200-GB LUNs, you should create the volume with enough disks to provide at least 400 GB of storage capacity.
- You do not want your volumes to affect any other volumes in the aggregate. For example, if you want to use the available space in an aggregate as a shared pool of storage for multiple volumes or applications, use the autosize option instead. Autosize is disabled under this configuration.
- Ensuring availability of your LUNs is more important to you than maintaining old Snapshot data.
To configure volumes and LUNs using autodelete, complete these tasks in order:
1. Complete the procedure below.
2. Make the necessary changes to the Snapshot copy default settings.
3. Verify that the create_ucode volume option is enabled.
4. Enable the create_ucode volume option, if it is not already enabled.
Steps
1. Create your volumes according to the guidelines in the Storage Management Guide.
2. Set the space guarantee on the volumes by entering the following command:
vol options vol_name guarantee volume
4. Set fractional reserve to 0%, if it is not already, by entering the following command:
vol options vol_name fractional_reserve 0
5. Set the Snapshot reserve to zero percent by entering the following command:
snap reserve vol_name 0
The Snapshot space and application data are now combined into one large storage pool.
6. Configure Snapshot copies to be automatically deleted when the volume reaches the capacity threshold percentage by entering the following command:
snap autodelete vol_name trigger volume

Note: The capacity threshold percentage is based on the size of the volume. Refer to the Data ONTAP Data Protection Online Backup and Recovery Guide for more details.
This enables Data ONTAP to begin deleting Snapshot copies, starting with the oldest first, to free up space for application data.
8. Create your space-reserved LUNs.
After you finish
Make sure you make the required changes to the Snapshot copy default settings.
Note: Refer to the Data ONTAP Data Protection Online Backup and Recovery Guide for more information on options related to Snapshot copies, and refer to the Storage Management Guide for more information on volume options.

Next topics
Required changes to Snapshot copy default settings on page 44
Verifying the create_ucode volume option on page 46
Enabling the create_ucode volume option on page 47
Related tasks
Creating LUNs, creating igroups, and mapping LUNs using individual commands on page 58
Related information
Because the internal scheduling mechanism for taking Snapshot copies within Data ONTAP has no means of ensuring that the data within a LUN is in a consistent state, it is recommended that you change these Snapshot copy settings by performing the following tasks:
- Turn off the automatic Snapshot copy schedule.
- Delete all existing Snapshot copies.
- Set the percentage of space reserved for Snapshot copies to zero.
Turning off the automatic Snapshot copy schedule on page 45
Deleting all existing Snapshot copies in a volume on page 45
1. To turn off the automatic Snapshot copy schedule, enter the following command:
snap sched volname 0 0 0

Example
snap sched vol1 0 0 0
This command turns off the Snapshot copy schedule because there are no weekly, nightly, or hourly Snapshot copies scheduled. You can still take Snapshot copies manually by using the snap command. 2. To verify that the automatic Snapshot copy schedule is off, enter the following command:
snap sched [volname]

Example
snap sched vol1
Deleting all existing Snapshot copies in a volume
When creating volumes that contain LUNs, delete all existing Snapshot copies in the volume.
Step
Setting the percentage of snap reserve space to zero
When creating volumes that contain LUNs, set the percentage of space reserved for Snapshot copies to zero.
Steps
Example
snap reserve vol1 0

Note: For volumes that contain LUNs and no Snapshot copies, it is recommended that you set the percentage to zero.
Verifying the create_ucode volume option
Use the vol status command to verify that the create_ucode volume option is enabled.
Step
1. To verify that the create_ucode option is enabled (on), enter the following command:
vol status [volname] -v

Example
vol status vol1 -v

Note: If you do not specify a volume, the status of all volumes is displayed.
The following output example shows that the create_ucode option is on:
Volume vol1
State   online
Status  normal
Options nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, raidsize=8, nvfail=off, snapmirrored=off, resyncsnaptime=60, create_ucode=on, convert_ucode=off, maxdirsize=10240, fs_size_fixed=off, create_reserved=on, raid_type=RAID4
Plex /vol/vol1/plex0: online, normal, active
RAID group /vol/vol1/plex0/rg0: normal
Enabling the create_ucode volume option
Data ONTAP requires that the path of a volume or qtree containing a LUN is in the Unicode format. This option is off by default when you create a volume. It is important to enable this option for volumes that will contain LUNs.
Step

1. To enable the create_ucode option, enter the following command:
vol options volname create_ucode on

Example
vol options vol1 create_ucode on
Next topics
Information required to create a LUN on page 48
What igroups are on page 51
Required information for creating igroups on page 52
What LUN mapping is on page 53
Required information for mapping a LUN to an igroup on page 54
Guidelines for mapping LUNs to igroups on page 54
Mapping read-only LUNs to hosts at SnapMirror destinations on page 55
How to make LUNs available on specific FC target ports on page 56
Guidelines for LUN layout and space allocation on page 56
Path name of the LUN on page 48
Name of the LUN on page 48
LUN Multiprotocol Type on page 48
LUN size on page 50
LUN description on page 50
LUN identification number on page 51
Space reservation setting on page 51
Path name of the LUN
The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located. Do not create LUNs in the root volume. The default root volume is /vol/vol0. For clustered storage system configurations, it is recommended that you distribute LUNs across the cluster.
Note: You might find it useful to provide a meaningful path name for the LUN. For example, you might choose a name that describes how the LUN is used, such as the name of the application, the type of data that it stores, or the user accessing the data. Examples are /vol/database/lun0, /vol/finance/lun1, and /vol/bill/lun2.
Name of the LUN
The name of the LUN is case-sensitive and can contain 1 to 256 characters. You cannot use spaces. LUN names can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), left brace ({), right brace (}), and period (.).

LUN Multiprotocol Type
The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008, hpux, aix, linux, netware, xen, hyper_v, and vmware.
The following guidelines describe when to use each LUN Multiprotocol Type:
solaris: Use if your host operating system is Solaris and you are not using Solaris EFI labels.
solaris_efi: Use if you are using Solaris EFI labels. Note that using any other LUN Multiprotocol Type with Solaris EFI labels may result in LUN misalignment problems. Refer to your Solaris Host Utilities documentation and release notes for more information.
windows: Use if your host operating system is Windows 2000 Server, Windows XP, or Windows Server 2003 using the MBR partitioning method.
windows_gpt: Use if you want to use the GPT partitioning method and your host is capable of using it. Windows Server 2003, Service Pack 1 and later are capable of using the GPT partitioning method, and all 64-bit versions of Windows support it.
windows_2008: Use if your host operating system is Windows Server 2008; both MBR and GPT partitioning methods are supported.
hpux: Use if your host operating system is HP-UX.
aix: Use if your host operating system is AIX.
linux: Use if your host operating system is Linux.
netware: Use if your host operating system is NetWare.
vmware: Use if you are using ESX Server and your LUNs will be configured with VMFS. Note: If you configure the LUNs with RDM, use the guest operating system as the LUN Multiprotocol Type.
xen: Use if you are using Xen and your LUNs will be configured with Linux LVM with Dom0. Note: For raw LUNs, use the type of guest operating system as the LUN Multiprotocol Type.
hyper_v: Use if you are using Windows Server 2008 Hyper-V and your LUNs contain virtual hard disks (VHDs). Note: For raw LUNs, use the type of child operating system as the LUN Multiprotocol Type.

Note: If you are using SnapDrive for Windows, the LUN Multiprotocol Type is automatically set.
When you create a LUN, you must specify the LUN type. Once the LUN is created, you cannot modify the LUN host operating system type. See the Interoperability Matrix for information about supported hosts.
Related information
LUN size
The usable space in the LUN depends on host or application requirements for overhead. For example, partition tables and metadata on the host file system reduce the usable space for applications. In general, when you format and partition LUNs as a disk on a host, the actual usable space on the disk depends on the overhead required by the host.
The disk geometry used by the operating system determines the minimum and maximum size values of LUNs. For information about the maximum sizes for LUNs and disk geometry, see the vendor documentation for your host OS. If you are using third-party volume management software on your host, consult the vendor's documentation for more information about how disk geometry affects LUN size.

LUN description
The LUN description is an optional attribute you use to specify additional information about the LUN. You can edit this description at the command line or with FilerView.
Storage provisioning | 51
LUN identification number

A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data ONTAP automatically assigns one.

Space reservation setting

When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space reservations. When you create a LUN using the lun create command, space reservation is automatically turned on.
Note: You should keep space reservation on.
Host                                               HBA WWPNs
Host1, single-path (one HBA)                       10:00:00:00:c9:2b:7c:0f
Host2, multipath (two HBAs)                        10:00:00:00:c9:2b:6b:3c
                                                   10:00:00:00:c9:2b:02:3c
Host3, multipath, clustered (connected to Host4)   10:00:00:00:c9:2b:32:1b
                                                   10:00:00:00:c9:2b:41:02
Host4, multipath, clustered (connected to Host3)   10:00:00:00:c9:2b:51:2c
                                                   10:00:00:00:c9:2b:47:a2
igroups
aix-group0 10:00:00:00:c9:2b:7c:0f
/vol/vol2/lun1
/vol/vol2/qtree1/lun2
igroup name on page 52
igroup type on page 53
igroup ostype on page 53
iSCSI initiator node name on page 53
FCP initiator WWPN on page 53
igroup name

The igroup name is a case-sensitive name that must satisfy several requirements. The igroup name:
Contains 1 to 96 characters. Spaces are not allowed.
Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), colon (:), and period (.).
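The naming rules above can be captured in a small validation routine. This is an illustrative sketch only; the function name and regular expression are ours, not part of Data ONTAP:

```python
import re

# Allowed characters: letters, digits, hyphen, underscore, colon, period;
# 1 to 96 characters, no spaces. Pattern and function name are illustrative.
_IGROUP_NAME_RE = re.compile(r'^[A-Za-z0-9._:-]{1,96}$')

def is_valid_igroup_name(name: str) -> bool:
    """Check a proposed igroup name against the documented rules."""
    return bool(_IGROUP_NAME_RE.match(name))

print(is_valid_igroup_name("aix-group0"))   # valid
print(is_valid_igroup_name("win group 1"))  # invalid: contains spaces
print(is_valid_igroup_name("a" * 97))       # invalid: longer than 96 characters
```

A check like this is useful in provisioning scripts that generate igroup names from host inventories, before the names reach the storage system.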
The name you assign to an igroup is independent of the name of the host that is used by the host operating system, host files, or Domain Name Service (DNS). If you name an igroup aix1, for example, it is not mapped to the actual IP host name (DNS name) of the host.
Note: You might find it useful to provide meaningful names for igroups, ones that describe the hosts that can access the LUNs mapped to them.
igroup type

The igroup type can be either -i for iSCSI or -f for FC.

igroup ostype

The ostype indicates the type of host operating system used by all of the initiators in the igroup. All initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v, vmware, and linux. You must select an ostype for the igroup.

iSCSI initiator node name

You can specify the node names of the initiators when you create an igroup. You can also add or remove them later. To find out which node names are associated with a specific host, see the Host Utilities documentation for your host. These documents describe commands that display the host's iSCSI node name.

FCP initiator WWPN

You can specify the WWPNs of the initiators when you create an igroup. You can also add or remove them later. To find out which WWPNs are associated with a specific host, see the Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications; for UNIX hosts, use the sanlun command.
Related tasks
Creating FCP igroups on UNIX hosts using the sanlun command on page 74
LUN name on page 54
igroup name on page 54
LUN identification number on page 54
LUN name

Specify the path name of the LUN to be mapped.

igroup name

Specify the name of the igroup that contains the hosts that will access the LUN.

LUN identification number

Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN ID begins with 0 and increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID numbers depends on the host.
Note: For detailed information, see the documentation provided with your Host Utilities.
If you are attempting to map a LUN when the cluster interconnect is down, you must not include a LUN ID, because the partner system will have no way of verifying that the LUN ID is unique. Data ONTAP reserves a range of LUN IDs for this purpose and automatically assigns the first available LUN ID in this range. If you are mapping the LUN from the primary system, Data ONTAP assigns a LUN in the range of 193 to 224. If you are mapping the LUN from the secondary system, Data ONTAP assigns a LUN in the range of 225 to 255.
For more information about HA pairs, refer to the Data ONTAP 8.0 7-Mode High-Availability Configuration Guide.
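The ID assignment described above — first available ID, drawn from a reserved range when the cluster interconnect is down — amounts to a first-fit search. The following sketch is illustrative only; the function and data structures are ours, and Data ONTAP performs the actual assignment internally:

```python
# Illustrative model of the reserved LUN ID ranges described above.
PRIMARY_RANGE = range(193, 225)    # mapping from the primary system: 193-224
SECONDARY_RANGE = range(225, 256)  # mapping from the secondary system: 225-255

def first_available_reserved_id(in_use, primary=True):
    """Return the first free LUN ID in the reserved range, or None if exhausted."""
    candidates = PRIMARY_RANGE if primary else SECONDARY_RANGE
    for lun_id in candidates:
        if lun_id not in in_use:
            return lun_id
    return None

print(first_available_reserved_id({193, 194}))           # → 195
print(first_available_reserved_id(set(), primary=False)) # → 225
```

The same first-fit idea applies to ordinary mapping, where the default LUN ID begins at 0 and increments by 1 as LUNs are created.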
Make sure the LUNs are online before mapping them to an igroup. Do not map LUNs that are in the offline state.
You can map a LUN only once to an igroup or a specific initiator.
You can add a single initiator to multiple igroups, but the initiator can be mapped to a LUN only once. You cannot map a LUN to multiple igroups that contain the same initiator.
You cannot use the same LUN ID for two LUNs mapped to the same igroup.
You cannot map a LUN to both FC and iSCSI igroups if ALUA is enabled on one of the igroups. Run the lun config_check command to determine if any such conflicts exist.
hosts to fail as well. Before mapping read-only LUNs to hosts, ensure the operating system and application support read-only LUNs. Also note that you cannot create LUNs on read-only qtrees or volumes. The LUNs that display in a mirrored destination inherit the read-only property from the container. For more information about read-only LUNs and SnapMirror, see the Data ONTAP 8.0 7-Mode Data Protection Online Backup and Recovery Guide.
How to use port sets to make LUNs available on specific FC target ports on page 130
Best Practices for File System Alignment in Virtual Environments - http://media.netapp.com/documents/tr-3747.pdf
Recommendations for Aligning VMFS Partitions - http://www.vmware.com/pdf/esx3_partition_align.pdf
Creating LUNs, creating igroups, and mapping LUNs with the LUN setup program on page 57
Creating LUNs, creating igroups, and mapping LUNs using individual commands on page 58
Creating LUNs, creating igroups, and mapping LUNs with the LUN setup program
LUN setup is a guided program that prompts you for the information needed to create a LUN and an igroup, and to map the LUN to the igroup. When a default is provided in brackets in the prompt, press Enter to accept it.
Before you begin
If you did not create volumes for storing LUNs before running the lun setup program, terminate the program and create volumes. If you want to use qtrees, create them before running the lun setup program.
Step
lun setup

Result
The lun setup program displays prompts that lead you through the setup process.
Creating LUNs, creating igroups, and mapping LUNs using individual commands
Rather than use FilerView or LUN setup, you can use individual commands to create LUNs, create igroups, and map the LUNs to the appropriate igroups.
Steps
1. Create a space-reserved LUN by entering the following command on the storage system command line:
lun create -s size -t ostype lun_path

-s size indicates the size of the LUN to be created, in bytes by default.
-t ostype indicates the LUN type. The LUN type refers to the operating system type, which determines the geometry used to store data on the LUN.
lun_path is the LUN's path name, which includes the volume and qtree.

Example
The following example command creates a 5-GB LUN called /vol/vol2/qtree1/lun3 that is accessible by a Windows host. Space reservation is enabled for the LUN.
lun create -s 5g -t windows /vol/vol2/qtree1/lun3
2. Create an igroup by entering the following command on the storage system command line:
igroup create {-i | -f} -t ostype initiator_group [node ...]

-i specifies that the igroup contains iSCSI node names.
-f specifies that the igroup contains FCP WWPNs.
-t ostype indicates the operating system type of the initiator.
initiator_group is the name you specify as the name of the igroup.
node is a list of iSCSI node names or FCP WWPNs, separated by spaces.

Example
iSCSI example:
igroup create -i -t windows win_host5_group2 iqn.1991-05.com.microsoft:host5.domain.com
FCP example:
Storage provisioning | 59
igroup create -f -t aix aix-igroup3 10:00:00:00:c9:2b:cc:92
3. Map the LUN to an igroup by entering the following command on the storage system command line:
lun map lun_path initiator_group [lun_id]

lun_path is the path name of the LUN you created.
initiator_group is the name of the igroup you created.
lun_id is the identification number that the initiator uses when the LUN is mapped to it. If you do not enter a number, Data ONTAP generates the next available LUN ID number.
Example
LUN size on page 50
LUN Multiprotocol Type on page 48
What igroups are on page 51
MultiStore vFiler technology is supported for the iSCSI protocol only. You must purchase a MultiStore license to create vFiler units. Then you can enable the iSCSI license for each vFiler to manage LUNs (and igroups) on a per-vFiler basis.
Note: SnapDrive can only connect to and manage LUNs on the hosting storage system (vfiler0), not to vFiler units.
Use the following guidelines when creating LUNs on vFiler units:
The vFiler unit access rights are enforced when the storage system processes iSCSI host requests.
LUNs inherit vFiler unit ownership from the storage unit on which they are created. For example, if /vol/vfstore/vf1_0 is a qtree owned by vFiler unit vf1, all LUNs created in this qtree are owned by vf1.
As vFiler unit ownership of storage changes, so does ownership of the storage's LUNs.
About this task
You can issue LUN subcommands using the following methods:
From the default vFiler unit (vfiler0) on the hosting storage system, you can do the following:
Enter the vfiler run * lun subcommand command, which runs the lun subcommand on all vFiler units.
Run a LUN subcommand on a specific vFiler unit. To access a specific vFiler unit, you change the vFiler unit context by entering the following commands:
filer> vfiler context vfiler_name
vfiler_name@filer> lun subcommand
From non-default vFiler units, you can: Enter the vfiler run * lun command
Step
1. Enter the lun create command in the vFiler unit context that owns the storage, as follows:
vfiler run vfiler_name lun create -s 2g -t os_type /vol/vfstore/vf1_0/lun0
See the Data ONTAP MultiStore Management Guide for more information.
Related information
1. Enter the following command from the vFiler unit that contains the LUNs:
vfiler run * lun show

Result
==== vfiler0
/vol/vfstore/vf0_0/vf0_lun0   2g (2147483648)   (r/w, online)
/vol/vfstore/vf0_0/vf0_lun1   2g (2147483648)   (r/w, online)
==== vfiler1
/vol/vfstore/vf0_0/vf1_lun0   2g (2147483648)   (r/w, online)
/vol/vfstore/vf0_0/vf1_lun1   2g (2147483648)   (r/w, online)
LUN management
After you create your LUNs, you can manage them in a number of ways. For example, you can control LUN availability, unmap a LUN from an igroup, and remove or rename a LUN. You can use the command-line interface or FilerView to manage LUNs.
Next topics
Displaying command-line Help for LUNs on page 63
Controlling LUN availability on page 64
Unmapping LUNs from igroups on page 65
Renaming LUNs on page 66
Modifying LUN descriptions on page 66
Enabling and disabling space reservations for LUNs on page 67
Removing LUNs on page 67
Accessing LUNs with NAS protocols on page 68
Checking LUN, igroup, and FC settings on page 68
Displaying LUN serial numbers on page 70
Displaying LUN statistics on page 70
Displaying LUN mapping information on page 71
Displaying detailed LUN information on page 72
lun offline   Stop block protocol access to LUN
lun online    Restart block protocol access to LUN
lun resize    Resize LUN
lun serial    Display/change LUN serial number
lun set       Manage LUN properties
lun setup     Initialize/Configure LUNs, mapping
lun share     Configure NAS file-sharing properties
lun show      Display LUNs
lun snap      Manage LUN and snapshot interactions
lun stats     Displays or zeros read/write statistics for LUN
lun unmap     Remove LUN mapping
2. To display the syntax for any of the subcommands, enter the following command:
lun help subcommand

Example

lun help show
Next topics
Before you bring a LUN online, make sure that you quiesce or synchronize any host application accessing the LUN.
Step
Example
lun online /vol/vol1/lun0
Before you take a LUN offline, make sure that you quiesce or synchronize any host application accessing the LUN.
About this task
Example
lun offline /vol/vol1/lun0
Example

lun online /vol/vol1/lun1
Renaming LUNs
Use the lun move command to rename a LUN.
About this task
If you are organizing LUNs in qtrees, the existing path (lun_path) and the new path (new_lun_path) must be either in the same qtree or in another qtree in that same volume.
Note: This process is completely non-disruptive; it can be performed while the LUN is online and serving data.

Step
Example
lun move /vol/vol1/mylun /vol/vol1/mynewlun
If you use spaces in the comment, enclose the comment in quotation marks.
Step
Example
lun comment /vol/vol1/lun2 "10GB for payroll records"
insufficient disk space, and the host application or operating system might crash. When write operations fail, Data ONTAP displays system messages (one message per file) on the console, or sends these messages to log files and other remote systems, as specified by its /etc/syslog.conf configuration file.
Steps
1. Enter the following command to display the status of space reservations for LUNs in a volume:
lun set reservation lun_path

Example

lun set reservation /vol/lunvol/hpux/lun0
Space Reservation for LUN /vol/lunvol/hpux/lun0 (inode 3903199): enabled
Removing LUNs
Use the lun destroy command to remove one or more LUNs.
About this task
Without the -f parameter, you must first take the LUN offline and unmap it, and then enter the lun destroy command.
Step
-f forces the lun destroy command to execute even if the LUNs specified by one or more lun_paths are mapped or are online.
The usefulness of accessing a LUN over NAS protocols depends on the host application. For example, the application must be equipped to understand the format of the data within the LUN and be able to traverse any file system the LUN may contain. Access is provided to the LUN's raw data, but not to any particular piece of data within the LUN. If you want to write to a LUN using a NAS protocol, you must take the LUN offline or unmap it to prevent an iSCSI or FCP host from overwriting data in the LUN.
Note: A LUN cannot be extended or truncated using NFS or CIFS protocols.

Steps
1. Determine whether you want to read, write, or do both to the LUN over the NAS protocol and take the appropriate action: If you want read access, the LUN can remain online. If you want write access, ensure that the LUN is offline or unmapped.
Checks whether any FC interfaces are down.
Verifies that the ALUA igroup settings are valid.
Checks for nodename conflicts.
Checks for igroup and LUN map conflicts.
Checks for igroup ALUA conflicts.
Step
Use the -v option for verbose mode, which provides detailed information about each check.
Use the -S option to check only the single_image cfmode settings.
Use the -s option for silent mode, which provides output only if there are errors.
Example
3070-6> lun config_check -v
Checking for down fcp interfaces
======================================================
No Problems Found
Checking initiators with mixed/incompatible settings
======================================================
No Problems Found
Checking igroup ALUA settings
======================================================
No Problems Found
Checking for nodename conflicts
======================================================
Checking for initiator group and lun map conflicts
======================================================
No Problems Found
Checking for igroup ALUA conflicts
======================================================
No Problems Found
Related concepts
What ALUA is on page 22
igroup ostype on page 53
How Data ONTAP avoids igroup mapping conflicts during cluster failover on page 128
Although the storage system displays the LUN serial number in ASCII format by default, you can display the serial number in hexadecimal format as well.
Step
or
lun serial [-x] lun_path [new_lun_serial]
Use the -v option to display the serial numbers in ASCII format.
Use the -x option to display the serial numbers in hexadecimal format.
Use new_lun_serial to change the existing LUN serial number to the specified serial number.
Note: Under normal circumstances, you should not change the LUN serial number. However, if you do need to change it, ensure the LUN is offline before issuing the command. Also, you cannot use the -x option when changing the serial number; the new serial number must be in ASCII format.
Example

lun serial -x /vol/blocks_fvt/ncmds_lun2
-z resets the statistics on all LUNs or the LUN specified in the lun_path option.
interval is the interval, in seconds, at which the statistics are displayed.
count is the number of intervals. For example, the lun stats -i 10 -c 5 command displays statistics in ten-second intervals, five times.
-o displays additional statistics, including the number of QFULL messages the storage system sends when its SCSI command queue is full and the amount of traffic received from the partner storage system.
-a shows statistics for all LUNs.
lun_path displays statistics for a specific LUN.
Example
lun stats -o -i 1

 Read  Write  Other  QFull   Read   Write  Average   Queue  Partner       Lun
  Ops    Ops    Ops            kB      kB  Latency  Length  Ops  kB
    0    351      0      0      0   44992    11.35    3.00    0   0   /vol/tpcc/log_22
    0    233      0      0      0   29888    14.85    2.05    0   0   /vol/tpcc/log_22
    0    411      0      0      0   52672     8.93    2.08    0   0   /vol/tpcc/log_22
    2      1      0      0     16       8     1.00    1.00    0   0   /vol/tpcc/ctrl_0
    1      1      0      0      8       8     1.50    1.00    0   0   /vol/tpcc/ctrl_1
    0    326      0      0      0   41600    11.93    3.00    0   0   /vol/tpcc/log_22
    0    353      0      0      0   45056    10.57    2.09    0   0   /vol/tpcc/log_22
    0    282      0      0      0   36160    12.81    2.07    0   0   /vol/tpcc/log_22
Example
LUN path             Mapped to    LUN ID   Protocol
--------------------------------------------------------
/vol/tpcc/ctrl_0     host5             0   iSCSI
/vol/tpcc/ctrl_1     host5             1   iSCSI
/vol/tpcc/crash1     host5             2   iSCSI
/vol/tpcc/crash2     host5             3   iSCSI
/vol/tpcc/cust_0     host6             4   iSCSI
/vol/tpcc/cust_1     host6             5   iSCSI
/vol/tpcc/cust_2     host6             6   iSCSI
1. On the storage system's command line, enter the following command to display LUN status and characteristics:
lun show -v
Example
/vol/tpcc_disks/cust_0_1   382m (400556032)   (r/w, online, mapped)
        Serial#: VqmOVYoe3BUf
        Share: none
        Space Reservation: enabled
        Multiprotocol Type: aix
        SnapValidator Offset: 1m (1048576)
        Maps: host5=0
/vol/tpcc_disks/cust_0_2   382m (400556032)   (r/w, online, mapped)
        Serial#: VqmOVYoe3BV6
        Share: none
        Space Reservation: enabled
        Multiprotocol Type: aix
        SnapValidator Offset: 1m (1048576)
        Maps: host6=1
igroup management
To manage your initiator groups (igroups), you can perform a range of tasks, including creating igroups, destroying them, and renaming them.
Next topics
Creating igroups on page 73
Creating FCP igroups on UNIX hosts using the sanlun command on page 74
Deleting igroups on page 75
Adding initiators to an igroup on page 76
Removing initiators from an igroup on page 76
Displaying initiators on page 77
Renaming igroups on page 77
Setting the operating system type for an igroup on page 77
Enabling ALUA for iSCSI and FC igroups on page 78
Creating igroups for a non-default vFiler unit on page 79
Fibre Channel initiator request management on page 80
Related concepts
Creating igroups
Initiator groups, or igroups, are tables of host identifiers such as Fibre Channel WWPNs and iSCSI node names. You can use igroups to control which hosts can access specific LUNs.
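In concept, igroup-based access control is a set-membership check: a host sees a LUN only if one of its identifiers belongs to an igroup that the LUN is mapped to. The following toy model illustrates this using identifiers from the examples in this chapter; the dictionaries and function are ours, not a Data ONTAP interface:

```python
# Toy model of igroup-based LUN masking; not a Data ONTAP API.
igroups = {
    "aix-group0": {"10:00:00:00:c9:2b:7c:0f"},  # igroup -> member WWPNs
}
lun_maps = {
    "/vol/vol2/lun1": {"aix-group0": 0},        # LUN path -> {igroup: LUN ID}
}

def host_sees_lun(initiator_wwpn, lun_path):
    """Return the LUN ID the initiator sees, or None if the LUN is masked."""
    for igroup, lun_id in lun_maps.get(lun_path, {}).items():
        if initiator_wwpn in igroups.get(igroup, set()):
            return lun_id
    return None

print(host_sees_lun("10:00:00:00:c9:2b:7c:0f", "/vol/vol2/lun1"))  # → 0
print(host_sees_lun("10:00:00:00:c9:2b:6b:3c", "/vol/vol2/lun1"))  # → None
```

The same membership logic applies whether the identifiers are FC WWPNs or iSCSI node names.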
Step
WWPN is the FC worldwide port name. You can specify more than one WWPN.
wwpn alias is the name of the alias you created for a WWPN. You can specify more than one alias.
-a portset applies only to FC igroups. This option binds the igroup to a port set. A port set is a group of target FC ports. When you bind an igroup to a port set, any host in the igroup can access the LUNs only by connecting to the target ports in the port set.
Example

igroup create -i -t windows win-group0 iqn.1991-05.com.microsoft:eng1

This command creates an iSCSI igroup called win-group0 that contains the node name of the associated Windows host.
Related concepts
How to use port sets to make LUNs available on specific FC target ports on page 130
What igroups are on page 51
1. Ensure that you are logged in as root on the host.
2. Change to the /opt/netapp/santools/bin directory.
3. Enter the following command to print a command to be run on the storage system that creates an igroup containing all the HBAs on your host:
./sanlun fcp show adapter -c

-c prints the full igroup create command on the screen.
In this example, the name of the host is hostA, so the name of the igroup with the two WWPNs is hostA. 4. Create a new session on the host and use the telnet command to access the storage system.
5. Copy the igroup create command from Step 3, paste the command on the storage system's command line, and press Enter to run the igroup command on the storage system.
An igroup is created on the storage system.
6. On the storage system's command line, enter the following command to verify the newly created igroup:
igroup show

Example

systemX> igroup show
hostA (FCP) (ostype: aix):
    10:00:00:00:AA:11:BB:22
    10:00:00:00:AA:11:EE:33
Deleting igroups
When deleting igroups, you can use a single command to simultaneously remove the LUN mapping and delete the igroup. You can also use two separate commands to unmap the LUNs and delete the igroup.
Step
then
igroup destroy win-group5

Example
An initiator cannot be a member of two igroups of differing types. For example, if you have an initiator that belongs to a Solaris igroup, Data ONTAP does not allow you to add this initiator to an AIX igroup.
Step
Displaying initiators
Use the igroup show command to display all initiators belonging to a particular igroup.
Step
Renaming igroups
Use the igroup rename command to rename an igroup.
Step
Example

igroup set aix-group3 ostype aix
Make sure that your host supports ALUA before enabling it. Enabling ALUA for a host that does not support it can cause host failures during cluster failover.
About this task
ALUA might be enabled or disabled by default. When you create a new igroup or add the first initiator to an existing igroup, Data ONTAP checks whether that initiator is enabled for ALUA in an existing igroup. If so, the igroup being modified is automatically enabled for ALUA as well. Otherwise, you must manually set ALUA to yes for each igroup, unless the host operating system type is AIX, HP-UX, or Linux; ALUA is automatically enabled for these operating systems. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS. In addition, you cannot map a LUN to both an FC and an iSCSI igroup if ALUA is enabled on one of the igroups. Run the lun config_check command to determine whether any such conflicts exist.
Note: If you map multiple igroups to a LUN, and you enable one of the igroups for ALUA, you must enable all of the igroups for ALUA.

Steps
Member: 10:00:00:00:c9:6b:76:49 (logged in on: vtic, 0a)
ALUA: Yes
3. For iSCSI igroups, set the path priority to each target portal group by entering the following command:
iscsi tpgroup alua set target_portal_group_name [optimized | non-optimized]

Example

iscsi tpgroup alua set tpgroup1 non-optimized

Related concepts
Configuring iSCSI target portal groups on page 113
Checking LUN, igroup, and FC settings on page 68
1. Change the context to the desired vFiler unit by entering the following command:
vfiler context vf1
The vFiler unit's prompt is displayed.
2. Create the igroup on the vFiler unit determined in Step 1 by entering the following command:
igroup create -i vf1_iscsi_group iqn.1991-05.com.microsoft:server1
vf1_iscsi_group (iSCSI) (ostype: windows):
    iqn.1991-05.com.microsoft:server1

After you finish
You must map LUNs to igroups that are in the same vFiler unit.
How Data ONTAP manages Fibre Channel initiator requests on page 80
How to use igroup throttles on page 80
How failover affects igroup throttles on page 81
Creating igroup throttles on page 81
Destroying igroup throttles on page 81
Borrowing queue resources from the unreserved pool on page 82
Displaying throttle information on page 82
Displaying igroup throttle usage on page 83
Displaying LUN statistics on exceeding throttles on page 83
Use igroup throttles to perform the following tasks:
Create one igroup throttle per igroup, if desired.
Note: Any igroups without a throttle share all the unreserved queue resources.
Assign a specific percentage of the queue resources on each physical port to the igroup.
Reserve a minimum percentage of queue resources for a specific igroup.
Restrict an igroup to a maximum percentage of use.
Allow an igroup throttle to exceed its limit by borrowing from these resources:
The pool of unreserved resources, to handle unexpected I/O requests
The pool of unused reserved resources, if those resources are available
The igroup throttle is created for aix-igroup1, and it persists through reboots.
To define whether an igroup can borrow queue resources from the unreserved pool, complete the following step with the appropriate option. The default when you create an igroup throttle is no.
Step
When you set the throttle_borrow setting to yes, the percentage of queue resources used by the initiators in the igroup might be exceeded if resources are available.
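The throttle and borrow behavior can be modeled roughly as an admission decision per queued command. All names in this sketch are illustrative; Data ONTAP enforces throttles internally on its target ports:

```python
# Illustrative model of igroup throttle admission; not a Data ONTAP API.
def admit_request(used, reserved_limit, unreserved_free, throttle_borrow):
    """Decide whether one more queued command is admitted for an igroup.

    used            -- command blocks the igroup currently holds on the port
    reserved_limit  -- blocks reserved for this igroup by its throttle
    unreserved_free -- free blocks in the unreserved pool
    throttle_borrow -- whether the igroup may exceed its reservation
    """
    if used < reserved_limit:
        return True        # within the igroup's reservation
    if throttle_borrow and unreserved_free > 0:
        return True        # exceeds the throttle by borrowing
    return False           # refused; the host sees a SCSI Queue Full response

print(admit_request(97, 98, 0, False))   # → True  (still within reservation)
print(admit_request(98, 98, 10, True))   # → True  (borrows from unreserved pool)
print(admit_request(98, 98, 10, False))  # → False (borrow disabled)
```

A refusal in this model corresponds to the "exceeds" counter below; an admission via the second branch corresponds to the "borrows" counter.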
The exceeds column displays the number of times the initiator sends more requests than the throttle allows. The borrows column displays the number of times the throttle is exceeded and the storage system uses queue resources from the unreserved pool. In the borrows column, N/A indicates that the igroup throttle_borrow option is set to no.
The first number under the port name indicates the number of command blocks the initiator is using. The second number under the port name indicates the number of command blocks reserved for the igroup on that port. In this example, the display indicates that igroup1 is using 45 of the 98 reserved command blocks on adapter 4a, and igroup2 is using 17 of the 49 reserved command blocks on adapter 5a. igroups without throttles are counted as unreserved.
-o displays additional statistics, including the number of QFULL messages, or "QFULLS".

Example

lun stats -o -i 1 /vol/vol1/lun2
The output displays performance statistics, including the QFULL column. This column indicates the number of initiator requests that exceeded the number allowed by the igroup throttle, and, as a result, received the SCSI Queue Full response. 2. Display the total count of QFULL messages sent for each LUN by entering the following command:
lun stats -o lun_path
Enabling multi-connection sessions on page 85
Enabling error recovery levels 1 and 2 on page 86
iSCSI service management on page 87
iSNS server registration on page 95
Displaying initiators connected to the storage system on page 98
iSCSI initiator security management on page 99
Target portal group management on page 109
Displaying iSCSI statistics on page 114
Displaying iSCSI session information on page 118
Displaying iSCSI connection information on page 119
Guidelines for using iSCSI with HA pairs on page 120
iSCSI problem resolution on page 122
The iscsi.max_connections_per_session option specifies the number of connections per session allowed by the storage system. You can specify between 1 and 32 connections, or you can accept the default value. Note that this option specifies the maximum number of connections per session supported by the storage system. The initiator and storage system negotiate the actual number allowed for a session when the session is created; this is the smaller of the initiator's maximum and the storage system's maximum. The number of connections actually used also depends on how many connections the initiator establishes.
Steps
1. Verify the current option setting by entering the following command on the system console:
options iscsi.max_connections_per_session
The current setting is displayed. 2. If needed, change the number of connections allowed by entering the following command:
options iscsi.max_connections_per_session [connections | use_system_default]

connections is the maximum number of connections allowed for each session, from 1 to 32.
use_system_default equals 1 for Data ONTAP 7.1 and 7.2, 4 for Data ONTAP 7.2.1 and subsequent 7.2 maintenance releases, and 32 starting with Data ONTAP 7.3. The meaning of this default might change in later releases.
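The negotiation described in this section reduces to taking the smaller of the two maxima, bounded by how many connections the initiator actually opens. A minimal sketch (the function name is ours):

```python
# Illustrative sketch of per-session connection-count negotiation;
# not a Data ONTAP API.
def negotiated_connections(initiator_max, target_max, initiator_opens):
    """Connections actually used: bounded by both sides' maxima and by
    the number of connections the initiator really establishes."""
    allowed = min(initiator_max, target_max)
    return min(allowed, initiator_opens)

print(negotiated_connections(initiator_max=8, target_max=32, initiator_opens=4))  # → 4
print(negotiated_connections(initiator_max=8, target_max=2, initiator_opens=4))   # → 2
```

The same min-of-both-maxima rule applies to the error recovery level negotiation described in the next section.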
There might be a minor performance reduction for sessions running error recovery level 1 or 2. The iscsi.max_error_recovery_level option specifies the maximum error recovery level allowed by the storage system. You can specify 0, 1, or 2, or you can accept the default value. Note that this option specifies the maximum error recovery level supported by the storage system. The initiator and storage system negotiate the actual error recovery level used for a session when the session is created; this is the smaller of the initiator's maximum and the storage system's maximum.
Steps
1. Verify the current option setting by entering the following command on the system console:
options iscsi.max_error_recovery_level
The current setting is displayed. 2. If needed, change the error recovery levels allowed by entering the following command:
options iscsi.max_error_recovery_level [level | use_system_default]

level is the maximum error recovery level allowed: 0, 1, or 2.
use_system_default equals 0 for Data ONTAP 7.1 and 7.2. The meaning of this default may change in later releases.
Verifying that the iSCSI service is running on page 87
Verifying that iSCSI is licensed on page 87
Enabling the iSCSI license on page 88
Starting the iSCSI service on page 88
Stopping the iSCSI service on page 88
Displaying the target node name on page 89
Changing the target node name on page 89
Displaying the target alias on page 90
Adding or changing the target alias on page 90
iSCSI service management on storage system interfaces on page 91
Displaying iSCSI interface status on page 91
Enabling iSCSI on a storage system interface on page 91
Disabling iSCSI on a storage system interface on page 92
Displaying the storage system's target IP addresses on page 92
iSCSI interface access management on page 93
A list of all available licenses is displayed. An enabled license shows the license code.
The following options are automatically enabled when the iSCSI service is turned on. Do not change these options:
The volume option create_ucode is set to on.
The cf.takeover.on_panic option is set to on.
Step
Example
iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:sn.12345678
Changing the storage system's node name while iSCSI sessions are in progress does not disrupt the existing sessions. However, after you change the storage system's node name, you must reconfigure the initiator so that it recognizes the new target node name. If you do not reconfigure the initiator, subsequent initiator attempts to log in to the target will fail.

When you change the storage system's target node name, be sure the new name follows all of these rules:

A node name can be up to 223 bytes.
Uppercase characters are always mapped to lowercase characters.
A node name can contain alphabetic characters (a to z), numbers (0 to 9), and three special characters: period (.), hyphen (-), and colon (:).
The underscore character (_) is not supported.
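The naming rules above can be checked mechanically. This Python sketch is for illustration only; it is not how Data ONTAP validates node names internally:

```python
import re


def normalize_iscsi_node_name(name: str) -> str:
    """Validate an iSCSI node name against the rules above and
    return it with uppercase letters mapped to lowercase.

    Raises ValueError if the name exceeds 223 bytes or contains a
    character outside a-z, 0-9, '.', '-', and ':'.
    """
    if len(name.encode("utf-8")) > 223:
        raise ValueError("node name exceeds 223 bytes")
    name = name.lower()  # uppercase is always mapped to lowercase
    if not re.fullmatch(r"[a-z0-9.\-:]+", name):
        raise ValueError("node name contains an unsupported character")
    return name
```

A name such as iqn.2010-01.com.example:host_1 would be rejected here because of the underscore.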
Step
Example
iscsi nodename iqn.1992-08.com.netapp:filerhq
Depending on your initiator, the alias may or may not be displayed in the initiator's user interface.
Step
Example
iscsi alias
iSCSI target alias: Filer_1
Examples
iscsi alias Storage-System_2
New iSCSI target alias: Storage-System_2

iscsi alias -c
Clearing iSCSI target alias
Example The following example shows the iSCSI service enabled on two storage system Ethernet interfaces:
iscsi interface show
Interface e0 disabled
Interface e9a enabled
Interface e9b enabled
Example The following example enables the iSCSI service on interfaces e9a and e9b:
iscsi interface enable e9a e9b
The -f option disables the interface without confirmation. If you do not use this option, the command displays a message notifying you that active sessions are in progress on the interface and requests confirmation before terminating these sessions and disabling the interface.
-a specifies all interfaces.
interface is a list of specific Ethernet interfaces, separated by spaces.
The IP address, TCP port number, target portal group tag, and interface identifier are displayed for each interface.

Example

system1> iscsi portal show
Network portals:
IP address                  Interface
10.60.155.105               e0
fe80::2a0:98ff:fe00:fd81    e0
By default, all initiators have access to all interfaces; to restrict access, you must explicitly define access lists. When an initiator begins a discovery session using an iSCSI SendTargets command, it receives only the IP addresses associated with the network interfaces on its access list.
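The discovery behavior described above can be sketched as a simple filter. This Python model is hypothetical; the function name and data shapes are invented for illustration:

```python
def sendtargets_response(initiator: str,
                         portals: dict[str, list[str]],
                         access_lists: dict[str, set[str]]) -> list[str]:
    """Return the portal IP addresses reported to an initiator
    during a SendTargets discovery session.

    portals maps an interface name to its IP addresses;
    access_lists maps an initiator node name to the interfaces on
    its access list. An initiator with no access list sees every
    interface (the default behavior described above).
    """
    allowed = access_lists.get(initiator)
    addresses = []
    for interface, ips in portals.items():
        if allowed is None or interface in allowed:
            addresses.extend(ips)
    return addresses
```

An initiator whose access list names only e0b would receive only e0b's addresses, while an unlisted initiator would receive the addresses of every interface.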
Next topics
Creating iSCSI interface access lists on page 93
Removing interfaces from iSCSI interface access lists on page 94
Displaying iSCSI interface access lists on page 94
Creating iSCSI interface access lists You can use iSCSI interface access lists to control which interfaces an initiator can access. An access list ensures that an initiator only logs in with IP addresses associated with the interfaces defined in the access list. Access list policies are based on the interface name, and can include physical interfaces, VIFs, and VLANs.
Note: For vFiler contexts, all interfaces can be added to the vFiler unit's access list, but the
initiator will only be able to access the interfaces that are bound to the vFiler unit's IP addresses.
Step
Example

iscsi interface accesslist add iqn.1991-05.com.microsoft:ms e0b
Displaying iSCSI interface access lists If you created one or more access lists, you can display the initiators and the interfaces to which they have access.
Step
What an iSNS server does on page 95
How the storage system interacts with an iSNS server on page 95
About iSNS service version incompatibility on page 95
Setting the iSNS service revision on page 96
Registering the storage system with an iSNS server on page 96
Immediately updating the iSNS server on page 97
Disabling iSNS on page 97
Setting up vFiler units with the iSNS service on page 98
For example, with Data ONTAP 7.1 and an iSNS server earlier than 3.0, set the iscsi.isns.rev option to 18 or upgrade to iSNS server 3.0. With iSNS server 3.0, verify that the iscsi.isns.rev option is set to 22.
Note: When you upgrade to a new version of Data ONTAP, the existing value of the iscsi.isns.rev option is maintained, which reduces the risk of a draft version problem when upgrading. If necessary, manually change iscsi.isns.rev to the correct value when upgrading Data ONTAP.
1. Verify the current iSNS revision value by entering the following command on the system console:
options iscsi.isns.rev
The current draft revision used by the storage system is displayed.

2. If needed, change the iSNS revision value by entering the following command:
options iscsi.isns.rev draft

draft is the iSNS standard draft revision, either 18 or 22.
The iscsi isns command only configures the storage system to register with the iSNS server. The storage system does not provide commands that enable you to configure or manage the iSNS server. To manage the iSNS server, use the server administration tools or interface provided by the vendor of the iSNS server.
1. Make sure the iSCSI service is running by entering the following command on the storage system console:
iscsi status
3. On the storage system console, enter the following command to identify the iSNS server that the storage system registers with:
iscsi isns config [ip_addr|hostname]

ip_addr is the IP address of the iSNS server.
hostname is the hostname associated with the iSNS server.

Note: As of Data ONTAP 7.3.1, you can configure iSNS with an IPv6 address.
The iSNS service is started and the storage system registers with the iSNS server.
Note: iSNS registration is persistent across reboots if the iSCSI service is running and iSNS is
started.
Disabling iSNS
When you stop the iSNS service, the storage system stops registering its iSCSI information with the iSNS server.
Step
For information about managing vFiler units, see the sections on iSCSI service on vFiler units in the Data ONTAP 8.0 7-Mode MultiStore Management Guide.
Steps
1. Register the vFiler unit with the iSNS service by entering the following command:
iscsi isns config -i ip_addr

ip_addr is the IP address of the iSNS server.
Examples for vFiler units The following example defines the iSNS server for the default vFiler unit (vfiler0) on the hosting storage system:
iscsi isns config -i 10.10.122.101
The following example defines the iSNS server for a specific vFiler unit (vf1). The vfiler context command switches to the command line for a specific vFiler unit.
vfiler context vf1
vf1> iscsi isns config -i 10.10.122.101
Related information
For each initiator, the display includes the alias (if provided by the initiator), the initiator's iSCSI node name and initiator session identifier (ISID), and the igroup.
Step
The initiators currently connected to the storage system are displayed.

Example

system1> iscsi initiator show
Initiators connected:
 TSIH  TPGroup  Initiator/ISID/IGroup
    1  1000     iqn.1991-05.com.microsoft:hual-lxp.hq.netapp.com / 40:00:01:37:00:00 / windows_ig2; windows_ig
    2  1000     vanclibern (iqn.1987-05.com.cisco:vanclibern / 00:02:3d:00:00:01 / linux_ig)
    4  1000     iqn.1991-05.com.microsoft:cox / 40:00:01:37:00:00 /
How iSCSI authentication works on page 100
Guidelines for using CHAP authentication on page 100
Defining an authentication method for an initiator on page 101
Defining a default authentication method for initiators on page 102
Displaying initiator authentication methods on page 103
Removing authentication settings for an initiator on page 103
iSCSI RADIUS configuration on page 103
You can define a list of initiators and their authentication methods. You can also define a default authentication method that applies to initiators that are not on this list. The default iSCSI authentication method is none, which means any initiator not in the authentication list can log in to the storage system without authentication. However, you can change the default method to deny or CHAP.

If you use iSCSI with vFiler units, the CHAP authentication settings are configured separately for each vFiler unit. Each vFiler unit has its own default authentication mode and list of initiators and passwords. To configure CHAP settings for vFiler units, you must use the command line.
Note: For information about managing vFiler units, see the sections on iSCSI service on vFiler
For bidirectional authentication, you must use the same user name and password for inbound CHAP settings on the initiator. You cannot use the same user name and password for inbound and outbound settings on the storage system.

CHAP user names can be 1 to 128 bytes. A null user name is not allowed.

CHAP passwords (secrets) can be 1 to 512 bytes. Passwords can be hexadecimal values or strings. For hexadecimal values, enter the value with a prefix of 0x or 0X. A null password is not allowed.

See the initiator's documentation for additional restrictions. For example, the Microsoft iSCSI software initiator requires both the initiator and target CHAP passwords to be at least 12 bytes if IPsec encryption is not being used. The maximum password length is 16 bytes regardless of whether IPsec is used.
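As a rough illustration, the length limits above can be expressed as a validation routine. This is a sketch, not Data ONTAP's actual validation; the function name and the msft_initiator flag are assumptions made for the example:

```python
def validate_chap_credentials(username: str, password: str,
                              msft_initiator: bool = False) -> None:
    """Check a CHAP user name and password against the limits above.

    Hexadecimal secrets are entered with a 0x/0X prefix. The
    msft_initiator flag applies the Microsoft software initiator's
    12-16 byte password restriction described above.
    """
    if not 1 <= len(username.encode()) <= 128:
        raise ValueError("CHAP user name must be 1 to 128 bytes")
    if password.lower().startswith("0x"):
        secret = bytes.fromhex(password[2:])  # hexadecimal secret
    else:
        secret = password.encode()
    if not 1 <= len(secret) <= 512:
        raise ValueError("CHAP password must be 1 to 512 bytes")
    if msft_initiator and not 12 <= len(secret) <= 16:
        raise ValueError("Microsoft initiator requires a 12-16 byte password")
```

For example, a 13-byte secret passes the Microsoft check, while a 5-byte secret is rejected.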
You can generate a random password, or you can specify the password you want to use.
Steps
The storage system generates a 128-bit random password.

2. For each initiator, enter the following command:
iscsi security add -i initiator -s [chap | deny | none] [-f radius | -p inpassword -n inname] [-o outpassword -m outname]

initiator is the initiator name in the iSCSI nodename format.
inpassword is the inbound password for CHAP authentication. The storage system uses the
inbound password to authenticate the initiator. An inbound password is required if you are using CHAP authentication and you are not using RADIUS.
inname is a user name for inbound CHAP authentication. The storage system uses the inbound user name and password to authenticate the initiator.

outpassword is a password for outbound CHAP authentication. It is stored locally on the storage system, which uses this password for authentication by the initiator.
outname is a user name for outbound CHAP authentication. The storage system uses this user name for authentication by the initiator.
The initiator is removed from the authentication list and logs in to the storage system using the default authentication method.
4. Enable the storage system to use RADIUS for CHAP authentication.
Next topics
Defining RADIUS as the authentication method for initiators on page 104
Starting the RADIUS client service on page 105
Adding a RADIUS server on page 106
Enabling the storage system to use RADIUS for CHAP authentication on page 106
Displaying the RADIUS service status on page 107
Stopping the RADIUS client service on page 107
Removing a RADIUS server on page 108
Displaying and clearing RADIUS statistics on page 108
Defining RADIUS as the authentication method for initiators You can define RADIUS as the authentication method for one or more initiators, as well as make it the default authentication method that applies to initiators that are not on this list. You can generate a random password, or you can specify the password you want to use. Inbound passwords are saved on the RADIUS server and outbound passwords are saved on the storage system.
Steps
Use the -f option to ensure that the initiator uses only RADIUS as the authentication method. If you do not use the -f option, the initiator attempts to authenticate through RADIUS only if local CHAP authentication fails.
outpassword is a password for outbound CHAP authentication. It is stored locally on the
storage system, which uses this password for authentication by the initiator.
outname is a user name for outbound CHAP authentication. The storage system uses this user name for authentication by the initiator.
3. To define RADIUS as the default authentication method for all initiators not previously specified, enter the following command:
iscsi security default -s chap -f radius [-o outpassword -m outname]
Examples
3070-6> iscsi security add -i iqn.1992-08.com.microsoft:system1 -s chap -f radius
3070-6> iscsi security show
Default sec is CHAP RADIUS Outbound password: **** Outbound username:
init: iqn.1994-05.com.redhat:10ca21e21b75 auth: CHAP RADIUS Outbound password: **** Outbound username: icroto
3070-6> iscsi security default -s chap -f radius
3070-6>
After enabling RADIUS authentication for the initiators, start the RADIUS client service on the storage system.

Starting the RADIUS client service
Once you enable RADIUS authentication for the appropriate initiators, you must start the RADIUS client service.
Step
Example
3070-6> radius start RADIUS client service started
After the RADIUS service is started, ensure you add one or more RADIUS servers with which the storage system can communicate.
Adding a RADIUS server Once you start the RADIUS client service, add a RADIUS server with which the storage system can communicate. You can add up to three RADIUS servers.
Step
Use the -d option to make the RADIUS server you are adding the default server. If there is no default server, the one you add becomes the default.

Use the -p option to specify a port number on the RADIUS server. The default port number is 1812.

Example
3070-6> radius add 10.60.155.58 -p 1812
3070-6> radius show
RADIUS client service is running
Default RADIUS server : IP_Addr=10.60.155.58 UDPPort=1812
After adding the necessary servers, you must enable the storage system to use the RADIUS server for CHAP authentication.

Enabling the storage system to use RADIUS for CHAP authentication
Once RADIUS authentication is enabled for the initiators and the RADIUS client service is started, you must set the iscsi.auth.radius.enable option to on. This ensures the storage system uses RADIUS for CHAP authentication. This option is set to off by default, and you must set it to on regardless of whether you used the -f option when enabling RADIUS for the initiators.
Step
3070-6> options iscsi.auth.radius.enable on
3070-6> options iscsi
iscsi.auth.radius.enable          on
iscsi.enable                      on
iscsi.isns.rev                    22
iscsi.max_connections_per_session use_system_default
Displaying the RADIUS service status Use the radius show command to display important RADIUS information, including whether the service is running and the default RADIUS server.
Step
You can also run the radius status command to see if the client service is running.
Example ie3070-6> radius status RADIUS client service is running ie3070-6>
Stopping the RADIUS client service Use the radius stop command to stop the RADIUS client service.
Step
Removing a RADIUS server Use the radius remove command to ensure a RADIUS server is no longer used for RADIUS authentication.
Step
If the server is using a port other than 1812, use the -p option to specify the port number.
3070-6> radius show
RADIUS client service is running
Default RADIUS server : IP_Addr=10.60.155.58 UDPPort=1812
3070-6> radius remove 10.60.155.58
3070-6> radius show
RADIUS client service is running
Displaying and clearing RADIUS statistics Use the radius stats command to display important details about the RADIUS service, including packets accepted, packets rejected, and the number of authentication requests. You can also clear the existing statistics.
Step
3070-6> radius stats
RADIUS client statistics
RADIUS access-accepted-packets:    121
RADIUS access-challenged-packets:    3
RADIUS access-rejected-packets:      0
RADIUS authentication-requests:    124
RADIUS denied-packets:               0
RADIUS late-packets:                 0
RADIUS retransmitted-packets:       14
RADIUS short-packets:                0
RADIUS timed-out-packets:            0
For iSCSI sessions that use multiple connections, all of the connections must use interfaces in the same target portal group. Each interface belongs to one and only one target portal group. Interfaces can be physical interfaces or logical interfaces (VLANs and vifs).

Starting with Data ONTAP 7.1, you can explicitly create target portal groups and assign tag values. If you want to increase performance and reliability by using multi-connections per session across more than one interface, you must create one or more target portal groups.

Because a session can use interfaces in only one target portal group, you may want to put all of your interfaces in one large group. However, some initiators are also limited to one session with a given target portal group. To support multipath I/O (MPIO), you need one session per path, and therefore more than one target portal group.

When an interface is added to the storage system, each network interface is automatically assigned to its own target portal group.

In addition, some storage systems support the use of an iSCSI Target expansion adapter, which contains special network interfaces that offload part of the iSCSI protocol processing. You cannot combine these iSCSI hardware-accelerated interfaces with standard iSCSI storage system interfaces in the same target portal group.
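A toy model of the interface-to-group rules above: every interface belongs to exactly one group, and adding an interface to a new group removes it from its current one. The class and method names are invented for illustration and are not part of Data ONTAP:

```python
class TargetPortalGroups:
    """Minimal model of target portal group membership.

    Each interface belongs to exactly one group; assigning an
    interface to a new group implicitly removes it from its old
    group, mirroring the behavior described above.
    """

    def __init__(self) -> None:
        self.group_of: dict[str, str] = {}

    def add_interface(self, interface: str, group: str) -> None:
        # Moving an interface silently removes it from its old group.
        self.group_of[interface] = group

    def members(self, group: str) -> set[str]:
        return {i for i, g in self.group_of.items() if g == group}
```

For example, moving e8a and e9a from their automatically created default groups into a user-defined group leaves the default groups empty.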
Next topics
Range of values for target portal group tags on page 110
Important cautions for using target portal groups on page 110
Displaying target portal groups on page 111
Creating target portal groups on page 111
Destroying target portal groups on page 112
Adding interfaces to target portal groups on page 112
Removing interfaces from target portal groups on page 113
Configuring iSCSI target portal groups on page 113
Example
iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0_default   e0
1001    e5a_default  e5a
1002    e5b_default  e5b
1003    e9a_default  e9a
1004    e9b_default  e9b
Create a target portal group that contains all of the interfaces you want to use for one iSCSI session. However, note that you cannot combine iSCSI hardware-accelerated interfaces with standard iSCSI storage system interfaces in the same target portal group. When you create a target portal group, the specified interfaces are removed from their current groups and added to the new group. Any iSCSI sessions using the specified interfaces are terminated, but the initiator should automatically reconnect. However, initiators that create a persistent association between the IP address and the target portal group will not be able to reconnect.
Step
tpgroup_name is the name of the group being created (no spaces or non-printing characters).
-t tag sets the target portal group tag to the specified value. In general, you should accept the default tag value.
Example The following command creates a target portal group named server_group that includes interfaces e8a and e9a:
iscsi tpgroup create server_group e8a e9a
Any iSCSI sessions using the specified interfaces are terminated, but the initiator should reconnect automatically. However, initiators that create a persistent association between the IP address and the target portal group are not able to reconnect.
Step
Example The following command adds interfaces e8a and e9a to the portal group named server_group:
iscsi tpgroup add server_group e8a e9a
Any iSCSI sessions with the interfaces being removed are terminated, but the initiator should reconnect automatically. However, initiators that create a persistent association between the IP address and the target portal group are not able to reconnect.
Step
Example The following command removes interfaces e8a and e9a from the portal group named server_group, even though there is an iSCSI session currently using e8a:
iscsi tpgroup remove -f server_group e8a e9a
When you first enable ALUA, all target portal groups are set to optimized by default. Some storage systems support the use of an iSCSI Target HBA, which contains special network interfaces that offload part of the iSCSI protocol processing. You might want to set the target portal
groups that contain these iSCSI hardware-accelerated interfaces to optimized and the standard iSCSI storage system interfaces to non-optimized. As a result, the host uses the iSCSI hardware-accelerated interface as the primary path.
Attention: When setting the path priority for target portal groups on clustered storage systems, make sure that the path priority setting is identical for the target portal group on the primary storage system and the target portal group on its partner interface on the secondary storage system.

To change the path priority of a target portal group, complete the following step.
Step
Example
iscsi tpgroup alua set tpgroup1 non-optimized
Related concepts
The iscsi stats command displays statistics for both IPv4 and IPv6.
-z resets the iSCSI statistics.
ipv4 displays only the IPv4 statistics.
ipv6 displays only the IPv6 statistics.
Entering the iscsi stats command without any options displays only the combined IPv4 and IPv6 statistics.
system1> iscsi stats -a
iSCSI stats(total)
iSCSI PDUs Received
SCSI-Cmd:  1465619 | Nop-Out:   4 | SCSI TaskMgtCmd: 0
LoginReq:        6 | LogoutReq: 1 | Text Req:        1
DataOut:         0 | SNACK:     0 | Unknown:         0
Total:     1465631
iSCSI PDUs Transmitted
SCSI-Rsp:   733684 | Nop-In:    4 | SCSI TaskMgtRsp: 0
LoginRsp:        6 | LogoutRsp: 1 | TextRsp:         1
Data_In:    790518 | R2T:       0 | Asyncmsg:        0
Reject:          0
Total:     1524214
iSCSI CDBs
DataIn Blocks:  5855367 | DataOut Blocks: 0
Error Status:         1 | Success Status: 1465618
Total CDBs:     1465619
iSCSI ERRORS
Failed Logins:  0 | Failed TaskMgt: 0
Failed Logouts: 0 | Failed TextCmd: 0
Protocol: 0
Digest: 0
PDU discards (outside CmdSN window): 0
PDU discards (invalid header): 0
Total: 0

iSCSI Stats(ipv4)
iSCSI PDUs Received
SCSI-Cmd:   732789 | Nop-Out:   1 | SCSI TaskMgtCmd: 0
LoginReq:        2 | LogoutReq: 0 | Text Req:        0
DataOut:         0 | SNACK:     0 | Unknown:         0
Total:      732792
iSCSI PDUs Transmitted
SCSI-Rsp:   366488 | Nop-In:    1 | SCSI TaskMgtRsp: 0
LoginRsp:        2 | LogoutRsp: 0 | TextRsp:         0
Data_In:    395558 | R2T:       0 | Asyncmsg:        0
Reject:          0
Total:      762049
iSCSI CDBs
DataIn Blocks:  2930408 | DataOut Blocks: 0
Error Status:         0 | Success Status: 732789
Total CDBs:      732789
iSCSI ERRORS
Failed Logins:  0 | Failed TaskMgt: 0
Failed Logouts: 0 | Failed TextCmd: 0
Protocol: 0
Digest: 0
PDU discards (outside CmdSN window): 0
PDU discards (invalid header): 0
Total: 0
iSCSI Stats(ipv6)
iSCSI PDUs Received
SCSI-Cmd:   732830 | Nop-Out:   3 | SCSI TaskMgtCmd: 0
LoginReq:        4 | LogoutReq: 1 | Text Req:        1
DataOut:         0 | SNACK:     0 | Unknown:         0
Total:      732839
iSCSI PDUs Transmitted
SCSI-Rsp:   367196 | Nop-In:    3 | SCSI TaskMgtRsp: 0
LoginRsp:        4 | LogoutRsp: 1 | TextRsp:         1
Data_In:    394960 | R2T:       0 | Asyncmsg:        0
Reject:          0
Total:      762165
iSCSI CDBs
DataIn Blocks:  2924959 | DataOut Blocks: 0
Error Status:         1 | Success Status: 732829
Total CDBs:      732830
iSCSI ERRORS
Failed Logins:  0 | Failed TaskMgt: 0
Failed Logouts: 0 | Failed TextCmd: 0
Protocol: 0
Digest: 0
PDU discards (outside CmdSN window): 0
PDU discards (invalid header): 0
Total: 0
Field            Description
SCSI-Cmd         SCSI-level command descriptor blocks.
LoginReq         Login request PDUs sent by initiators during session setup.
DataOut          PDUs containing write operation data that did not fit within the PDU of the SCSI command. The PDU maximum size is set by the storage system during the operation negotiation phase of the iSCSI login sequence.
Nop-Out          A message sent by initiators to check whether the target is still responding.
LogoutReq        A request sent by initiators to terminate active iSCSI sessions or to terminate one connection of a multi-connection session.
SNACK            A PDU sent by the initiator to acknowledge receipt of a set of DATA_IN PDUs or to request retransmission of specific PDUs.
SCSI TaskMgtCmd  SCSI-level task management messages, such as ABORT_TASK and RESET_LUN.
Text-Req         Text request PDUs that initiators send to request target information and renegotiate session parameters.
iSCSI PDUs transmitted
This section lists the iSCSI PDUs sent by the storage system and includes the following statistics.

Field            Description
SCSI-Rsp         SCSI response messages.
LoginRsp         Responses to login requests during session setup.
DataIn           Messages containing data requested by SCSI read operations.
Nop-In           Responses to initiator Nop-Out messages.
Logout-Rsp       Responses to Logout-Req messages.
R2T              Ready to transfer messages indicating that the target is ready to receive data during a SCSI write operation.
SCSI TaskMgtRsp  Responses to task management requests.
TextRsp          Responses to Text-Req messages.
Asyncmsg         Messages the target sends to asynchronously notify the initiator of an event, such as the termination of a session.
Reject           Messages the target sends to report an error condition to the initiator, for example:
Data Digest Error (checksum failed)
Target does not support command sent by the initiator
Initiator sent a command PDU with an invalid PDU field
iSCSI CDBs
This section lists statistics associated with the handling of iSCSI Command Descriptor Blocks, including the number of blocks of data transferred and the number of SCSI-level errors and successful completions.

iSCSI Errors
This section lists login failures and other SCSI protocol errors.
An iSCSI session can have zero or more connections; typically, a session has at least one connection. Connections can be added and removed during the life of the iSCSI session.

You can display information about all sessions or connections, or only specified sessions or connections. The iscsi session show command displays session information, and the iscsi connection show command displays connection information. The session information is also available using FilerView.

The command line options for these commands control the type of information displayed. For troubleshooting performance problems, the session parameters (especially HeaderDigest and DataDigest) are particularly important. The -v option displays all available information. In FilerView, the iSCSI Session Information page has buttons that control which information is displayed.
Step
system1> iscsi session show -t
Session 2
  Initiator Information
    Initiator Name: iqn.1991-05.com.microsoft:legbreak
    ISID: 40:00:01:37:00:00
  Connection Information
    Connection 1
      Remote Endpoint: fe80::211:43ff:fece:ccce:1135
      Local Endpoint: fe80::2a0:98ff:fe00:fd81:3260
      Local Interface: e0
      TCP recv window size: 132480
    Connection 2
      Remote Endpoint: 10.60.155.31:2280
      Local Endpoint: 10.60.155.105:3260
      Local Interface: e0
      TCP recv window size: 131400
Each connection is identified by the session identifier and the connection identifier.

Example
The following example shows the -v option.
system1> iscsi connection show -v
No new connections
Session connections
Connection 2/1:
  State: Full_Feature_Phase
  Remote Endpoint: fe80::211:43ff:fece:ccce:1135
  Local Endpoint: fe80::2a0:98ff:fe00:fd81:3260
  Local Interface: e0
Connection 2/2:
  State: Full_Feature_Phase
  Remote Endpoint: 10.60.155.31:2280
  Local Endpoint: 10.60.155.105:3260
  Local Interface: e0
Next topics
Simple HA pairs with iSCSI on page 120
Complex HA pairs with iSCSI on page 122
Storage System B has the same Ethernet card in slot 9. Interface e9a is assigned 10.1.2.6, and e9b is assigned 10.1.3.6. Again, the two interfaces are in a user-defined target portal group with tag value 2.

In the HA pair, interface e9a on Storage System A is the partner of e9a on Storage System B. Likewise, e9b on System A is the partner of e9b on System B. For more information on configuring interfaces for an HA pair, see the Data ONTAP 8.0 7-Mode High-Availability Configuration Guide.

Now assume that Storage System B fails and its iSCSI sessions are dropped. Storage System A assumes the identity of Storage System B. Interface e9a now has two IP addresses: its original address of 10.1.2.5 and the 10.1.2.6 address from Storage System B. The iSCSI host that was using Storage System B reestablishes its iSCSI session with the target on Storage System A.

If the e9a interface on Storage System A was in a target portal group with a different tag value than the interface on Storage System B, the host might not be able to continue its iSCSI session from Storage System B. This behavior varies depending on the specific host and initiator. To ensure correct CFO behavior, both the IP address and the tag value must be the same as on the failed system. And because the target portal group tag is a property of the interface and not the IP address, the surviving interface cannot change the tag value during a CFO.
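The takeover requirement above, that partner interfaces carry identical target portal group tags, lends itself to a simple consistency check. This Python sketch is illustrative only; it is not a NetApp tool:

```python
def mismatched_tpgroup_tags(primary_tags: dict[str, int],
                            partner_tags: dict[str, int]) -> list[str]:
    """Return the interfaces whose target portal group tag differs
    between the two partners.

    Any interface reported here risks breaking iSCSI session
    reestablishment after a takeover, because the surviving
    interface cannot change its tag during a CFO.
    """
    return [iface for iface, tag in primary_tags.items()
            if partner_tags.get(iface) != tag]
```

An empty result means the pair is consistent; a non-empty result names the interfaces to fix before relying on failover.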
Related information
LUNs not visible on the host on page 122
System cannot register with iSNS server on page 124
No multi-connection session on page 124
Sessions constantly connecting and disconnecting during takeover on page 124
Resolving iSCSI error messages on the storage system on page 125
System requirements
Verify that the components of your configuration are qualified. Verify that you have the correct host operating system (OS) service pack level, initiator version, Data ONTAP version, and other system requirements. You can check the most up-to-date system requirements in the Interoperability Matrix at http://now.netapp.com/NOW/products/interoperability/.
What to do
If you are using jumbo frames in your configuration, ensure that jumbo frames are enabled on all devices in the network path: the host Ethernet NIC, the storage system, and any switches.
Verify that the iSCSI service is licensed and started on the storage system.
Verify that the initiator is logged in to the storage system. If the command output shows no initiators are logged in, check the initiator configuration on the host.
Verify that the storage system is configured as a target of the initiator.
Verify that you are using the correct initiator node names in the igroup configuration. For the storage system, see Managing igroups on page 94. On the host, use the initiator tools and commands to display the initiator node name. The initiator node names configured in the igroup and on the host must match.
LUN mappings
Verify that the LUNs are mapped to an igroup. On the storage system console, use one of the following commands:

lun show -m
Displays all LUNs and the igroups to which they are mapped.

lun show -g igroup-name
Displays the LUNs mapped to a specific igroup.

Or, using FilerView, click LUNs > Manage to display all LUNs and the igroups to which they are mapped.
Related concepts
igroup management on page 73
About LUNs, igroups, and LUN maps on page 47
Related tasks
Verifying that the iSCSI service is running on page 87
Displaying initiators connected to the storage system on page 98
No multi-connection session
All of the connections in a multi-connection iSCSI session must go to interfaces on the storage system that are in the same target portal group. If an initiator is unable to establish a multi-connection session, check the portal group assignments of the initiator.

If an initiator can establish a multi-connection session, but not during a cluster failover (CFO), the target portal group assignment on the partner storage system is probably different from the target portal group assignment on the primary storage system.
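The same-group rule above can be expressed as a small predicate. A hypothetical sketch, with names invented for illustration:

```python
def can_establish_session(connection_interfaces: list[str],
                          tpgroup_of: dict[str, str]) -> bool:
    """Return True if a multi-connection session is permitted.

    Every connection's interface must resolve to the same target
    portal group, per the rule above; tpgroup_of maps an interface
    name to its group name.
    """
    groups = {tpgroup_of[iface] for iface in connection_interfaces}
    return len(groups) == 1
```

A session spanning two interfaces in one user-defined group is allowed; mixing in an interface from a different group is not.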
Related concepts
Target portal group management on page 109
Guidelines for using iSCSI with HA pairs on page 120
What to do Use the iscsi command or FilerView LUNs > iSCSI > Manage Interfaces page to enable the iSCSI service on the interface. For example:
iscsi interface enable e9b
Check CHAP settings:
Inbound credentials on the storage system must match outbound credentials on the initiator.
Outbound credentials on the storage system must match inbound credentials on the initiator.
You cannot use the same user name and password for inbound and outbound settings on the storage system.
Message
ifconfig: interface cannot be configured: Address does not match any partner interface.
or
Cluster monitor: takeover during ifconfig_2 failed; takeover continuing...
Explanation
A single-mode VIF can be a partner interface to a standalone, physical interface on a cluster partner. However, the partner statement in the ifconfig command must use the name of the partner interface, not the partner's IP address. If the IP address of the partner's physical interface is used, the interface will not be successfully taken over by the storage system's VIF interface.
What to do
1. Add the partner's interface using the ifconfig command on each system in the HA pair. For example:
system1> ifconfig vif0 partner e0a
system2> ifconfig e0a partner vif0
2. Modify the /etc/rc file on both systems to contain the same interface information.
Related concepts
FC SAN management
This section contains critical information required to successfully manage your FC SAN.
Next topics
How to manage FC with HA pairs on page 127 How to use port sets to make LUNs available on specific FC target ports on page 130 FC service management on page 135 Managing systems with onboard Fibre Channel adapters on page 145
Next topics
How Data ONTAP avoids igroup mapping conflicts during cluster failover on page 128 Multipathing requirements for cluster failover on page 130
How Data ONTAP avoids igroup mapping conflicts during cluster failover Each node in the HA pair shares its partner's igroup and LUN mapping information. Data ONTAP uses the cluster interconnect to share igroup and LUN mapping information and also provides the mechanisms for avoiding mapping conflicts.
Next topics
Reserved LUN ID ranges on page 129 Bringing LUNs online on page 129 When to override possible mapping conflicts on page 129
Related tasks
Bringing LUNs online
The lun online command fails when the cluster interconnect is down, to avoid possible LUN mapping conflicts.
When to override possible mapping conflicts
When the cluster interconnect is down, Data ONTAP cannot check for LUN mapping or igroup ostype conflicts. The following commands fail unless you use the -f option to force them. The -f option is available with these commands only when the cluster interconnect is down.
lun map
lun online
igroup add
igroup set
You might want to override possible mapping conflicts in disaster recovery situations or situations in which the partner in the HA pair cannot be reached and you want to regain access to LUNs. For example, the following command maps a LUN to an AIX igroup and assigns a LUN ID of 5, regardless of any possible mapping conflicts:
lun map -f /vol/vol2/qtree1/lun3 aix_host5_group2 5
Multipathing requirements for cluster failover Multipathing software is required on the host so that SCSI commands fail over to alternate paths when links go down due to switch failures or cluster failovers. In the event of a failover, none of the adapters on the takeover storage system assume the WWPNs of the failed storage system.
How to use port sets to make LUNs available on specific FC target ports
A port set consists of a group of FC target ports. You bind a port set to an igroup to make the LUNs available only on a subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the storage system's FC target ports. The igroup controls which initiators the LUNs are exported to; the port set limits the target ports on which those initiators have access. You use port sets only for LUNs that are accessed by FC hosts. You cannot use port sets for LUNs accessed by iSCSI hosts.
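The access rule above can be sketched as a small model: an igroup bound to a port set sees only those ports, and an unbound igroup sees every FC target port. The port names are illustrative.

```python
# Hypothetical model of how a port set restricts LUN visibility.
# Port names below are invented for illustration.

ALL_TARGET_PORTS = {"4a", "4b", "5a", "5b"}

def visible_ports(igroup_portset):
    """Return the target ports on which an igroup's mapped LUNs are available."""
    # An unbound igroup (None) sees all of the storage system's FC target ports.
    return set(igroup_portset) if igroup_portset else set(ALL_TARGET_PORTS)

print(sorted(visible_ports({"4a", "4b"})))  # bound: only the ports in the port set
print(sorted(visible_ports(None)))          # unbound: all FC target ports
```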
Next topics
How port sets work in HA pairs on page 131 How upgrades affect port sets and igroups on page 131 How port sets affect igroup throttles on page 131 Creating port sets on page 132 Binding igroups to port sets on page 132 Unbinding igroups from port sets on page 133 Adding ports to port sets on page 133 Removing ports from port sets on page 134 Destroying port sets on page 134 Displaying the ports in a port set on page 135 Displaying igroup-to-port-set bindings on page 135
It is also important to check throttle reserves before you unbind a port set from an igroup, because unbinding makes the ports visible to all igroups that are mapped to LUNs. The throttle reserve settings of multiple igroups might exceed the available resources on a port.
For HA pairs, when you add local ports to a port set, also add the partner system's corresponding target ports to the same port set. For example, if you have the local system's target port 4a in the port set, make sure to include the partner system's port 4a in the port set as well. This ensures that takeover and giveback occur without connectivity problems.
Step
characters.
port is the target FCP port. You can specify a list of ports. If you do not specify any ports, then you create an empty port set. You can add as many as 18 target FCP ports.
and the system is in an HA pair, the port from both the local and partner storage system is added to the port set. filername:slotletter adds only a specific port on a storage system, for example, SystemA:4b.
If you do not bind an igroup to a port set, and you map a LUN to the igroup, then the initiators in the igroup can access the LUN on any port on the storage system.
If you unbind or unmap an igroup from a port set, then all hosts in the igroup can access LUNs on all target ports.
Step
Note that you cannot remove the last port in the port set if the port set is bound to an igroup. To remove the last port, first unbind the port set from the igroup, then remove the port.
Step
characters.
port is the target FCP port. You can specify a list of ports. If you do not specify any ports, then you create an empty port set. You can add as many as 18 target FCP ports.
characters.
port is the target FCP port. You can specify a list of ports. If you do not specify any ports, then you create an empty port set. You can add as many as 18 target FCP ports.
and the system is in an HA pair, the port from both the local and partner storage system is added to the port set. filername:slotletter adds only a specific port on a storage system, for example, SystemA:4b.
1. Unbind the port set from any igroups by entering the following command:
igroup unbind igroup_name portset_name
If you use the -f option, you destroy the port set even if it is still bound to an igroup. If you do not use the -f option and the port set is still bound to an igroup, the portset destroy command fails.
Example
portset destroy portset1 portset2 portset3
If you do not supply portset_name, all port sets and their respective ports are listed. If you supply portset_name, only the ports in the port set are listed.
Example
portset show portset1
FC service management
Use the fcp commands for most of the tasks involved in managing the Fibre Channel service and the target and initiator adapters. Enter fcp help at the command line to display the list of available commands.
Next topics
Verifying that the FC service is licensed on page 136 Licensing the FC service on page 136 Disabling the FC license on page 137 Starting and stopping the FC service on page 137 Taking target expansion adapters offline and bringing them online on page 138 Changing the adapter speed on page 138 How WWPN assignments work with FC target expansion adapters on page 140 Changing the system's WWNN on page 143 WWPN aliases on page 143
A list of all available services is displayed; services that are enabled show the license code, and services that are not enabled are indicated as not licensed.
After you license the FC service on a FAS270, you must reboot. When the storage system boots, the port labeled Fibre Channel 2 is in SAN target mode. When you enter commands that display adapter statistics, this port is slot 0, so the virtual ports are shown as 0c_0, 0c_1, and 0c_2.
Related concepts
Stopping the FC service disables all FC ports on the system, which has important ramifications for HA pairs during cluster failover. For example, if you stop the FC service on System1 and System2 then fails over, System1 is unable to service System2's LUNs. Conversely, if System2 fails over and you stop the FC service on System2 and start the FC service on System1, System1 can successfully service System2's LUNs. Use the partner fcp stop command to disable the FC ports of the failed system during takeover, and use the partner fcp start command to re-enable the FC service after the giveback is complete.
Step
Example
fcp start
The FC service is enabled on all FC ports on the system. If you enter fcp stop, the FC service is disabled on all FC ports on the system.
The target adapter 4a is offline. If you enter fcp config 4a up, the adapter is brought online.
Steps
Adapter 2a is taken down, and the FC service might be temporarily interrupted on the adapter.
2. Enter the following command:
fcp config adapter speed [auto|1|2|4|8]
Example
system1> fcp config 2a speed 2
Although the fcp config command displays the current adapter speed setting, it does not necessarily display the actual speed at which the adapter is running. For example, if the speed is set to auto, the actual speed might be 1 Gb, 2 Gb, 4 Gb, and so on. To view the actual speed at which the adapter is running, use the fcp show adapter -v command and examine the Data Link Rate value, as in the following example:
system1> fcp show adapter -v
Slot:                    5a
Description:             Fibre Channel Target Adapter 5a (Dual-channel, QLogic 2312 (2352) rev. 2)
Status:                  ONLINE
Host Port Address:       010200
Firmware Rev:            4.0.18
PCI Bus Width:           64-bit
PCI Clock Speed:         33 MHz
FC Nodename:             50:0a:09:80:87:69:27:ff (500a0980876927ff)
FC Portname:             50:0a:09:83:87:69:27:ff (500a0983876927ff)
Cacheline Size:          16
FC Packet Size:          2048
SRAM Parity:             Yes
External GBIC:           No
Data Link Rate:          4 GBit
Adapter Type:            Local
Fabric Established:      Yes
Connection Established:  PTP
Mediatype:               auto
Partner Adapter:         None
Standby:                 No
Target Port ID:          0x1
Slot:                    5b
Description:             Fibre Channel Target Adapter 5b (Dual-channel, QLogic 2312 (2352) rev. 2)
Status:                  ONLINE
Host Port Address:       011200
Firmware Rev:            4.0.18
PCI Bus Width:           64-bit
PCI Clock Speed:         33 MHz
FC Nodename:             50:0a:09:80:87:69:27:ff (500a0980876927ff)
FC Portname:             50:0a:09:84:87:69:27:ff (500a0984876927ff)
Cacheline Size:          16
FC Packet Size:          2048
SRAM Parity:             Yes
External GBIC:           No
Data Link Rate:          4 GBit
Adapter Type:            Local
Fabric Established:      Yes
Connection Established:  PTP
Mediatype:               auto
Partner Adapter:         None
Standby:                 No
Target Port ID:          0x2
Swapping or upgrading a head
As long as the existing root volume is used in the head swap or upgrade, the same port-to-WWPN mapping applies. For example, port 0a on the replacement head will have the same WWPN as the original head. If the new head has different adapter ports, the new ports are assigned new WWPNs.
Adding new FC target expansion adapters If you add a new adapter, the new ports are assigned new WWPNs. If you replace an existing adapter, the existing WWPNs are assigned to the replacement adapter. For example, the following table shows the WWPN assignments if you replace a dual-port adapter with a quad-port adapter.
Original configuration          New configuration               WWPN assignment
2a - 50:0a:09:81:96:97:c3:ac    2a - 50:0a:09:81:96:97:c3:ac    No change
2b - 50:0a:09:83:96:97:c3:ac    2b - 50:0a:09:83:96:97:c3:ac    No change
                                2c - 50:0a:09:82:96:97:c3:ac    New
                                2d - 50:0a:09:84:96:97:c3:ac    New
Moving a target expansion adapter to a different slot
If you move an adapter to a new slot, the adapter is assigned new WWPNs.
Original configuration          New configuration               WWPN assignment
2a - 50:0a:09:81:96:97:c3:ac    4a - 50:0a:09:85:96:97:c3:ac    New
2b - 50:0a:09:83:96:97:c3:ac    4b - 50:0a:09:86:96:97:c3:ac    New
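The behavior in the two tables can be summarized as a short sketch: a port that keeps its slot/port name keeps its WWPN, while a new or moved port gets a new one. Here, new_wwpn() is an invented stand-in for Data ONTAP's internal WWPN assignment, and the port names are illustrative.

```python
# Hypothetical sketch of WWPN assignment when an adapter is replaced or moved.

def assign_wwpns(old_map, new_ports, new_wwpn):
    """old_map: {port: wwpn} before the change; new_ports: port names after it."""
    result = {}
    for port in new_ports:
        if port in old_map:
            result[port] = old_map[port]   # existing port name: WWPN unchanged
        else:
            result[port] = new_wwpn(port)  # new or renamed port: new WWPN
    return result

old = {"2a": "50:0a:09:81:96:97:c3:ac", "2b": "50:0a:09:83:96:97:c3:ac"}
fake_new = lambda port: f"new-wwpn-for-{port}"  # placeholder generator

# Replacing a dual-port adapter with a quad-port adapter in the same slot:
# 2a and 2b keep their WWPNs; 2c and 2d get new ones.
print(assign_wwpns(old, ["2a", "2b", "2c", "2d"], fake_new))
# Moving the adapter to slot 4: both ports are renamed, so both get new WWPNs.
print(assign_wwpns(old, ["4a", "4b"], fake_new))
```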
Related tasks
Head swap: after performing a head swap, you might not be able to place the target adapters in their original slots, resulting in different WWPN assignments. In this situation, it is important to change the WWPN assignments because many of the hosts bind to these WWPNs. In addition, the fabric might be zoned by WWPN.
Fabric reorganization: you might want to reorganize the fabric connections without having to physically move the target adapters or modify your cabling.
In some cases, you will need to set the new WWPN on a single adapter. In other cases, it will be easier to swap the WWPNs between two adapters, rather than individually set the WWPNs on both adapters.
Steps
If you do not use the -v option, all currently used WWPNs and their associated adapters are displayed. If you use the -v option, all other valid WWPNs that are not being used are also shown. 3. Set the new WWPN for a single adapter or swap WWPNs between two adapters.
Note: If the WWPN is changed, initiators might fail to reconnect to this adapter. The -f option overrides the warning message that is displayed when you change WWPNs.
If you want to set the WWPN on a single adapter, enter the following command:
fcp portname set [-f] adapter wwpn
If you want to swap WWPNs between two adapters, enter the following command:
fcp portname swap [-f] adapter1 adapter2
Example
fcp portname set -f 1b 50:0a:09:85:87:09:68:ad
How WWPN assignments work with FC target expansion adapters on page 140
Use -f to force the system to use an invalid nodename. You should not, under normal circumstances, use an invalid nodename.
Example
fcp nodename 50:0a:09:80:82:02:8d:ff
WWPN aliases
A WWPN is a unique, 64-bit identifier displayed as a 16-character hexadecimal value in Data ONTAP. However, SAN Administrators may find it easier to identify FC ports using an alias instead, especially in larger SANs. You can use the wwpn-alias sub-command to create, remove, and display WWPN aliases.
Next topics
Creating WWPN aliases on page 144 Removing WWPN aliases on page 144 Displaying WWPN alias information on page 145
Creating WWPN aliases You use the fcp wwpn-alias set command to create a new WWPN alias. You can create multiple aliases for a WWPN, but you cannot use the same alias for multiple WWPNs. The alias can consist of up to 32 characters and can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen ("-"), underscore ("_"), left brace ("{"), right brace ("}"), and period (".").
Step
WWPN.
Example
fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f
Example
fcp wwpn-alias set -f my_alias_1 11:11:00:00:c9:30:80:2e
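The alias naming rules stated above (up to 32 characters, drawn from A-Z, a-z, 0-9, hyphen, underscore, braces, and period) can be checked with a short validation sketch. The function name and sample aliases are hypothetical.

```python
import re

# Hypothetical validation of the WWPN alias rules described in this section.
ALIAS_RE = re.compile(r"^[A-Za-z0-9_{}.\-]{1,32}$")

def valid_wwpn_alias(alias):
    """Return True if the alias satisfies the documented character and length rules."""
    return bool(ALIAS_RE.match(alias))

print(valid_wwpn_alias("my_alias_1"))       # True
print(valid_wwpn_alias("array{1}.port-a"))  # True: braces, period, hyphen are allowed
print(valid_wwpn_alias("bad alias!"))       # False: space and "!" are not allowed
print(valid_wwpn_alias("x" * 33))           # False: longer than 32 characters
```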
Removing WWPN aliases You use the fcp wwpn-alias remove command to remove an alias for a WWPN.
Step
Displaying WWPN alias information You use the fcp wwpn-alias show command to display the aliases associated with a WWPN or the WWPN associated with an alias.
Step
Note: You can also use the igroup show, igroup create, igroup add, igroup remove, and fcp show initiator commands to display WWPN aliases.
Configuring onboard adapters for target mode on page 146 Configuring onboard adapters for initiator mode on page 148
Reconfiguring onboard FC adapters on page 149 Configuring onboard adapters on the FAS270 for target mode on page 150 Configuring onboard adapters on the FAS270 for initiator mode on page 151 Commands for displaying adapter information on page 152
Related information
Ensure that you have licensed the FCP service on the system.
About this task
If you are installing target expansion adapters, or if you exceed the allowed number of adapter ports, you must set the onboard adapters to unconfigured before installing the expansion adapters.
Note: For detailed information about the number of target adapters supported on each hardware platform, see the iSCSI and Fibre Channel Configuration Guide.
Steps
1. If you have already connected the port to a switch or fabric, take it offline by entering the following command:
fcp config adapter down
adapter is the port number. You can specify more than one port.
Example
fcp config 0c 0d down
2. Set the onboard ports to operate in target mode by entering the following command:
fcadmin config -t target adapter...
adapter is the port number. You can specify more than one port.
Example
fcadmin config -t target 0c 0d
Ports 0c and 0d are set to target mode.
3. Run the following command to see the change in state for the ports:
fcadmin config
Example
fcadmin config
                   Local
Adapter  Type      State         Status
---------------------------------------------------
0a       initiator CONFIGURED    online
0b       initiator CONFIGURED    online
0c       target    PENDING       online
0d       target    PENDING       online
Note: The available Local State values are CONFIGURED, PENDING, and UNCONFIGURED. Refer to the fcadmin man page for detailed descriptions of each value.
Ports 0c and 0d are now in the PENDING state.
4. Reboot each system in the HA pair by entering the following command:
reboot
6. Verify that the FC ports are online and configured in the correct state for your configuration by entering the following command:
fcadmin config
Example
fcadmin config
                   Local
Adapter  Type      State         Status
---------------------------------------------------
0a       initiator CONFIGURED    online
0b       initiator CONFIGURED    online
0c       target    CONFIGURED    online
0d       target    CONFIGURED    online
Licensing the FC service on page 136 Reconfiguring onboard FC adapters on page 149
Related information
1. If you have already connected the port to a switch or fabric, take it offline by entering the following command:
fcp config adapter down
adapter is the port number. You can specify more than one port.
Example
fcp config 0c 0d down
2. Set the onboard ports to operate in initiator mode by entering the following command:
fcadmin config -t initiator adapter
adapter is the port number. You can specify more than one port.
Example
fcadmin config -t initiator 0c 0d
Ports 0c and 0d are set to initiator mode.
3. Reboot each system in the HA pair by entering the following command:
reboot
4. Verify that the FC ports are online and configured in the correct state for your configuration by entering the following command:
fcadmin config
Example
fcadmin config
                   Local
Adapter  Type      State         Status
---------------------------------------------------
0a       initiator CONFIGURED    online
0b       initiator CONFIGURED    online
Note: The available Local State values are CONFIGURED, PENDING, and UNCONFIGURED. Refer to the fcadmin man page for detailed descriptions of each value.
You must reconfigure the onboard adapters under the following circumstances:
You are upgrading from a 2-Gb onboard adapter to a 4-Gb target expansion adapter. Because you cannot mix 2-Gb and 4-Gb adapters on the same system, or on two systems in an HA pair, you must set the onboard adapters to unconfigured before installing the target expansion adapter.
You have exceeded 16 target adapters, the maximum number of allowed adapters, on a FAS60xx controller.
Steps
The FCP service is stopped and all target adapters are taken offline. 2. Set the onboard adapters to unconfigured by entering the following command:
fcadmin config -t unconfig ports
Example
fcadmin config -t unconfig 0b 0d
The onboard adapters are unconfigured.
3. Shut down the storage system.
4. If you are installing a 4-Gb expansion adapter, install the adapter according to the instructions provided with the product.
5. Power on the system.
After you cable your configuration and enable the HA pair, configure FC port C for target mode.
Steps
1. Verify that the FC port C is in target mode by entering the following command:
sysconfig
Example
sysconfig
Release R6.5xN_031130_2230: Mon Dec 1 00:07:33 PST 2003
System ID: 0084166059
System Serial Number: 123456
slot 0: System Board
        Processors:         2
        Processor revision: B2
        Processor type:     1250
        Memory Size:        1022 MB
slot 0: FC Host Adapter 0b
        14 Disks: 952.0GB
        1 shelf with EFH
slot 0: FC Host Target Adapter 0c
slot 0: SB1250-Gigabit Dual Ethernet Controller
        e0a MAC Address: 00:a0:98:01:29:cd (100tx-fd-up)
        e0b MAC Address: 00:a0:98:01:29:ce (auto-unknown-cfg_down)
slot 0: ATA/IDE Adapter 0a (0x00000000000001f0)
        0a.0 245MB
Note: The FC port C is identified as FC Host Target Adapter 0c.
3. After the reboot, verify that port 0c is in initiator mode by entering the following command:
sysconfig
Example
sysconfig
RN_030824_2300: Mon Aug 25 00:07:33 PST 2003
System ID: 0084166059
System Serial Number: 123456
slot 0: System Board
        Processors:         2
        Processor revision: B2
        Processor type:     1250
        Memory Size:        1022 MB
slot 0: FC Host Adapter 0b
        14 Disks: 952.0GB
        1 shelf with EFH
slot 0: Fibre Channel Initiator Host Adapter 0c
slot 0: SB1250-Gigabit Dual Ethernet Controller
        e0a MAC Address: 00:a0:98:01:29:cd (100tx-fd-up)
        e0b MAC Address: 00:a0:98:01:29:ce (auto-unknown-cfg_down)
slot 0: ATA/IDE Adapter 0a (0x00000000000001f0)
        0a.0 245MB
Note: The FC port C is identified as FC Host Initiator Adapter 0c.
If you want to display...
Information for all initiator adapters in the system, including firmware level, PCI bus width and clock speed, node name, cacheline size, FC packet size, link data rate, SRAM parity, and various states
Use: storage show adapter
All adapter (HBAs, NICs, and switch ports) configuration and status information
Use: sysconfig [-v] [adapter]
adapter is a numerical value only. -v displays additional information about all adapters.
Disks, disk loops, and options configuration information that affects coredumps and takeover
Use: sysconfig -c
FCP traffic information
Use: sysstat -f
How long FCP has been running
Use: uptime
Initiator HBA port address, port name, port name alias, node name, and igroup name connected to target adapters
Use: fcp show initiator [-v] [adapter&portnumber]
-v displays the Fibre Channel host address of the initiator. adapter&portnumber is the slot number with the port number, a or b; for example, 5a.
Target adapters' node name, port name, and link state
Use: fcp show adapter [-v] [adapter&portnumber]
Service statistics
Use: availtime
Information about traffic from the B ports of the partner storage system
Use: sysstat -b
WWNN of the target adapter
Use: fcp nodename
Next topics
Displaying the status of onboard FC adapters on page 153 Displaying information about all adapters on page 154 Displaying brief target adapter information on page 155 Displaying detailed target adapter information on page 156 Displaying the WWNN of a target adapter on page 157 Displaying HBA information on page 158 Displaying target adapter statistics on page 158 Displaying FC traffic information on page 159 Displaying information about FCP traffic from the partner on page 160 Displaying how long the FC service has been running on page 160 Displaying FCP service statistics on page 161
Displaying the status of onboard FC adapters
Use the fcadmin config command to determine the status of the onboard FC adapters. This command also displays other important information, including the configuration status of the adapter and whether it is configured as a target or initiator.
Note: Onboard FC adapters are set to initiator mode by default.
Step
Displaying information about all adapters Use the sysconfig -v command to display system configuration and adapter information for all adapters in the system.
Step
System configuration information and adapter information for each slot that is used is displayed on the screen. Look for Fibre Channel Target Host Adapter to get information about target HBAs.
Note: In the output, in the information about the Dual-channel QLogic HBA, the value 2532 does not specify the model number of the HBA; it refers to the device ID set by QLogic. Also, the output varies according to storage system model. For example, if you have a FAS270, the target port is displayed as follows:
slot 0: Fibre Channel Target Host Adapter 0c
Displaying brief target adapter information Use the fcp config command to display information about target adapters in the system, as well as to quickly detect whether the adapters are active and online. The output of the fcp config command depends on the storage system model.
Step
nodename 50:0a:09:80:86:57:11:22 mediatype ptp partner adapter 7b
Example
The following example shows output for the FAS270. The fcp config command displays the target virtual local and partner ports:
0c: ONLINE [ADAPTER UP] PTP Fabric
    host address 010200
    portname 50:0a:09:83:87:69:27:ff nodename 50:0a:09:80:87:69:27:ff
    mediatype auto partner adapter None speed auto
Example
The following example shows output for the FAS30xx. The fcp config command displays information about the onboard ports connected to the SAN:
0c: ONLINE [ADAPTER UP] PTP Fabric
    host address 010900
    portname 50:0a:09:81:86:f7:a8:42 nodename 50:0a:09:80:86:f7:a8:42
    mediatype ptp partner adapter 0d
0d: ONLINE [ADAPTER UP] PTP Fabric
    host address 010800
    portname 50:0a:09:8a:86:47:a8:32 nodename 50:0a:09:80:86:47:a8:32
    mediatype ptp partner adapter 0c
Displaying detailed target adapter information Use the fcp show adapter command to display the node name, port name, and link state of all target adapters in the system. Notice that the port name and node name are displayed with and without the separating colons. For Solaris hosts, you use the WWPN without separating colons when you map adapter port names (or these target WWPNs) to the host.
Step
Slot:          7b
Description:   Fibre Channel Target Adapter 7b (Dual-channel, QLogic 2312 (2352) rev. 2)
Adapter Type:  Partner
Status:        ONLINE
FC Nodename:   50:0a:09:80:86:57:11:22 (500a098086571122)
FC Portname:   50:0a:09:8c:86:57:11:22 (500a098c86571122)
Standby:       No
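As noted above, each port name is displayed both with and without separating colons, and the colon-free form is the one used when mapping target WWPNs on Solaris hosts. Producing it is a one-line transformation; the helper name below is hypothetical.

```python
# Minimal sketch: convert a colon-separated WWPN to the colon-free form
# that Data ONTAP also displays in parentheses.

def wwpn_without_colons(wwpn):
    """Strip the separating colons from a WWPN string."""
    return wwpn.replace(":", "")

print(wwpn_without_colons("50:0a:09:8c:86:57:11:22"))  # 500a098c86571122
```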
Displaying the WWNN of a target adapter Use the fcp nodename command to display the WWNN of a target adapter in the system.
Step
Displaying HBA information HBAs are adapters on the host machine that act as initiators. Use the fcp show initiator command to display the port names, aliases, and igroup names of HBAs connected to target adapters on the storage system.
Step
10:00:00:00:c9:32:74:28   calculon0   calculon
10:00:00:00:c9:2d:60:dc   gaston0     gaston
10:00:00:00:c9:2b:51:1f
Initiators connected on adapter 0b:
None connected.
Displaying target adapter statistics Use the fcp stats command to display important statistics for the target adapters in your system.
Step
r/s: The number of SCSI read operations per second.
w/s: The number of SCSI write operations per second.
o/s: The number of other SCSI operations per second.
ki/s: Kilobytes per second of received traffic.
ko/s: Kilobytes per second of sent traffic.
asvc_t: Average time in milliseconds to process a request.
qlen: The average number of outstanding requests pending.
hba: The HBA slot and port number.
To see additional statistics, enter the fcp stats command with no variables.
Displaying FC traffic information
Use the sysstat -f command to display FC traffic information, such as operations per second and kilobytes per second.
Step
78% 235538 1 0 0 5914 0 0 107562 43830 37396
The following columns provide information about FCP statistics:
CPU: The percentage of the time that one or more CPUs were busy.
FCP: The number of FCP operations per second.
FCP KB/s: The number of kilobytes per second of incoming and outgoing FCP traffic.
Displaying information about FCP traffic from the partner
If you have an HA pair, you might want to obtain information about the amount of traffic coming to the system from its partner.
Step
The following columns display information about partner traffic:
Partner: The number of partner operations per second.
Partner KB/s: The number of kilobytes per second of incoming and outgoing partner traffic.
Related concepts
Displaying FCP service statistics Use the availtime command to display the FCP service statistics.
Step
Commands to display disk space information on page 163 Examples of disk space monitoring using the df command on page 164 How Data ONTAP can automatically provide more free space for full volumes on page 168 Configuring automatic free space preservation for a FlexVol volume on page 169
Related information
snap reclaimable
For more information about the snap commands, see the Data ONTAP 8.0 7-Mode Data Protection Online Backup and Recovery Guide. For more information about the df and aggr show_space commands, see the appropriate man page.
Monitoring disk space on volumes with LUNs that do not use Snapshot copies on page 164 Monitoring disk space on volumes with LUNs that use Snapshot copies on page 166
Monitoring disk space on volumes with LUNs that do not use Snapshot copies
This example illustrates how to monitor disk space on a volume when you create a LUN without using Snapshot copies.
About this task
For this example, assume that you require less capacity than the minimum recommended seven-disk volume provides. For simplicity, assume that the LUN requires only 3 GB of disk space. For a traditional volume, the volume size must be approximately 3 GB plus 10 percent.
Steps
1. From the storage system, create a new traditional volume named volspace that has approximately 67 GB, and observe the effect on disk space by entering the following commands:
vol create volspace aggr1 67g
df -r /vol/volspace
The following sample output is displayed. There is a snap reserve of 20 percent on the volume, even though the volume will be used for LUNs, because snap reserve is set to 20 percent by default.
Filesystem               kbytes     used   avail      reserved   Mounted on
/vol/volspace/           50119928   1440   50118488   0          /vol/volspace/
/vol/volspace/.snapshot  12529980   0      12529980   0          /vol/volspace/.snapshot
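A quick arithmetic check of the sample df output above (sizes in KB, per the kbytes column) shows that the default snap reserve carves out roughly 20 percent of the volume's total space, leaving the remainder for the active file system.

```python
# Arithmetic check of the sample df output with the default 20 percent snap reserve.
active_kb = 50119928     # /vol/volspace kbytes from the df output
snapshot_kb = 12529980   # /vol/volspace/.snapshot kbytes from the df output

total_kb = active_kb + snapshot_kb
reserve_fraction = snapshot_kb / total_kb

print(total_kb)                    # 62649908 KB in the volume overall
print(round(reserve_fraction, 3))  # 0.2, the default 20 percent snap reserve
```

This also explains the next step in the example: setting snap reserve to 0 returns the snapshot space to the active file system, so /vol/volspace grows to the full 62649908 KB.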
2. Set the percentage of snap reserve space to 0 and observe the effect on disk space by entering the following commands:
snap reserve volspace 0
df -r /vol/volspace
The following sample output is displayed. The amount of available Snapshot copy space becomes zero, and the 20 percent of Snapshot copy space is added to available space for /vol/volspace.
Filesystem               kbytes     used   avail      reserved   Mounted on
/vol/volspace/           62649908   1440   62648468   0          /vol/volspace/
/vol/volspace/.snapshot  0          0      0          0          /vol/volspace/.snapshot
3. Create a LUN named /vol/volspace/lun0 and observe the effect on disk space by entering the following commands:
lun create -s 3g -t aix /vol/volspace/lun0
df -r /vol/volspace
The following sample output is displayed. Three GB of space is used because this is the amount of space specified for the LUN, and space reservation is enabled by default.
Filesystem               kbytes     used      avail      reserved   Mounted on
/vol/volspace/           62649908   3150268   59499640   0          /vol/volspace/
/vol/volspace/.snapshot  0          0         0          0          /vol/volspace/.snapshot
4. Create an igroup named aix_host and map the LUN to it by entering the following commands (assuming that the host node name is iqn.1996-04.aixhost.host1). Depending on your host, you might need to create WWNN persistent bindings. These commands have no effect on disk space.
igroup create -i -t aix aix_host iqn.1996-04.aixhost.host1
lun map /vol/volspace/lun0 aix_host 0
5. From the host, discover the LUN, format it, make the file system available to the host, and write data to the file system. For information about these procedures, refer to your Host Utilities documentation. These commands have no effect on disk space.
6. From the storage system, ensure that creating the file system on the LUN and writing data to it has no effect on space on the storage system by entering the following command:
df -r /vol/volspace
The following sample output is displayed. From the storage system, the amount of space used by the LUN remains 3 GB.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908   3150268    59499640         0  /vol/volspace/
/vol/volspace/.snapshot         0         0           0         0  /vol/volspace/.snapshot
7. Turn off space reservations and see the effect on space by entering the following commands:
lun set reservation /vol/volspace/lun0 disable
df -r /vol/volspace
The following sample output is displayed. The 3 GB of space for the LUN is no longer reserved, so it is not counted as used space; it is now available space. Any other requests to write data to the volume can occupy all of the available space, including the 3 GB that the LUN expects to have. If the available space is used before the LUN is written to, write operations to the LUN fail. To restore the reserved space for the LUN, turn space reservations on.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908       144    62649584         0  /vol/volspace/
/vol/volspace/.snapshot         0         0           0         0  /vol/volspace/.snapshot
Monitoring disk space on volumes with LUNs that use Snapshot copies
This example illustrates how to monitor disk space on a volume when taking Snapshot copies.
About this task
Assume that you start with a new volume, the LUN requires 6 GB of disk space, and fractional overwrite reserve is set to 100 percent. The recommended volume size is approximately 2 × 6 GB plus the rate of change of data.
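The sizing guideline can be written out as simple arithmetic. This is a sketch of the rule stated above (illustrative function name, not a NetApp formula):

```python
def recommended_volume_size_gb(lun_gb: float, change_gb: float,
                               fractional_reserve_pct: int = 100) -> float:
    """Volume size ~= LUN size + overwrite reserve + expected rate of change."""
    overwrite_reserve_gb = lun_gb * fractional_reserve_pct / 100
    return lun_gb + overwrite_reserve_gb + change_gb

# With fractional reserve at 100 percent, a 6-GB LUN with about 1 GB of
# changed data needs roughly 2 * 6 GB + 1 GB = 13 GB.
print(recommended_volume_size_gb(6, 1))
```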
Steps
1. From the storage system, create a new traditional volume named volspace that has approximately 67 GB, and observe the effect on disk space by entering the following commands:
vol create volspace aggr1 67g
df -r /vol/volspace
The following sample output is displayed. There is a snap reserve of 20 percent on the volume, even though the volume will be used for LUNs, because snap reserve is set to 20 percent by default.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace            50119928      1440    50118488         0  /vol/volspace/
/vol/volspace/.snapshot  12529980         0    12529980         0  /vol/volspace/.snapshot
2. Set the percentage of snap reserve space to zero by entering the following command:
snap reserve volspace 0
3. Create a LUN named /vol/volspace/lun0 and observe the effect on disk space by entering the following commands:
lun create -s 6g -t aix /vol/volspace/lun0
df -r /vol/volspace
The following sample output is displayed. Approximately 6 GB of space is taken from available space and is displayed as used space for the LUN:
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908   6300536    56169372         0  /vol/volspace/
/vol/volspace/.snapshot         0         0           0         0  /vol/volspace/.snapshot
4. Create an igroup named aix_host and map the LUN to it by entering the following commands (assuming that the host node name is iqn.1996-04.aixhost.host1). Depending on your host, you might need to create WWNN persistent bindings. These commands have no effect on disk space.
igroup create -i -t aix aix_host iqn.1996-04.aixhost.host1
lun map /vol/volspace/lun0 aix_host 0
5. From the host, discover the LUN, format it, make the file system available to the host, and write data to the file system. For information about these procedures, refer to your Host Utilities documentation. These commands have no effect on disk space.
6. From the host, write data to the file system (the LUN on the storage system). This has no effect on disk space.
7. Ensure that the active file system is in a quiesced or synchronized state.
8. Take a Snapshot copy of the active file system named snap1, write 1 GB of data to it, and observe the effect on disk space by entering the following commands:
snap create volspace snap1 df -r /vol/volspace
The following sample output is displayed. The first Snapshot copy reserves enough space to overwrite every block of data in the active file system, so you see 12 GB of used space, the 6-GB LUN (which has 1 GB of data written to it), and one Snapshot copy. Notice that 6 GB appears in the reserved column to ensure write operations to the LUN do not fail. If you disable space reservation, this space is returned to available space.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908  12601072    49808836   6300536  /vol/volspace/
/vol/volspace/.snapshot         0       180           0         0  /vol/volspace/.snapshot
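The doubling visible in the used column can be modeled directly. This is a sketch under the assumption stated above, that the first Snapshot copy reserves one full overwrite's worth of space for the LUN (illustrative helper, not a Data ONTAP API):

```python
def after_first_snapshot(lun_kb: int,
                         fractional_reserve_pct: int = 100) -> tuple[int, int]:
    """Return (used_kb, reserved_kb) once the first Snapshot copy exists:
    used space is the LUN itself plus its overwrite reserve."""
    reserved_kb = lun_kb * fractional_reserve_pct // 100
    return lun_kb + reserved_kb, reserved_kb

# A 6300536-KB (6-GB) LUN shows 12601072 KB used and 6300536 KB reserved,
# matching the sample output.
print(after_first_snapshot(6300536))
```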
9. From the host, write another 1 GB of data to the LUN. Then, from the storage system, observe the effect on disk space by entering the following commands:
df -r /vol/volspace
The following sample output is displayed. The amount of data stored in the active file system does not change. You just overwrote 1 GB of old data with 1 GB of new data. However, the Snapshot copy requires the old data to be retained. Before the write operation, there was only 1 GB of data, and after the write operation, there was 1 GB of new data and 1 GB of data in a Snapshot copy. Notice that the used space for the Snapshot copy increases by 1 GB, and the available space for the volume decreases by 1 GB.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908  12601072    47758748         0  /vol/volspace/
/vol/volspace/.snapshot         0   1050088           0         0  /vol/volspace/.snapshot
10. Ensure that the active file system is in a quiesced or synchronized state.
11. Take a Snapshot copy of the active file system named snap2 and observe the effect on disk space by entering the following command:
snap create volspace snap2
The following sample output is displayed. Because the first Snapshot copy reserved enough space to overwrite every block, only 44 blocks are used to account for the second Snapshot copy.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908  12601072    47758748   6300536  /vol/volspace/
/vol/volspace/.snapshot         0   1050136           0         0  /vol/volspace/.snapshot
12. From the host, write 2 GB of data to the LUN and observe the effect on disk space by entering the following command:
df -r /vol/volspace
The following sample output is displayed. The second write operation requires the amount of space actually used if it overwrites data in a Snapshot copy.
Filesystem                 kbytes      used       avail  reserved  Mounted on
/vol/volspace/           62649908  12601072     4608427   6300536  /vol/volspace/
/vol/volspace/.snapshot         0   3150371           0         0  /vol/volspace/.snapshot
How Data ONTAP can automatically provide more free space for full volumes
Data ONTAP can automatically make more free space available for a FlexVol volume when that volume is nearly full. You can choose to make the space available by first allowing the volume size to increase, or by first deleting Snapshot copies. You enable this capability for a FlexVol volume by using the vol options command with the try_first option. Data ONTAP can automatically provide more free space for the volume by using one of the following methods:
Increase the size of the volume when it is nearly full. This method is useful if the volume's containing aggregate has enough space to support a larger volume. You can increase the size in increments and set a maximum size for the volume.
Delete Snapshot copies when the volume is nearly full. For example, you can automatically delete Snapshot copies that are not linked to Snapshot copies in cloned volumes or LUNs, or you can define which Snapshot copies you want to delete first: your oldest or newest Snapshot copies. You can also determine when to begin deleting Snapshot copies; for example, when the volume is nearly full or when the volume's Snapshot reserve is nearly full.
You can choose which method (increasing the size of the volume or deleting Snapshot copies) you want Data ONTAP to try first. If the first method does not provide sufficient extra free space to the volume, Data ONTAP will try the other method next.
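The ordering logic can be sketched as follows (illustrative Python, not Data ONTAP source code; the two methods are represented as callables that report whether they freed enough space):

```python
def free_space(try_first: str, grow, delete_snapshots) -> bool:
    """Try the configured method first; fall back to the other method only
    if the first one does not free enough space."""
    if try_first == "volume_grow":
        order = [grow, delete_snapshots]
    else:  # "snap_delete"
        order = [delete_snapshots, grow]
    return any(method() for method in order)

# Example: the containing aggregate is full, so growing the volume fails
# and Snapshot copies are deleted instead.
print(free_space("volume_grow",
                 grow=lambda: False,
                 delete_snapshots=lambda: True))
```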
If the specified FlexVol volume is about to run out of free space and is smaller than its maximum size, and if there is space available in its containing aggregate, its size will increase by the specified increment.
Data ONTAP can provide more free space for the volume either by increasing its size or by deleting Snapshot copies, depending on how you have configured the volume.
Step
If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command. If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.
Data protection methods on page 171 LUN clones on page 173 Deleting busy Snapshot copies on page 182 Restoring a Snapshot copy of a LUN in a volume on page 184 Restoring a single LUN on page 186 Backing up SAN systems to tape on page 187 Using volume copy to copy LUNs on page 190
Method: SnapMirror
Used to: Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs). Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems. These replication targets can be in the same data center, connected through a LAN, or distributed across the globe, connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed-block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.

Method: SnapVault
Used to: Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system. Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.

Method: SnapDrive for Windows or SnapDrive for UNIX
Used to: Manage storage system Snapshot copies directly from a Windows or UNIX host. Manage storage (LUNs) directly from a host. Configure access to storage directly from a host. SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003. SnapDrive for UNIX supports a number of UNIX environments.
Note: For more information about SnapDrive, see the SnapDrive for Windows Installation and Administration Guide or the SnapDrive for UNIX Installation and Administration Guide.

Method: Native tape backup and recovery
Used to: Store and retrieve data on tape.
Note: Data ONTAP supports native tape backup and recovery from local, gigabit Ethernet, and Fibre Channel SAN-attached tape devices. Support for most existing tape drives is included, as well as a method for tape vendors to dynamically add support for new devices. In addition, Data ONTAP supports the Remote Magnetic Tape (RMT) protocol, allowing backup and recovery to any capable system. Backup images are written using a derivative of the BSD dump stream format, allowing full file-system backups as well as nine levels of differential backups.

Method: NDMP
Used to: Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.
Note: NDMP is an open standard for centralized control of enterprise-wide data management. For more information about how NDMP-based topologies can be used by storage systems to protect data, see the Data ONTAP 8.0 7-Mode Data Protection Tape Backup and Recovery Guide.
LUN clones
A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing Snapshot copy. When you clone a LUN, and new data is written to the LUN, the LUN clone still depends on data in the backing Snapshot copy. The clone does not require additional disk space until changes are made to it. You cannot delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy. After the splitting operation, both the backing Snapshot copy and the clone occupy their own space.
Note: Cloning is not NVLOG protected, so if the storage system panics during a clone operation, the operation is restarted from the beginning on a reboot or takeover.
Next topics
Reasons for cloning LUNs on page 174 Differences between FlexClone LUNs and LUN clones on page 174 Cloning LUNs on page 175 LUN clone splits on page 176 Displaying the progress of a clone-splitting operation on page 176 Stopping the clone-splitting process on page 177 Deleting Snapshot copies on page 177 Deleting backing Snapshot copies of deleted LUN clones on page 177
LUN clone: You need to create a Snapshot copy manually before creating a LUN clone, because a LUN clone uses a backing Snapshot copy. A LUN clone is coupled with a Snapshot copy.
FlexClone LUN: A temporary Snapshot copy is created during the cloning operation and is deleted immediately after the cloning operation. You can prevent the Snapshot copy creation by using the -n option of the clone start command. A FlexClone LUN is independent of Snapshot copies, so no splitting is required.
LUN clone: When a LUN clone is split from the backing Snapshot copy, it uses extra storage space. The amount of extra space used depends on the type of clone split.
FlexClone LUN: You can clone a complete LUN or a sub-LUN. To clone a sub-LUN, you should know the block range of the parent entity and clone entity. FlexClone LUNs are best for situations where you need to keep the clone for a long time; no Snapshot copy management is required.
LUN clone: LUN clones are best when you need a clone only for a short time. You need to manage Snapshot copies if you keep the LUN clones for a long time.
For more information about FlexClone LUNs, see the Data ONTAP 8.0 7-Mode Storage Management Guide.
Cloning LUNs
Use LUN clones to create multiple readable, writable copies of a LUN.
Before you begin
Before you can clone a LUN, you must create a Snapshot copy (the backing Snapshot copy) of the LUN you want to clone.
About this task
Note that a space-reserved LUN clone requires as much space as the space-reserved parent LUN. If the clone is not space-reserved, make sure the volume has enough space to accommodate changes to the clone.
Steps
2. Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command:
snap create volume_name snapshot_name
Example
snap create vol1 mysnap
3. Create the LUN clone by entering the following command:
lun clone create clone_lun_path -b parent_lun_path parent_snap
clone_lun_path is the path to the clone you are creating, for example, /vol/vol1/lun0clone.
parent_lun_path is the path to the original LUN.
parent_snap is the name of the Snapshot copy of the original LUN.
Example
lun clone create /vol/vol1/lun0clone -b /vol/vol1/lun0 mysnap
This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you must still delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.
Examples of deleting backing Snapshot copies of deleted LUN clones
Use the snapshot_clone_dependency option to determine whether you can delete the base Snapshot copy without deleting the more recent Snapshot copies after deleting a LUN clone. This option is set to off by default.
Example with snapshot_clone_dependency set to off
The following example illustrates how all newer backing Snapshot copies must be deleted before deleting the base Snapshot copy when a LUN clone is deleted. Set the snapshot_clone_dependency option to off by entering the following command:
vol options volume_name snapshot_clone_dependency off
Create a new LUN clone, lun_s1, from the LUN in Snapshot copy snap1. Run the lun show -v command to show that lun_s1 is backed by snap1.
system1> lun clone create /vol/vol1/lun_s1 -b /vol/vol1/lun snap1
system1> lun show -v /vol/vol1/lun_s1
        /vol/vol1/lun_s1   47.1m (49351680)   (r/w, online)
                Serial#: C4e6SJI0ZqoH
                Backed by: /vol/vol1/.snapshot/snap1/lun
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: windows
Run the snap list command to show that snap1 is busy, as expected.
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
24% (24%)   0% ( 0%)  Dec 20 02:40  snap1 (busy,LUNs)
When you create a new Snapshot copy, snap2, it contains a copy of lun_s1, which is still backed by the LUN in snap1.
system1> snap create vol1 snap2
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
24% (24%)   0% ( 0%)  Dec 20 02:41  snap2
43% (31%)   0% ( 0%)  Dec 20 02:40  snap1 (busy,LUNs)
Run the lun snap usage command to show that snap2 still has a dependency on snap1.
system1> lun snap usage vol1 snap1
Snapshot - snap2:
   LUN: /vol/vol1/.snapshot/snap2/lun_s1
   Backed By: /vol/vol1/.snapshot/snap1/lun
Run the snap list command to show that snap1 is still busy.
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
39% (39%)   0% ( 0%)  Dec 20 02:41  snap2
53% (33%)   0% ( 0%)  Dec 20 02:40  snap1 (busy,LUNs)
Since snap1 is still busy, you cannot delete it until you delete the more recent Snapshot copy, snap2.
Example with snapshot_clone_dependency set to on
The following example illustrates how you can delete a base Snapshot copy without deleting all newer backing Snapshot copies when a LUN clone is deleted.
Set the snapshot_clone_dependency option to on by entering the following command:
vol options volume_name snapshot_clone_dependency on
Create a new LUN clone, lun_s1, from the LUN in Snapshot copy snap1. Run the lun show -v command to show that lun_s1 is backed by snap1.
system1> lun clone create /vol/vol1/lun_s1 -b /vol/vol1/lun snap1
system1> lun show -v /vol/vol1/lun_s1
        /vol/vol1/lun_s1   47.1m (49351680)   (r/w, online)
                Serial#: C4e6SJI0ZqoH
                Backed by: /vol/vol1/.snapshot/snap1/lun
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: windows
Run the snap list command to show that snap1 is busy, as expected.
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
24% (24%)   0% ( 0%)  Dec 20 02:40  snap1 (busy,LUNs)
When you create a new Snapshot copy, snap2, it contains a copy of lun_s1, which is still backed by the LUN in snap1.
system1> snap create vol1 snap2
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
24% (24%)   0% ( 0%)  Dec 20 02:41  snap2
43% (31%)   0% ( 0%)  Dec 20 02:40  snap1 (busy,LUNs)
Run the lun snap usage command to show that snap2 still has a dependency on snap1.
system1> lun snap usage vol1 snap1
Snapshot - snap2:
   LUN: /vol/vol1/.snapshot/snap2/lun_s1
   Backed By: /vol/vol1/.snapshot/snap1/lun
Run the snap list command to show that snap1 is no longer busy.
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
39% (39%)   0% ( 0%)  Dec 20 02:41  snap2
53% (33%)   0% ( 0%)  Dec 20 02:40  snap1
Since snap1 is no longer busy, you can delete it without first deleting snap2.
system1> snap delete vol1 snap1
Wed Dec 20 02:42:55 GMT [wafl.snap.delete:info]: Snapshot copy snap1 on volume vol1 was deleted by the Data ONTAP function snapcmd_delete. The unique ID for this Snapshot copy is (1, 6).
system1> snap list vol1
Volume vol1
working...

  %/used     %/total   date          name
---------  ---------  ------------  --------
38% (38%)   0% ( 0%)  Dec 20 02:41  snap2
Use the lun snap usage command to list all the LUNs backed by data in the specified Snapshot copy. It also lists the corresponding Snapshot copies in which these LUNs exist. The lun snap usage command displays the following information:
LUN clones that are holding a lock on the Snapshot copy given as input to this command
Snapshot copies in which these LUN clones exist
Steps
1. Identify all Snapshot copies that are in a busy state, locked by LUNs, by entering the following command:
snap list vol-name Example snap list vol2
2. Identify the LUNs and the Snapshot copies that contain them by entering the following command:
lun snap usage [-s] vol_name snap_name
Use the -s option to only display the relevant backing LUNs and Snapshot copies that must be deleted.
Note: The -s option is particularly useful in making SnapDrive output more readable. For example: lun snap usage -s vol2 snap0 You need to delete the following snapshots before deleting snapshot "snap0":
In some cases, the path for LUN clones backed by a Snapshot copy cannot be determined. In those instances, a message is displayed so that those Snapshot copies can be identified. You must still delete these Snapshot copies in order to free the busy backing Snapshot copy. For example:
lun snap usage vol2 snap0
Snapshot - snap2:
   LUN: Unable to determine the path of the LUN
   Backed By: Unable to determine the path of the LUN
   LUN: /vol/vol2/.snapshot/snap2/lunB
   Backed By: /vol/vol2/.snapshot/snap0/lunA
3. Delete all the LUNs in the active file system that are displayed by the lun snap usage command by entering the following command:
lun destroy [-f] lun_path [lun_path ...]
Example
lun destroy /vol/vol2/lunC
4. Delete all the Snapshot copies that are displayed by the lun snap usage command in the order they appear, by entering the following command:
snap delete vol-name snapshot-name
Example
snap delete vol2 snap2
snap delete vol2 snap1
All the Snapshot copies containing lunB are now deleted and snap0 is no longer busy.
5. Delete the Snapshot copy by entering the following command:
snap delete vol-name snapshot-name
Example
snap delete vol2 snap0
Before using SnapRestore, you must perform the following tasks:
Always unmount the LUN before you run the snap restore command on a volume containing the LUN or before you run a single file SnapRestore of the LUN. For a single file SnapRestore, you must also take the LUN offline.
Check available space; SnapRestore does not revert the Snapshot copy if sufficient space is unavailable.
About this task
Attention: When a single LUN is restored, it must be taken offline or be unmapped prior to recovery. Using SnapRestore on a LUN, or on a volume that contains LUNs, without stopping all host access to those LUNs, can cause data corruption and system errors.
Steps
1. From the host, stop all host access to the LUN.
2. From the host, if the LUN contains a host file system mounted on a host, unmount the LUN on that host.
3. From the storage system, unmap the LUN by entering the following command:
lun unmap lun_path initiator-group
4. Enter the following command:
snap restore [-f] [-t vol] [-s snapshot_name] volume_name
-f suppresses the warning message and the prompt for confirmation; this option is useful in scripts.
-t vol volume_name specifies the volume name to restore.
volume_name is the name of the volume to be restored. Enter the name only, not the complete path.
If you did not use the -f option, Data ONTAP displays a warning message and prompts you to confirm your decision to restore the volume.
5. Press y to confirm that you want to restore the volume.
Data ONTAP displays the name of the volume and the name of the Snapshot copy for the reversion. If you did not use the -f option, Data ONTAP prompts you to decide whether to proceed with the reversion.
6. Decide if you want to continue with the reversion.
If you want to continue the reversion, press y. The storage system reverts the volume from the selected Snapshot copy.
If you do not want to continue the reversion, press n or Ctrl-C. The volume is not reverted and you are returned to a storage system prompt.
7. Enter the following command to unmap the existing old maps that you do not want to keep.
lun unmap lun_path initiator-group
8. Remap the LUN by entering the following command:
lun map lun_path initiator-group
9. From the host, remount the LUN if it was mounted on a host.
10. From the host, restart access to the LUN.
11. From the storage system, bring the restored LUN online by entering the following command:
lun online lun_path
After you finish
After you use SnapRestore to update a LUN from a Snapshot copy, you also need to restart any database applications you closed down and remount the volume from the host side.
Restoring a single LUN
Steps
1. Notify network users that you are going to restore a LUN so that they know that the current data in the LUN will be replaced by that of the selected Snapshot copy.
2. Enter the following command:
snap restore [-f] [-t file] [-s snapshot_name] [-r restore_as_path] path_and_LUN_name
-f suppresses the warning message and the prompt for confirmation.
-t file specifies that you are entering the name of a file to revert.
-s snapshot_name specifies the name of the Snapshot copy from which to restore the data.
-r restore_as_path restores the file to a location in the volume different from the location in the Snapshot copy. For example, if you specify /vol/vol0/vol3/mylun as the argument to -r,
SnapRestore restores the file called mylun to the location /vol/vol0/vol3 instead of to the path structure indicated by the path in path_and_lun_name.
path_and_LUN_name is the complete path to the name of the LUN to be restored. You can enter only one path name.
A LUN can be restored only to the volume where it was originally. The directory structure to which a LUN is to be restored must be the same as specified in the path. If this directory structure no longer exists, you must re-create it before restoring the file.
Unless you enter -r and a path name, only the LUN at the end of the path_and_LUN_name is reverted. If you did not use the -f option, Data ONTAP displays a warning message and prompts you to confirm your decision to restore the LUN.
3. Press y to confirm that you want to restore the file.
Data ONTAP displays the name of the LUN and the name of the Snapshot copy for the restore operation. If you did not use the -f option, Data ONTAP prompts you to decide whether to proceed with the restore operation.
4. Press y to continue with the restore operation.
Data ONTAP restores the LUN from the selected Snapshot copy.
Example of a single LUN restore
storage_system> snap restore -t file -s payroll_backup_friday /vol/vol1/payroll_luns
WARNING! This will restore a file from a snapshot into the active filesystem. If the file already exists in the active filesystem, it will be overwritten with the contents from the snapshot.
Are you sure you want to do this? y
You have selected file /vol/vol1/payroll_luns, snapshot payroll_backup_friday
Proceed with restore? y
Data ONTAP restores the LUN called payroll_backup_friday to the existing volume and directory structure /vol/vol1/payroll_luns. After a LUN is restored with SnapRestore, all data and all relevant user-visible attributes for that LUN in the active file system are identical to that contained in the Snapshot copy.
The following procedure assumes that you have already performed the following tasks:
Created the production LUN
Created the igroup to which the LUN will belong. The igroup must include the WWPN of the application server.
Mapped the LUN to the igroup
Formatted the LUN and made it accessible to the host
Configure volumes as SAN-only or NAS-only and configure qtrees within a single volume as SAN-only or NAS-only. From the point of view of the SAN host, LUNs can be confined to a single WAFL volume or qtree or spread across multiple WAFL volumes, qtrees, or storage systems.
The following diagram shows a SAN setup that uses two application hosts and a pair of storage systems in an HA pair.
Volumes on a host can consist of a single LUN mapped from the storage system or multiple LUNs using a volume manager, such as VxVM on HP-UX systems. To map a LUN within a Snapshot copy for backup, complete the following steps. Step 1 can be part of your SAN backup application's pre-processing script. Steps 5 and 6 can be part of your SAN backup application's post-processing script.
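As a sketch, the storage-system commands from the steps below could be collected by a pre/post-processing script roughly as follows. `backup_commands` is a hypothetical helper, not a NetApp tool, and the clone-creation step is omitted because the parent LUN path is not shown in the example:

```python
def backup_commands(volume: str, snapshot: str, clone_path: str,
                    igroup: str, lun_id: int) -> list[str]:
    """Assemble the Data ONTAP commands a SAN backup script might issue."""
    pre = [
        f"snap create {volume} {snapshot}",         # create the Snapshot copy
        f"lun map {clone_path} {igroup} {lun_id}",  # map the clone to the backup host
    ]
    post = [
        f"lun offline {clone_path}",                # take the clone offline after backup
        f"lun destroy {clone_path}",                # remove the clone
    ]
    return pre + post

for cmd in backup_commands("vol1", "payroll_backup",
                           "/vol/vol1/qtree_1/payroll_lun_clone",
                           "backup_server", 1):
    print(cmd)
```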
Steps
1. When you are ready to start the backup (usually after your application has been running for some time in your production environment), save the contents of host file system buffers to disk using the command provided by your host operating system, or by using SnapDrive for Windows or SnapDrive for UNIX.
2. Create a Snapshot copy by entering the following command:
snap create volume_name snapshot_name
Example
snap create vol1 payroll_backup
3. Create a clone of the LUN in the Snapshot copy by entering the following command:
lun clone create clone_lun_path -b parent_lun_path parent_snap
4. Create an igroup that includes the WWPN of the backup server by entering the following command:
igroup create -f -t ostype group [node ...]
Example
igroup create -f -t windows backup_server 10:00:00:00:d3:6d:0f:e1
Data ONTAP creates an igroup that includes the WWPN (10:00:00:00:d3:6d:0f:e1) of the Windows backup server.
5. To map the LUN clone you created in Step 3 to the backup host, enter the following command:
lun map lun_path initiator-group LUN_ID
Example
lun map /vol/vol1/qtree_1/payroll_lun_clone backup_server 1
Data ONTAP maps the LUN clone (/vol/vol1/qtree_1/payroll_lun_clone) to the igroup called backup_server with a SCSI ID of 1.
6. From the host, discover the new LUN and make the file system available to the host.
7. Back up the data in the LUN clone from the backup host to tape by using your SAN backup application.
8. Take the LUN clone offline by entering the following command:
lun offline /vol/vol_name/qtree_name/lun_name
Example lun offline /vol/vol1/qtree_1/payroll_lun_clone
9. Remove the LUN clone by entering the following command: lun destroy lun_path
Example lun destroy /vol/vol1/qtree_1/payroll_lun_clone
Using volume copy to copy LUNs
You must save contents of host file system buffers to disk before running vol copy commands on the storage system.
Note: The term LUNs in this context refers to the LUNs that Data ONTAP serves to clients, not to the array LUNs used for storage on a storage array.
About this task
The vol copy command enables you to copy data from one WAFL volume to another, either within the same storage system or to a different storage system. The result of the vol copy command is a restricted volume containing the same data that was on the source storage system at the time you initiate the copy operation.
Step
1. To copy a volume containing a LUN to the same or different storage system, enter the following command:
vol copy start -S source:source_volume dest:dest_volume
-S copies all Snapshot copies in the source volume to the destination volume. If the source volume has Snapshot copy-backed LUNs, you must use the -S option to ensure that the Snapshot copies are copied to the destination volume.
If the copying takes place between two storage systems, you can enter the vol copy start command on either the source or destination storage system. You cannot, however, enter the command on a third storage system that does not contain the source or destination volume.
Index
A
access lists: about 93; creating 93; displaying 94; removing interfaces from 94
adapters: changing the speed for 138; changing the WWPN for 141; configuring for initiator mode 148; configuring for target mode 146; displaying brief target adapter information 155; displaying detailed target adapter information 156; displaying information about all 154; displaying information for FCP 152; displaying statistics for target adapters 158
aggregate: defined 33
aliases: for WWPNs 143
ALUA: defined 22; enabling 78; setting the priority of target portal groups for 113
authentication: defining default for CHAP 102; using CHAP for iSCSI 100

B
backing up SAN systems 187

C
CHAP: and RADIUS 106; authentication for iSCSI 100; defined 27; defining default authentication 102; using with vFiler units 100
cluster failover: avoiding igroup mapping conflicts with 128; multipathing requirements for 130; overriding mapping conflicts 129; understanding 127
create_ucode option: changing with the command line 47

D
Data ONTAP options: automatically enabled 88; iscsi.isns.rev 96; iscsi.max_connections_per_session 85; iscsi.max_error_recovery_level 86
df command: monitoring disk space using 164
disk space information, displaying 164
disk space: monitoring with Snapshot copies 166; monitoring without Snapshot copies 164

E
error recovery level: enabling levels 1 and 2 86
eui type designator 26

F
FC: changing the adapter speed 138; checking interfaces 68
FCP: changing the WWNN 143; defined 29; displaying adapters 152; host nodes 31; how nodes are connected 30; how nodes are identified 30; managing in HA pairs 127; managing systems with onboard adapters 145; nodes defined 30; storage system nodes 31; switch nodes 32; taking adapters offline and online 138
FCP commands: fcp config 138, 152; fcp nodename 152; fcp portname set 141; fcp show 152; fcp start 137; fcp stats 152; fcp status 136; fcp stop 137; license 136; license add 136; license delete 137; storage show adapter 152
FCP service: disabling 137; displaying how long running 160; displaying partner's traffic information 160; displaying statistics for 161; displaying traffic information about 159; licensing 136; starting and stopping 137; verifying the service is licensed 136; verifying the service is running 136
FlexClone files and FlexClone LUNs: differences between FlexClone LUNs and LUN clones 174
flexible volumes: described 33
FlexVol volumes: automatic free space preservation, configuring 169; automatically adding space for 168; automatically grow, configuring to 169
fractional reserve: about 35
free space: automatically increasing 168; for vFiler units 79

I
igroup commands: igroup add 76; igroup create 58; igroup remove 76; igroup rename 77; igroup set 77; igroup set alua 78; igroup show 77
igroup commands for iSCSI: igroup create 73
igroup mapping conflicts: avoiding during cluster failover 128
igroup throttles: borrowing queue resources 82; creating 81; defined 80; destroying 81; displaying information about 82; displaying LUN statistics for 83; displaying usage information 83; how Data ONTAP uses 80; how portsets affect 131; how to use 80
igroups: borrowing queue resources for 82
initiator groups: adding 76; binding to portsets 132; creating for FCP using sanlun 74; creating for iSCSI 73; defined 51; destroying 75; displaying 77; name rules 52; naming 52; ostype of 53; removing initiators from 76; renaming 77; requirements for creation 52; setting the ostype for 77; showing portset bindings 135; type of 53; unmapping LUNs from 65
initiator, displaying for iSCSI 98
initiators: configuring adapters as 148
interface: disabling for iSCSI 92; enabling for iSCSI 91
H
HA pairs and cluster failover 127 and iSCSI 28 using with iSCSI 120 HBA displaying information about 158 head swap changing WWPNs 141 host bus adapters displaying information about 158 Host Utilities defined 21
I
igroug commands igroup destroy 75 igroup commands
Index | 193
IP addresses, displaying for iSCSI 92 iqn type designator 25 iSCSI access lists 93 connection, displaying 119 creating access lists 93 creating target portal groups 111 default TCP port 26 destroying target portal groups 112 displaying access lists 94 displaying initiators 98 displaying statistics 114 enabling error recovery levels 1 and 2 86 enabling on interface 91 explained 23 how communication sessions work 28 how nodes are identified 25 implementation on the host 24 implementation on the storage system 24 iSNS 95 license 87 multi-connection sessions, enabling 85 node name rules 89 nodes defined 24 RADIUS 103 removing interfaces from access lists 94 security 100 service, verifying 87 session, displaying 118 setup procedure 28 supported configurations 24 target alias 90 target IP addresses 92 target node name 89 target portal groups defined 26, 109 troubleshooting 122 using with HA pairs 28 with HA pairs 120 iscsi commands iscsi alias 90 iscsi connection 119 iscsi initiator 98 iscsi interface 91 iscsi isns 96 iscsi nodename 89 iscsi portal 92 iscsi security 101 iscsi session 118 iscsi start 88 iscsi stats 114 iscsi status 87 iscsi stop 88 iscsi tpgroup 111 iscsi.isns.rev option 96 iscsi.max_connections_per_session option 85 iscsi.max_error_recovery_level option 86 iSNS defined 27 disabling 97 server versions 95 service for iSCSI 95 updating immediately 97 with vFiler units 98 ISNS and IPv6 96 registering 96
L
license iSCSI 87 LUN clones creating 175 defined 173 deleting Snapshot copies 177 displaying progress of split 176 reasons for using 174 splitting from Snapshot copy 176 stopping split 177 lun commands lun config_check 68 lun destroy 67 lun help 63 lun map 58 lun move 66 lun offline 65 lun online 64 lun set reservation 67 lun setup 57 lun share 68 lun show 71 lun stats 70 lun unmap 65 LUN commands lun clone create 175 lun clone split 176 lun snap usage 182 lun unmap 75 LUN creation description attribute 50
194 | Data ONTAP 8.0 7-Mode Block Access Management Guide for iSCSI and FC
host operating system type 48 LUN ID requirement 51 path name 48 size specifiers 50 space reservation default 51 LUN ID ranges of 54 LUN serial numbers displaying changing 70 LUNs bringing online 64 checking settings for 68 controlling availability 64 displaying mapping 71 displaying reads, writes, and operations for 70 displaying serial numbers for 70 enabling space reservations 67 host operating system type 48 management task list 63 mapping guidelines 54 modifying description 66 multiprotocol type 48 read-only 55 removing 67 renaming 66 restoring 186 statistics for igroup throttles 83 taking offline 65 unmapping from initiator group 65 storage system 26 node type designator eui 26 iqn 25 nodes FCP 30 iSCSI 24
O
onboard adapters configuring for target mode 146 options automatically enabled 88 iscsi.isns.rev 96 iscsi.max_connections_per_session 85 iscsi.max_error_recovery_level 86 ostype setting 77
P
plex defined 33 porset commands portset create 132 port sets defined 130 portset commands portset add 133 portset destroy 134 portset remove 134 portset show 135 portsets adding ports 133 binding to igroups 132 creating 132 destroying 134 how they affect igroup throttles 131 how upgrades affect 131 removing 134 showing igroup bindings 135 unbinding igroups 133 viewing ports in 135 provisioning best practices 37
M
mapping conflicts overriding 129 multi-connection sessions enabling 85 multipathing requirements for cluster failover 130 Multiprotocol type 48 MultiStore creating LUNs for vFiler units 59
N
name rules igroups 52 iSCSI node name 89 node name rules for iSCSI 89
Index | 195
Q
qtrees defined 33
T
target adapter displaying WWNN 157 target adapters displaying statistics for 158 target alias for iSCSI 90 target node name, iSCSI 89 target portal groups about 109 adding interfaces 112 creating 111 defined 26 destroying 112 removing interfaces 113 targets configuring adapters as 146 TCP port default for iSCSI 26 traditional volumes described 33 troubleshooting iSCSI 122
R
RADIUS adding a RADIUS server 106 clearing statistics for 108 defining as the authentication method 104 displaying statistics for 108 displaying the status of 107 enabling for CHAP authentication 106 overview 103 removing a RADIUS server 108 starting the client service 105 stopping the service 107 RAID-level mirroring described 33 restoring LUNs 186
S
SAN systems backing up 187 sanlun creating igroups for FCP 74 serial numbers for LUNs 70 snap commands snap restore 184 snap reserve setting the percentage 45 SnapDrive about 22 SnapMirror destinations mapping read-only LUNs to hosts at 55 Snapshot copies deleting busy 182 schedule, turning off 45 space reservations about 35 statistics displaying for iSCSI 114 storage system node name defined 26 storage units types of 33 SyncMirror use of plexes in 33
V
vFiler units authentication using CHAP 100 creating LUNs for 59 using iSCSI igroups with 79 with iSNS 98 volumes automatically adding space for 168 estimating required size of 37, 40
W
WWNN changing 143 displaying for a target adapter 157 WWPN changing for a target adapter 141 creating igroups with 30 how they are assigned 32 WWPN aliases about 143 creating 144
196 | Data ONTAP 8.0 7-Mode Block Access Management Guide for iSCSI and FC
displaying 145 removing 144