SAN Storage Management
ONTAP 9
NetApp
December 22, 2023
To connect to iSCSI networks, hosts can use standard Ethernet network adapters (NICs), TCP offload engine
(TOE) cards with software initiators, converged network adapters (CNAs), or dedicated iSCSI host bus
adapters (HBAs).
• FC
• FCoE
• NVMe
The two formats, or type designators, for iSCSI node names are iqn and eui. The SVM iSCSI target always
uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator.
Each SVM running iSCSI has a default node name based on a reverse domain name and a unique encoding
number.
iqn.1992-08.com.netapp:sn.unique-encoding-number
The following example shows the default node name for a storage system with a unique encoding number:
iqn.1992-08.com.netapp:sn.812921059e6c11e097b3123478563412:vs.6
The iSCSI protocol is configured in ONTAP to use TCP port number 3260.
ONTAP does not support changing the port number for iSCSI. Port number 3260 is registered as part of the
iSCSI specification and cannot be used by any other application or service.
Related information
NetApp Documentation: ONTAP SAN Host Configuration
You can manage the availability of the iSCSI service on the iSCSI logical interfaces of the
storage virtual machine (SVM) by using the vserver iscsi interface enable or
vserver iscsi interface disable commands.
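For example, to disable the iSCSI service on a single LIF (the SVM and LIF names shown are placeholders):
vserver iscsi interface disable -vserver vs1 -lif lif1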
• Using initiator software that uses the host’s standard Ethernet interfaces.
• Through an iSCSI host bus adapter (HBA): An iSCSI HBA appears to the host operating system as a SCSI
disk adapter with local disks.
• Using a TCP Offload Engine (TOE) adapter that offloads TCP/IP processing.
During the initial stage of an iSCSI session, the initiator sends a login request to the
storage system to begin an iSCSI session. The storage system then either permits or
denies the login request, or determines that a login is not required.
iSCSI authentication methods are:
• Challenge Handshake Authentication Protocol (CHAP)--The initiator logs in using a CHAP user name and
password.
You can specify a CHAP password or generate a hexadecimal secret password. There are two types of
CHAP user names and passwords:
◦ Inbound—The storage system authenticates the initiator.
◦ Outbound—This is an optional setting to enable the initiator to authenticate the storage system.
You can use outbound settings only if you define an inbound user name and password on the storage
system.
You can define the list of initiators and their authentication methods. You can also define a default
authentication method that applies to initiators that are not on this list.
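For example, a CHAP entry for a single initiator can be added with a command such as the following; the SVM name, initiator name, and user name are placeholders, and the command prompts for the CHAP password:
vserver iscsi security create -vserver vs1 -initiator-name iqn.1991-05.com.microsoft:host1 -auth-type CHAP -user-name chapuser1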
Related information
Windows Multipathing Options with Data ONTAP: Fibre Channel and iSCSI
ONTAP provides a number of features for managing security for iSCSI initiators. You can
define a list of iSCSI initiators and the authentication method for each, display the
initiators and their associated authentication methods in the authentication list, add and
remove initiators from the authentication list, and define the default iSCSI initiator
authentication method for initiators not in the list.
Beginning with ONTAP 9.1, existing iSCSI security commands were enhanced to accept an IP address range or multiple IP addresses.
All iSCSI initiators must provide origination IP addresses when establishing a session or connection with a
target. This new functionality prevents an initiator from logging into the cluster if the origination IP address is
unsupported or unknown, providing a unique identification scheme. Any initiator originating from an
unsupported or unknown IP address will have their login rejected at the iSCSI session layer, preventing the
initiator from accessing any LUN or volume within the cluster.
Implement this new functionality with two new commands to help manage pre-existing entries.
Improve iSCSI initiator security management by adding an IP address range, or multiple IP addresses, with the vserver iscsi security add-initiator-address-range command.
Remove an IP address range, or multiple IP addresses, with the vserver iscsi security remove-initiator-address-range command.
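A sketch of adding a range for one initiator follows; the SVM name, initiator name, and address range are placeholders, and the parameter spelling should be confirmed in the command reference for your release:
vserver iscsi security add-initiator-address-range -vserver vs1 -initiator-name iqn.1991-05.com.microsoft:host1 -address-ranges 192.168.0.10-192.168.0.20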
What CHAP authentication is
• If you define an inbound user name and password on the storage system, you must use the same user
name and password for outbound CHAP settings on the initiator. If you also define an outbound user name
and password on the storage system to enable bidirectional authentication, you must use the same user
name and password for inbound CHAP settings on the initiator.
• You cannot use the same user name and password for inbound and outbound settings on the storage
system.
• CHAP user names can be 1 to 128 bytes.
Passwords can be hexadecimal values or strings. For hexadecimal values, you should enter the value with
a prefix of “0x” or “0X”. A null password is not allowed.
ONTAP allows the use of special characters, non-English letters, numbers and spaces for CHAP
passwords (secrets). However, this is subject to host restrictions. If any of these are not allowed
by your specific host, they cannot be used.
For example, the Microsoft iSCSI software initiator requires both the initiator and target CHAP
passwords to be at least 12 bytes if IPsec encryption is not being used. The maximum password
length is 16 bytes regardless of whether IPsec is used.
How using iSCSI interface access lists to limit initiator interfaces can increase performance and
security
iSCSI interface access lists can be used to limit the number of LIFs in an SVM that an
initiator can access, thereby increasing performance and security.
When an initiator begins a discovery session using an iSCSI SendTargets command, it receives the IP
addresses associated with the LIF (network interface) that is in the access list. By default, all initiators have
access to all iSCSI LIFs in the SVM. You can use the access list to restrict the number of LIFs in an SVM that
an initiator has access to.
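For example, an initiator can be restricted to a single LIF with a command such as the following (the SVM, initiator, and LIF names are placeholders):
vserver iscsi interface accesslist add -vserver vs1 -initiator-name iqn.1991-05.com.microsoft:host1 -lif lif1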
Internet Storage Name Service (iSNS)
The Internet Storage Name Service (iSNS) is a protocol that enables automated
discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS
server maintains information about active iSCSI devices on the network, including their IP
addresses, iSCSI node names (IQNs), and portal groups.
You can obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network
configured and enabled for use by the initiator and target, you can use the management LIF for a storage
virtual machine (SVM) to register all the iSCSI LIFs for that SVM on the iSNS server. After the registration is
complete, the iSCSI initiator can query the iSNS server to discover all the LIFs for that particular SVM.
If you decide to use an iSNS service, you must ensure that your storage virtual machines (SVMs) are properly
registered with an Internet Storage Name Service (iSNS) server.
If you do not have an iSNS server on your network, you must manually configure each target to be visible to
the host.
An iSNS server uses the Internet Storage Name Service (iSNS) protocol to maintain information about active
iSCSI devices on the network, including their IP addresses, iSCSI node names (IQNs), and portal groups.
The iSNS protocol enables automated discovery and management of iSCSI devices on an IP storage network.
An iSCSI initiator can query the iSNS server to discover iSCSI target devices.
NetApp does not supply or resell iSNS servers. You can obtain these servers from a vendor supported by
NetApp.
The iSNS server communicates with each storage virtual machine (SVM) through the SVM management LIF.
The management LIF registers all iSCSI target node name, alias, and portal information with the iSNS service
for a specific SVM.
In the following example, SVM “VS1” uses SVM management LIF “VS1_mgmt_lif” to register with the iSNS
server. During iSNS registration, an SVM sends all the iSCSI LIFs through the SVM management LIF to the
iSNS Server. After the iSNS registration is complete, the iSNS server has a list of all the LIFs serving iSCSI in
“VS1”. If a cluster contains multiple SVMs, each SVM must register individually with the iSNS server to use the
iSNS service.
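For example, an SVM can be pointed at an iSNS server with a command such as the following (the iSNS server address shown is a placeholder):
vserver iscsi isns create -vserver VS1 -address 192.168.0.10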
In the next example, after the iSNS server completes the registration with the target, Host A can discover all
the LIFs for “VS1” through the iSNS server as indicated in Step 1. After Host A completes the discovery of the
LIFs for “VS1”, Host A can establish a connection with any of the LIFs in “VS1” as shown in Step 2. Host A is
not aware of any of the LIFs in “VS2” until management LIF “VS2_mgmt_LIF” for “VS2” registers with the iSNS
server.
However, if you define the interface access lists, the host can only use the defined LIFs in the interface access
list to access the target.
After iSNS is initially configured, ONTAP automatically updates the iSNS server when the SVM configuration
settings change.
A delay of a few minutes might occur between the time you make the configuration changes and when ONTAP
sends the update to the iSNS server. Force an immediate update of the iSNS information on the iSNS server:
vserver iscsi isns update
Stop an iSNS service: vserver iscsi isns stop
See the man page for each command for more information.
Storage systems and hosts have adapters so that they can be connected to FC switches with cables.
When a node is connected to the FC SAN, each SVM registers the World Wide Port Name (WWPN) of its LIF
with the switch Fabric Name Service. The WWNN of the SVM and the WWPN of each LIF are automatically assigned by ONTAP.
Direct connection from hosts to nodes with FC is not supported; NPIV is required, and this requires a switch to be used. With iSCSI sessions, communication works with connections that are either network routed or direct-connect, and both of these methods are supported with ONTAP.
WWPNs identify each LIF in an SVM configured to support FC. These LIFs utilize the physical FC ports in each
node in the cluster, which can be FC target cards, UTA or UTA2 configured as FC or FCoE in the nodes.
The WWPNs of the host’s HBAs are used to create an initiator group (igroup). An igroup is used to control
host access to specific LUNs. You can create an igroup by specifying a collection of WWPNs of initiators in
an FC network. When you map a LUN on a storage system to an igroup, you can grant all the initiators in
that group access to that LUN. If a host’s WWPN is not in an igroup that is mapped to a LUN, that host
does not have access to the LUN. This means that the LUNs do not appear as disks on that host.
You can also create port sets to make a LUN visible only on specific target ports. A port set consists of a
group of FC target ports. You can bind an igroup to a port set. Any host in the igroup can access the LUNs
only by connecting to the target ports in the port set.
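A minimal sketch of creating a port set and binding it to an igroup follows; the SVM, port set, LIF, and igroup names are placeholders:
lun portset create -vserver vs1 -portset ps1 -protocol fcp -port-name fc_lif1
lun igroup bind -vserver vs1 -igroup ig1 -portset ps1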
WWPNs uniquely identify each FC logical interface. The host operating system uses the combination of the
WWNN and WWPN to identify SVMs and FC LIFs. Some operating systems require persistent binding to
ensure that the LUN appears at the same target ID on the host.
Worldwide names are created sequentially in ONTAP. However, because of the way ONTAP assigns them,
they might appear to be assigned in a non-sequential order.
Each adapter has a pre-configured WWPN and WWNN, but ONTAP does not use these pre-configured values.
Instead, ONTAP assigns its own WWPNs or WWNNs, based on the MAC addresses of the onboard Ethernet
ports.
The worldwide names might appear to be non-sequential when assigned for the following reasons:
• Worldwide names are assigned across all the nodes and storage virtual machines (SVMs) in the cluster.
• Freed worldwide names are recycled and added back to the pool of available names.
Fibre Channel switches have one worldwide node name (WWNN) for the device itself, and one worldwide port
name (WWPN) for each of its ports.
For example, the following diagram shows how the WWPNs are assigned to each of the ports on a 16-port
Brocade switch. For details about how the ports are numbered for a particular switch, see the vendor-supplied
documentation for that switch.
Although analogous in function, NVMe namespaces do not support all features supported by
LUNs.
Beginning with ONTAP 9.5, a license is required to support host-facing data access with NVMe. If NVMe is enabled in ONTAP 9.4, a 90-day grace period is given to acquire the license after upgrading to ONTAP 9.5.
You can enable the license using the following command:
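system license add -license-code <your_license_code>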
Related information
NetApp Technical Report 4684: Implementing and Configuring Modern SANs with NVMe/FC
ONTAP provides three basic volume provisioning options: thick provisioning, thin
provisioning, and semi-thick provisioning. Each option uses different ways to manage the
volume space and the space requirements for ONTAP block sharing technologies.
Understanding how the options work enables you to choose the best option for your
environment.
Putting SAN LUNs and NAS shares in the same FlexVol volume is not recommended. You
should provision separate FlexVol volumes specifically for your SAN LUNs and you should
provision separate FlexVol volumes specifically for your NAS shares. This simplifies
management and replication deployments and parallels the way FlexVol volumes are supported
in Active IQ Unified Manager (formerly OnCommand Unified Manager).
When a thinly provisioned volume is created, ONTAP does not reserve any extra space up front. As data is written to the volume, the volume requests the storage it needs from the aggregate to
accommodate the write operation. Using thin-provisioned volumes enables you to overcommit your aggregate,
which introduces the possibility of the volume not being able to secure the space it needs when the aggregate
runs out of free space.
You create a thin-provisioned FlexVol volume by setting its -space-guarantee option to none.
When a thick-provisioned volume is created, ONTAP sets aside enough storage from the aggregate to ensure
that any block in the volume can be written to at any time. When you configure a volume to use thick
provisioning, you can employ any of the ONTAP storage efficiency capabilities, such as compression and
deduplication, to offset the larger upfront storage requirements.
You create a thick-provisioned FlexVol volume by setting its -space-slo (service level objective) option to
thick.
When a volume using semi-thick provisioning is created, ONTAP sets aside storage space from the aggregate
to account for the volume size. If the volume is running out of free space because blocks are in use by block-
sharing technologies, ONTAP makes an effort to delete protection data objects (Snapshot copies and
FlexClone files and LUNs) to free up the space they are holding. As long as ONTAP can delete the protection
data objects fast enough to keep pace with the space required for overwrites, the write operations continue to
succeed. This is called a “best effort” write guarantee.
Note: The following functionality is not supported on volumes that use semi-thick provisioning:
You create a semi-thick-provisioned FlexVol volume by setting its -space-slo (service level objective) option
to semi-thick.
A space-reserved file or LUN is one for which storage is allocated when it is created. Historically, NetApp has
used the term “thin-provisioned LUN” to mean a LUN for which space reservation is disabled (a non-space-
reserved LUN).
The following table summarizes the major differences in how the three volume provisioning options can be
used with space-reserved files and LUNs:
Notes
1. The ability to guarantee overwrites or provide a best-effort overwrite assurance requires that space
reservation is enabled on the LUN or file.
2. Protection data includes Snapshot copies, and FlexClone files and LUNs marked for automatic deletion
(backup clones).
3. Storage efficiency includes deduplication, compression, any FlexClone files and LUNs not marked for
automatic deletion (active clones), and FlexClone subfiles (used for Copy Offload).
ONTAP supports T10 SCSI thin-provisioned LUNs as well as NetApp thin-provisioned LUNs. T10 SCSI thin
provisioning enables host applications to support SCSI features including LUN space reclamation and LUN
space monitoring capabilities for blocks environments. T10 SCSI thin provisioning must be supported by your
SCSI host software.
You use the ONTAP space-allocation setting to enable or disable support for T10 thin provisioning on a LUN; setting it to enabled turns on T10 SCSI thin provisioning for that LUN.
The [-space-allocation {enabled|disabled}] parameter description in the ONTAP Command Reference Manual has more information about enabling and disabling support for T10 thin provisioning on a LUN.
ONTAP 9 commands
You can configure a volume for thin provisioning, thick provisioning, or semi-thick
provisioning.
About this task
Setting the -space-slo option to thick ensures the following:
• The entire volume is preallocated in the aggregate. You cannot use the volume create or volume
modify command to configure the volume’s -space-guarantee option.
• 100% of the space required for overwrites is reserved. You cannot use the volume modify command to configure the volume’s -fractional-reserve option.
Setting the -space-slo option to semi-thick ensures the following:
• The entire volume is preallocated in the aggregate. You cannot use the volume create or volume modify command to configure the volume’s -space-guarantee option.
• No space is reserved for overwrites. You can use the volume modify command to configure the volume’s -fractional-reserve option.
• Automatic deletion of Snapshot copies is enabled.
Step
1. Configure volume provisioning options:
The -space-guarantee option defaults to none for AFF systems and for non-AFF DP volumes.
Otherwise, it defaults to volume. For existing FlexVol volumes, use the volume modify command to
configure provisioning options.
The following command configures vol1 on SVM vs1 for thin provisioning:
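cluster1::> volume create -vserver vs1 -volume vol1 -space-guarantee none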
The following command configures vol1 on SVM vs1 for thick provisioning:
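cluster1::> volume create -vserver vs1 -volume vol1 -space-slo thick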
The following command configures vol1 on SVM vs1 for semi-thick provisioning:
cluster1::> volume create -vserver vs1 -volume vol1 -space-slo semi-thick
You must set various options on the volume containing your LUN. The way you set the
volume options determines the amount of space available to LUNs in the volume.
Autogrow
You can enable or disable Autogrow. If you enable it, autogrow allows ONTAP to automatically increase the
size of the volume up to a maximum size that you predetermine. There must be space available in the
containing aggregate to support the automatic growth of the volume. Therefore, if you enable autogrow, you
must monitor the free space in the containing aggregate and add more when needed.
Autogrow cannot be triggered to support Snapshot creation. If you attempt to create a Snapshot copy and
there is insufficient space on the volume, the Snapshot creation fails, even with autogrow enabled.
If autogrow is disabled, the size of your volume will remain the same.
Autoshrink
You can enable or disable Autoshrink. If you enable it, autoshrink allows ONTAP to automatically decrease the overall size of a volume when the amount of space consumed in the volume decreases below a predetermined threshold. This increases storage efficiency by triggering volumes to automatically release unused free space.
Snapshot autodelete
Snapshot autodelete automatically deletes Snapshot copies when one of the following occurs:
You can configure Snapshot autodelete to delete Snapshot copies from oldest to newest or from newest to
oldest. Snapshot autodelete does not delete Snapshot copies that are linked to Snapshot copies in cloned
volumes or LUNs.
If your volume needs additional space and you have enabled both autogrow and Snapshot autodelete, by
default, ONTAP attempts to acquire the needed space by triggering autogrow first. If enough space is not
acquired through autogrow, then Snapshot autodelete is triggered.
Snapshot reserve
Snapshot reserve defines the amount of space in the volume reserved for Snapshot copies. Space allocated to
Snapshot reserve cannot be used for any other purpose. If all of the space allocated for Snapshot reserve is
used, then Snapshot copies begin to consume additional space on the volume.
Before you move a volume that contains LUNs or namespaces, you must meet certain
requirements.
• For volumes containing one or more LUNs, you should have a minimum of two paths per LUN (LIFs)
connecting to each node in the cluster.
This eliminates single points of failure and enables the system to survive component failures.
• For volumes containing namespaces, the cluster must be running ONTAP 9.6 or later.
Volume move is not supported for NVMe configurations running ONTAP 9.5.
Fractional reserve, also called LUN overwrite reserve, enables you to turn off overwrite
reserve for space-reserved LUNs and files in a FlexVol volume. This can help you
maximize your storage utilization, but if your environment is negatively affected by write
operations failing due to lack of space, you must understand the requirements that this
configuration imposes.
The fractional reserve setting is expressed as a percentage; the only valid values are 0 and 100 percent. The
fractional reserve setting is an attribute of the volume.
Setting fractional reserve to 0 increases your storage utilization. However, an application accessing data
residing in the volume could experience a data outage if the volume is out of free space, even with the volume
guarantee set to volume. With proper volume configuration and use, however, you can minimize the chance of
writes failing. ONTAP provides a “best effort” write guarantee for volumes with fractional reserve set to 0 when
all of the following requirements are met:
This is not the default setting. You must explicitly enable automatic deletion, either at creation time or by
modifying the FlexClone file or FlexClone LUN after it is created.
This setting also ensures that FlexClone files and FlexClone LUNs are deleted when necessary.
Note that if your rate of change is high, in rare cases the Snapshot copy automatic deletion could fall behind,
resulting in the volume running out of space, even with all of the above required configuration settings in use.
In addition, you can optionally use the volume autogrow capability to decrease the likelihood of volume
Snapshot copies needing to be deleted automatically. If you enable the autogrow capability, you must monitor
the free space in the associated aggregate. If the aggregate becomes full enough that the volume is prevented
from growing, more Snapshot copies will probably be deleted as the free space in the volume is depleted.
If you cannot meet all of the above configuration requirements and you need to ensure that the volume does
not run out of space, you must set the volume’s fractional reserve setting to 100. This requires more free
space up front, but guarantees that data modification operations will succeed even when the technologies
listed above are in use.
The default value and allowed values for the fractional reserve setting depend on the guarantee of the volume:
For a volume guarantee of none, the default fractional reserve is 0 and the allowed values are 0 and 100.
In a thinly provisioned environment, host-side space management completes the process
of managing space from the storage system that has been freed in the host file system.
A host file system contains metadata to keep track of which blocks are available to store new data and which
blocks contain valid data that must not be overwritten. This metadata is stored within the LUN. When a file is
deleted in the host file system, the file system metadata is updated to mark that file’s blocks as free space.
Total file system free space is then recalculated to include the newly freed blocks. To the storage system, these
metadata updates appear no different from any other writes being performed by the host. Therefore, the
storage system is unaware that any deletions have occurred.
This creates a discrepancy between the amount of free space reported by the host and the amount of free
space reported by the underlying storage system. For example, suppose you have a newly provisioned 200-
GB LUN assigned to your host by your storage system. Both the host and the storage system report 200 GB of
free space. Your host then writes 100 GB of data. At this point, both the host and storage system report 100
GB of used space and 100 GB of unused space.
Then you delete 50 GB of data from your host. At this point, your host will report 50 GB of used space and 150
GB of unused space. However, your storage system will report 100 GB of used space and 100 GB of unused
space.
Host-side space management uses various methods to reconcile the space differential between the host and
the storage system.
If your host supports SCSI thin provisioning, you can enable the space-allocation
option in ONTAP to turn on automatic host-side space management.
Enabling SCSI thin provisioning enables you to do the following.
When data is deleted on a host that supports SCSI thin provisioning, host-side space management
identifies the blocks of deleted data on the host file system and automatically issues one or more SCSI
UNMAP commands to free corresponding blocks on the storage system.
• Notify the host when a LUN runs out of space while keeping the LUN online
On hosts that do not support SCSI thin provisioning, when the volume containing the LUN runs out of space
and cannot automatically grow, ONTAP takes the LUN offline. However, on hosts that support SCSI thin
provisioning, ONTAP does not take the LUN offline when it runs out of space. The LUN remains online in
read-only mode and the host is notified that the LUN can no longer accept writes.
Related information
ONTAP SAN host configuration
If you set the space-allocation option to enabled, ONTAP notifies the host when
the volume has run out of space and the LUN in the volume cannot accept writes. This
option also enables ONTAP to reclaim space automatically when your host deletes data.
About this task
The space-allocation option is set to disabled by default, and you must take the LUN offline to enable
space allocation. After you enable space allocation, you must perform discovery on the host before the host
will recognize that space allocation has been enabled.
Steps
1. Take the LUN offline.
5. On the host, rescan all disks to ensure that the change to the -space-allocation option is correctly
discovered.
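Between taking the LUN offline and rescanning on the host, you set the option and bring the LUN back online; a minimal sketch, assuming a LUN at /vol/vol1/lun1 on SVM vs1:
lun modify -vserver vs1 -path /vol/vol1/lun1 -space-allocation enabled
lun online -vserver vs1 -path /vol/vol1/lun1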
Host support for SCSI thin provisioning
To leverage the benefits of SCSI thin provisioning, it must be supported by your host.
SCSI thin provisioning uses the Logical Block Provisioning feature as defined in the SCSI
SBC-3 standard. Only hosts that support this standard can use SCSI thin provisioning in
ONTAP.
The following hosts currently support SCSI thin provisioning when you enable space allocation:
When you enable the space allocation functionality in ONTAP, you turn on the following SCSI thin provisioning
features:
You can use SnapCenter software to simplify some of the management and data
protection tasks associated with iSCSI and FC storage. SnapCenter is an optional
management package for Windows and UNIX hosts.
You can use SnapCenter Software to easily create virtual disks from pools of storage that can be distributed
among several storage systems and to automate storage provisioning tasks and simplify the process of
creating Snapshot copies and clones from Snapshot copies consistent with host data.
About igroups
Initiator groups (igroups) are tables of FC protocol host WWPNs or iSCSI host node
names. You can define igroups and map them to LUNs to control which initiators have
access to LUNs.
Typically, you want all of the host’s initiator ports or software initiators to have access to a LUN. If you are using
multipathing software or have clustered hosts, each initiator port or software initiator of each clustered host
needs redundant paths to the same LUN.
You can create igroups that specify which initiators have access to the LUNs either before or after you create
LUNs, but you must create igroups before you can map a LUN to an igroup.
Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you
cannot map a LUN to multiple igroups that have the same initiator. An initiator cannot be a member of igroups
of differing ostypes.
For example, an igroup can contain the iSCSI node name iqn.1991-05.com.microsoft:host1, or the FC WWPNs 10:00:00:00:c9:2b:02:3c and 10:00:00:00:c9:2b:47:a2.
Follow the instructions in your Host Utilities documentation to obtain WWPNs and to find the iSCSI node
names associated with a specific host. For hosts running ESX software, use Virtual Storage Console.
VMware and Microsoft support copy offload operations to increase performance and
network throughput. You must configure your system to meet the requirements of the
VMware and Windows operating system environments to use their respective copy
offload functions.
When using VMware and Microsoft copy offload in virtualized environments, your LUNs must be aligned.
Unaligned LUNs can degrade performance.
Creating a virtualized environment by using storage virtual machines (SVMs) and LIFs enables you to expand
your SAN environment to all of the nodes in your cluster.
• Distributed management
You can log in to any node in the SVM to administer all of the nodes in a cluster.
With MPIO and ALUA, you have access to your data through any active iSCSI or FC LIFs for the SVM.
If you use SLM and portsets, you can limit which LIFs an initiator can use to access LUNs.
Example of LUN access with multiple SVMs in a cluster
A physical port can support multiple LIFs serving different SVMs. Because LIFs are associated with a particular
SVM, the cluster nodes can send the incoming data traffic to the correct SVM. In the following example, each
node from 1 through 4 has a LIF for SVM-2 using the physical port 0c on each node. Host 1 connects to LIF1.1
and LIF1.2 in SVM-1 to access LUN1. Host 2 connects to LIF2.1 and LIF2.2 in SVM-2 to access LUN2. Both
SVMs are sharing the physical port 0c on the nodes 1 and 2. SVM-2 has additional LIFs that Host 2 is using to
access LUNs 3 and 4. These LIFs are using physical port 0c on nodes 3 and 4. Multiple SVMs can share the
physical ports on the nodes.
Example of an active/unoptimized (indirect) path to a LUN from a host system
In an active/unoptimized (indirect) path, the data traffic travels over the cluster network. This path is used only if all of the active/optimized paths from a host are unavailable to handle traffic. If the path from Host 2 to SVM-2 LIF2.4 is lost, access to LUN3 and LUN4 traverses the cluster network. Access from Host 2 uses LIF2.3 on node3. The traffic then enters the cluster network switch, is forwarded to node4 to reach LUN3 and LUN4, and then traverses the cluster network switch again and returns to Host 2 through LIF2.3. This active/unoptimized path is used until the path to LIF2.4 is restored or a new LIF is established for SVM-2 on another physical port on node4.
You can configure two LIFs per node: one for each fabric being used with FC, and one for each of the separate Ethernet networks being used with iSCSI.
Improve VMware VAAI performance for ESX hosts
ONTAP supports certain VMware vStorage APIs for Array Integration (VAAI) features
when the ESX host is running ESX 4.1 or later. These features help offload operations
from the ESX host to the storage system and increase the network throughput. The ESX
host enables the features automatically in the correct environment.
The VAAI feature supports the following SCSI commands:
• EXTENDED_COPY
This feature enables the host to initiate the transfer of data between the LUNs or within a LUN without
involving the host in the data transfer. This results in saving ESX CPU cycles and increasing the network
throughput. The extended copy feature, also known as "copy offload," is used in scenarios such as cloning
a virtual machine. When invoked by the ESX host, the copy offload feature copies the data within the
storage system rather than going through the host network. Copy offload transfers data in the following
ways:
◦ Within a LUN
◦ Between LUNs within a volume
◦ Between LUNs on different volumes within a storage virtual machine (SVM)
◦ Between LUNs on different SVMs within a cluster
If this feature cannot be invoked, the ESX host automatically uses the standard READ and WRITE
commands for the copy operation.
• WRITE_SAME
This feature offloads the work of writing a repeated pattern, such as all zeros, to a storage array. The ESX
host uses this feature in operations such as zero-filling a file.
• COMPARE_AND_WRITE
This feature bypasses certain file access concurrency limits, which speeds up operations such as booting
up virtual machines.
The VAAI features are part of the ESX operating system and are automatically invoked by the ESX host when
you have set up the correct environment.
The copy offload feature currently does not support copying data between VMware
datastores that are hosted on different storage systems.
Determine if VAAI features are supported by ESX
To confirm whether the ESX operating system supports the VAAI features, you can check the vSphere Client or
use any other means of accessing the host. ONTAP supports the SCSI commands by default.
You can check your ESX host advanced settings to determine whether VAAI features are enabled. The table
indicates which SCSI commands correspond to ESX control names.
WRITE_SAME: HardwareAcceleratedInit
COMPARE_AND_WRITE: HardwareAcceleratedLocking
Microsoft Offloaded Data Transfer (ODX), also known as copy offload, enables direct
data transfers within a storage device or between compatible storage devices without
transferring the data through the host computer.
ONTAP supports ODX for both the SMB and SAN protocols.
In non-ODX file transfers, the data is read from the source and is transferred across the network to the host.
The host transfers the data back over the network to the destination. In ODX file transfer, the data is copied
directly from the source to the destination without passing through the host.
Because ODX offloaded copies are performed directly between the source and destination, significant
performance benefits are realized if copies are performed within the same volume, including faster copy time
for same volume copies, reduced utilization of CPU and memory on the client, and reduced network I/O
bandwidth utilization. If copies are across volumes, there might not be significant performance gains compared
to host-based copies.
For SAN environments, ODX is only available when it is supported by both the host and the storage system.
Client computers that support ODX and have ODX enabled automatically and transparently use offloaded file
transfer when moving or copying files. ODX is used regardless of whether you drag-and-drop files through
Windows Explorer or use command-line file copy commands, or whether a client application initiates file copy
requests.
If you plan to use ODX for copy offloads, you need to be familiar with volume support considerations, system
requirements, and software capability requirements.
• ONTAP
For optimal performance, the source volume should be greater than 260 GB.
ODX is supported in Windows Server 2012 or later and in Windows 8 or later. The Interoperability Matrix
contains the latest information about supported Windows clients.
The application that performs the data transfer must support ODX. Application operations that support ODX
include the following:
◦ Hyper-V management operations, such as creating and converting virtual hard disks (VHDs), managing
Snapshot copies, and copying files between virtual machines
◦ Windows Explorer operations
◦ Windows PowerShell copy commands
◦ Windows command prompt copy commands
The Microsoft TechNet Library contains more information about supported ODX applications on
Windows servers and clients.
• If you use compressed volumes, the compression group size must be 8K.
You can delete ODX files found in qtrees. You must not remove or modify any other ODX system files unless
you are told by technical support to do so.
When using the ODX feature, there are ODX system files that exist in every volume of the system. These files
enable point-in-time representation of data used during the ODX transfer. The following system files are in the
root level of each volume that contains LUNs or files to which data was offloaded:
You can use the copy-offload delete-tokens -path dir_path -node node_name command to
delete a qtree containing an ODX file.
Use cases for ODX
You should be aware of the use cases for using ODX on SVMs so that you can determine under what
circumstances ODX provides you with performance benefits.
Windows servers and clients that support ODX use copy offload as the default way of copying data across
remote servers. If the Windows server or client does not support ODX or the ODX copy offload fails at any
point, the copy or move operation falls back to traditional reads and writes for the copy or move operation.
The following use cases support using ODX copies and moves:
• Intra-volume
The source and destination files or LUNs are within the same volume.
The source and destination files or LUNs are on different volumes that are located on the same node. The
data is owned by the same SVM.
The source and destination files or LUNs are on different volumes that are located on different nodes. The
data is owned by the same SVM.
The source and destination file or LUNs are on different volumes that are located on the same node. The
data is owned by different SVMs.
The source and destination file or LUNs are on different volumes that are located on different nodes. The
data is owned by different SVMs.
• Inter-cluster
The source and destination LUNs are on different volumes that are located on different nodes across
clusters. This is only supported for SAN and does not work for SMB.
• With the ONTAP ODX implementation, you can use ODX to copy files between SMB shares and FC or
iSCSI attached virtual drives.
You can use Windows Explorer, the Windows CLI or PowerShell, Hyper-V, or other applications that
support ODX to copy or move files seamlessly using ODX copy offload between SMB shares and
connected LUNs, provided that the SMB shares and LUNs are on the same cluster.
• Hyper-V provides some additional use cases for ODX copy offload:
◦ You can use ODX copy offload pass-through with Hyper-V to copy data within or across virtual hard
disk (VHD) files or to copy data between mapped SMB shares and connected iSCSI LUNs within the
same cluster.
This allows copies from guest operating systems to pass through to the underlying storage.
◦ When creating fixed-sized VHDs, ODX is used for initializing the disk with zeros, using a well-known
zeroed token.
◦ ODX copy offload is used for virtual machine storage migration if the source and destination storage is
on the same cluster.
To take advantage of the use cases for ODX copy offload pass-through with Hyper-V, the
guest operating system must support ODX and the guest operating system’s disks must be
SCSI disks backed by storage (either SMB or SAN) that supports ODX. IDE disks on the
guest operating system do not support ODX pass-through.
SAN administration
SAN provisioning
The content in this section shows you how to configure and manage SAN environments
with the ONTAP command line interface (CLI) and System Manager in ONTAP 9.7 and
later releases.
If you are using the classic System Manager (available only in ONTAP 9.7 and earlier), see these topics:
• iSCSI protocol
• FC/FCoE protocol
You can use the iSCSI and FC protocols to provide storage in a SAN environment.
With iSCSI and FC, storage targets are called LUNs (logical units) and are presented to hosts as standard
block devices. You create LUNs and then map them to initiator groups (igroups). Initiator groups are tables of
FC host WWPNs and iSCSI host node names, and control which initiators have access to which LUNs.
FC targets connect to the network through FC switches and host-side adapters and are identified by world-
wide port names (WWPNs). iSCSI targets connect to the network through standard Ethernet network adapters
(NICs), TCP offload engine (TOE) cards with software initiators, converged network adapters (CNAs) or
dedicated host bus adapters (HBAs) and are identified by iSCSI qualified names (IQNs).
You must configure your switches for FCoE before your FC service can run over the
existing Ethernet infrastructure.
What you’ll need
• Your SAN configuration must be supported.
For more information about supported configurations, see the NetApp Interoperability Matrix Tool.
• A converged network adapter (CNA) must be installed on your host.
Steps
1. Use your switch documentation to configure your switches for FCoE.
2. Verify that the DCB settings for each node in the cluster have been correctly configured.
DCB settings are configured on the switch. Consult your switch documentation if the settings are incorrect.
3. Verify that the FCoE login is working when the FC target port online status is true.
If the FC target port online status is false, consult your switch documentation.
Related information
• NetApp Interoperability Matrix Tool
• NetApp Technical Report 3800: Fibre Channel over Ethernet (FCoE) End-to-End Deployment Guide
• Cisco MDS 9000 NX-OS and SAN-OS Software Configuration Guides
• Brocade products
System Requirements
Setting up LUNs involves creating a LUN, creating an igroup, and mapping the LUN to
the igroup. Your system must meet certain prerequisites before you can set up your
LUNs.
• The Interoperability Matrix must list your SAN configuration as supported.
• Your SAN environment must meet the SAN host and controller configuration limits specified in NetApp
Hardware Universe for your version of the ONTAP software.
• A supported version of Host Utilities must be installed.
• You must have SAN LIFs on the LUN owning node and the owning node’s HA partner.
Related information
• NetApp Interoperability Matrix Tool
• ONTAP SAN Host Configuration
• NetApp Technical Report 4017: Fibre Channel SAN Best Practices
What to know before you create a LUN
You should be aware of the following regarding the size of your LUNs.
• When you create a LUN, the actual size of the LUN might vary slightly based on the OS type of the LUN.
The LUN OS type cannot be modified after the LUN is created.
• If you create a LUN at the maximum LUN size, be aware that the actual size of the LUN might be slightly less because ONTAP rounds down the limit.
• The metadata for each LUN requires approximately 64 KB of space in the containing aggregate. When you
create a LUN, you must ensure that the containing aggregate has enough space for the LUN’s metadata. If
the aggregate does not contain enough space for the LUN’s metadata, some hosts might not be able to
access the LUN.
Typically, the default LUN ID begins with 0 and is assigned in increments of 1 for each additional mapped LUN.
The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID
numbers depends on the host. For detailed information, see the documentation provided with your Host
Utilities.
Before you can enable block access for a storage virtual machine (SVM) with FC or
iSCSI, you must have a license.
Example 1. Steps
System Manager
Verify and add your FC or iSCSI license with ONTAP System Manager (9.7 and later).
CLI
Verify and add your FC or iSCSI license with the ONTAP CLI.
2. If you do not have an active license for FC or iSCSI, add your license code.
This procedure creates new LUNs on an existing storage VM which already has the FC
or iSCSI protocol configured.
If you need to create a new storage VM and configure the FC or iSCSI protocol, see Configure an SVM for FC
or Configure an SVM for iSCSI.
If the FC license is not enabled, the LIFs and SVMs appear to be online but the operational status is down.
Asymmetric logical unit access (ALUA) is always enabled during LUN creation. You cannot
change the ALUA setting.
You must use single initiator zoning for all of the FC LIFs in the SVM to host the initiators.
Beginning with ONTAP 9.8, when you provision storage, QoS is enabled by default. You can disable QoS or
choose a custom QoS policy during the provisioning process or at a later time.
Example 2. Steps
System Manager
Create LUNs to provide storage for a SAN host using the FC or iSCSI protocol with ONTAP System
Manager (9.7 and later).
To complete this task using System Manager Classic (available with 9.7 and earlier) refer to iSCSI
configuration for Red Hat Enterprise Linux
Steps
1. Install the appropriate SAN host utilities on your host.
2. In System Manager, click Storage > LUNs and then click Add.
3. Enter the required information to create the LUN.
4. You can click More Options to do any of the following, depending upon your version of ONTAP.
• Assign QoS policy to LUNs instead of parent volume (available beginning with ONTAP 9.10.1)
◦ More Options > Storage and Optimization
◦ Select Performance Service Level.
◦ To apply the QoS policy to individual LUNs instead of the entire volume, select Apply these performance limits enforcements to each LUN.
• Create a new initiator group using existing initiator groups (available beginning with ONTAP 9.9.1)
◦ More Options > HOST INFORMATION
◦ Select New initiator group using existing initiator groups.
• Disable QoS or choose a custom QoS policy (available beginning with ONTAP 9.8)
◦ More Options > Storage and Optimization
◦ Select Performance Service Level.
NOTE: In ONTAP 9.9.1 and later, if you select a custom QoS policy, you can also select manual placement on a specified local tier.
5. For FC, zone your FC switches by WWPN. Use one zone per initiator and include all target ports in
each zone.
6. Discover LUNs on your host.
For VMware vSphere, use Virtual Storage Console (VSC) to discover and initialize your LUNs.
CLI
Create LUNs to provide storage for a SAN host using the FC or iSCSI protocol with the ONTAP CLI.
2. If you do not have a license for FC or iSCSI, use the license add command.
For iSCSI:
For FC:
NetApp supports a minimum of one iSCSI or FC LIF per node for each SVM serving data. However, two LIFs per node are required for redundancy.
5. Verify that your LIFs have been created and that their operational status is online:
Your LUN name cannot exceed 255 characters and cannot contain spaces.
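A minimal CLI sketch of creating a LUN, creating an igroup, and mapping the LUN follows; the SVM, volume, LUN, igroup, and initiator names are placeholders:
lun create -vserver vs1 -volume vol1 -lun lun1 -size 100GB -ostype linux
lun igroup create -vserver vs1 -igroup ig1 -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:host1
lun map -vserver vs1 -path /vol/vol1/lun1 -igroup ig1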
11. Follow steps in your host documentation for enabling block access on your specific hosts.
12. Use the Host Utilities to complete the FC or iSCSI mapping and to discover your LUNs on the host.
Related information
• SAN Administration overview
• ONTAP SAN Host Configuration
• View and manage SAN initiator groups in System Manager
• NetApp Technical Report 4017: Fibre Channel SAN Best Practices
NVMe provisioning
NVMe Overview
You can use the non-volatile memory express (NVMe) protocol to provide storage in a
SAN environment. The NVMe protocol is optimized for performance with solid state
storage.
For NVMe, storage targets are called namespaces. An NVMe namespace is a quantity of non-volatile storage
that can be formatted into logical blocks and presented to a host as a standard block device. You create
namespaces and subsystems, and then map the namespaces to the subsystems, similar to the way LUNs are
provisioned and mapped to igroups for FC and iSCSI.
NVMe targets are connected to the network through a standard FC infrastructure using FC switches or a
standard TCP infrastructure using Ethernet switches and host-side adapters.
Support for NVMe varies based on your version of ONTAP. See NVMe support and limitations for details.
What NVMe is
The nonvolatile memory express (NVMe) protocol is a transport protocol used for accessing nonvolatile
storage media.
NVMe over Fabrics (NVMeoF) is a specification-defined extension to NVMe that enables NVMe-based
communication over connections other than PCIe. This interface allows for external storage enclosures to be
connected to a server.
NVMe is designed to provide efficient access to storage devices built with non-volatile memory, from flash
technology to higher performing, persistent memory technologies. As such, it does not have the same
limitations as storage protocols designed for hard disk drives. Flash and solid state devices (SSDs) are a type
of non-volatile memory (NVM). NVM is a type of memory that keeps its content during a power outage. NVMe
is a way that you can access that memory.
The benefits of NVMe include increased speeds, productivity, throughput, and capacity for data transfer.
Specific characteristics include the following:
• NVMe is more productive with flash technologies, enabling faster response times
• NVMe allows for multiple data requests for each “request” sent to the SSD.
NVMe takes less time to decode a “request” and does not require thread locking in a multithreaded
program.
• NVMe supports functionality that prevents bottlenecking at the CPU level and enables massive scalability
as systems expand.
An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks.
Namespaces are used when a storage virtual machine is configured with the NVMe protocol and are the
equivalent of LUNs for FC and iSCSI protocols.
One or more namespaces are provisioned and connected to an NVMe host. Each namespace can support
various block sizes.
The NVMe protocol provides access to namespaces through multiple controllers. Using NVMe drivers, which
are supported on most operating systems, solid state drive (SSD) namespaces appear as standard-block
devices on which file systems and applications can be deployed without any modification.
A namespace ID (NSID) is an identifier used by a controller to provide access to a namespace. When setting
the NSID for a host or host group, you also configure the accessibility to a volume by a host. A logical block
can only be mapped to a single host group at a time, and a given host group does not have any duplicate
NSIDs.
An NVMe subsystem includes one or more NVMe controllers, namespaces, NVM subsystem ports, an NVM
storage medium, and an interface between the controller and the NVM storage medium. When you create an
NVMe namespace, by default it is not mapped to a subsystem. You can also choose to map it to a new or existing
subsystem.
Related information
• Provision NVMe storage
• Map an NVMe namespace to a subsystem
• Configure SAN hosts and cloud clients
Beginning with ONTAP 9.5, a license is required to support NVMe. If NVMe is enabled in ONTAP 9.4, a 90-day grace period is given to acquire the license after upgrading to ONTAP 9.5.
You can enable the license using the following command:
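system license add -license-code <your_license_code>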
Beginning with ONTAP 9.4, the non-volatile memory express (NVMe) protocol is available
for SAN environments. FC-NVMe uses the same physical setup and zoning practice as
traditional FC networks but allows for greater bandwidth, increased IOPS, and reduced latency compared with FC-SCSI.
NVMe support and limitations vary based on your version of ONTAP, your platform and your configuration. For
details on your specific configuration, see the NetApp Interoperability Matrix Tool.
Configuration
• You can set up your NVMe configuration with single nodes or HA pairs using a single fabric or multifabric.
• You should configure one management LIF for every SVM supporting SAN.
• The use of heterogeneous FC switch fabrics is not supported, except in the case of embedded blade
switches.
• Cascade, partial mesh, full mesh, core-edge, and director fabrics are all industry-standard methods of
connecting FC switches to a fabric, and all are supported.
A fabric can consist of one or multiple switches, and the storage controllers can be connected to multiple
switches.
Features
The following NVMe features are supported based on your version of ONTAP.
9.6: 512-byte blocks and 4096-byte blocks for namespaces
Protocols
Beginning with ONTAP 9.8, you can configure SCSI, NAS and NVMe protocols on the same storage virtual
machine (SVM).
In ONTAP 9.7 and earlier, NVMe can be the only protocol on the SVM.
9.9.1: AFF, ASA
Namespaces
When working with NVMe namespaces, you should be aware of the following:
• If you lose data in a LUN, it cannot be restored from a namespace, or vice versa.
• The space guarantee for namespaces is the same as the space guarantee of the containing volume.
• You cannot create a namespace on a volume transitioned from Data ONTAP operating in 7-Mode.
• Namespaces do not support the following:
◦ Renaming
◦ Inter-volume move
◦ Inter-volume copy
◦ Copy on Demand
Additional limitations
See the NetApp Hardware Universe for a complete list of NVMe limits.
Related information
Best practices for modern SAN
If you want to use the NVMe protocol on a node, you must configure your SVM
specifically for NVMe.
What you’ll need
Your FC or Ethernet adapters must support NVMe. Supported adapters are listed in the NetApp Hardware
Universe.
Example 3. Steps
System Manager
Configure a storage VM for NVMe with ONTAP System Manager (9.7 and later).
CLI
Configure a storage VM for NVMe with the ONTAP CLI.
vserver show
2. Verify that you have NVMe or TCP capable adapters installed in your cluster:
For NVMe:
For TCP:
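For example, the NVMe check can use network fcp adapter show -data-protocols-supported fc-nvme, and the TCP check can use network port show.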
3. If you are running ONTAP 9.7 or earlier, remove all protocols from the SVM:
Beginning with ONTAP 9.8, it is not necessary to remove other protocols when adding NVMe.
4. Add the NVMe protocol to the SVM:
5. If you are running ONTAP 9.7 or earlier, verify that NVMe is the only protocol allowed on the SVM:
NVMe should be the only protocol displayed under the allowed protocols column.
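A sketch of the protocol commands for the preceding steps, assuming an SVM named vs1:
vserver remove-protocols -vserver vs1 -protocols iscsi,fcp,nfs,cifs,ndmp
vserver add-protocols -vserver vs1 -protocols nvme
vserver show -vserver vs1 -fields allowed-protocols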
network interface create -vserver <SVM_name> -lif <lif_name> -role data -data-protocol fc-nvme -home-node <home_node> -home-port <home_port>
If a warning message is displayed about the auto efficiency policy, it can be safely ignored.
Use these steps to create namespaces and provision storage for any NVMe supported
host on an existing storage VM.
Beginning with ONTAP 9.8, when you provision storage, QoS is enabled by default. You can disable QoS or
choose a custom QoS policy during the provisioning process or at a later time.
System Manager
Using ONTAP System Manager (9.7 and later), create namespaces to provide storage using the NVMe
protocol.
Steps
1. In System Manager, click Storage > NVMe Namespaces and then click Add.
2. If you are running ONTAP 9.8 or later and you want to disable QoS or choose a custom QoS policy,
click More Options and then, under Storage and Optimization select Performance Service Level.
3. Zone your FC switches by WWPN. Use one zone per initiator and include all target ports in each
zone.
4. On your host, discover the new namespaces.
5. Initialize the namespace and format it with a file system.
6. Verify that your host can write and read data on the namespace.
CLI
Using the ONTAP CLI, create namespaces to provide storage using the NVMe protocol.
This procedure creates an NVMe namespace and subsystem on an existing storage VM which has
already been configured for the NVMe protocol, then maps the namespace to the subsystem to allow data
access from your host system.
If you need to configure the storage VM for NVMe, see Configure an SVM for NVMe.
Steps
1. Verify that the SVM is configured for NVMe:
The NVMe subsystem name is case sensitive. It must contain 1 to 96 characters. Special characters
are allowed.
4. Verify that the subsystem was created:
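A minimal sketch of the commands for these steps follows; the SVM, volume, namespace, and subsystem names are placeholders:
vserver show -vserver vs1 -fields allowed-protocols
vserver nvme namespace create -vserver vs1 -path /vol/nvme_vol1/ns1 -size 100GB -ostype linux
vserver nvme subsystem create -vserver vs1 -subsystem sub1 -ostype linux
vserver nvme subsystem show -vserver vs1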
Mapping an NVMe namespace to a subsystem allows data access from your host. You
can map an NVMe namespace to a subsystem when you provision storage or you can do
it after your storage has been provisioned.
Beginning with ONTAP 9.14.1, you can prioritize resource allocation for specific hosts. By default, when a host
is added to the NVMe subsystem, it is given regular priority. You can use the ONTAP command line interface
(CLI) to manually change the default priority from regular to high. Hosts assigned a high priority are allocated
larger I/O queue counts and queue-depths.
If you want to give a high priority to a host that was added to a subsystem in ONTAP 9.13.1 or
earlier, you can change the host priority.
Steps
1. Obtain the NQN from the host.
2. Add the host NQN to the subsystem:
If you want to change the default priority of the host from regular to high, use the -priority high
option. This option is available beginning with ONTAP 9.14.1.
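For example (the SVM name, subsystem name, and host NQN are placeholders):
vserver nvme subsystem host add -vserver vs1 -subsystem sub1 -host-nqn <host_NQN>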
Manage LUNs
Beginning with ONTAP 9.10.1, you can use System Manager to assign or remove Quality
of Service (QoS) policies on multiple LUNs at the same time.
If the QoS policy is assigned at the volume level, it must be changed at the volume level. You
can only edit the QoS policy at the LUN level if it was originally assigned at the LUN level.
Steps
1. In System Manager, click Storage > LUNs.
2. Select the LUN or LUNs you want to edit.
If you are editing more than one LUN at a time, the LUNs must belong to the same Storage Virtual Machine
(SVM). If you select LUNs that do not belong to the same SVM, the option to edit the QoS Policy Group is
not displayed.
Beginning with ONTAP 9.11.1, you can use the ONTAP CLI to convert an existing LUN in place to an NVMe namespace.
What you’ll need
• Specified LUN should not have any existing maps to an igroup.
• LUN should not be in a MetroCluster configured SVM or in an SM-BC relationship.
• LUN should not be a protocol endpoint or bound to a protocol endpoint.
• LUN should not have non-zero prefix and/or suffix stream.
• LUN should not be part of a snapshot or on the destination side of SnapMirror relationship as a read-only
LUN.
Step
1. Convert a LUN to an NVMe namespace:
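A sketch of the conversion command, with placeholder paths; verify the exact parameter names for your release:
lun convert-to-namespace -vserver <SVM_name> -lun-path /vol/<volume_name>/<lun_name> -namespace-path /vol/<volume_name>/<namespace_name>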
Beginning with ONTAP 9.10.1 you can use System Manager to take LUNs offline. Prior to
ONTAP 9.10.1, you must use the ONTAP CLI to take LUNs offline.
System Manager
Steps
1. In System Manager, click Storage > LUNs.
2. Take a single LUN or multiple LUNs offline.
To take multiple LUNs offline:
a. Select the LUNs you want to take offline.
b. Click More and select Take Offline.
CLI
You can only take one LUN offline at a time when using the CLI.
Step
1. Take the LUN offline:
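For example, with placeholder names:
lun offline -vserver <SVM_name> -path /vol/<volume_name>/<lun_name>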
Resize a LUN
The size to which you can increase your LUN varies depending upon your version of ONTAP.
ONTAP 9.8 and later supports a maximum LUN size of 128 TB on All-Flash SAN Array (ASA) platforms and 16 TB on non-ASA platforms.
You do not need to take the LUN offline to increase the size. However, after you have increased the size, you
must rescan the LUN on the host for the host to recognize the change in size.
See the Command Reference page for the lun resize command for more information about resizing a LUN.
Example 4. Steps
System Manager
Increase the size of a LUN with ONTAP System Manager (9.7 and later).
CLI
Increase the size of a LUN with the ONTAP CLI.
ONTAP operations round down the actual maximum size of the LUN so it is slightly
less than the expected value. Also, actual LUN size might vary slightly based on the
OS type of the LUN. To obtain the exact resized value, run the following commands in
advanced mode:
set -unit B
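The rest of that advanced-mode sequence is not reproduced here. As an illustrative sketch of the resize operation itself (the names and new size are placeholders):
lun resize -vserver <SVM_name> -path /vol/<volume_name>/<lun_name> -size 500GB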
Before you decrease the size of a LUN, the host needs to migrate the blocks containing the LUN data into the
boundary of the smaller LUN size. You should use a tool such as SnapCenter to ensure that the LUN is
properly decreased without truncating blocks containing LUN data. Manually decreasing the size of your LUN
is not recommended.
After you decrease the size of your LUN, ONTAP automatically notifies the initiator that the LUN size has
decreased. However, additional steps might be required on your host for the host to recognize the new LUN
size. Check your host documentation for specific information about decreasing the size of the host file
structure.
Move a LUN
You can move a LUN across volumes within a storage virtual machine (SVM), but you
cannot move a LUN across SVMs. LUNs moved across volumes within an SVM are
moved immediately and without loss of connectivity.
What you’ll need
If your LUN is using Selective LUN Map (SLM), you should modify the SLM reporting-nodes list to include the
destination node and its HA partner before you move your LUN.
Data protection through Snapshot copies occurs at the volume level. Therefore, when you move a LUN, it falls
under the data protection scheme of the destination volume. If you do not have Snapshot copies established
for the destination volume, Snapshot copies of the LUN are not created. Also, all of the Snapshot copies of the
LUN stay in the original volume until those Snapshot copies are deleted.
For Solaris os_type LUNs that are 1 TB or larger, the host might experience a timeout during the
LUN move. For this LUN type, you should unmount the LUN before initiating the move.
Example 5. Steps
System Manager
Move a LUN with ONTAP System Manager (9.7 and later).
Beginning with ONTAP 9.10.1, you can use System Manager to create a new volume when you move a
single LUN. In ONTAP 9.8 and 9.9.1, the volume to which you are moving your LUN must exist before you
begin the LUN move.
Steps
In ONTAP 9.10.1 and later, select whether to move the LUN to An existing volume or to a New volume.
3. Click Move.
CLI
Move a LUN with the ONTAP CLI.
During a very brief period, the LUN is visible on both the origin and destination volume. This is
expected and is resolved upon completion of the move.
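A hedged sketch of the move and of monitoring its progress, with placeholder paths:
lun move start -vserver <SVM_name> -source-path /vol/<source_volume>/<lun_name> -destination-path /vol/<destination_volume>/<lun_name>
lun move show -vserver <SVM_name>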
Related information
• Selective LUN Map
Delete LUNs
You can delete a LUN from a storage virtual machine (SVM) if you no longer need the
LUN.
What you’ll need
The LUN must be unmapped from its igroup before you can delete it.
Steps
1. Verify that the application or host is not using the LUN.
2. Unmap the LUN from the igroup:
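For example, the unmap and the subsequent delete might look like the following sketch (igroup and path names are placeholders):
lun mapping delete -vserver <SVM_name> -path /vol/<volume_name>/<lun_name> -igroup <igroup_name>
lun delete -vserver <SVM_name> -path /vol/<volume_name>/<lun_name>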
LUNs in Snapshot copies can be used as source LUNs for the lun copy command. When you copy a LUN
using the lun copy command, the LUN copy is immediately available for read and write access. The source
LUN is unchanged by creation of a LUN copy. Both the source LUN and the LUN copy exist as unique LUNs
with different LUN serial numbers. Changes made to the source LUN are not reflected in the LUN copy, and
changes made to the LUN copy are not reflected in the source LUN. The LUN mapping of the source LUN is
not copied to the new LUN; the LUN copy must be mapped.
Data protection through Snapshot copies occurs at the volume level. Therefore, if you copy a LUN to a volume
different from the volume of the source LUN, the destination LUN falls under the data protection scheme of the
destination volume. If you do not have Snapshot copies established for the destination volume, Snapshot
copies are not created of the LUN copy.
• A protocol-endpoint class LUN
Knowing the configured space and actual space used for your LUNs can help you
determine the amount of space that can be reclaimed when doing space reclamation, the
amount of reserved space that contains data, and the total configured size versus the
actual size used for a LUN.
Step
1. View the configured space versus the actual space used for a LUN:
lun show
The following example shows the configured space versus the actual space used by the LUNs in the vs3
storage virtual machine (SVM):
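The example output is not reproduced here; a hedged form of the command that produces it (the field list is illustrative and may vary by release) is:
lun show -vserver vs3 -fields path, size, size-used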
You can control input/output (I/O) performance to LUNs by assigning LUNs to Storage
QoS policy groups. You might control I/O performance to ensure that workloads achieve
specific performance objectives or to throttle a workload that negatively impacts other
workloads.
About this task
Policy groups enforce a maximum throughput limit (for example, 100 MB/s). You can create a policy group
without specifying a maximum throughput, which enables you to monitor performance before you control the
workload.
You can also assign storage virtual machines (SVMs) with FlexVol volumes and LUNs to policy groups.
• The LUN must be contained by the SVM to which the policy group belongs.
You specify the SVM when you create the policy group.
• If you assign a LUN to a policy group, then you cannot assign the LUN’s containing volume or SVM to a
policy group.
For more information about how to use Storage QoS, see the System administration reference.
Steps
1. Use the qos policy-group create command to create a policy group.
2. Use the lun create command or the lun modify command with the -qos-policy-group parameter
to assign a LUN to a policy group.
3. Use the qos statistics commands to view performance data.
4. If necessary, use the qos policy-group modify command to adjust the policy group’s maximum
throughput limit.
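An end-to-end sketch of steps 1 and 2, with placeholder names and an assumed 100 MB/s limit:
qos policy-group create -policy-group <policy_group_name> -vserver <SVM_name> -max-throughput 100MB/s
lun modify -vserver <SVM_name> -path /vol/<volume_name>/<lun_name> -qos-policy-group <policy_group_name>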
Tools are available to help you effectively monitor your LUNs and avoid running out of
space.
• Active IQ Unified Manager is a free tool that enables you to manage all storage across all clusters in your
environment.
• System Manager is a graphical user interface built into ONTAP that enables you to manually manage
storage needs at the cluster level.
• OnCommand Insight presents a single view of your storage infrastructure and enables you to set up
automatic monitoring, alerts, and reporting when your LUNs, volumes, and aggregates are running out of
storage space.
LUNs that have been transitioned from Data ONTAP operating in 7-Mode to ONTAP have certain capabilities
and restrictions that affect the way the LUNs can be managed.
Restoring the volume transitions all of the LUNs captured in the Snapshot copy
• Restore a single LUN from a 7-Mode Snapshot copy using the snapshot restore-file command
• Create a clone of a LUN in a 7-Mode Snapshot copy
• Restore a range of blocks from a LUN captured in a 7-Mode Snapshot copy
• Create a FlexClone of the volume using a 7-Mode Snapshot copy
Related information
Copy-based transition
ONTAP might report I/O misalignments on properly aligned LUNs. In general, these
misalignment warnings can be disregarded as long as you are confident that your LUN is
properly provisioned and your partitioning table is correct.
LUNs and hard disks both provide storage as blocks. Because the block size for disks on the host is 512 bytes,
LUNs present blocks of that size to the host while actually using larger, 4-KB blocks to store data. The 512-
byte data block used by the host is referred to as a logical block. The 4-KB data block used by the LUN to store
data is referred to as a physical block. This means that there are eight 512-byte logical blocks in each 4-KB
physical block.
The host operating system can begin a read or write I/O operation at any logical block. I/O operations are only
considered aligned when they begin at the first logical block in the physical block. If an I/O operation begins at
a logical block that is not also the start of a physical block, the I/O is considered misaligned. ONTAP
automatically detects the misalignment and reports it on the LUN. However, the presence of misaligned I/O
does not necessarily mean that the LUN is also misaligned. It is possible for misaligned I/O to be reported on
properly aligned LUNs.
If you require further investigation, see the Knowledge Base article How to identify unaligned IO on LUNs?
For more information about tools for correcting alignment problems, see the following documentation:
To achieve I/O alignment with your OS partitioning scheme, you should use the recommended ONTAP LUN
ostype value that most closely matches your operating system.
The partition scheme employed by the host operating system is a major contributing factor to I/O
misalignments. Some ONTAP LUN ostype values use a special offset known as a “prefix” to enable the
default partitioning scheme used by the host operating system to be aligned.
In some circumstances, a custom partitioning table might be required to achieve I/O alignment.
However, for ostype values with a “prefix” value greater than 0, a custom partition might create
misaligned I/O.
The LUN ostype values in the following table should be used based on your operating system.
LUN ostype    Prefix (bytes)    Prefix (sectors)    Operating system
hpux          0                 0                   HP-UX
aix           0                 0                   AIX
Linux distributions offer a wide variety of ways to use a LUN including as raw devices for databases, various
volume managers, and file systems. It is not necessary to create partitions on a LUN when it is used as a raw device or as a physical volume in a logical volume.
For RHEL 5 and earlier and SLES 10 and earlier, if the LUN will be used without a volume manager, you
should partition the LUN to have one partition that begins at an aligned offset, which is a sector that is an even
multiple of eight logical blocks.
You need to consider various factors when determining whether you should use the solaris ostype or the
solaris_efi ostype.
See the Solaris Host Utilities Installation and Administration Guide for detailed information.
LUNs used as ESX boot LUNs are typically reported by ONTAP as misaligned. ESX creates multiple partitions
on the boot LUN, making it very difficult to align. Misaligned ESX boot LUNs are not typically a performance
problem because the total amount of misaligned I/O is small. Assuming that the LUN was correctly provisioned
with the VMware ostype, no action is needed.
Related information
Guest VM file system partition/disk alignment for VMware vSphere, other virtual environments, and NetApp
storage systems
When no space is available for writes, LUNs go offline to preserve data integrity. LUNs
can run out of space and go offline for various reasons, and there are several ways you
can address the issue.
If the… You can…
Volume is full but there is space available in the containing aggregate:
• For space guarantee volumes, use the volume modify command to increase the size of your volume.
• For thinly provisioned volumes, use the volume modify command to increase the maximum size of your volume.
Related information
Disk and local tier (aggregate) management
The iSCSI LUNs appear as local disks to the host. If the storage system LUNs are not
available as disks on the host, you should verify the configuration settings.
Configuration setting What to do
Network connectivity Verify that there is TCP/IP connectivity between the host and storage system.
• From the storage system command line, ping the host interfaces that are
being used for iSCSI.
• From the host command line, ping the storage system interfaces that are
being used for iSCSI.
System requirements Verify that the components of your configuration are qualified. Also, verify that you
have the correct host operating system (OS) service pack level, initiator version,
ONTAP version, and other system requirements. The Interoperability Matrix
contains the most up-to-date system requirements.
Jumbo frames If you are using jumbo frames in your configuration, verify that jumbo frames are
enabled on all devices in the network path: the host Ethernet NIC, the storage
system, and any switches.
iSCSI service status Verify that the iSCSI service is licensed and started on the storage system.
Initiator login Verify that the initiator is logged in to the storage system. If the iscsi
initiator show command output shows no initiators are logged in, check the
initiator configuration on the host. Also verify that the storage system is configured
as a target of the initiator.
iSCSI node names (IQNs) Verify that you are using the correct initiator node names in the igroup
configuration. On the host, you can use the initiator tools and commands to
display the initiator node name. The initiator node names configured in the igroup
and on the host must match.
LUN mappings Verify that the LUNs are mapped to an igroup. On the storage system console,
you can use one of the following commands:
• lun mapping show displays all LUNs and the igroups to which they are
mapped.
• lun mapping show -igroup displays the LUNs mapped to a specific
igroup.
iSCSI LIFs enable Verify that the iSCSI logical interfaces are enabled.
Related information
NetApp Interoperability Matrix Tool
In addition to using Selective LUN Map (SLM), you can limit access to your LUNs through
igroups and portsets.
Portsets can be used with SLM to further restrict access of certain targets to certain initiators. When using SLM
with portsets, LUNs will be accessible on the set of LIFs in the portset on the node that owns the LUN and on
that node’s HA partner.
In the following example, initiator1 does not have a portset. Without a portset, initiator1 can access LUN1
through both LIF1 and LIF2.
You can limit access to LUN1 by using a portset. In the following example, initiator1 can access LUN1 only
through LIF1. However, initiator1 cannot access LUN1 through LIF2 because LIF2 is not in portset1.
Related information
• Selective LUN Map
• Create a portset and bind to an igroup
You can use System Manager to view and manage initiator groups (igroups) and
initiators.
About this task
• The initiator groups identify which hosts are able to access specific LUNs on the storage system.
• After an initiator and initiator groups are created, you can also edit them or delete them.
• To manage SAN initiators groups and initiators, you can perform the following tasks:
◦ View and manage SAN initiator groups
◦ View and manage SAN initiators
You can use System Manager to view a list of initiator groups (igroups). From the list, you can perform
additional operations.
Steps
1. In System Manager, click Hosts > SAN Initiator Groups.
The page displays a list of initiator groups (igroups). If the list is large, you can view additional pages of the
list by clicking the page numbers at the lower right corner of the page.
The columns display various information about the igroups. Beginning with 9.11.1, the connection status of
the igroup is also displayed. Hover over status alerts to view details.
2. (Optional): You can perform the following tasks by clicking the icons at the upper right corner of the list:
◦ Search
◦ Download the list.
◦ Show or Hide columns in the list.
◦ Filter the data in the list.
3. You can perform operations from the list:
◦ Click Add to add an igroup.
◦ Click the igroup name to view the Overview page that shows details about the igroup.
On the Overview page, you can view the LUNs associated with the igroup, and you can initiate the
operations to create LUNs and map the LUNs. Click All SAN Initiators to return to the main list.
◦ Hover over the igroup, then click next to an igroup name to edit or delete the igroup.
◦ Hover over the area to the left of the igroup name, then check the check box. If you click +Add to
Initiator Group, you can add that igroup to another igroup.
◦ In the Storage VM column, click the name of a storage VM to view details about it.
You can use System Manager to view a list of initiators. From the list, you can perform additional operations.
Steps
1. In System Manager, click Hosts > SAN Initiator Groups.
Beginning with 9.11.1, the connection status of the initiator is also displayed. Hover over status alerts to
view details.
3. (Optional): You can perform the following tasks by clicking the icons at the upper right corner of the list:
◦ Search the list for particular initiators.
◦ Download the list.
◦ Show or Hide columns in the list.
◦ Filter the data in the list.
Beginning with ONTAP 9.9.1, you can create an igroup that consists of other existing
igroups.
1. In System Manager, click Host > SAN Initiator Groups, and then click Add.
2. Enter the igroup Name and Description.
The description serves as the igroup alias.
The OS type of a nested igroup cannot be changed after the igroup is created.
You can use Search to find and select the initiator groups you want to add.
Beginning with ONTAP 9.9.1, you can map igroups to two or more LUNs simultaneously.
1. In System Manager, click Storage > LUNs.
2. Select the LUNs you want to map.
3. Click More, then click Map To Initiator Groups.
The selected igroups are added to the selected LUNs. The pre-existing mappings are not
overwritten.
In addition to using Selective LUN Map (SLM), you can create a portset and bind the
portset to an igroup to further limit which LIFs can be used by an initiator to access a
LUN.
If you do not bind a portset to an igroup, then all of the initiators in the igroup can access mapped LUNs
through all of the LIFs on the node owning the LUN and the owning node’s HA partner.
Unless you are using interface groups, two LIFs are recommended for redundancy for both iSCSI and FC.
Only one LIF is recommended for interface groups.
Example 6. Steps
System Manager
Beginning with ONTAP 9.10.1, you can use System Manager to create portsets and bind them to igroups.
If you need to create a portset and bind it to an igroup in an ONTAP release earlier than 9.10.1 you must
use the ONTAP CLI procedure.
1. In System Manager, click Network > Overview > Portsets, and click Add.
2. Enter the information for the new portset and click Add.
3. Click Hosts > SAN Initiator Groups.
4. To bind the portset to a new igroup, click Add.
To bind the portset to an existing igroup, select the igroup, click , and then click Edit Initiator Group.
Related information
View and manage initiators and igroups
CLI
1. Create a port set containing the appropriate LIFs:
If you are using FC, specify the protocol parameter as fcp. If you are using iSCSI, specify the
protocol parameter as iscsi.
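A hedged sketch of creating the portset and then binding it to an igroup (LIF, portset, and igroup names are placeholders):
lun portset create -vserver <SVM_name> -portset <portset_name> -protocol fcp -port-name <lif1>,<lif2>
lun igroup bind -vserver <SVM_name> -igroup <igroup_name> -portset <portset_name>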
Manage portsets
In addition to Selective LUN Map (SLM), you can use portsets to further limit which LIFs
can be used by an initiator to access a LUN.
Beginning with ONTAP 9.10.1, you can use System Manager to change the network interfaces associated with
portsets and to delete portsets.
Change network interfaces associated with a portset
Delete a portset
Selective LUN Map (SLM) reduces the number of paths from the host to the LUN. With
SLM, when a new LUN map is created, the LUN is accessible only through paths on the
node owning the LUN and its HA partner.
SLM enables management of a single igroup per host and also supports nondisruptive LUN move operations
that do not require portset manipulation or LUN remapping.
Portsets can be used with SLM to further restrict access of certain targets to certain initiators. When using SLM
with portsets, LUNs will be accessible on the set of LIFs in the portset on the node that owns the LUN and on
that node’s HA partner.
If your environment has a combination of LUNs created in an ONTAP 9 release and LUNs transitioned from
previous versions, you might need to determine whether Selective LUN Map (SLM) is enabled on a specific
LUN.
You can use the information displayed in the output of the lun mapping show -fields reporting-
nodes, node command to determine whether SLM is enabled on your LUN map. If SLM is not enabled, "-" is
displayed in the cells under the “reporting-nodes” column of the command output. If SLM is enabled, the list of
nodes displayed under the “nodes” column is duplicated in the “reporting-nodes” column.
If you are moving a LUN or a volume containing LUNs to another high availability (HA) pair within the same
cluster, you should modify the Selective LUN Map (SLM) reporting-nodes list before initiating the move to
ensure that active, optimized LUN paths are maintained.
Steps
1. Add the destination node and its partner node to the reporting-nodes list of the aggregate or volume:
If you have a consistent naming convention, you can modify multiple LUN mappings at the same time by
using igroup_prefix* instead of igroup_name.
2. Rescan the host to discover the newly added paths.
3. If your OS requires it, add the new paths to your multipath network I/O (MPIO) configuration.
4. Run the command for the needed move operation and wait for the operation to finish.
5. Verify that I/O is being serviced through the Active/Optimized path:
6. Remove the previous LUN owner and its partner node from the reporting-nodes list:
7. Verify that the LUN has been removed from the existing LUN map:
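As an illustrative sketch of steps 1 and 6 (the LUN path, igroup, and destination volume are placeholders):
lun mapping add-reporting-nodes -vserver <SVM_name> -path /vol/<volume_name>/<lun_name> -igroup <igroup_name> -destination-volume <destination_volume_name>
lun mapping remove-reporting-nodes -vserver <SVM_name> -path /vol/<volume_name>/<lun_name> -igroup <igroup_name> -remote-nodes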
Ethernet networks vary greatly in performance. You can maximize the performance of the
network used for iSCSI by selecting specific configuration values.
Steps
1. Connect the host and storage ports to the same network.
2. Select the highest speed ports available, and dedicate them to iSCSI.
See Network management for information about using the CLI to configure Ethernet port flow control.
All devices in the data path, including initiators, targets, and switches, must support jumbo frames.
Otherwise, enabling jumbo frames actually reduces network performance substantially.
To configure a storage virtual machine (SVM) for iSCSI, you must create LIFs for the
SVM and assign the iSCSI protocol to those LIFs.
About this task
You need a minimum of one iSCSI LIF per node for each SVM serving data with the iSCSI protocol. For
redundancy, you should create at least two LIFs per node.
Example 7. Steps
System Manager
Configure a storage VM for iSCSI with ONTAP System Manager (9.7 and later).
CLI
Configure a storage VM for iSCSI with the ONTAP CLI.
2. Create a LIF for the SVMs on each node to use for iSCSI:
◦ For ONTAP 9.6 and later:
4. Verify that iSCSI is up and running and the target IQN for that SVM:
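A hedged sketch of the procedure, with placeholder node, port, and address values; the iSCSI service is typically created first with vserver iscsi create, followed by the per-node LIFs in step 2 and the verification in step 4:
vserver iscsi create -vserver <SVM_name>
network interface create -vserver <SVM_name> -lif <lif_name> -role data -data-protocol iscsi -home-node <node_name> -home-port <port_name> -address <IP_address> -netmask <netmask>
vserver iscsi show -vserver <SVM_name>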
Related information
NetApp Technical Report 4080: Best Practices for Modern SAN
You can define a list of initiators and their authentication methods. You can also modify
the default authentication method that applies to initiators that do not have a user-defined
authentication method.
About this task
You can generate unique passwords using security policy algorithms in the product or you can manually
specify the passwords that you want to use.
Steps
1. Use the vserver iscsi security create command to create a security policy method for an
initiator.
Creates a security policy method for initiator iqn.1991-05.com.microsoft:host1 with inbound and outbound
CHAP user names and passwords.
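The example itself is not shown above; a hedged reconstruction (the user names are placeholders, and the secrets are entered when prompted) might be:
vserver iscsi security create -vserver vs1 -initiator-name iqn.1991-05.com.microsoft:host1 -auth-type CHAP -user-name <inbound_user_name> -outbound-user-name <outbound_user_name>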
Related information
• How iSCSI authentication works
• CHAP authentication
You can delete an iSCSI service for a storage virtual machine (SVM) if it is no longer
required.
What you’ll need
The administration status of the iSCSI service must be in the “down” state before you can delete an iSCSI
service. You can move the administration status to down with the vserver iscsi modify command.
Steps
1. Use the vserver iscsi modify command to stop the I/O to the LUN.
2. Use the vserver iscsi delete command to remove the iSCSI service from the SVM.
3. Use the vserver iscsi show command to verify that you deleted the iSCSI service from the SVM.
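A hedged sketch of steps 1 and 2 for the vs1 example used in the verification below:
vserver iscsi modify -vserver vs1 -status-admin down
vserver iscsi delete -vserver vs1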
vserver iscsi show -vserver vs1
Increasing the iSCSI session error recovery level enables you to receive more detailed
information about iSCSI error recoveries. Using a higher error recovery level might cause
a minor reduction in iSCSI session performance.
About this task
By default, ONTAP is configured to use error recovery level 0 for iSCSI sessions. If you are using an initiator
that has been qualified for error recovery level 1 or 2, you can choose to increase the error recovery level. The
modified session error recovery level affects only the newly created sessions and does not affect existing
sessions.
Beginning with ONTAP 9.4, the max-error-recovery-level option is supported only at the advanced privilege level in the iscsi show and iscsi modify commands.
Steps
1. Enter advanced mode:
vserver max-error-recovery-level
------- ------------------------
vs3 0
3. Change the error recovery level by using the iscsi modify command.
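A sketch of the sequence, assuming the initiator has been qualified for level 2; advanced privilege is required, as noted above:
set -privilege advanced
vserver iscsi modify -vserver vs3 -max-error-recovery-level 2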
You can use the vserver iscsi isns command to configure the storage virtual
machine (SVM) to register with an iSNS server.
About this task
The vserver iscsi isns create command configures the SVM to register with the iSNS server. The
SVM does not provide commands that enable you to configure or manage the iSNS server. To manage the
iSNS server, you can use the server administration tools or the interface provided by the vendor for the iSNS
server.
Steps
1. On your iSNS server, ensure that your iSNS service is up and available for service.
2. Create the SVM management LIF on a data port:
network interface create -vserver SVM_name -lif lif_name -role data
-data-protocol none -home-node home_node_name -home-port home_port
-address IP_address -netmask network_mask
3. Create an iSCSI service on your SVM if one does not already exist:
6. If a default route does not exist for the SVM, create a default route:
Both IPv4 and IPv6 address families are supported. The address family of the iSNS server must be the
same as that of the SVM management LIF.
For example, you cannot connect an SVM management LIF with an IPv4 address to an iSNS server with an
IPv6 address.
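For example, registering with an iSNS server might look like the following sketch (the address is a placeholder):
vserver iscsi isns create -vserver <SVM_name> -address <iSNS_server_IP_address>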
There are a number of common iSCSI-related error messages that you can view with the
event log show command. You need to know what these messages mean and what
you can do to resolve the issues they identify.
The following table contains the most common error messages, and instructions for resolving them:
Message: ISCSI: network interface identifier disabled for use; incoming connection discarded
Explanation: The iSCSI service is not enabled on the interface.
What to do: Use the iscsi interface enable command to enable the iSCSI service on the interface.

Message: ISCSI: Authentication failed for initiator nodename
Explanation: CHAP is not configured correctly for the specified initiator.
What to do: Check the CHAP settings; you cannot use the same user name and password for inbound and outbound settings on the storage system.
After you upgrade to ONTAP 9.11.1 or later, you should manually enable automatic LIF
failover on all iSCSI LIFs created in ONTAP 9.10.1 or earlier.
Beginning with ONTAP 9.11.1, you can enable automatic LIF failover for iSCSI LIFs on All-flash SAN Array
platforms. If a storage failover occurs, the iSCSI LIF is automatically migrated from its home node or port to its
HA partner node or port and then back once the failover is complete. Or, if the port for the iSCSI LIF becomes unhealthy, the LIF is automatically migrated to a healthy port in its current home node and then back to its original port once the port is healthy again. This enables SAN workloads running on iSCSI to resume I/O service faster after a failover.
In ONTAP 9.11.1 and later, by default, newly created iSCSI LIFs are enabled for automatic LIF failover if one of
the following conditions is true:
By default, iSCSI LIFs created in ONTAP 9.10.1 and earlier are not enabled for automatic LIF failover. If there
are iSCSI LIFs on the SVM that are not enabled for automatic LIF failover, your newly created LIFs will not be
enabled for automatic LIF failover either. If automatic LIF failover is not enabled and there is a failover event
your iSCSI LIFs will not migrate.
Step
1. Enable automatic failover for an iSCSI LIF:
To update all iSCSI LIFs on the SVM, use -lif* instead of lif.
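A hedged sketch of the command; the failover policy shown is an assumption based on typical SAN LIF settings, so verify it against the failover documentation for your release:
network interface modify -vserver <SVM_name> -lif <iscsi_lif_name> -failover-policy sfo-partner-only -auto-revert true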
If you previously enabled automatic iSCSI LIF failover on iSCSI LIFs created in ONTAP 9.10.1 or earlier, you
have the option to disable it.
Step
1. Disable automatic failover for an iSCSI LIF:
To update all iSCSI LIFs on the SVM, use -lif* instead of lif.
Related Information
• Create a LIF
• Manually migrate a LIF
• Manually revert a LIF to its home port
• Configure failover settings on a LIF
Manage FC protocol
To configure a storage virtual machine (SVM) for FC, you must create LIFs for the SVM
and assign the FC protocol to those LIFs.
Before you begin
You must have an FC license and it must be enabled. If the FC license is not enabled, the LIFs and SVMs
appear to be online but the operational status is down. The FC service must be enabled for your LIFs and
SVMs to be operational. You must use single initiator zoning for all of the FC LIFs in the SVM to host the
initiators.
Example 8. Steps
System Manager
Configure a storage VM for FC with ONTAP System Manager (9.7 and later).
CLI
1. Enable FC service on the SVM:
2. Create two LIFs for the SVMs on each node serving FC:
◦ For ONTAP 9.6 and later:
3. Verify that your LIFs have been created and that their operational status is online:
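A hedged sketch of steps 1 through 3, with placeholder node, port, and LIF names:
vserver fcp create -vserver <SVM_name> -status-admin up
network interface create -vserver <SVM_name> -lif <lif_name> -role data -data-protocol fcp -home-node <node_name> -home-port <port_name>
network interface show -vserver <SVM_name>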
Related information
NetApp Support
You can delete an FC service for a storage virtual machine (SVM) if it is no longer
required.
What you’ll need
The administration status must be “down” before you can delete an FC service for an SVM. You can set the
administration status to down with either the vserver fcp modify command or the vserver fcp stop
command.
Steps
1. Use the vserver fcp stop command to stop the I/O to the LUN.
2. Use the vserver fcp delete command to remove the service from the SVM.
3. Use the vserver fcp show command to verify that you deleted the FC service from your SVM:
For Fibre Channel over Ethernet (FCoE), jumbo frames for the Ethernet adapter portion
of the CNA should be configured at 9000 MTU. Jumbo frames for the FCoE adapter
portion of the CNA should be configured at greater than 1500 MTU. Only configure jumbo
frames if the initiator, target, and all intervening switches support and are configured for
jumbo frames.
Before you can use the NVMe protocol on your storage virtual machine (SVM), you must
start the NVMe service on the SVM.
Before you begin
NVMe must be allowed as a protocol on your system.
Steps
1. Change the privilege setting to advanced:
2. Verify that NVMe is allowed as a protocol:
If needed, you can delete the NVMe service from your storage virtual machine (SVM).
Steps
1. Change the privilege setting to advanced:
Resize a namespace
Beginning with ONTAP 9.10.1, you can use the ONTAP CLI to increase or decrease the size of an NVMe namespace. You can use System Manager to increase the size of an NVMe namespace.
System Manager
1. Click Storage > NVMe Namespaces.
2. Hover over the namespace you want to increase, click , and then click Edit.
3. Under CAPACITY, change the size of the namespace.
CLI
1. Enter the following command: vserver nvme namespace modify -vserver SVM_name -path path -size new_size_of_namespace
Decrease the size of a namespace
You must use the ONTAP CLI to decrease the size of an NVMe namespace.
Beginning with ONTAP 9.11.1, you can use the ONTAP CLI to convert an existing NVMe namespace in place to a LUN.
• Specified NVMe namespace should not have any existing maps to a Subsystem.
• Namespace should not be part of a Snapshot copy or on the destination side of SnapMirror relationship as
a read-only namespace.
• Since NVMe namespaces are only supported with specific platforms and network cards, this feature only
works with specific hardware.
Steps
1. Enter the following command to convert an NVMe namespace to a LUN:
Beginning with ONTAP 9.12.1 you can use the ONTAP command line interface (CLI) to
configure in-band (secure), bidirectional and unidirectional authentication between an
NVMe host and controller over the NVMe/TCP and NVMe/FC protocols using DH-HMAC-
CHAP authentication. Beginning with ONTAP 9.14.1, in-band authentication can be
configured in System Manager.
To set up in-band authentication, each host or controller must be associated with a DH-HMAC-CHAP key
which is a combination of the NQN of the NVMe host or controller and an authentication secret configured by
the administrator. For an NVMe host or controller to authenticate its peer, it must know the key associated with
the peer.
In unidirectional authentication, a secret key is configured for the host, but not the controller. In bidirectional
authentication, a secret key is configured for both the host and the controller.
SHA-256 is the default hash function and 2048-bit is the default DH group.
System Manager
Beginning with ONTAP 9.14.1, you can use System Manager to configure in-band authentication while
creating or updating an NVMe subsystem, creating or cloning NVMe namespaces, or adding consistency
groups with new NVMe namespaces.
Steps
1. In System Manager, click Hosts > NVMe Subsystem and then click Add.
2. Add the NVMe subsystem name, and select the storage VM and host operating system.
3. Enter the Host NQN.
4. Select Use in-band authentication next to the Host NQN.
5. Provide the host secret and controller secret.
The DH-HMAC-CHAP key is a combination of the NQN of the NVMe host or controller and an
authentication secret configured by the administrator.
6. Select the preferred hash function and DH group for each host.
If you don’t select a hash function and a DH group, SHA-256 is assigned as the default hash function
and 2048-bit is assigned as the default DH group.
7. Optionally, click Add and repeat the steps as needed to add more hosts.
8. Click Save.
9. To verify that in-band authentication is enabled, click System Manager > Hosts > NVMe Subsystem
> Grid > Peek view.
A transparent key icon next to the host name indicates that unidirectional mode is enabled. An
opaque key next to the host name indicates bidirectional mode is enabled.
CLI
Steps
1. Add DH-HMAC-CHAP authentication to your NVMe subsystem:
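An illustrative example of step 1; the secrets are placeholders, and the hash function and DH group shown mirror the defaults described above:
vserver nvme subsystem host add -vserver <SVM_name> -subsystem <subsystem_name> -host-nqn <host_NQN> -dhchap-host-secret <host_secret> -dhchap-controller-secret <controller_secret> -dhchap-hash-function sha-256 -dhchap-dh-group 2048-bit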
2. Verify that the DH-HMAC CHAP authentication protocol is added to your host:
[ -dhchap-hash-function {sha-256|sha-512} ]                              Authentication Hash Function
[ -dhchap-dh-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit} ] Authentication Diffie-Hellman Group
[ -dhchap-mode {none|unidirectional|bidirectional} ]                     Authentication Mode
3. Verify that the DH-HMAC CHAP authentication was performed during NVMe controller creation:
If you have configured in-band authentication over NVMe using DH-HMAC-CHAP, you
can choose to disable it at any time.
If you are reverting from ONTAP 9.12.1 or later to ONTAP 9.12.0 or earlier, you must disable in-band
authentication before you revert. If in-band authentication using DH-HMAC-CHAP is not disabled, revert will
fail.
Steps
1. Remove the host from the subsystem to disable DH-HMAC-CHAP authentication:
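For example, with placeholder names:
vserver nvme subsystem host remove -vserver <SVM_name> -subsystem <subsystem_name> -host-nqn <host_NQN>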
2. Verify that the DH-HMAC-CHAP authentication protocol is removed from the host:
vserver nvme subsystem host show
Beginning with ONTAP 9.14.1, you can configure your NVMe subsystem to prioritize
resource allocation for specific hosts. By default, when a host is added to the subsystem,
it is assigned a regular priority. Hosts assigned a high priority are allocated larger I/O
queue counts and queue-depths.
You can use the ONTAP command line interface (CLI) to manually change the default priority from regular to
high. To change the priority assigned to a host, you must remove the host from the subsystem and then add it
back.
Steps
1. Verify that the host priority is set to regular:
Beginning in ONTAP 9.14.1, host discovery of controllers using the NVMe/TCP protocol is
automated by default in IP-based fabrics.
If you previously disabled automated host discovery, but your needs have changed, you can re-enable it.
Steps
1. Enter advanced privilege mode:
If you do not need NVMe/TCP controllers to be automatically discovered by your host and you detect
unwanted multicast traffic on your network, you should disable this functionality.
Steps
1. Enter advanced privilege mode:
Beginning in ONTAP 9.14.1, by default, ONTAP supports the ability of NVMe/FC hosts to
identify virtual machines by a unique identifier and for NVMe/FC hosts to monitor virtual
machine resource utilization. This enhances host-side reporting and troubleshooting.
You can use the bootarg to disable this functionality.
Step
1. Disable the virtual machine identifier:
The following example disables the VMID on port 0g and port 0i.
fct_sli_appid_off == 0g,0i
Commands are available to manage onboard FC adapters and FC adapter cards. These
commands can be used to configure the adapter mode, display adapter information, and
change the speed.
Most storage systems have onboard FC adapters that can be configured as initiators or targets. You can also
use FC adapter cards configured as initiators or targets. Initiators connect to back-end disk shelves, and
possibly foreign storage arrays (FlexArray). Targets connect only to FC switches. Both the FC target HBA ports
and the switch port speed should be set to the same value and should not be set to auto.
Related information
SAN configuration
You can use FC commands to manage FC target adapters, FC initiator adapters, and
onboard FC adapters for your storage controller. The same commands are used to
manage FC adapters for the FC protocol and the FC-NVMe protocol.
FC initiator adapter commands work only at the node level. You must use the run -node node_name
command before you can use the FC initiator adapter commands.
If you want to… Use this command…
Modify FC target adapter parameters network fcp adapter modify
Display how long the FC protocol has been running run -node node_name uptime
Verify which expansion cards are installed and run -node node_name sysconfig -ac
whether there are any configuration errors
Configure FC adapters
The same steps are used when configuring FC adapters for the FC protocol and the FC-NVMe protocol.
However, only certain FC adapters support FC-NVMe. See the NetApp Hardware Universe for a list of
adapters that support the FC-NVMe protocol.
Steps
1. Take the adapter offline:
If the adapter does not go offline, you can also remove the cable from the appropriate adapter port on the
system.
Steps
1. Remove all LIFs from the adapter:
If the adapter does not go offline, you can also remove the cable from the appropriate adapter port on the
system.
4. Reboot the node hosting the adapter you changed.
5. Verify that the FC ports are configured in the correct state for your configuration:
You can use specific commands to view information about your FC/UTA adapters.
FC target adapter
Step
1. Use the network fcp adapter show command to display adapter information: network fcp
adapter show -instance -node node1 -adapter 0a
The output displays system configuration information and adapter information for each slot that is used.
Steps
1. Boot your controller without the cables attached.
2. Run the system hardware unified-connect show command to see the port configuration and
modules.
3. View the port information before configuring the CNA and ports.
You should change the UTA2 port from Converged Network Adapter (CNA) mode to Fibre
Channel (FC) mode to support the FC initiator and FC target mode. You should change
the personality from CNA mode to FC mode when you need to change the physical
medium that connects the port to its network.
Steps
1. Take the adapter offline:
4. Notify your admin or VIF manager to delete or remove the port, as applicable:
◦ If the port is used as a home port of a LIF, is a member of an interface group (ifgrp), or hosts VLANs,
then an admin should do the following:
i. Move the LIFs, remove the port from the ifgrp, or delete the VLANs, respectively.
ii. Manually delete the port by running the network port delete command.
If the network port delete command fails, the admin should address the errors, and then run
the command again.
◦ If the port is not used as the home port of a LIF, is not a member of an ifgrp, and does not host VLANs,
then the VIF manager should remove the port from its records at the time of reboot.
If the VIF manager does not remove the port, then the admin must remove it manually after the reboot
by using the network port delete command.
Node: net-f8040-34-01
                                                      Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU      Admin/Oper  Status
--------- ------------ ---------------- ---- -------- ----------- --------
...
e0i       Default      Default          down 1500     auto/10     -
e0f       Default      Default          down 1500     auto/10     -
...
net-f8040-34::> network interface show -fields home-port, curr-port
For CNA, you should use a 10Gb Ethernet SFP. For FC, you should either use an 8 Gb SFP or a 16 Gb
SFP, before changing the configuration on the node.
You should change the optical modules on the unified target adapter (CNA/UTA2) to
support the personality mode you have selected for the adapter.
Steps
1. Verify the current SFP+ used in the card. Then, replace the current SFP+ with the appropriate SFP+ for the
preferred personality (FC or CNA).
2. Remove the current optical modules from the X1143A-R6 adapter.
3. Insert the correct modules for your preferred personality mode (FC or CNA) optics.
4. Verify that you have the correct SFP+ installed:
Supported SFP+ modules and Cisco-branded Copper (Twinax) cables are listed in the Hardware Universe.
Related information
NetApp Hardware Universe
The FC target mode is the default configuration for X1143A-R6 adapter ports. However,
ports on this adapter can be configured as either 10-Gb Ethernet and FCoE ports or as
16-Gb FC ports.
When configured for Ethernet and FCoE, X1143A-R6 adapters support concurrent NIC and FCoE target traffic
on the same 10-GBE port. When configured for FC, each two-port pair that shares the same ASIC can be
individually configured for FC target or FC initiator mode. This means that a single X1143A-R6 adapter can
support FC target mode on one two-port pair and FC initiator mode on another two-port pair.
Related information
NetApp Hardware Universe
SAN configuration
To configure the unified target adapter (X1143A-R6), you must configure the two adjacent
ports on the same chip in the same personality mode.
Steps
1. Configure the ports as needed for Fibre Channel (FC) or Converged Network Adapter (CNA) using the
system node hardware unified-connect modify command.
2. Attach the appropriate cables for FC or 10 Gb Ethernet.
3. Verify that you have the correct SFP+ installed:
For CNA, you should use a 10Gb Ethernet SFP. For FC, you should either use an 8 Gb SFP or a 16 Gb
SFP, based on the FC fabric being connected to.
You can prevent loss of connectivity during a port failure by configuring your system with
redundant paths to separate X1133A-R6 HBAs.
The X1133A-R6 HBA is a 4-port, 16 Gb FC adapter consisting of two 2-port pairs. The X1133A-R6 adapter can
be configured as target mode or initiator mode. Each 2-port pair is supported by a single ASIC (for example,
Port 1 and Port 2 on ASIC 1 and Port 3 and Port 4 on ASIC 2). Both ports on a single ASIC must be configured
to operate in the same mode, either target mode or initiator mode. If an error occurs with the ASIC supporting a
pair, both ports in the pair go offline.
To prevent this loss of connectivity, you configure your system with redundant paths to separate X1133A-R6
HBAs, or with redundant paths to ports supported by different ASICs on the HBA.
LIFs are connected to the SAN hosts. They can be removed from port sets, moved to
different nodes within a storage virtual machine (SVM), and deleted.
Related information
Network management
Steps
1. Create the LIF:
2. Verify that the LIF was created:
You only need to perform a LIF movement if you are changing the contents of your
cluster, for example, adding nodes to the cluster or deleting nodes from the cluster. If you
perform a LIF movement, you do not have to re-zone your FC fabric or create new iSCSI
sessions between the attached hosts of your cluster and the new target interface.
You cannot move a SAN LIF using the network interface move command. SAN LIF movement must be
performed by taking the LIF offline, moving the LIF to a different home node or port, and then bringing it back
online in its new location. Asymmetric Logical Unit Access (ALUA) provides redundant paths and automatic
path selection as part of any ONTAP SAN solution. Therefore, there is no I/O interruption when the LIF is taken
offline for the movement. The host simply retries and then moves I/O to another LIF.
• Replace one HA pair of a cluster with an upgraded HA pair in a way that is transparent to hosts accessing
LUN data
• Upgrade a target interface card
• Shift the resources of a storage virtual machine (SVM) from one set of nodes in a cluster to another set of
nodes in the cluster
If the LIF you want to delete or move is in a port set, you must remove the LIF from the
port set before you can delete or move the LIF.
About this task
You need to do Step 1 in the following procedure only if one LIF is in the port set. You cannot remove the last
LIF in a port set if the port set is bound to an initiator group. Otherwise, you can start with Step 2 if multiple
LIFs are in the port set.
Steps
1. If only one LIF is in the port set, use the lun igroup unbind command to unbind the port set from the
initiator group.
When you unbind an initiator group from a port set, all of the initiators in the initiator group
have access to all target LUNs mapped to the initiator group on all network interfaces.
2. Use the lun portset remove command to remove the LIF from the port set.
cluster1::> lun portset remove -vserver vs1 -portset ps1 -port-name lif1
Move a SAN LIF
If a node needs to be taken offline, you can move a SAN LIF to preserve its configuration
information, such as its WWPN, and avoid rezoning the switch fabric. Because a SAN LIF
must be taken offline before it is moved, host traffic must rely on host multipathing
software to provide nondisruptive access to the LUN. You can move SAN LIFs to any
node in a cluster, but you cannot move the SAN LIFs between storage virtual machines
(SVMs).
What you’ll need
If the LIF is a member of a port set, the LIF must have been removed from the port set before the LIF can be
moved to a different node.
Steps
1. View the administrative and operational status of the LIF:
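A hedged sketch of the offline-move-online sequence described above (LIF, node, and port names are placeholders):
network interface show -vserver <SVM_name> -lif <lif_name>
network interface modify -vserver <SVM_name> -lif <lif_name> -status-admin down
network interface modify -vserver <SVM_name> -lif <lif_name> -home-node <new_node_name> -home-port <new_port_name>
network interface modify -vserver <SVM_name> -lif <lif_name> -status-admin up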
Before you delete a LIF, you should ensure that the host connected to the LIF can access
the LUNs through another path.
What you’ll need
If the LIF you want to delete is a member of a port set, you must first remove the LIF from the port set before
you can delete the LIF.
System Manager
Delete a LIF with ONTAP System Manager (9.7 and later).
Steps
1. In System Manager, click Network > Overview, and then select Network Interfaces.
2. Select the storage VM from which you want to delete the LIF.
3. Click and select Delete.
CLI
Delete a LIF with the ONTAP CLI.
Steps
1. Verify the name of the LIF and current port to be deleted:
• You must create LIFs on the new nodes so that the LUN and volume movements are possible without
using the cluster interconnect network.
Configure iSCSI LIFs to return FQDN to host iSCSI SendTargets Discovery Operation
Beginning with ONTAP 9, iSCSI LIFs can be configured to return a Fully Qualified
Domain Name (FQDN) when a host OS sends an iSCSI SendTargets Discovery
Operation. Returning a FQDN is useful when there is a Network Address Translation
(NAT) device between the host OS and the storage service.
About this task
IP addresses on one side of the NAT device are meaningless on the other side, but FQDNs can have meaning
on both sides.
The FQDN value interoperability limit is 128 characters on all host OS.
Steps
1. Change the privilege setting to advanced:
In the following example, the iSCSI LIFs are configured to return storagehost-005.example.com as the
FQDN.
Related information
ONTAP 9 Commands
Recommended volume and file or LUN configuration combinations
There are specific combinations of FlexVol volume and file or LUN configurations you can
use, depending on your application and administration requirements. Understanding the
benefits and costs of these combinations can help you determine the right volume and
LUN configuration combination for your environment.
The following volume and LUN configuration combinations are recommended:
You can use SCSI thin provisioning on your LUNs in conjunction with any of these configuration combinations.
Space-reserved files or LUNs with thick volume provisioning
Benefits:
• All write operations within space-reserved files are guaranteed; they will not fail due to insufficient space.
• There are no restrictions on storage efficiency and data protection technologies on the volume.
Costs and limitations:
• Enough space must be set aside from the aggregate up front to support the thickly provisioned volume.
• Space equal to twice the size of the LUN is allocated from the volume at LUN creation time.
Non-space-reserved files or LUNs with thin volume provisioning
Benefits:
• There are no restrictions on storage efficiency and data protection technologies on the volume.
• Space is allocated only as it is used.
Costs and limitations:
• Write operations are not guaranteed; they can fail if the volume runs out of free space.
• You must manage the free space in the aggregate effectively to prevent the aggregate from running out of free space.
Space-reserved files or LUNs with semi-thick volume provisioning
Benefits:
• Less space is reserved up front than for thick volume provisioning, and a best-effort write guarantee is still provided.
Costs and limitations:
• Write operations can fail with this option.
You can mitigate this risk by properly balancing free space in the volume against data volatility.
• You cannot rely on retention of data protection objects such as Snapshot copies and FlexClone files and
LUNs.
• You cannot use ONTAP block-sharing storage efficiency capabilities that cannot be automatically deleted,
including deduplication, compression, and ODX/Copy Offload.
Determine the correct volume and LUN configuration combination for your environment
Answering a few basic questions about your environment can help you determine the
best FlexVol volume and LUN configuration for your environment.
About this task
You can optimize your LUN and volume configurations for maximum storage utilization or for the security of
write guarantees. Based on your requirements for storage utilization and your ability to monitor and replenish
free space quickly, you must determine the FlexVol volume and LUN configuration appropriate for your installation.
Step
1. Use the following decision tree to determine the best volume and LUN configuration combination for your
environment:
Calculate rate of data growth for LUNs
You need to know the rate at which your LUN data is growing over time to determine
whether you should use space-reserved LUNs or non-space-reserved LUNs.
About this task
If you have a consistently high rate of data growth, then space-reserved LUNs might be a better option for you.
If you have a low rate of data growth, then you should consider non-space-reserved LUNs.
You can use tools such as OnCommand Insight to calculate your rate of data growth or you can calculate it
manually. The following steps are for manual calculation.
Steps
1. Set up a space-reserved LUN.
2. Monitor the data on the LUN for a set period of time, such as one week.
Make sure that your monitoring period is long enough to form a representative sample of regularly
occurring increases in data growth. For instance, you might consistently have a large amount of data
growth at the end of each month.
Example
In this example, you need a 200 GB LUN. You decide to monitor the LUN for a week and record the following
daily data changes:
• Sunday: 20 GB
• Monday: 18 GB
• Tuesday: 17 GB
• Wednesday: 20 GB
• Thursday: 20 GB
• Friday: 23 GB
• Saturday: 22 GB
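As a rough worked illustration using the sample values above, the week's changes sum to about 140 GB, an average of roughly 20 GB of growth per day; weighed against the 200 GB LUN, that rate is one input into the space-reserved versus non-space-reserved decision described earlier.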
This FlexVol volume and file or LUN configuration combination provides the ability to use
storage efficiency technologies and does not require you to actively monitor your free
space, because sufficient space is allocated up front.
The following settings are required to configure a space-reserved file or LUN in a volume using thick
provisioning:
Volume setting Value
Guarantee Volume
This FlexVol volume and file or LUN configuration combination requires the smallest
amount of storage to be allocated up front, but requires active free space management to
prevent errors due to lack of space.
The following settings are required to configure a non-space-reserved file or LUN in a thin-provisioned volume:
Fractional reserve 0
Autogrow Optional
Additional considerations
When the volume or aggregate runs out of space, write operations to the file or LUN can fail.
If you do not want to actively monitor free space for both the volume and the aggregate, you should enable
Autogrow for the volume and set the maximum size for the volume to the size of the aggregate. In this
configuration, you must monitor aggregate free space actively, but you do not need to monitor the free space in
the volume.
Configuration settings for space-reserved files or LUNs with semi-thick volume provisioning
This FlexVol volume and file or LUN configuration combination requires less storage to be
allocated up front than the fully provisioned combination, but places restrictions on the
efficiency technologies you can use for the volume. Overwrites are fulfilled on a best-
effort basis for this configuration combination.
The following settings are required to configure a space-reserved LUN in a volume using semi-thick
provisioning:
Fractional reserve 0
Snapshot reserve 0
Technology restrictions
You cannot use the following volume storage efficiency technologies for this configuration combination:
• Compression
• Deduplication
• ODX and FlexClone Copy Offload
• FlexClone LUNs and FlexClone files not marked for automatic deletion (active clones)
• FlexClone subfiles
• ODX/Copy Offload
Additional considerations
The following facts must be considered when employing this configuration combination:
• When the volume that supports that LUN runs low on space, protection data (FlexClone LUNs and files,
Snapshot copies) is destroyed.
• Write operations can time out and fail when the volume runs out of free space.
Compression is enabled by default for AFF platforms. You must explicitly disable compression for any volume
for which you want to use semi-thick provisioning on an AFF platform.
Beginning with general availability in ONTAP 9.9.1, SnapMirror Business Continuity (SM-BC) provides Zero Recovery Time Objective (Zero RTO) or Transparent Application Failover (TAF) to enable automatic failover of business-critical applications in SAN environments. SM-BC requires the installation of ONTAP Mediator 1.2 in a configuration with either two AFF clusters or two All-Flash SAN Array (ASA) clusters.
Snapshot copy
Enables you to manually or automatically create, schedule, and maintain multiple backups of your LUNs.
Snapshot copies use only a minimal amount of additional volume space and do not have a performance cost. If
your LUN data is accidentally modified or deleted, that data can easily and quickly be restored from one of the
latest Snapshot copies.
FlexClone LUNs
Provides point-in-time, writable copies of another LUN in an active volume or in a Snapshot copy. A clone and
its parent can be modified independently without affecting each other.
SnapRestore (license required)
Enables you to perform fast, space-efficient, on-request data recovery from Snapshot copies on an entire
volume. You can use SnapRestore to restore a LUN to an earlier preserved state without rebooting the storage
system.
Data protection mirror copies (SnapMirror license required)
Provides asynchronous disaster recovery by enabling you to periodically create Snapshot copies of data on
your volume; copy those Snapshot copies over a local or wide area network to a partner volume, usually on
another cluster; and retain those Snapshot copies. The mirror copy on the partner volume provides quick
availability and restoration of data from the time of the last Snapshot copy, if the data on the source volume is
corrupted or lost.
SnapVault backups (SnapMirror license required)
Provides storage efficient and long-term retention of backups. SnapVault relationships enable you to back up
selected Snapshot copies of volumes to a destination volume and retain the backups.
If you conduct tape backups and archival operations, you can perform them on the data that is already backed
up on the SnapVault secondary volume.
SnapDrive for Windows or UNIX (SnapDrive license required)
Configures access to LUNs, manages LUNs, and manages storage system Snapshot copies directly from
Windows or UNIX hosts.
Native tape backup and recovery
Support for most existing tape drives is included in ONTAP, as well as a method for tape vendors to
dynamically add support for new devices. ONTAP also supports the Remote Magnetic Tape (RMT) protocol,
enabling backup and recovery to any capable system.
Related information
NetApp Documentation: SnapDrive for UNIX
Snapshot copies are created at the volume level. If you copy or move a LUN to a different
volume, the Snapshot copy policy of the destination volume is applied to the copied or
moved volume. If Snapshot copies are not established for the destination volume,
Snapshot copies will not be created of the moved or copied LUN.
You can restore a single LUN from a Snapshot copy without restoring the entire volume
that contains the single LUN. You can restore the LUN in place or to a new path in the
volume. The operation restores only the single LUN without impacting other files or LUNs
in the volume. You can also restore files with streams.
What you’ll need
• You must have enough space on your volume to complete the restore operation:
◦ If you are restoring a space-reserved LUN where the fractional reserve is 0%, you require space equal to
the size of the restored LUN.
◦ If you are restoring a space-reserved LUN where the fractional reserve is 100%, you require space equal
to twice the size of the restored LUN.
◦ If you are restoring a non-space-reserved LUN, you only require the actual space used for the restored
LUN.
• A Snapshot copy of the destination LUN must have been created.
If the restore operation fails, the destination LUN might be truncated. In such cases, you can use the
Snapshot copy to prevent data loss.
In rare cases, the LUN restore can fail, leaving the source LUN unusable. If this occurs, you can use the
Snapshot copy to return the LUN to the state just before the restore attempt.
• The destination LUN and source LUN must have the same OS type.
If your destination LUN has a different OS type from your source LUN, your host can lose data access to
the destination LUN after the restore operation.
Steps
1. From the host, stop all host access to the LUN.
2. Unmount the LUN on its host so that the host cannot access the LUN.
3. Unmap the LUN:
4. Determine the Snapshot copy you want to restore your LUN to:
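For example, with placeholder names for the SVM (vs5), volume (vol5), LUN (lun1), and igroup (ig1), the
unmap and Snapshot copy lookup might look like the following. The restore itself can then be performed in
place with the volume snapshot restore-file command, using the Snapshot copy you identified:
cluster::> lun mapping delete -vserver vs5 -volume vol5 -lun lun1 -igroup ig1
cluster::> volume snapshot show -vserver vs5 -volume vol5
cluster::> volume snapshot restore-file -vserver vs5 -volume vol5 -snapshot vol5_snap -path /vol/vol5/lun1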
Restore all LUNs in a volume from a Snapshot copy
You can use volume snapshot restore command to restore all the LUNs in a
specified volume from a Snapshot copy.
Steps
1. From the host, stop all host access to the LUNs.
Using SnapRestore without stopping all host access to LUNs in the volume can cause data corruption and
system errors.
2. Unmount the LUNs on that host so that the host cannot access the LUNs.
3. Unmap your LUNs:
4. Determine the Snapshot copy to which you want to restore your volume:
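For example, using placeholder names (SVM vs3, volume vol3, igroup ig1, and a Snapshot copy named
snap1), the unmap, Snapshot copy lookup, and volume restore might look like the following:
cluster::> lun mapping delete -vserver vs3 -volume vol3 -lun lun1 -igroup ig1
cluster::> volume snapshot show -vserver vs3 -volume vol3
cluster::> volume snapshot restore -vserver vs3 -volume vol3 -snapshot snap1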
Delete one or more existing Snapshot copies from a volume
You can manually delete one or more existing Snapshot copies from the volume. You
might want to do this if you need more space on your volume.
Steps
1. Use the volume snapshot show command to verify which Snapshot copies you want to delete.
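For example, a command such as the following (the SVM and volume names match the sample output below)
produces this kind of listing:
cluster::> volume snapshot show -vserver vs3 -volume vol3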
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- ------- ----------------------- ----- ------ -----
vs3 vol3
snap1.2013-05-01_0015 100KB 0% 38%
snap1.2013-05-08_0015 76KB 0% 32%
snap2.2013-05-09_0010 76KB 0% 32%
snap2.2013-05-10_0010 76KB 0% 32%
snap3.2013-05-10_1005 72KB 0% 31%
snap3.2013-05-10_1105 72KB 0% 31%
snap3.2013-05-10_1205 72KB 0% 31%
snap3.2013-05-10_1305 72KB 0% 31%
snap3.2013-05-10_1405 72KB 0% 31%
snap3.2013-05-10_1505 72KB 0% 31%
10 entries were displayed.
The following example deletes all Snapshot copies on the volume vol3.
cluster::> volume snapshot delete -vserver vs3 -volume vol3 *
When you clone a LUN, block sharing occurs in the background and you cannot create a volume Snapshot
copy until the block sharing is finished.
You must configure the volume to enable the FlexClone LUN automatic deletion function with the volume
snapshot autodelete modify command. Otherwise, if you want FlexClone LUNs to be deleted
automatically but the volume is not configured for FlexClone auto delete, none of the FlexClone LUNs are
deleted.
When you create a FlexClone LUN, the FlexClone LUN automatic deletion function is disabled by default. You
must manually enable it on every FlexClone LUN before that FlexClone LUN can be automatically deleted. If
you are using semi-thick volume provisioning and you want the “best effort” write guarantee provided by this
option, you must make all FlexClone LUNs available for automatic deletion.
When you create a FlexClone LUN from a Snapshot copy, the LUN is automatically split from
the Snapshot copy by using a space-efficient background process so that the LUN does not
continue to depend on the Snapshot copy or consume any additional space. If this background
split has not been completed and this Snapshot copy is automatically deleted, that FlexClone
LUN is deleted even if you have disabled the FlexClone auto delete function for that FlexClone
LUN. After the background split is complete, the FlexClone LUN is not deleted even if that
Snapshot copy is deleted.
Related information
Logical storage management
You can use FlexClone LUNs to create multiple read/write copies of a LUN.
You might want to do this for the following reasons:
• You want to create a clone of a database for manipulation and projection operations, while preserving the
original data in an unaltered form.
• You want to access a specific subset of a LUN’s data (a specific logical volume or file system in a volume
group, or a specific file or set of files in a file system) and copy it to the original LUN, without restoring the
rest of the data in the original LUN. This works on operating systems that support mounting a LUN and a
clone of the LUN at the same time. SnapDrive for UNIX supports this with the snap connect command.
• You need multiple SAN boot hosts with the same operating system.
How a FlexVol volume can reclaim free space with autodelete setting
You can enable the autodelete setting of a FlexVol volume to automatically delete
FlexClone files and FlexClone LUNs. By enabling autodelete, you can reclaim a target
amount of free space in the volume when a volume is nearly full.
You can configure a volume to automatically start deleting FlexClone files and FlexClone LUNs when the free
space in the volume decreases below a particular threshold value, and automatically stop deleting clones when
a target amount of free space in the volume is reclaimed. Although you cannot specify the threshold value that
starts the automatic deletion of clones, you can specify whether a clone is eligible for deletion, and you can
specify the target amount of free space for a volume.
A volume automatically deletes FlexClone files and FlexClone LUNs when the free space in the volume
decreases below a particular threshold and when both of the following requirements are met:
• The autodelete capability is enabled for the volume that contains the FlexClone files and FlexClone LUNs.
You can enable the autodelete capability for a FlexVol volume by using the volume snapshot
autodelete modify command. You must set the -trigger parameter to volume or snap_reserve
for a volume to automatically delete FlexClone files and FlexClone LUNs.
• The autodelete capability is enabled for the FlexClone files and FlexClone LUNs.
You can enable autodelete for a FlexClone file or FlexClone LUN by using the file clone create
command with the -autodelete parameter. As a result, you can preserve certain FlexClone files and
FlexClone LUNs by disabling autodelete for the clones and ensuring that other volume settings do not
override the clone setting.
Configure a FlexVol volume to automatically delete FlexClone files and FlexClone LUNs
You can enable a FlexVol volume to automatically delete FlexClone files and FlexClone
LUNs with autodelete enabled when the free space in the volume decreases below a
particular threshold.
What you’ll need
• The FlexVol volume must contain FlexClone files and FlexClone LUNs and be online.
• The FlexVol volume must not be a read-only volume.
Steps
1. Enable automatic deletion of FlexClone files and FlexClone LUNs in the FlexVol volume by using the
volume snapshot autodelete modify command.
◦ For the -trigger parameter, you can specify volume or snap_reserve.
◦ For the -destroy-list parameter, you must always specify lun_clone,file_clone regardless
of whether you want to delete only one type of clone.
The following example shows how you can enable volume vol1 to trigger the automatic deletion of
FlexClone files and FlexClone LUNs for space reclamation until 25% of the volume consists of free
space:
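A command along the following lines accomplishes this; the SVM name (vs1) and the commitment level
(disrupt) shown here are illustrative assumptions:
cluster::> volume snapshot autodelete modify -vserver vs1 -volume vol1 -enabled true -commitment disrupt -trigger volume -target-free-space 25 -destroy-list lun_clone,file_clone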
While enabling FlexVol volumes for automatic deletion, if you set the value of the
-commitment parameter to destroy, all the FlexClone files and FlexClone LUNs with
the -autodelete parameter set to true might be deleted when the free space in the
volume decreases below the specified threshold value. However, FlexClone files and
FlexClone LUNs with the -autodelete parameter set to false will not be deleted.
2. Verify that automatic deletion of FlexClone files and FlexClone LUNs is enabled in the FlexVol volume by
using the volume snapshot autodelete show command.
The following example shows that volume vol1 is enabled for automatic deletion of FlexClone files and
FlexClone LUNs:
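For example (vs1 is a placeholder SVM name):
cluster::> volume snapshot autodelete show -vserver vs1 -volume vol1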
3. Ensure that autodelete is enabled for the FlexClone files and FlexClone LUNs in the volume that you want
to delete by performing the following steps:
a. Enable automatic deletion of a particular FlexClone file or FlexClone LUN by using the volume file
clone autodelete command.
You can force a specific FlexClone file or FlexClone LUN to be automatically deleted by using the
volume file clone autodelete command with the -force parameter.
The following example shows that automatic deletion of the FlexClone LUN lun1_clone contained in
volume vol1 is enabled:
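A command along these lines enables it; the clone path shown is an assumption about how the clone is
addressed, so adjust it to your environment:
cluster::> volume file clone autodelete -vserver vs1 -clone-path vol1/lun1_clone -enable true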
You can enable autodelete when you create FlexClone files and FlexClone LUNs.
b. Verify that the FlexClone file or FlexClone LUN is enabled for automatic deletion by using the volume
file clone show-autodelete command.
The following example shows that the FlexClone LUN lun1_clone is enabled for automatic deletion:
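For example (same placeholder clone path as above):
cluster::> volume file clone show-autodelete -vserver vs1 -clone-path vol1/lun1_clone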
For more information about using the commands, see the respective man pages.
You can create copies of your LUNs by cloning the LUNs in the active volume. These
FlexClone LUNs are readable and writeable copies of the original LUNs in the active
volume.
What you’ll need
A FlexClone license must be installed.
Steps
1. Verify that the LUNs are not mapped to an igroup or being written to before making the clone.
2. Use the lun show command to verify that the LUN exists.
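For example, a command like the following produces the listing shown below:
cluster::> lun show -vserver vs1 -path /vol/vol1/lun1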
Vserver Path State Mapped Type Size
-------- ----------------- --------- --------- -------- -------
vs1 /vol/vol1/lun1 online unmapped windows 47.07MB
3. Use the volume file clone create command to create the FlexClone LUN.
volume file clone create -vserver vs1 -volume vol1 -source-path lun1
-destination-path lun1_clone
If you need the FlexClone LUN to be available for automatic deletion, you include -autodelete true. If
you are creating this FlexClone LUN in a volume using semi-thick provisioning, you must enable automatic
deletion for all FlexClone LUNs.
4. Use the lun show command to verify that you created a LUN.
You can use a Snapshot copy in your volume to create FlexClone copies of your LUNs.
FlexClone copies of LUNs are both readable and writeable.
What you’ll need
A FlexClone license must be installed.
Steps
1. Verify that the LUN is not mapped or being written to.
2. Create a Snapshot copy of the volume that contains the LUNs:
You must create a Snapshot copy (the backing Snapshot copy) of the LUN you want to clone.
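For example (the Snapshot copy name is a placeholder):
cluster::> volume snapshot create -vserver vs1 -volume vol1 -snapshot vol1_snapshot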
3. Use the volume file clone create command to create the FlexClone LUN from the Snapshot copy:
volume file clone create -vserver vserver_name -volume volume_name -source-path
source_path -snapshot-name snapshot_name -destination-path destination_path
If you need the FlexClone LUN to be available for automatic deletion, you include -autodelete true. If
you are creating this FlexClone LUN in a volume using semi-thick provisioning, you must enable automatic
deletion for all FlexClone LUNs.
Prevent a specific FlexClone file or FlexClone LUN from being automatically deleted
If you configure a FlexVol volume to automatically delete FlexClone files and FlexClone
LUNs, any clone that fits the criteria you specify might be deleted. If you have specific
FlexClone files or FlexClone LUNs that you want to preserve, you can exclude them from
the automatic FlexClone deletion process.
What you’ll need
A FlexClone license must be installed.
If you set the commitment level on the volume to try or disrupt, you can individually
preserve specific FlexClone files or FlexClone LUNs by disabling autodelete for those clones.
However, if you set the commitment level on the volume to destroy and the destroy lists
include lun_clone,file_clone, the volume setting overrides the clone setting, and all
FlexClone files and FlexClone LUNs can be deleted regardless of the autodelete setting for the
clones.
Steps
1. Prevent a specific FlexClone file or FlexClone LUN from being automatically deleted by using the volume
file clone autodelete command.
The following example shows how you can disable autodelete for FlexClone LUN lun1_clone contained in
vol1:
A FlexClone file or FlexClone LUN with autodelete disabled cannot be deleted automatically to reclaim
space on the volume.
2. Verify that autodelete is disabled for the FlexClone file or FlexClone LUN by using the volume file
clone show-autodelete command.
The following example shows that autodelete is false for the FlexClone LUN lun1_clone:
The procedure for creating and initializing the SnapVault relationship between a primary volume containing
LUNs and a secondary volume acting as a SnapVault backup is identical to the procedure used with FlexVol
volumes used for file protocols. This procedure is described in detail in Data Protection.
It is important to ensure that LUNs being backed up are in a consistent state before the Snapshot copies are
created and copied to the SnapVault secondary volume. Automating the Snapshot copy creation with
SnapCenter ensures that backed up LUNs are complete and usable by the original application.
There are three basic choices for restoring LUNs from a SnapVault secondary volume:
• You can map a LUN directly from the SnapVault secondary volume and connect a host to the LUN to
access the contents of the LUN.
The LUN is read-only and you can map only from the most recent Snapshot copy in the SnapVault backup.
Persistent reservations and other LUN metadata are lost. If desired, you can use a copy program on the
host to copy the LUN contents back to the original LUN if it is still accessible.
The LUN has a different serial number from the source LUN.
• You can clone any Snapshot copy in the SnapVault secondary volume to a new read-write volume.
You can then map any of the LUNs in the volume and connect a host to the LUN to access the contents of
the LUN. If desired, you can use a copy program on the host to copy the LUN contents back to the original
LUN if it is still accessible.
• You can restore the entire volume containing the LUN from any Snapshot copy in the SnapVault secondary
volume.
Restoring the entire volume replaces all of the LUNs, and any files, in the volume. Any new LUNs created
since the Snapshot copy was created are lost.
The LUNs retain their mapping, serial numbers, UUIDs, and persistent reservations.
You can access a read-only copy of a LUN from the latest Snapshot copy in a SnapVault
backup. The LUN ID, path, and serial number are different from the source LUN and must
first be mapped. Persistent reservations, LUN mappings, and igroups are not replicated
to the SnapVault secondary volume.
What you’ll need
• The SnapVault relationship must be initialized and the latest Snapshot copy in the SnapVault secondary
volume must contain the desired LUN.
• The storage virtual machine (SVM) containing the SnapVault backup must have one or more LIFs with the
desired SAN protocol accessible from the host used to access the LUN copy.
• If you plan to access LUN copies directly from the SnapVault secondary volume, you must create your
igroups on the SnapVault SVM in advance.
You can access a LUN directly from the SnapVault secondary volume without having to first restore or
clone the volume containing the LUN.
Steps
1. Run the lun show command to list the available LUNs in the SnapVault secondary volume.
In this example, you can see both the original LUNs in the primary volume srcvolA and the copies in the
SnapVault secondary volume dstvolB:
cluster::> lun show
2. If the igroup for the desired host does not already exist on the SVM containing the SnapVault secondary
volume, run the igroup create command to create an igroup.
This command creates an igroup for a Windows host that uses the iSCSI protocol:
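For example (the igroup name and initiator IQN are placeholders):
cluster::> lun igroup create -vserver vserverB -igroup temp_igroup -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:hostA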
3. Run the lun mapping create command to map the desired LUN copy to the igroup.
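For example, assuming the LUN copy shown by lun show is at /vol/dstvolB/lun_A (a placeholder path):
cluster::> lun mapping create -vserver vserverB -path /vol/dstvolB/lun_A -igroup temp_igroup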
4. Connect the host to the LUN and access the contents of the LUN as desired.
You can restore a single LUN to a new location or to the original location. You can restore
from any Snapshot copy in the SnapVault secondary volume. To restore the LUN to the
original location, you first restore it to a new location and then copy it.
What you’ll need
• The SnapVault relationship must be initialized and the SnapVault secondary volume must contain an
appropriate Snapshot copy to restore.
• The storage virtual machine (SVM) containing the SnapVault secondary volume must have one or more
LIFs with the desired SAN protocol that are accessible from the host used to access the LUN copy.
• The igroups must already exist on the SnapVault SVM.
You restore the LUN by creating a read-write FlexClone copy of the SnapVault secondary volume from a
Snapshot copy in that volume. You can use the LUN directly from the clone, or you can optionally copy the
LUN contents back to the original LUN location.
The LUN in the clone has a different path and serial number from the original LUN. Persistent reservations are
not retained.
Steps
1. Run the snapmirror show command to verify the secondary volume that contains the SnapVault
backup.
2. Run the volume snapshot show command to identify the Snapshot copy that you want to restore the
LUN from.
3. Run the volume clone create command to create a read-write clone from the desired Snapshot copy.
The volume clone is created in the same aggregate as the SnapVault backup. There must be enough
space in the aggregate to store the clone.
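For example, a command along these lines creates a clone named dstvolB_clone, which matches the name
used in the next step; the parent Snapshot copy name is a placeholder:
cluster::> volume clone create -vserver vserverB -flexclone dstvolB_clone -type RW -parent-volume dstvolB -parent-snapshot daily.2013-02-10_0010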
4. Run the lun show command to list the LUNs in the volume clone.
cluster::> lun show -vserver vserverB -volume dstvolB_clone
5. If the igroup for the desired host does not already exist on the SVM containing the SnapVault backup, run
the igroup create command to create an igroup.
This example creates an igroup for a Windows host that uses the iSCSI protocol:
6. Run the lun mapping create command to map the desired LUN copy to the igroup.
7. Connect the host to the LUN and access the contents of the LUN, as desired.
The LUN is read-write and can be used in place of the original LUN. Because the LUN serial number is
different, the host interprets it as a different LUN from the original.
8. Use a copy program on the host to copy the LUN contents back to the original LUN.
If one or more LUNs in a volume need to be restored from a SnapVault backup, you can
restore the entire volume. Restoring the volume affects all LUNs in the volume.
What you’ll need
The SnapVault relationship must be initialized and the SnapVault secondary volume must contain an
appropriate Snapshot copy to restore.
After restoring the volume, the LUNs remain mapped to the igroups they were mapped to just before the
restore. The LUN mapping might be different from the mapping at the time of the Snapshot copy. Persistent
reservations on the LUNs from host clusters are retained.
Steps
1. Stop I/O to all LUNs in the volume.
2. Run the snapmirror show command to verify the secondary volume that contains the SnapVault
secondary volume.
3. Run the volume snapshot show command to identify the Snapshot copy that you want to restore from.
4. Run the snapmirror restore command and specify the -source-snapshot option to specify the
Snapshot copy to use.
The destination you specify for the restore is the original volume you are restoring to.
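For example, using the volume names from the surrounding discussion (the SVM names and the Snapshot
copy name are placeholders):
cluster::> snapmirror restore -source-path vserverB:dstvolB -destination-path vserverA:srcvolA -source-snapshot daily.2013-02-10_0010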
5. If you are sharing LUNs across a host cluster, restore the persistent reservations on the LUNs from the
affected hosts.
In the following example, the LUN named lun_D was added to the volume after the Snapshot copy was
created. After restoring the entire volume from the Snapshot copy, lun_D no longer appears.
In the lun show command output, you can see the LUNs in the primary volume srcvolA and the read-only
copies of those LUNs in the SnapVault secondary volume dstvolB. There is no copy of lun_D in the SnapVault
backup.
After the volume is restored from the SnapVault secondary volume, the source volume no longer contains
lun_D. You do not need to remap the LUNs in the source volume after the restore because they are still
mapped.
How you can connect a host backup system to the primary storage system
You can back up SAN systems to tape through a separate backup host to avoid
performance degradation on the application host.
It is imperative that you keep SAN and NAS data separated for backup purposes. The figure below shows the
recommended physical configuration for a host backup system to the primary storage system. You must
configure volumes as SAN-only. LUNs can be confined to a single volume or the LUNs can be spread across
multiple volumes or storage systems.
Volumes on a host can consist of a single LUN mapped from the storage system or multiple LUNs using a
volume manager, such as VxVM on HP-UX systems.
Steps
1. Save the contents of the host file system buffers to disk.
You can use the command provided by your host operating system, or you can use SnapDrive for Windows
or SnapDrive for UNIX. You can also opt to make this step part of your SAN backup pre-processing script.
2. Use the volume snapshot create command to create a Snapshot copy of the production LUN.
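For example (the names vs3, vol3, and snap_vol3 match the names used in the following steps):
volume snapshot create -vserver vs3 -volume vol3 -snapshot snap_vol3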
3. Use the volume file clone create command to create a clone of the production LUN.
volume file clone create -vserver vs3 -volume vol3 -source-path lun1 -snapshot
-name snap_vol3 -destination-path lun1_backup
4. Use the lun igroup create command to create an igroup that includes the WWPN of the backup
server.
lun igroup create -vserver vs3 -igroup igroup3 -protocol fc -ostype windows
-initiator 10:00:00:00:c9:73:5b:91
5. Use the lun mapping create command to map the LUN clone you created in Step 3 to the backup
host.
lun mapping create -vserver vs3 -volume vol3 -lun lun1_backup -igroup igroup3
You can opt to make this step part of your SAN backup application’s post-processing script.
6. From the host, discover the new LUN and make the file system available to the host.
You can opt to make this step part of your SAN backup application’s post-processing script.
7. Back up the data in the LUN clone from the backup host to tape by using your SAN backup application.
8. Use the lun modify command to take the LUN clone offline.
9. Use the lun delete command to delete the LUN clone.
10. Use the volume snapshot delete command to remove the Snapshot copy.
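The commands for steps 8 through 10 might look like the following; the paths and names continue the
example from the earlier steps, and the lun delete step assumes the clone has already been unmapped from
the backup host:
lun modify -vserver vs3 -path /vol/vol3/lun1_backup -state offline
lun delete -vserver vs3 -volume vol3 -lun lun1_backup
volume snapshot delete -vserver vs3 -volume vol3 -snapshot snap_vol3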
You should use this information in conjunction with basic SAN configuration documentation:
You should consider several things when setting up your iSCSI configuration.
• You can set up your iSCSI configuration with single nodes or with HA pairs.
Direct connect or the use of Ethernet switches is supported for connectivity. You must create LIFs for both
types of connectivity.
• You should configure one management LIF for every storage virtual machine (SVM) supporting SAN.
• Selective LUN mapping (SLM) limits the paths that are being utilized in accessing the LUNs owned by an
HA pair.
This is the default behavior for LUNs created with ONTAP releases.
• HA pairs are defined as the reporting nodes for the Active/Optimized and the Active/Unoptimized paths that
will be used by the host in accessing the LUNs through ALUA.
• It is recommended that all SVMs in iSCSI configurations have a minimum of two LIFs per node in separate
Ethernet networks for redundancy and MPIO across multiple paths.
• You need to create one or more iSCSI paths from each node in an HA pair, using logical interfaces (LIFs)
to allow access to LUNs that are serviced by the HA pair.
If a node fails, LIFs do not migrate or assume the IP addresses of the failed partner node. Instead, the
MPIO software, using ALUA on the host, is responsible for selecting the appropriate paths for LUN access
through LIFs.
• VLANs offer specific benefits, such as increased security and improved network reliability that you might
want to leverage in iSCSI.
Ways to configure iSCSI SAN hosts with single nodes
You can configure the iSCSI SAN hosts to connect directly to a single node or by using
either one or multiple IP switches. You should determine whether you want a single-
switch configuration that is not completely redundant or a multi-switch configuration that
is completely redundant.
You can configure iSCSI SAN hosts in a direct-attached, single-switch, or multi-switch environment. If there are
multiple hosts connecting to the node, each host can be configured with a different operating system. For
single and multi-network configurations, the node can have multiple iSCSI connections to the switch, but
multipathing software that supports ALUA is required.
If there are multiple paths from the host to the controller, then ALUA must be enabled on the
host.
Direct-attached single-node configurations
In direct-attached configurations, one or more hosts are directly connected to the node.
In single-network single-node configurations, one switch connects a single node to one or more hosts.
Because there is a single switch, this configuration is not fully redundant.
In multi-network single-node configurations, two or more switches connect a single node to one or more hosts.
Because there are multiple switches, this configuration is fully redundant.
Ways to configure iSCSI SAN hosts with HA pairs
You can configure the iSCSI SAN hosts to connect to dual-node or multi-node
configurations by using either one or multiple IP switches. You should determine whether
you want a single-switch configuration that is not completely redundant or a multi-switch
configuration that is completely redundant.
You can configure iSCSI SAN hosts with single controllers and HA pairs on direct-attached, single-network, or
multi-network environments. HA pairs can have multiple iSCSI connections to each switch, but multipathing
software that supports ALUA is required on each host. If there are multiple hosts, you can configure each host
with a different operating system by checking the NetApp Interoperability Matrix Tool.
Direct-attachment
In a direct-attached configuration, one or more hosts are directly connected to the controllers.
Single-network HA pairs
In single-network HA pair configurations, one switch connects the HA pair to one or more hosts. Because there
is a single switch, this configuration is not fully redundant.
Multi-network HA pairs
In multi-network HA pair configurations, two or more switches connect the HA pair to one or more hosts.
Because there are multiple switches, this configuration is fully redundant.
A VLAN consists of a group of switch ports grouped together into a broadcast domain. A
VLAN can be on a single switch or it can span multiple switch chassis. Static and
dynamic VLANs enable you to increase security, isolate problems, and limit available
paths within your IP network infrastructure.
When you implement VLANs in large IP network infrastructures, you derive the following benefits:
• Increased security.
VLANs enable you to leverage existing infrastructure while still providing enhanced security because they
limit access between different nodes of an Ethernet network or an IP SAN.
• Reduction in the maximum number of paths used by a host.
Having too many paths slows reconnect times. If a host does not have a multipathing solution, you can use
VLANs to allow only one path.
Dynamic VLANs
Dynamic VLANs are MAC address-based. You can define a VLAN by specifying the MAC address of the
members you want to include.
Dynamic VLANs provide flexibility and do not require mapping to the physical ports where the device is
physically connected to the switch. You can move a cable from one port to another without reconfiguring the
VLAN.
Static VLANs
Static VLANs are port-based. The switch and switch port are used to define the VLAN and its members.
Static VLANs offer improved security because it is not possible to breach VLANs using media access control
(MAC) spoofing. However, if someone has physical access to the switch, replacing a cable and reconfiguring
the network address can allow access.
In some environments, it is easier to create and manage static VLANs than dynamic VLANs. This is because
static VLANs require only the switch and port identifier to be specified, instead of the 48-bit MAC address. In
addition, you can label switch port ranges with the VLAN identifier.
Configuration
• You can set up your NVMe configuration with single nodes or HA pairs using a single fabric or multifabric.
• You should configure one management LIF for every SVM supporting SAN.
• The use of heterogeneous FC switch fabrics is not supported, except in the case of embedded blade
switches.
• Cascade, partial mesh, full mesh, core-edge, and director fabrics are all industry-standard methods of
connecting FC switches to a fabric, and all are supported.
A fabric can consist of one or multiple switches, and the storage controllers can be connected to multiple
switches.
Features
The following NVMe features are supported based on your version of ONTAP.
Protocols
Protocol Beginning with ONTAP… Allowed by…
TCP 9.10.1 Default
FC 9.4 Default
Beginning with ONTAP 9.8, you can configure SCSI, NAS and NVMe protocols on the same storage virtual
machine (SVM).
In ONTAP 9.7 and earlier, NVMe can be the only protocol on the SVM.
Namespaces
When working with NVMe namespaces, you should be aware of the following:
• If you lose data in a LUN, it cannot be restored from a namespace, or vice versa.
• The space guarantee for namespaces is the same as the space guarantee of the containing volume.
• You cannot create a namespace on a volume transitioned from Data ONTAP operating in 7-Mode.
• Namespaces do not support the following:
◦ Renaming
◦ Inter-volume move
◦ Inter-volume copy
◦ Copy on Demand
Additional limitations
See the NetApp Hardware Universe for a complete list of NVMe limits.
Related information
Best practices for modern SAN
• You should configure one management LIF for every storage virtual machine (SVM) supporting SAN.
• Multiple hosts, using different operating systems, such as Windows, Linux, or UNIX, can access the
storage solution at the same time.
Hosts require that a supported multipathing solution be installed and configured. Supported operating
systems and multipathing solutions can be verified on the Interoperability Matrix.
• ONTAP supports single, dual, or multiple node solutions that are connected to multiple physically
independent storage fabrics; a minimum of two are recommended for SAN solutions.
This provides redundancy at the fabric and storage system layers. Redundancy is particularly important
because these layers typically support many hosts.
• The use of heterogeneous FC switch fabrics is not supported, except in the case of embedded blade
switches.
• Cascade, partial mesh, full mesh, core-edge, and director fabrics are all industry-standard methods of
connecting FC switches to a fabric, and all are supported.
A fabric can consist of one or multiple switches, and the storage controllers can be connected to multiple
switches.
Related information
NetApp Interoperability Matrix Tool
You can configure FC and FC-NVMe SAN hosts with single nodes through one or more
fabrics. N-Port ID Virtualization (NPIV) is required and must be enabled on all FC
switches in the fabric. You cannot directly attach FC or FC-NVMe SAN hosts to single
nodes without using an FC switch.
You can configure FC or FC-NVMe SAN hosts with single nodes through a single fabric or multifabrics. The FC
target ports (0a, 0c, 0b, 0d) in the illustrations are examples. The actual port numbers vary depending on the
model of your storage node and whether you are using expansion adapters.
In single-fabric single-node configurations, there is one switch connecting a single node to one or more hosts.
Because there is a single switch, this configuration is not fully redundant. All hardware platforms that support
FC and FC-NVMe support single-fabric single-node configurations. However, the FAS2240 platform requires
the X1150A-R6 expansion adapter to support a single-fabric single-node configuration.
The following figure shows a FAS2240 single-fabric single-node configuration. It shows the storage controllers
side by side, which is how they are mounted in the FAS2240-2. For the FAS2240-4, the controllers are
mounted one above the other. There is no difference in the SAN configuration for the two models.
In multifabric single-node configurations, there are two or more switches connecting a single node to one or
more hosts. For simplicity, the following figure shows a multifabric single-node configuration with only two
fabrics, but you can have two or more fabrics in any multifabric configuration. In this figure, the storage
controller is mounted in the top chassis and the bottom chassis can be empty or can have an IOMX module, as
it does in this example.
Related information
NetApp Technical Report 4684: Implementing and Configuring Modern SANs with NVMe/FC
You can configure FC and FC-NVMe SAN hosts to connect to HA pairs through one or
more fabrics. You cannot directly attach FC or FC-NVMe SAN hosts to HA pairs without
using a switch.
You can configure FC and FC-NVMe SAN hosts with single fabric HA pairs or with multifabric HA pairs. The FC
target port numbers (0a, 0c, 0d, 1a, 1b) in the illustrations are examples. The actual port numbers vary
depending on the model of your storage node and whether you are using expansion adapters.
Single-fabric HA pairs
In single-fabric HA pair configurations, there is one fabric connecting both controllers in the HA pair to one or
more hosts. Because the hosts and controllers are connected through a single switch, single-fabric HA pairs
are not fully redundant.
All platforms that support FC configurations support single-fabric HA pair configurations, except the FAS2240
platform. The FAS2240 platform only supports single-fabric single-node configurations.
Multifabric HA pairs
In multifabric HA pairs, there are two or more switches connecting HA pairs to one or more hosts. For
simplicity, the following multifabric HA pair figure shows only two fabrics, but you can have two or more fabrics
in any multifabric configuration:
For best performance, you should consider certain best practices when configuring your
FC switch.
A fixed link speed setting is the best practice for FC switch configurations, especially for large fabrics because
it provides the best performance for fabric rebuilds and can significantly save time. Although autonegotiation
provides the greatest flexibility, FC switch configuration does not always perform as expected, and it adds time
to the overall fabric-build sequence.
All of the switches that are connected to the fabric must support N_Port ID virtualization (NPIV) and must have
NPIV enabled. ONTAP uses NPIV to present FC targets to a fabric.
For details about which environments are supported, see the NetApp Interoperability Matrix Tool.
For FC and iSCSI best practices, see NetApp Technical Report 4080: Best Practices for Modern SAN.
The maximum supported FC hop count between a host and storage system depends on
the switch supplier and storage system support for FC configurations.
The hop count is defined as the number of switches in the path between the initiator (host) and target (storage
system). Cisco also refers to this value as the diameter of the SAN fabric.
Related information
NetApp Downloads: Brocade Scalability Matrix Documents
FC target ports can be configured to run at different speeds. You should set the target
port speed to match the speed of the device to which it connects. All target ports used by
a given host should be set to the same speed.
FC target ports can be used for FC-NVMe configurations in the exact same way they are used for FC
configurations.
You should set the target port speed to match the speed of the device to which it connects instead of using
autonegotiation. A port that is set to autonegotiation can take longer to reconnect after a takeover/giveback or
other interruption.
You can configure onboard ports and expansion adapters to run at the following speeds. Each controller and
expansion adapter port can be configured individually for different speeds as needed.
4 Gb ports: 4 Gb, 2 Gb, 1 Gb
8 Gb ports: 8 Gb, 4 Gb, 2 Gb
16 Gb ports: 16 Gb, 8 Gb, 4 Gb
32 Gb ports: 32 Gb, 16 Gb, 8 Gb
UTA2 ports can use an 8 Gb SFP+ adapter to support 8, 4, and 2 Gb speeds, if required.
For best performance and highest availability, you should use the recommended FC
target port configuration.
The following table shows the preferred port usage order for onboard FC and FC-NVMe target ports. For
expansion adapters, the FC ports should be spread so that they do not use the same ASIC for connectivity.
The preferred slot order is listed in NetApp Hardware Universe for the version of ONTAP software used by your
controller.
• AFF A300
• AFF A700
• AFF A700s
• AFF A800
The FAS22xx and FAS2520 systems do not have onboard FC ports and do not support add-on
adapters.
Controller                                   Port pairs with   Number of target ports:
                                             shared ASIC       Preferred ports
FAS9000, AFF A700, AFF A700s, and AFF A800   None              All data ports are on expansion
                                                               adapters. See NetApp Hardware
                                                               Universe for more information.
8080, 8060, and 8040                         0g+0h             2: 0e, 0g
                                                               3: 0e, 0g, 0h
FAS8200 and AFF A300                         0g+0h             1: 0g
                                                               2: 0g, 0h
8020                                         0c+0d             1: 0c
                                                               2: 0c, 0d
62xx                                         0a+0b             1: 0a
                                             0c+0d             2: 0a, 0c
                                                               3: 0a, 0c, 0b
32xx                                         0c+0d             1: 0c
                                                               2: 0c, 0d
                                                               3: 0c, 0e, 0d
Commands are available to manage onboard FC adapters and FC adapter cards. These
commands can be used to configure the adapter mode, display adapter information, and
change the speed.
Most storage systems have onboard FC adapters that can be configured as initiators or targets. You can also
use FC adapter cards configured as initiators or targets. Initiators connect to back-end disk shelves, and
possibly foreign storage arrays (FlexArray). Targets connect only to FC switches. Both the FC target HBA ports
and the switch port speed should be set to the same value and should not be set to auto.
You can use FC commands to manage FC target adapters, FC initiator adapters, and
onboard FC adapters for your storage controller. The same commands are used to
manage FC adapters for the FC protocol and the FC-NVMe protocol.
FC initiator adapter commands work only at the node level. You must use the run -node node_name
command before you can use the FC initiator adapter commands.
Display how long the FC protocol has been running:
run -node node_name uptime
Verify which expansion cards are installed and whether there are any configuration errors:
run -node node_name sysconfig -ac
You can configure individual FC ports of onboard adapters and certain FC adapter cards
for initiator mode. Initiator mode is used to connect the ports to tape drives, tape libraries,
or third-party storage with FlexArray Virtualization or Foreign LUN Import (FLI).
What you’ll need
• LIFs on the adapter must be removed from any port sets of which they are members.
• All LIF’s from every storage virtual machine (SVM) using the physical port to be modified must be migrated
or destroyed before changing the personality of the physical port from target to initiator.
Steps
1. Remove all LIFs from the adapter:
If the adapter does not go offline, you can also remove the cable from the appropriate adapter port on the
system.
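A sketch of removing the LIFs and then taking the adapter offline might look like the following; the SVM, LIF,
node, and adapter names are placeholders:
network interface delete -vserver SVM_name -lif lif_name
network fcp adapter modify -node node_name -adapter 0d -status-admin down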
You can configure individual FC ports of onboard adapters and certain FC adapter cards
for target mode. Target mode is used to connect the ports to FC initiators.
About this task
Each onboard FC port can be individually configured as an initiator or a target. Ports on certain FC adapters
can also be individually configured as either a target port or an initiator port, just like the onboard FC ports. A
list of adapters that can be configured for target mode is available in the NetApp Hardware Universe.
The same steps are used when configuring FC adapters for the FC protocol and the FC-NVMe protocol.
However, only certain FC adapters support FC-NVMe. See the NetApp Hardware Universe for a list of
adapters that support the FC-NVMe protocol.
Steps
1. Take the adapter offline:
If the adapter does not go offline, you can also remove the cable from the appropriate adapter port on the
system.
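For example, the adapter can be taken offline with a command like this (node and adapter names are
placeholders):
network fcp adapter modify -node node_name -adapter 0e -status-admin down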
You can use the network fcp adapter show command to display system
configuration and adapter information for any FC adapter in the system.
Step
1. Display information about the FC adapter by using the network fcp adapter show command.
The output displays system configuration information and adapter information for each slot that is used.
You should set your adapter target port speed to match the speed of the device to which
it connects, instead of using autonegotiation. A port that is set to autonegotiation can take
longer to reconnect after a takeover/giveback or other interruption.
What you’ll need
All LIFs that use this adapter as their home port must be offline.
Steps
1. Take all of the LIFs on this adapter offline:
If the adapter does not go offline, you can also remove the cable from the appropriate adapter port on the
system.
You cannot modify the adapter speed beyond the maximum speed.
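A sketch of taking the LIFs offline and then setting a fixed port speed might look like the following; the names
and the 16 Gb value are placeholders:
network interface modify -vserver SVM_name -lif lif_name -status-admin down
network fcp adapter modify -node node_name -adapter 0e -speed 16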
Supported FC ports
The number of onboard FC ports and CNA/UTA2 ports configured for FC varies based on
the model of the controller. FC ports are also available through supported FC target
expansion adapters or additional UTA2 cards configured with FC SFP+ adapters.
The NetApp Hardware Universe contains a complete list of onboard FC ports on each controller model.
• FC ports are only available on FAS2240 systems through the X1150A-R6 expansion adapter.
The NetApp Hardware Universe contains a complete list of target expansion adapters for each controller
model.
• The ports on some FC expansion adapters are configured as initiators or targets at the factory and cannot
be changed.
Others can be individually configured as either target or initiator FC ports, just like the onboard FC ports. A
complete list is available in NetApp Hardware Universe.
You can prevent loss of connectivity during a port failure by configuring your system with
redundant paths to separate X1133A-R6 HBAs.
The X1133A-R6 HBA is a 4-port, 16 Gb FC adapter consisting of two 2-port pairs. The X1133A-R6 adapter can
be configured as target mode or initiator mode. Each 2-port pair is supported by a single ASIC (for example,
Port 1 and Port 2 on ASIC 1 and Port 3 and Port 4 on ASIC 2). Both ports on a single ASIC must be configured
to operate in the same mode, either target mode or initiator mode. If an error occurs with the ASIC supporting a
pair, both ports in the pair go offline.
To prevent this loss of connectivity, you configure your system with redundant paths to separate X1133A-R6
HBAs, or with redundant paths to ports supported by different ASICs on the HBA.
By default the X1143A-R6 adapter is configured in FC target mode, but you can configure
its ports as either 10 Gb Ethernet and FCoE (CNA) ports or as 16 Gb FC initiator or target
ports. This requires different SFP+ adapters.
When configured for Ethernet and FCoE, X1143A-R6 adapters support concurrent NIC and FCoE target traffic
on the same 10-GBE port. When configured for FC, each two-port pair that shares the same ASIC can be
individually configured for FC target or FC initiator mode. This means that a single X1143A-R6 adapter can
support FC target mode on one two-port pair and FC initiator mode on another two-port pair. Port pairs
connected to the same ASIC must be configured in the same mode.
In FC mode, the X1143A-R6 adapter behaves just like any existing FC device with speeds up to 16 Gbps. In
CNA mode, you can use the X1143A-R6 adapter for concurrent NIC and FCoE traffic sharing the same 10 GbE
port. CNA mode only supports FC target mode for the FCoE function.
To configure the unified target adapter (X1143A-R6), you must configure the two adjacent
ports on the same chip in the same personality mode.
Steps
1. Configure the ports as needed for Fibre Channel (FC) or Converged Network Adapter (CNA) using the
system node hardware unified-connect modify command.
2. Attach the appropriate cables for FC or 10 Gb Ethernet.
3. Verify that you have the correct SFP+ installed:
network fcp adapter show -instance -node node_name -adapter adapter_name
For CNA, you should use a 10Gb Ethernet SFP. For FC, you should either use an 8 Gb SFP or a 16 Gb
SFP, based on the FC fabric being connected to.
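Putting the steps together, a sketch for one port might look like the following; the node and adapter names
are placeholders, and the fc/target values assume the adapter supports that combination:
system node hardware unified-connect modify -node node1 -adapter 0e -mode fc -type target
network fcp adapter show -instance -node node1 -adapter 0e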
You should change the UTA2 port from Converged Network Adapter (CNA) mode to Fibre
Channel (FC) mode to support the FC initiator and FC target mode. You should change
the personality from CNA mode to FC mode when you need to change the physical
medium that connects the port to its network.
Steps
1. Take the adapter offline:
4. Notify your admin or VIF manager to delete or remove the port, as applicable:
◦ If the port is used as a home port of a LIF, is a member of an interface group (ifgrp), or hosts VLANs,
then an admin should do the following:
i. Move the LIFs, remove the port from the ifgrp, or delete the VLANs, respectively.
ii. Manually delete the port by running the network port delete command.
If the network port delete command fails, the admin should address the errors, and then run
the command again.
◦ If the port is not used as the home port of a LIF, is not a member of an ifgrp, and does not host VLANs,
then the VIF manager should remove the port from its records at the time of reboot.
If the VIF manager does not remove the port, then the admin must remove it manually after the reboot
by using the network port delete command.
Node: net-f8040-34-01
                                                         Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link   MTU  Admin/Oper  Status
--------- ------------ ---------------- ------ ---- ----------- --------
...
e0i       Default      Default          down   1500 auto/10     -
e0f       Default      Default          down   1500 auto/10     -
...
Any changes will take effect after rebooting the system. Use the
"system node reboot" command to reboot.
For CNA, you should use a 10 Gb Ethernet SFP. For FC, you should use either an 8 Gb SFP or a 16 Gb
SFP before changing the configuration on the node.
You should change the optical modules on the unified target adapter (CNA/UTA2) to
support the personality mode you have selected for the adapter.
Steps
1. Verify the current SFP+ used in the card. Then, replace the current SFP+ with the appropriate SFP+ for the
preferred personality (FC or CNA).
2. Remove the current optical modules from the X1143A-R6 adapter.
3. Insert the correct optical modules for your preferred personality mode (FC or CNA).
4. Verify that you have the correct SFP+ installed:
Supported SFP+ modules and Cisco-branded Copper (Twinax) cables are listed in the NetApp Hardware
Universe.
To view the settings for your unified target adapter (X1143A-R6), you must run the
system hardware unified-connect show command to display all modules on
your controller.
Steps
1. Boot your controller without the cables attached.
2. Run the system hardware unified-connect show command to see the port configuration and
modules.
3. View the port information before configuring the CNA and ports.
Ways to configure FCoE
FCoE configurations require Ethernet switches that explicitly support FCoE features. FCoE configurations are
validated through the same interoperability and quality assurance process as FC switches. Supported
configurations are listed in the Interoperability Matrix. Some of the parameters included in these supported
configurations are the switch model, the number of switches that can be deployed in a single fabric, and the
supported switch firmware version.
The FC target expansion adapter port numbers in the illustrations are examples. The actual port numbers
might vary, depending on the expansion slots in which the FCoE target expansion adapters are installed.
FCoE initiator to FC target
Using FCoE initiators (CNAs), you can connect hosts to both controllers in an HA pair through FCoE switches
to FC target ports. The FCoE switch must also have FC ports. The host FCoE initiator always connects to the
FCoE switch. The FCoE switch can connect directly to the FC target or can connect to the FC target through
FC switches.
The following illustration shows host CNAs connecting to an FCoE switch, and then to an FC switch before
connecting to the HA pair:
FCoE initiator to FCoE target
Using host FCoE initiators (CNAs), you can connect hosts to both controllers in an HA pair to FCoE target
ports (also called UTAs or UTA2s) through FCoE switches.
FCoE initiator to FCoE and FC targets
Using host FCoE initiators (CNAs), you can connect hosts to both controllers in an HA pair to FCoE and FC
target ports (also called UTAs or UTA2s) through FCoE switches.
FCoE mixed with IP storage protocols
Using host FCoE initiators (CNAs), you can connect hosts to both controllers in an HA pair to FCoE target
ports (also called UTAs or UTA2s) through FCoE switches. FCoE ports cannot use traditional link aggregation
to a single switch. Cisco switches support a special type of link aggregation (Virtual Port Channel) that does
support FCoE. A Virtual Port Channel aggregates individual links to two switches. You can also use Virtual Port
Channels for other Ethernet traffic. Ports used for traffic other than FCoE, including NFS, SMB, iSCSI, and
other Ethernet traffic, can use regular Ethernet ports on the FCoE switches.
FCoE initiator and target combinations
Certain combinations of FCoE and traditional FC initiators and targets are supported.
FCoE initiators
You can use FCoE initiators in host computers with both FCoE and traditional FC targets in storage controllers.
The host FCoE initiator must connect to an FCoE DCB (data center bridging) switch; direct connection to a
target is not supported.
Initiator   Target   Supported?
FC          FCoE     Yes
FCoE        FC       Yes
FCoE targets
You can mix FCoE target ports with 4-Gb, 8-Gb, or 16-Gb FC ports on the storage controller regardless of
whether the FC ports are add-in target adapters or onboard ports. You can have both FCoE and FC target
adapters in the same storage controller.
The rules for combining onboard and expansion FC ports still apply.
The maximum supported Fibre Channel over Ethernet (FCoE) hop count between a host
and storage system depends on the switch supplier and storage system support for FCoE
configurations.
The hop count is defined as the number of switches in the path between the initiator (host) and target (storage
system). Documentation from Cisco Systems also refers to this value as the diameter of the SAN fabric.
For end-to-end FCoE connections, the FCoE switches must be running a firmware version that supports
Ethernet inter-switch links (ISLs).
Switch supplier   Supported hop count
Brocade           7 for FC; 5 for FCoE
Cisco             7
An FC, FC-NVMe or FCoE zone is a logical grouping of one or more ports within a fabric.
For devices to be able to see each other, connect, create sessions with one another, and
communicate, both ports need to have a common zone membership. Single initiator
zoning is recommended.
• Zoning reduces or eliminates crosstalk between initiator HBAs.
This occurs even in small environments and is one of the best arguments for implementing zoning. The
logical fabric subsets created by zoning eliminate crosstalk problems.
• Zoning reduces the number of available paths to a particular FC, FC-NVMe, or FCoE port and reduces the
number of paths between a host and a particular LUN that is visible.
For example, some host OS multipathing solutions have a limit on the number of paths they can manage.
Zoning can reduce the number of paths that an OS multipathing driver sees. If a host does not have a
multipathing solution installed, you need to verify that only one path to a LUN is visible by using either
zoning in the fabric or a combination of Selective LUN Mapping (SLM) and portsets in the SVM.
• Zoning increases security by limiting access and connectivity to end-points that share a common zone.
Ports that have no zones in common cannot communicate with one another.
• Zoning improves SAN reliability by isolating problems that occur and helps to reduce problem resolution
time by limiting the problem space.
• You should implement zoning any time four or more hosts are connected to a SAN, or any time SLM is not
implemented on the nodes in a SAN.
• Although World Wide Node Name zoning is possible with some switch vendors, World Wide Port Name
zoning is required to properly define a specific port and to use NPIV effectively.
• You should limit the zone size while still maintaining manageability.
Multiple zones can overlap to limit size. Ideally, a zone is defined for each host or host cluster.
• You should use single-initiator zoning to eliminate crosstalk between initiator HBAs.
Zoning based on World Wide Name (WWN) specifies the WWN of the members to be
included within the zone. When zoning in ONTAP, you must use World Wide Port Name
(WWPN) zoning.
WWPN zoning provides flexibility because access is not determined by where the device is physically
connected to the fabric. You can move a cable from one port to another without reconfiguring zones.
For Fibre Channel paths to storage controllers running ONTAP, be sure the FC switches are zoned using the
WWPNs of the target logical interfaces (LIFs), not the WWPNs of the physical ports on the node. For more
information on LIFs, see the ONTAP Network Management Guide.
Network management
Individual zones
In the recommended zoning configuration, there is one host initiator per zone. The zone
consists of the host initiator port and one or more target LIFs on the storage nodes that
are providing access to the LUNs up to the desired number of paths per target. This
means that hosts accessing the same nodes cannot see each other’s ports, but each
initiator can access any node.
You should add all LIFs from the storage virtual machine (SVM) into the zone with the host initiator. This allows
you to move volumes or LUNs without editing your existing zones or creating new zones.
For Fibre Channel paths to nodes running ONTAP, be sure that the FC switches are zoned using the WWPNs
of the target logical interfaces (LIFs), not the WWPNs of the physical ports on the node. The WWPNs of the
physical ports start with “50” and the WWPNs of the LIFs start with “20”.
Single-fabric zoning
In a single-fabric configuration, you can still connect each host initiator to each storage
node. Multipathing software is required on the host to manage multiple paths. Each host
should have two initiators for multipathing to provide resiliency in the solution.
Each initiator should have a minimum of one LIF from each node that the initiator can access. The zoning
should allow at least one path from the host initiator to the HA pair of nodes in the cluster to provide a path for
LUN connectivity. This means that each initiator on the host might have only one target LIF per node in its zone
configuration. If multipathing to the same node or to multiple nodes in the cluster is required, then each zone
includes multiple LIFs per node. This enables the host to still access its LUNs if a node fails or if a volume
containing the LUN is moved to a different node. This also requires the reporting nodes to be set appropriately.
Single-fabric configurations are supported, but are not considered highly available. The failure of a single
component can cause loss of access to data.
In the following figure, the host has two initiators and is running multipathing software. There are two zones:
The naming convention used in this figure is just a recommendation of one possible naming
convention that you can choose to use for your ONTAP solution.
If the configuration included more nodes, the LIFs for the additional nodes would be included in these zones.
In this example, you could also have all four LIFs in each zone. In that case, the zones would be as follows:
The host operating system and multipathing software have to support the number of paths that are being used
to access the LUNs on the nodes. To determine the number of paths used to access the LUNs on the nodes,
see the SAN configuration limits section.
Related information
NetApp Hardware Universe
In dual-fabric configurations, you can connect each host initiator to each cluster node.
Each host initiator uses a different switch to access the cluster nodes. Multipathing
software is required on the host to manage multiple paths.
Dual-fabric configurations are considered high availability because access to data is maintained if a single
component fails.
In the following figure, the host has two initiators and is running multipathing software. There are two zones.
SLM is configured so that all nodes are considered reporting nodes.
The naming convention used in this figure is just a recommendation of one possible naming
convention that you can choose to use for your ONTAP solution.
Each host initiator is zoned through a different switch. Zone 1 is accessed through Switch 1. Zone 2 is
accessed through Switch 2.
Each initiator can access a LIF on every node. This enables the host to still access its LUNs if a node fails.
SVMs have access to all iSCSI and FC LIFs on every node in a clustered solution based on the setting for
Selective LUN Map (SLM) and the reporting node configuration. You can use SLM, portsets, or FC switch
zoning to reduce the number of paths from an SVM to the host and the number of paths from an SVM to a
LUN.
If the configuration included more nodes, the LIFs for the additional nodes would be included in these zones.
The host operating system and multipathing software have to support the number of paths that are being used
to access the LUNs on the nodes.
Related information
NetApp Hardware Universe
When using Cisco FC and FCoE switches, a single fabric zone must not contain more
than one target LIF for the same physical port. If multiple LIFs on the same port are in the
same zone, then the LIF ports might fail to recover from a connection loss.
Regular FC switches are used for the FC-NVMe protocol in the exact same way they are used for the FC
protocol.
• Multiple LIFs for the FC and FCoE protocols can share physical ports on a node as long as they are in
different zones.
• FC-NVMe and FCoE cannot share the same physical port.
• FC and FC-NVMe can share the same 32 Gb physical port.
• Cisco FC and FCoE switches require each LIF on a given port to be in a separate zone from the other LIFs
on that port.
• A single zone can have both FC and FCoE LIFs. A zone can contain a LIF from every target port in the
cluster, but be careful to not exceed the host’s path limits and verify the SLM configuration.
• LIFs on different physical ports can be in the same zone.
• Cisco switches require that LIFs on the same physical port be placed in separate zones.
Requirements for shared SAN configurations
Shared SAN configurations are defined as hosts that are attached to both ONTAP
storage systems and other vendors' storage systems. Accessing ONTAP storage
systems and other vendors' storage systems from a single host is supported as long as
several requirements are met.
For all of the host operating systems, it is a best practice to use separate adapters to connect to each vendor’s
storage systems. Using separate adapters reduces the chances of conflicting drivers and settings. For
connections to an ONTAP storage system, the adapter model, BIOS, firmware, and driver must be listed as
supported in the NetApp Interoperability Matrix Tool.
You should set the required or recommended timeout values and other storage parameters for the host. You
must always install the NetApp software or apply the NetApp settings last.
• For AIX, you should apply the values from the AIX Host Utilities version that is listed in the Interoperability
Matrix Tool for your configuration.
• For ESX, you should apply host settings by using Virtual Storage Console for VMware vSphere.
• For HP-UX, you should use the HP-UX default storage settings.
• For Linux, you should apply the values from the Linux Host Utilities version that is listed in the
Interoperability Matrix Tool for your configuration.
• For Solaris, you should apply the values from the Solaris Host Utilities version that is listed in the
Interoperability Matrix Tool for your configuration.
• For Windows, you should install the Windows Host Utilities version that is listed in the Interoperability
Matrix Tool for your configuration.
Related information
NetApp Interoperability Matrix Tool
ONTAP always uses Asymmetric Logical Unit Access (ALUA) for both FC and iSCSI
paths. Be sure to use host configurations that support ALUA for FC and iSCSI protocols.
Beginning with ONTAP 9.5, multipath HA pair failover/giveback is supported for NVMe configurations using
Asymmetric Namespace Access (ANA). In ONTAP 9.4, NVMe supports only one path from host to target.
The application host needs to manage path failover to its high availability (HA) partner.
For information about which specific host configurations support ALUA or ANA, see the NetApp Interoperability
Matrix Tool and ONTAP SAN Host Configuration for your host operating system.
If there is more than one path from the storage virtual machine (SVM) logical interfaces
(LIFs) to the fabric, multipathing software is required. Multipathing software is required on
the host any time the host can access a LUN through more than one path.
The multipathing software presents a single disk to the operating system for all paths to a LUN. Without
multipathing software, the operating system could treat each path as a separate disk, which can lead to data
corruption.
Your solution is considered to have multiple paths if you have any of the following:
• A single initiator port in the host attaching to multiple SAN LIFs in the SVM
• Multiple initiator ports attaching to a single SAN LIF in the SVM
• Multiple initiator ports attaching to multiple SAN LIFs in the SVM
In single-fabric single-node configurations, multipathing software is not required if you only have a single path
from the host to the node.
Multipathing software is recommended in HA configurations. In addition to Selective LUN Map, using FC switch
zoning or portsets to limit the paths used to access LUNs is recommended.
You should not exceed eight paths from your host to each node in your cluster,
paying attention to the total number of paths that can be supported for the host OS and
the multipathing used on the host.
You should have a minimum of two paths per LUN connecting to each reporting node defined through Selective
LUN Map (SLM) for the storage virtual machine (SVM) in your cluster. This eliminates single points of
failure and enables the system to survive component failures.
If you have four or more nodes in your cluster or more than four target ports being used by the SVMs in any of
your nodes, you can use the following methods to limit the number of paths that can be used to access LUNs
on your nodes so that you do not exceed the recommended maximum of eight paths.
• SLM
SLM reduces the number of paths from the host to a LUN to only the paths on the node that owns the LUN and the
owning node’s HA partner. SLM is enabled by default.
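For example, you can check which nodes are currently reporting nodes for a LUN with a command along the following lines (a sketch assuming an SVM named svm1, a LUN path of /vol/vol1/lun1, and an igroup named ig1; confirm the parameters for your ONTAP release). The lun mapping add-reporting-nodes and remove-reporting-nodes commands can then be used to adjust the list before and after a volume move:
lun mapping show -vserver svm1 -path /vol/vol1/lun1 -igroup ig1 -fields reporting-nodes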
Related information
SAN administration
Configuration limits
The number of nodes per cluster supported by ONTAP varies depending on your version
of ONTAP, the storage controller models in your cluster, and the protocol of your cluster
nodes.
About this task
If any node in the cluster is configured for FC, FC-NVMe, FCoE, or iSCSI, that cluster is limited to the SAN
node limits. Node limits based on the controllers in your cluster are listed in the Hardware Universe.
Steps
1. Go to NetApp Hardware Universe.
2. Click Platforms in the upper left (next to the Home button) and select the platform type.
3. Select the check box next to your version of ONTAP.
4. Select the check boxes next to the platforms used in your solution.
5. Unselect the Select All check box in the Choose Your Specifications column.
6. Select the Max Nodes per Cluster (NAS/SAN) check box.
7. Click Show Results.
Related information
NetApp Hardware Universe
Determine the number of supported hosts per cluster in FC and FC-NVMe configurations
The maximum number of SAN hosts that can be connected to a cluster varies greatly
based upon your specific combination of multiple cluster attributes, such as the number of
hosts connected to each cluster node, initiators per host, sessions per host, and nodes in
the cluster.
About this task
For FC and FC-NVMe configurations, you should use the number of initiator-target nexuses (ITNs) in your
system to determine whether you can add more hosts to your cluster.
An ITN represents one path from the host’s initiator to the storage system’s target. The maximum number of
ITNs per node in FC and FC-NVMe configurations is 2,048. As long as you are below the maximum number of
ITNs, you can continue to add hosts to your cluster.
To determine the number of ITNs used in your cluster, perform the following steps for each node in the cluster.
Steps
1. Identify all the LIFs on a given node.
2. Run the following command for every LIF on the node:
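For example, a command along the following lines lists the initiators logged in through a given LIF (a sketch assuming an SVM named svm1 and a LIF named lif_1; the -lif filter and other parameters are assumptions to confirm for your ONTAP release):
fcp initiator show -vserver svm1 -lif lif_1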
The number of entries displayed at the bottom of the command output represents your number of ITNs for
that LIF.
Determine the supported number of hosts in iSCSI configurations
The maximum number of SAN hosts that can be connected in iSCSI configurations varies
greatly based on your specific combination of multiple cluster attributes, such as the
number of hosts connected to each cluster node, initiators per host, logins per host, and
nodes in the cluster.
About this task
The number of hosts that can be directly connected to a node or that can be connected through one or more
switches depends on the number of available Ethernet ports. The number of available Ethernet ports is
determined by the model of the controller and the number and type of adapters installed in the controller. The
number of supported Ethernet ports for controllers and adapters is available in the Hardware Universe.
For all multi-node cluster configurations, you must determine the number of iSCSI sessions per node to know
whether you can add more hosts to your cluster. As long as your cluster is below the maximum number of
iSCSI sessions per node, you can continue to add hosts to your cluster. The maximum number of iSCSI
sessions per node varies based on the types of controllers in your cluster.
Steps
1. Identify all of the target portal groups on the node.
2. Check the number of iSCSI sessions for every target portal group on the node:
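For example, a command along the following lines shows the sessions for one target portal group (a sketch assuming an SVM named svm1 and a target portal group named tpgroup_1; confirm the parameters for your ONTAP release):
iscsi session show -vserver svm1 -tpgroup tpgroup_1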
The number of entries displayed at the bottom of the command output represents your number of iSCSI
sessions for that target portal group.
3. Record the number of iSCSI sessions displayed for each target portal group.
4. Add the number of iSCSI sessions for each target portal group on the node.
Fibre Channel switches have maximum configuration limits, including the number of
logins supported per port, port group, blade, and switch. The switch vendors document
their supported limits.
Each FC logical interface (LIF) logs in to an FC switch port. The total number of logins from a single target port
on the node equals the number of LIFs on that port plus one login for the underlying physical port. For example,
a physical port hosting four LIFs consumes five switch logins. Do not exceed the switch vendor’s configuration
limits for logins or other configuration values. This also holds true for the initiators used on the host side in
virtualized environments with NPIV enabled. Do not exceed the switch vendor’s configuration limits for logins for
either the targets or the initiators used in the solution.
You can find the configuration limits for Brocade switches in the Brocade Scalability Guidelines.
You can find the configuration limits for Cisco switches in the Cisco Configuration Limits guide for your version
of Cisco switch software.
Calculate queue depth overview
You might need to tune your FC queue depth on the host to achieve the maximum values
for ITNs per node and FC port fan-in. The maximum number of LUNs and the number of
HBAs that can connect to an FC port are limited by the available queue depth on the FC
target ports.
About this task
Queue depth is the number of I/O requests (SCSI commands) that can be queued at one time on a storage
controller. Each I/O request from the host’s initiator HBA to the storage controller’s target adapter consumes a
queue entry. Typically, a higher queue depth equates to better performance. However, if the storage controller’s
maximum queue depth is reached, that storage controller rejects incoming commands by returning a QFULL
response to them. If a large number of hosts are accessing a storage controller, you should plan carefully to
avoid QFULL conditions, which significantly degrade system performance and can lead to errors on some
systems.
In a configuration with multiple initiators (hosts), all hosts should have similar queue depths. If there is an
inequality in queue depth between hosts connected to the storage controller through the same target port,
hosts with smaller queue depths are deprived of access to resources by hosts with larger queue depths.
The following general recommendations can be made about “tuning” queue depths:
Steps
1. Count the total number of FC initiators in all of the hosts that connect to one FC target port.
2. Multiply by 128.
◦ If the result is less than 2,048, set the queue depth for all initiators to 128.
You have 15 hosts with one initiator connected to each of two target ports on the storage controller. 15
× 128 = 1,920. Because 1,920 is less than the total queue depth limit of 2,048, you can set the queue
depth for all of your initiators to 128.
◦ If the result is greater than 2,048, go to step 3.
You have 30 hosts with one initiator connected to each of two target ports on the storage controller. 30
× 128 = 3,840. Because 3,840 is greater than the total queue depth limit of 2,048, you should choose
one of the options under step 3 for remediation.
3. Choose one of the following options to add more hosts to the storage controller.
◦ Option 1:
i. Add more FC target ports.
ii. Redistribute your FC initiators.
iii. Repeat steps 1 and 2.
The desired queue depth of 3,840 exceeds the available queue depth per port. To remedy this, you
can add a two-port FC target adapter to each controller, then rezone your FC switches so that 15 of
your 30 hosts connect to one set of ports, and the remaining 15 hosts connect to a second set of
ports. The queue depth per port is then reduced to 15 × 128 = 1,920.
◦ Option 2:
i. Designate each host as “large” or “small” based on its expected I/O need.
ii. Multiply the number of large initiators by 128.
iii. Multiply the number of small initiators by 32.
iv. Add the two results together.
v. If the result is less than 2,048, set the queue depth for large hosts to 128 and the queue depth for
small hosts to 32.
vi. If the result is still greater than 2,048 per port, reduce the queue depth per initiator until the total
queue depth is less than or equal to 2,048.
To estimate the queue depth needed to achieve a certain I/O per second throughput,
use this formula:
Needed queue depth = (number of I/O per second) × (response time in seconds)
For example, if you need 40,000 I/O per second with a response time of 3
milliseconds, the needed queue depth = 40,000 × (.003) = 120.
The maximum number of hosts that you can connect to a target port is 64 if you decide to limit the queue
depth to the basic recommendation of 32. However, if you decide to have a queue depth of 128, then you can
have a maximum of 16 hosts connected to one target port. The larger the queue depth, the fewer hosts that a
single target port can support. If your requirement is such that you cannot compromise on the queue depth,
then you should get more target ports.
The desired queue depth of 3,840 exceeds the available queue depth per port. You have 10 “large” hosts that
have high storage I/O needs, and 20 “small” hosts that have low I/O needs. Set the initiator queue depth on the
large hosts to 128 and the initiator queue depth on the small hosts to 32.
Your resulting total queue depth is (10 × 128) + (20 × 32) = 1,920.
You can spread the available queue depth equally across each initiator.
You might need to change the queue depths on your host to achieve the maximum values
for ITNs per node and FC port fan-in.
AIX hosts
You can change the queue depth on AIX hosts using the chdev command. Changes made using the chdev
command persist across reboots.
Examples:
• To change the queue depth for the hdisk7 device, use the following command:
chdev -l hdisk7 -a queue_depth=32
• To change the queue depth for the fcs0 HBA, use the following command:
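A representative form of this command, assuming you want to set num_cmd_elems to an illustrative value of 1024, is:
chdev -l fcs0 -a num_cmd_elems=1024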
The default value for num_cmd_elems is 200. The maximum value is 2,048.
It might be necessary to take the HBA offline to change num_cmd_elems and then bring it
back online using the rmdev -l fcs0 -R and mkdev -l fcs0 -P commands.
HP-UX hosts
You can change the LUN or device queue depth on HP-UX hosts using the kernel parameter
scsi_max_qdepth. You can change the HBA queue depth using the kernel parameter max_fcp_reqs.
scsi_max_qdepth can be dynamically changed on a running system using the -u option on the kmtune
command. The change will be effective for all devices on the system. For example, use the following
command to increase the LUN queue depth to 64:
kmtune -u -s scsi_max_qdepth=64
It is possible to change queue depth for individual device files using the scsictl command. Changes
using the scsictl command are not persistent across system reboots. To view and change the queue
depth for a particular device file, execute the following command:
scsictl -a /dev/rdsk/c2t2d0
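To change (rather than display) the queue depth for that device file, the scsictl -m option can be used. The following is a sketch with an illustrative value of 64; changes made this way do not persist across reboots:
scsictl -m queue_depth=64 /dev/rdsk/c2t2d0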
• The default value for max_fcp_reqs is 512. The maximum value is 1024.
The kernel must be rebuilt and the system must be rebooted for changes to max_fcp_reqs to take effect.
To change the HBA queue depth to 256, for example, use the following command:
kmtune -u -s max_fcp_reqs=256
Solaris hosts
You can set the LUN and HBA queue depth for your Solaris hosts.
• For LUN queue depth: The number of LUNs in use on a host multiplied by the per-LUN throttle (lun-queue-
depth) must be less than or equal to the tgt-queue-depth value on the host.
• For queue depth in a Sun stack: The native drivers do not allow for per LUN or per target max_throttle
settings at the HBA level. The recommended method for setting the max_throttle value for native
drivers is on a per-device type (VID_PID) level in the /kernel/drv/sd.conf and
/kernel/drv/ssd.conf files. The host utility sets this value to 64 for MPxIO configurations and 8 for
Veritas DMP configurations.
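A minimal sketch of such a per-device-type entry, assuming the Solaris 11 sd-config-list tuple format and an illustrative throttle of 64 (verify the exact format for your Solaris release and the Host Utilities documentation):
sd-config-list = "NETAPP  LUN", "throttle-max:64";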
Steps
1. # cd /kernel/drv
2. # vi lpfc.conf
3. Search for /tgt-queue
tgt-queue-depth=32
Use the esxcfg-module command to change the HBA timeout settings. Manually updating the esx.conf file
is not recommended.
Steps
1. Log on to the service console as the root user.
2. Use the #vmkload_mod -l command to verify which Qlogic HBA module is currently loaded.
3. For a single instance of a Qlogic HBA, run the following command:
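A representative form of this command, assuming the qla2300_707 module and an illustrative queue depth of 64, is:
#esxcfg-module -s ql2xmaxqdepth=64 qla2300_707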
This example uses the qla2300_707 module. Use the appropriate module based on the output
of vmkload_mod -l.
#/usr/sbin/esxcfg-boot -b
#reboot
Use the esxcfg-module command to change the HBA timeout settings. Manually updating the esx.conf file
is not recommended.
Steps
1. Log on to the service console as the root user.
2. Use the #vmkload_mod -l | grep lpfc command to verify which Emulex HBA is currently loaded.
3. For a single instance of an Emulex HBA, enter the following command:
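A representative form of this command, assuming the lpfcdd_7xx module and a LUN queue depth of 16 for the lpfc0 instance, is:
#esxcfg-module -s lpfc0_lun_queue_depth=16 lpfcdd_7xx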
Depending on the model of the HBA, the module can be either lpfcdd_7xx or lpfcdd_732.
The above command uses the lpfcdd_7xx module. You should use the appropriate module
based on the output of vmkload_mod -l.
Running this command sets the LUN queue depth to 16 for the HBA instance represented by lpfc0. If multiple
HBA instances are specified in the command, the LUN queue depths for lpfc0 and lpfc1 are both set to 16.
#esxcfg-boot -b
On Windows hosts, you can use the LPUTILNT utility to update the queue depth for Emulex HBAs.
Steps
1. Run the LPUTILNT utility located in the C:\WINNT\system32 directory.
2. Select Drive Parameters from the menu on the right side.
3. Scroll down and double-click QueueDepth.
If you are setting QueueDepth greater than 150, the following Windows Registry value also
needs to be increased appropriately:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\lpxnds\Parameters\Device\NumberOfRequests
On Windows hosts, you can use the SANsurfer HBA manager utility to update the queue depths for
Qlogic HBAs.
Steps
1. Run the SANsurfer HBA manager utility.
2. Click on HBA port > Settings.
3. Click Advanced HBA port settings in the list box.
4. Update the Execution Throttle parameter.
Linux hosts for Emulex HBA
You can update the queue depths of an Emulex HBA on a Linux host. To make the updates persistent across
reboots, you must then create a new RAM disk image and reboot the host.
Steps
1. Identify the queue depth parameters to be modified:
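For example, the Emulex (lpfc) module parameters can be listed with a command along these lines (a sketch; the grep pattern is illustrative):
modinfo lpfc | grep queue_depth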
The list of queue depth parameters with their descriptions is displayed. Depending on your operating system
version, you can modify one or more of the following queue depth parameters: lpfc_lun_queue_depth,
lpfc_tgt_queue_depth, and lpfc_hba_queue_depth.
The lpfc_tgt_queue_depth parameter is applicable only for Red Hat Enterprise Linux 7.x systems,
SUSE Linux Enterprise Server 11 SP4 systems and 12.x systems.
2. Update the queue depths by adding the queue depth parameters to the /etc/modprobe.conf file for a
Red Hat Enterprise Linux 5.x system and to the /etc/modprobe.d/scsi.conf file for a Red Hat
Enterprise Linux 6.x or 7.x system, or a SUSE Linux Enterprise Server 11.x or 12.x system.
Depending on your operating system version, you can add one or more of the following commands:
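As a sketch, standard modprobe options lines of the following form can be used; the parameter names match those listed above, and new_queue_depth is a placeholder for the value you choose:
options lpfc lpfc_hba_queue_depth=new_queue_depth
options lpfc lpfc_lun_queue_depth=new_queue_depth
options lpfc lpfc_tgt_queue_depth=new_queue_depth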
3. Create a new RAM disk image, and then reboot the host to make the updates persistent across reboots.
For more information, see the system administration documentation for your version of the Linux operating system.
4. Verify that the queue depth values are updated for each of the queue depth parameters that you have
modified:
cat /sys/class/scsi_host/host_number/lpfc_lun_queue_depth
cat /sys/class/scsi_host/host_number/lpfc_tgt_queue_depth
cat /sys/class/scsi_host/host_number/lpfc_hba_queue_depth
Linux hosts for QLogic HBA
You can update the device queue depth of a QLogic driver on a Linux host. To make the updates persistent
across reboots, you must then create a new RAM disk image and reboot the host. You can use the QLogic
HBA management GUI or command-line interface (CLI) to modify the QLogic HBA queue depth.
This task shows how to use the QLogic HBA CLI to modify the QLogic HBA queue depth.
Steps
1. Identify the device queue depth parameter to be modified:
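For example, a command along these lines displays the parameter (a sketch; the grep pattern is illustrative):
modinfo qla2xxx | grep ql2xmaxqdepth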
You can modify only the ql2xmaxqdepth queue depth parameter, which denotes the maximum queue
depth that can be set for each LUN. The default value is 64 for RHEL 7.5 and later. The default value is 32
for RHEL 7.4 and earlier.
For more information, see the system administration documentation for your version of the Linux operating system.
◦ If you want to modify the parameter only for the current session, write the new value to the module
parameter in sysfs. You can display the current value with the following command:
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
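A minimal sketch of a current-session change, assuming an illustrative queue depth of 128 and root privileges:
echo 128 > /sys/module/qla2xxx/parameters/ql2xmaxqdepth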
4. Modify the QLogic HBA queue depth by updating the firmware parameter Execution Throttle from the
QLogic HBA BIOS.
a. Log in to the QLogic HBA management CLI:
/opt/QLogic_Corporation/QConvergeConsoleCLI/qaucli
[root@localhost ~]#
/opt/QLogic_Corporation/QConvergeConsoleCLI/qaucli
Using config file:
/opt/QLogic_Corporation/QConvergeConsoleCLI/qaucli.cfg
Installation directory: /opt/QLogic_Corporation/QConvergeConsoleCLI
Working dir: /root
QConvergeConsole
Main Menu
1: Adapter Information
**2: Adapter Configuration**
3: Adapter Updates
4: Adapter Diagnostics
5: Monitoring
6: FabricCache CLI
7: Refresh
8: Help
9: Exit
b. From the main menu, select the Adapter Configuration option.
c. From the list of adapter configuration parameters, select the HBA Parameters option.
1: Adapter Alias
2: Adapter Port Alias
**3: HBA Parameters**
4: Persistent Names (udev)
5: Boot Devices Configuration
6: Virtual Ports (NPIV)
7: Target Link Speed (iiDMA)
8: Export (Save) Configuration
9: Generate Reports
10: Personality
11: FEC
(p or 0: Previous Menu; m or 98: Main Menu; ex or 99: Quit)
Please Enter Selection: 3
d. From the list of HBA ports, select the required HBA port.
e. From the HBA Parameters menu, select the Display HBA Parameters option to view the current
value of the Execution Throttle option.
=======================================================
HBA : 2 Port: 1
SN : BFD1524C78510
HBA Model : QLE2562
HBA Desc. : QLE2562 PCI Express to 8Gb FC Dual Channel
FW Version : 8.01.02
WWPN : 21-00-00-24-FF-8D-98-E0
WWNN : 20-00-00-24-FF-8D-98-E0
Link : Online
=======================================================
g. From the HBA Parameters menu, select the Configure HBA Parameters option to modify the HBA
parameters.
h. From the Configure Parameters menu, select the Execution Throttle option and update the value of
this parameter.
=======================================================
HBA : 2 Port: 1
SN : BFD1524C78510
HBA Model : QLE2562
HBA Desc. : QLE2562 PCI Express to 8Gb FC Dual Channel
FW Version : 8.01.02
WWPN : 21-00-00-24-FF-8D-98-E0
WWNN : 20-00-00-24-FF-8D-98-E0
Link : Online
=======================================================
1: Connection Options
2: Data Rate
3: Frame Size
4: Enable HBA Hard Loop ID
5: Hard Loop ID
6: Loop Reset Delay (seconds)
7: Enable BIOS
8: Enable Fibre Channel Tape Support
9: Operation Mode
10: Interrupt Delay Timer (100 microseconds)
11: Execution Throttle
12: Login Retry Count
13: Port Down Retry Count
14: Enable LIP Full Login
15: Link Down Timeout (seconds)
16: Enable Target Reset
17: LUNs per Target
18: Enable Receive Out Of Order Frame
19: Enable LR Ext. Credits
20: Commit Changes
21: Abort Changes
i. Press Enter to continue.
j. From the Configure Parameters menu, select the Commit Changes option to save the changes.
k. Exit the menu.
Related information
• Understanding MetroCluster data protection and disaster recovery
• Knowledge Base article: What are AIX Host support considerations in a MetroCluster configuration?
• Knowledge Base article: Solaris host support considerations in a MetroCluster configuration
In a SAN environment, you can configure the front-end switches to avoid overlap when
the old port goes offline and the new port comes online.
During switchover, the FC port on the surviving site might log in to the fabric before the fabric has detected that
the FC port on the disaster site is offline and has removed this port from the name and directory services.
If the FC port on the disaster site is not yet removed, the fabric login attempt of the FC port at the surviving site
might be rejected due to a duplicate WWPN. This behavior of the FC switches can be changed to honor the
login of the previous device and not the existing one. You should verify the effects of this behavior on other
fabric devices. Contact the switch vendor for more information.
Example 9. Steps
Cisco switch
1. Connect to the switch and log in.
2. Enter configuration mode:
switch# config t
switch(config)#
3. Overwrite the first device entry in the name server database with the new device:
4. In switches that are running NX-OS 8.x, confirm that the flogi quiesce timeout is set to zero:
a. Display the quiesce timerval:
b. If the output in the previous step does not indicate that the timerval is zero, then set it to zero:
Brocade switch
1. Connect to the switch and log in.
2. Enter the switchDisable command.
3. Enter the configure command, and press y at the prompt.
4. Choose setting 1:
6. Enter the switchEnable command.
Related information
Performing switchover for tests or maintenance
Copyright information
Copyright © 2023 NetApp, Inc. All Rights Reserved. Printed in the U.S. No part of this document covered by
copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including
photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission
of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp
assumes no responsibility or liability arising from the use of products described herein, except as expressly
agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any
patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or
pending applications.
LIMITED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set
forth in subparagraph (b)(3) of the Rights in Technical Data -Noncommercial Items at DFARS 252.227-7013
(FEB 2014) and FAR 52.227-19 (DEC 2007).
Data contained herein pertains to a commercial product and/or commercial service (as defined in FAR 2.101)
and is proprietary to NetApp, Inc. All NetApp technical data and computer software provided under this
Agreement is commercial in nature and developed solely at private expense. The U.S. Government has a non-
exclusive, non-transferrable, nonsublicensable, worldwide, limited irrevocable license to use the Data only in
connection with and in support of the U.S. Government contract under which the Data was delivered. Except
as provided herein, the Data may not be used, disclosed, reproduced, modified, performed, or displayed
without the prior written approval of NetApp, Inc. United States Government license rights for the Department
of Defense are limited to those rights identified in DFARS clause 252.227-7015(b) (FEB 2014).
Trademark information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc.
Other company and product names may be trademarks of their respective owners.