Basic Concepts of NetApp ONTAP 9
You must choose whether to complete this lab's ONTAP configuration activities using OnCommand System Manager,
NetApp's GUI management tool, or the Command Line Interface (CLI).
This document contains two complete versions of the lab guide, one which utilizes System Manager for the lab's
ONTAP configuration activities, and another that utilizes the CLI. Both versions walk you through the same set of
management tasks.
If you want to use System Manager, begin here.
If you want to use the CLI, begin here.
1 GUI Introduction
2 Introduction
2.3 Prerequisites
3 Lab Environment
4 Lab Activities
4.1 Clusters
4.1.1 Connect to the Cluster with OnCommand System Manager
4.1.4 Networks
5 References
8 Introduction
11.1 Clusters
11.1.1 Advanced Drive Partitioning
11.2.3 Create a Volume and Map It to the Namespace Using the CLI
12 References
2.3 Prerequisites
This lab introduces NetApp ONTAP and assumes no previous experience with ONTAP. It does assume some basic
familiarity with storage system concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that
the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from
the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working
knowledge of a text editor such as vi may be useful, but is not required.
Figure 2-1:
Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the ONTAP cluster named cluster1.
2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screen shot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You can
find the correct username and password for the host in the Lab Host Credentials table found in the Lab
Environment section of this guide.
Figure 2-2:
Figure 3-1:
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration steps
outlined in this lab guide, you are free to deviate from this guide and experiment with other ONTAP features that
interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the same functionality
as physical storage controllers, they are not capable of providing the same performance as a physical controller,
which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.
Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.
Hostname        Description
JUMPHOST        Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit v7.0.0, NetApp PowerShell Toolkit v4.2.0
RHEL1, RHEL2    Linux Unified Host Utilities Kit v7.0
4.1 Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving
data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its
work across the member nodes. Communication and data transfer between member nodes (such as when a
client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-
interconnect network to which all the nodes are connected, while management and client data traffic passes over
separate management and data networks configured on the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers
in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities
in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having
multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-
disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means
that cluster expansion and technology refreshes can take place while the cluster remains fully online, and serving
data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even
number of controller nodes. There is one exception to this rule, the single node cluster, which is a special
cluster configuration that supports small storage deployments using a single physical controller head. The primary
difference between single node and standard clusters, besides the number of nodes, is that a single node cluster
does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, at
which point they become subject to all the standard cluster requirements like the need to utilize an even number
of nodes consisting of HA pairs. This lab does not contain a single node cluster, so this guide does not discuss them further.
ONTAP 9 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit
can be lower depending on the model of FAS controller in use. ONTAP 9 clusters that also host iSCSI and FC
can scale up to a maximum of 8 nodes, but once again the limit may be lower depending on the FAS controller
model.
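If you would like to view this same information from the ONTAP command line, the following is a minimal sketch of the standard clustershell commands that report cluster membership and HA partner status; the prompt shown is this lab's cluster1, and no output is reproduced here.

cluster1::> cluster show
cluster1::> storage failover show

The first command lists each node in the cluster along with its health and eligibility, and the second shows each node's HA partner and whether takeover is currently possible.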
OnCommand System Manager is NetApp's browser-based management tool for configuring and managing
NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you had
to download and install on your client OS. As of 8.3, System Manager has moved on-board the cluster, so you
just point your web browser to the cluster management address. The on-board System Manager interface is
essentially the same as the one NetApp offered in System Manager 3.1, the version you install on a client.
On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open the web
browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one
of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.
Figure 4-1:
Figure 4-2:
System Manager is now logged in to cluster1, and displays the Dashboard page for the cluster.
System Manager's user interface (UI) has undergone some fundamental redesign in ONTAP 9 in order
to improve usability. If you are unfamiliar with System Manager, or have used a prior version, here is a
quick introduction to the new ONTAP 9 System Manager UI layout.
Previous versions of System Manager displayed tabs on the left side of the window that corresponded to
three different configuration views of the cluster: Cluster, Storage Virtual Machines, and Nodes. ONTAP
9 System Manager has removed these left-side tabs in favor of a simplified row of tabs near the top of
the window called the command bar. The command bar tabs offer more streamlined access to the most
commonly needed management actions.
The remainder of this section introduces the basic layout of the new System Manager interface, focusing
on the controls available on the command bar.
3. The Dashboard is the page you first see when you log into System Manager, and displays summary
information for the whole cluster. You can return to this view at any time by using the Dashboard tab.
4. Many of the commonly accessed configuration settings for the cluster and cluster nodes are now available
directly from the Hardware and Diagnostics tab.
5. Additional configuration settings for the cluster can be accessed by clicking on the Configurations tab.
(You may need to expand your browser to see this tab.)
6. The Network tab on the command bar provides access to all the network interfaces for the cluster
and the storage virtual machines.
7. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs,
also known as Vservers).
8. The LUNs tab allows you to manage individual LUNs.
9. The Protection tab allows you to manage settings for SnapMirror and SnapVault relationships.
Figure 4-3:
Note: As you use System Manager in this lab, you may encounter situations where buttons at
the bottom of a System Manager pane are beyond the viewing size of the window, and no scroll
bar exists to allow you to scroll down to see them. If this happens, then you have two options:
either increase the size of the browser window (you might need to increase the resolution of
your Jumphost desktop to accommodate the larger browser window), or in the System Manager
window, use the tab key to cycle through all the various fields and buttons, which eventually
forces the window to scroll down to the non-visible items.
Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage
in ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e., cabling) to a given
controller head.
ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of
disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's
local disks that host the node's ONTAP operating system. A node's root aggregate is automatically created
during ONTAP installation in a minimal RAID-DP configuration, which means it is initially comprised of 3 disks.
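If you are curious how this looks from the command line, a minimal sketch is shown below. The aggregate name follows the aggr0_<nodename> convention described later in this guide, and the exact field names are an assumption that can vary slightly by ONTAP release.

cluster1::> storage aggregate show -fields node,raidtype,diskcount
cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01

The first command summarizes each aggregate's RAID type and disk count, and the second lists the individual disks that make up the named root aggregate.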
Figure 4-4:
Figure 4-5:
5. Click on the Inventory tab inside the top of the Disks pane.
Figure 4-6:
Figure 4-7:
Figure 4-8:
The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate
should not be used to host user data, so in this section you will create a new aggregate on each of the nodes in
cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will create later in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use
one or more specific aggregates to host the SVM's volumes. You can assign multiple SVMs to use the same
aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a
single SVM provides greater workload isolation.
In this lab activity, you create a single user data aggregate on each node in the cluster.
If you completed the last exercise then System Manager should still be displaying the contents of the Aggregates
view. If you skipped that exercise then, starting from the Dashboard view, you can navigate to the Aggregates
view by going to Hardware and Diagnostics > Aggregates.
1. Click on the Create button to launch the Create Aggregate Wizard.
Figure 4-9:
Figure 4-10:
Figure 4-11:
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
6. The Disk Type should now show as VMDISK.
7. Set the Number of Disks to 5.
8. Click Create to create the new aggregate, and to close the wizard.
Figure 4-12:
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
The newly created aggregate should now be visible in the list of aggregates.
9. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
10. Click the Details tab to view more detailed information about this aggregate's configuration.
11. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP
8, an aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 and later only supports 64-bit
aggregates. If you have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and
you plan to upgrade that cluster to 8.3+, you must convert those 32-bit aggregates to 64-bit aggregates
prior to the upgrade. The procedure for that migration is not covered in this lab, so if you need further
details then please refer to the clustered Data ONTAP documentation.
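For reference, the equivalent operation from the ONTAP CLI would look roughly like the sketch below. The aggregate and node names match this lab, but the options shown are an assumption rather than a transcript of the wizard's actions.

cluster1::> storage aggregate create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
cluster1::> storage aggregate create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5
cluster1::> storage aggregate show -fields node,state,size,block-type

The final show command is one way to confirm that the new aggregates are online and, as discussed above, 64-bit.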
Figure 4-13:
Now repeat the process to create a new aggregate on the node "cluster1-02".
12. Click the Create button again.
Figure 4-14:
Figure 4-15:
Figure 4-16:
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
17. The Disk Type should now show as VMDISK.
Figure 4-17:
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
20. The new aggregate, aggr1_cluster1_02 now appears in the cluster's aggregate list.
Figure 4-18:
4.1.4 Networks
This section discusses the network components that ONTAP provides to manage your cluster.
Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you
can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated
characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail
over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM,
and since LIFs are mapped to physical network ports on cluster nodes this means that an SVM runs, in part, on
all nodes that are hosting its LIFs.
Routing tables in ONTAP are defined for each Storage Virtual Machine. Since each SVM has its own routing
table, changes to one SVM's routing table do not have any impact on any other SVM's routing table.
IPspaces were introduced in ONTAP 8.3, and allow you to configure an ONTAP cluster to logically separate
one IP network from another, even if those two networks are using the same IP address range. IPspaces are a
multi-tenancy feature that allow storage service providers to share a cluster between different companies while
still separating storage traffic for privacy and security. Every cluster includes a default IPspace to which ONTAP
automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who
deploy a cluster within a single company or organization that uses a non-conflicting IP address range.
Broadcast Domains are collections of ports that all have access to the same layer 2 networks, both physical
and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast Domains, and ONTAP provides a default
broadcast domain to go along with the default IPspace. Broadcast domains are used by ONTAP to determine
which ports an SVM can use for its LIFs.
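If you prefer to explore these objects from the ONTAP CLI, a minimal sketch of the corresponding show commands is below. These are standard clustershell commands run against cluster1; output is omitted here.

cluster1::> network port show
cluster1::> network interface show
cluster1::> network ipspace show
cluster1::> network port broadcast-domain show
cluster1::> network subnet show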
Figure 4-19:
Review the Port Details section at the bottom of the Network pane and note that the e0c through e0g ports on
both cluster nodes are all part of this broadcast domain. These are the network ports that you will use in
this lab.
Now create a new Subnet for this lab.
4. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast
Domains and IPSpaces, ONTAP does not provide a default Subnet.
5. Click the Create button.
Figure 4-20:
Figure 4-21:
Figure 4-22:
The Select Broadcast Domain window closes, and focus returns to the Create Subnet window.
13. The values in your Create Subnet window should now match those shown in the following screen
shot, the only possible exception being for the IP Addresses field, whose value may differ depending on
what value range you chose to enter to match your plans for the lab.
14. If it is not already displayed, click on the Show ports on this domain link under the Broadcast Domain
textbox to see the list of ports that this broadcast domain includes.
15. Click Create.
Figure 4-23:
The Create Subnet window closes, and focus returns to the Subnets tab in System Manager.
16. Notice that the main pane of the Subnets tab now includes an entry for your newly created subnet,
and that the lower portion of the pane includes metrics tracking the consumption of the IP addresses
that belong to this subnet.
Figure 4-24:
Feel free to explore the contents of the other available tabs on the Network page. Here is a brief
summary of the information available on those tabs.
The Ethernet Ports tab displays the physical NICs on your controller, which will be a
superset of the NICs that you saw previously listed as belonging to the default broadcast
domain. The other NICs you will see listed on the Ethernet Ports tab include the node's
cluster network NICs.
The Network Interfaces tab displays a list of all of the LIFs on your cluster.
The FC/FCoE Adapters tab lists all the WWPNs for all of the controllers' NICs in the event they
will be used for iSCSI or FCoE connections. The simulated NetApp controllers you are using
in this lab do not include FC adapters, and this lab does not make use of FCoE.
In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume
over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.
Start by creating the storage virtual machine.
1. In System Manager, select the SVMs tab.
2. Click Create to launch the Storage Virtual Machine Setup wizard.
Figure 4-25:
Figure 4-26:
The Storage Virtual Machine (SVM) Setup wizard advances to the Configure CIFS/NFS protocol step.
8. Set the Assign IP Address dropdown to Using a subnet.
Figure 4-27:
Figure 4-28:
The Add Details window closes, and focus returns to the Storage Virtual Machine (SVM) Setup
window.
11. Click Browse next to the Port text box.
Figure 4-29:
Figure 4-30:
The Select Network Port or Adapter window closes, and focus returns to the protocols portion of the
Storage Virtual Machine (SVM) Setup wizard.
14. The Port text box should have been populated with the cluster and port value you just selected.
15. Set the CIFS Server Name: value to svm1.
16. Set the Active Directory: value to demo.netapp.com.
17. Set the Administrator Name: value to Administrator.
18. Set the Password: value to Netapp1!.
19. The optional Provision a volume for CIFS storage text boxes offer a quick way to provision a simple
volume and CIFS share at SVM creation time, with the caveat that this share will not be multi-protocol.
Since in most cases when you create a share it will be for an existing SVM, this lab guide does not create a
share here; instead, it demonstrates the more full-featured volume creation procedure in the following sections.
Figure 4-31:
Figure 4-32:
The SVM Administration step of the Storage Virtual Machine (SVM) Setup wizard opens. This window
allows you to set up an administrative account for this specific SVM so you can delegate administrative
tasks to an SVM-specific administrator without giving that administrator cluster-wide privileges. As the
comments in this wizard window indicate, this account must also exist for use with SnapDrive. Although
you will not be using SnapDrive in this lab, it is a good idea to create this account, and you will do so
here.
23. The User Name is pre-populated with the value vsadmin.
24. Set the Password and Confirm Password text boxes to netapp123.
25. When finished, click Submit & Continue.
Figure 4-33:
Figure 4-34:
The window closes, and focus returns to the System Manager window, which now displays a
summary page for your newly created svm1 SVM.
28. Notice that in the Details sub-pane of the window the CIFS protocol is listed with a green background.
This indicates that a CIFS server is running for this SVM.
29. Notice too, that the NFS protocol is listed with a green background, which indicates that there is a
running NFS server for this SVM.
Figure 4-35:
The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM.
NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the
SVM's shares through either node. To comply with that best practice you will now create a second LIF
hosted on the other node in the cluster.
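For reference, creating that second LIF from the CLI would look roughly like the sketch below. The LIF name and home port follow the lab's naming conventions, but the options, and the <subnet-name> placeholder, are assumptions rather than the exact steps the wizard performs.

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name <subnet-name>
cluster1::> network interface show -vserver svm1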
30. Select the Network tab on the menu bar at the top of System Manager.
31. Select the Network Interfaces tab under the Network pane.
32. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1 (you may
need to scroll down in the list of interfaces to see it). Follow this same naming convention for the new
LIF that you will be creating.
33. Click Create to launch the Network Interface Create Wizard.
Figure 4-36:
Figure 4-37:
Figure 4-38:
The Add Details window closes, and focus returns to the Create Network Interface window.
42. Expand the Port Selection list box, and select the entry for cluster1-02 port e0c.
Figure 4-39:
The Create Network Interface window closes, and focus returns to the Network pane in System
Manager.
44. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces
tab. Select this entry and review the LIF's properties in the lower pane.
Figure 4-40:
Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of svm1's configured NAS LIFs. To achieve this objective, the DNS server must
delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname,
which in this case will be svm1.demo.netapp.com. The lab's DNS server is already configured to
delegate this responsibility, but you must also configure the SVM to accept it. System Manager does
not currently include the capability to configure DNS delegation so you will need to use the CLI for this
purpose.
45. Open a PuTTY connection to cluster1 following the instructions in the Accessing the Command Line
section at the beginning of this guide. Log in using the username admin and the password Netapp1!,
then enter the following commands.
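The commands themselves are not reproduced in this copy of the guide. A typical way to configure this delegation, sketched here as an assumption rather than the guide's exact commands, is to assign the delegated DNS zone to each of the SVM's NAS LIFs and then confirm the setting:

cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com -listen-for-dns-query true
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone svm1.demo.netapp.com -listen-for-dns-query true
cluster1::> network interface show -vserver svm1 -fields dns-zone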
46. Validate that delegation is working correctly by opening PowerShell on the Jumphost and using the
nslookup command as shown in the following CLI output. If the nslookup command returns different IP
addresses on different lookup attempts then delegation is working correctly. If the nslookup command
returns a Non-existent domain error, then delegation is not working correctly, and you will need to
review the ONTAP commands you entered for any errors. Also notice in the following CLI output that
different executions of the nslookup command return different addresses, demonstrating that DNS load
balancing is working correctly.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server: dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.132
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server: dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.131
PS C:\Users\Administrator.DEMO
ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM in the previous
section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that clients
cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any volumes
on the SVM, but also because you have not told the SVM what you want to share, and who you want to share it
with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a directory
hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root volume
(svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS
and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root volume, or within other
volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified,
centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned
volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been
junctioned into the namespace.
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared
at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole
namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can
create different NFS export rules at junction boundaries for individual volumes, and for individual qtrees within a
junctioned volume.
ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model that dictates
the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports the root of its
namespace and automatically associates that export with the SVM's default export policy. But that default policy
is initially empty, and until it is populated with access rules no NFS clients will be able to access the namespace.
The SVM's default export policy applies to the root volume and also to any volumes that an administrator
junctions into the namespace, but an administrator can optionally create additional export policies in order to
implement different access rules within the namespace. You can apply export policies to a volume as a whole
and to individual qtrees within a volume, but a given volume or qtree can only have one associated export policy.
While you cannot create NFS exports at any other directory level in the namespace, NFS clients can mount from
any level in the namespace by leveraging the namespace's root export.
In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you
junction into its namespace will automatically pick up the same NFS export rules. You will also create a single
CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible
through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be
setting up name mapping between UNIX and Windows user accounts to facilitate smooth multi protocol access to
the volumes and files in the namespace.
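For readers who prefer the CLI, the same three configuration themes can be sketched roughly as follows. The values mirror the System Manager steps in this section, but treat the exact options as assumptions rather than the guide's prescribed commands.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern demo\\administrator -replacement root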
Figure 4-41:
3. Click the Volumes button to display a list of the volumes that belong to the SVM svm1.
4. Select the svm1_root volume if it is not already selected.
Figure 4-42:
The root volume hosts the namespace for the SVM. The root volume is not large; only 20 MB in this
example. Root volumes are small because they are only intended to house the junctions that organize
the SVM's volumes. All of the files hosted on the SVM should reside inside other volumes that are
junctioned into the namespace, rather than directly in the SVM's root volume.
Confirm that CIFS and NFS are running for the svm1 SVM.
5. Click the Overview button (which is next to the svm1 dropdown).
6. In the Protocol Status pane, observe the green check marks above the NFS and CIFS links. These
green check marks indicate that the NFS and CIFS servers for this SVM are running.
7. Click the CIFS link.
Figure 4-43:
The SVM Settings view opens, and displays the Configuration tab for the CIFS protocol.
8. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS
server for this SVM. If CIFS was not already running for this SVM, you could configure and start it using
the Setup button found under the Configuration tab.
Figure 4-44:
Figure 4-45:
At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server.
However, you have not yet configured those two servers to actually serve any data. The first step in that
process is to configure the SVM's default NFS export policy.
When you create an SVM that supports NFS, ONTAP automatically creates a default NFS export
policy for that SVM. That default export policy contains an empty list of access rules, and without any
access rules the policy will not allow clients to access any exports. If you create an access rule in the
default export policy now, then when you create and junction in new volumes later in this lab they
will automatically be accessible to NFS clients. If any of this seems a bit confusing, do not worry; the
concept should become clearer as you work through this section and the next one.
12. In the left pane of the SVM Settings tab, under the Policies section, select Export Policies.
13. In the Policy pane that now displays on the right, select the default policy.
14. Click the Add button in the bottom portion of the Export Policies pane.
Figure 4-46:
The Create Export Rule window opens. Using this dialog you can create any number of rules that
provide fine grained client access control, and specify their application order. For this lab, you are going
to create a single rule that grants unfettered access to any host on the lab's private network.
15. Set the Client Specification: value to 0.0.0.0/0, which is equivalent to all clients.
16. Set the Rule Index: number to 1.
17. In the Access Protocols: area, check the CIFS and NFS check boxes. The default values in the other
fields in the window are acceptable.
18. When you finish entering these values, click OK.
Figure 4-47:
The Create Export Policy window closes and focus returns to the Export Policies pane in System
Manager.
19. The new access rule you created now shows up in the bottom portion of the pane.
Figure 4-48:
With this updated default export policy in place, NFS clients are now able to mount the root of the svm1
SVM's namespace, and use that mount to access any volumes that you junction into the namespace.
Now create a CIFS share for the svm1 SVM. You are going to create a single share named nsroot at
the root of the SVM's namespace.
20. On the menu bar that contains the SVM selection drop down, click Shares.
21. In the Shares pane, select Create Share.
Figure 4-49:
Figure 4-50:
The Create Share window closes, and focus returns to Shares pane in System Manager. The new
nsroot share now shows up in the list of shares, but you are not finished yet.
Figure 4-51:
Figure 4-52:
There are other settings to check in this window, so do not close it yet.
28. Click the Options tab.
29. You do not want users to be able to store files inside your root volume, so ensure that the Enable as
read-only check box is checked. Other check boxes that should be checked by default include Enable
Oplocks, Browsable, and Notify Change. All other check boxes should be cleared.
30. If you had to change any of the settings listed on the previous screen then the Save and Close button
will become active, and you should click it. Otherwise, click the Cancel button.
Figure 4-53:
The Edit nsroot Settings window closes, and focus returns to the Shares pane in System Manager.
Setup of the \\svm1\nsroot CIFS share is now complete.
For this lab you have created just one share at the root of your namespace that allows users to access
any volume mounted in the namespace through that share. The advantage of this approach is that it
reduces the number of mapped drives that you have to manage on your clients; any changes you make
to the namespace, such as adding/removing volumes or changing junction locations, become instantly
visible to your clients. If you prefer to use multiple shares then clustered Data ONTAP allows you to
create additional shares rooted at any directory level within the namespace.
Figure 4-54:
Figure 4-55:
The window closes and focus returns to the Name Mapping pane in System Manager.
10. Click the Add button again to create another mapping rule.
Figure 4-56:
Figure 4-57:
The second Add Name Mapping window closes, and focus again returns to the Name Mapping pane
in System Manager.
Figure 4-58:
Volumes, or FlexVols, are the dynamically sized containers used by ONTAP to store data. A volume only resides
in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate, which
can be associated with multiple SVMs, a volume can only be associated with a single SVM. The maximum size of a
volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols
(varies based on controller model), which means that there is an effective limit on the total number of volumes
that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (e.g., aggr0_<nodename>) that contains the node's ONTAP
operating system.
Important: Do not use the node's root aggregate to host any other volumes or user data; always create
additional aggregates and volumes for that purpose.
ONTAP FlexVols support a number of storage efficiency features including thin provisioning, deduplication, and
compression. One specific storage efficiency feature you will see in this section of the lab is thin provisioning,
which dictates how space for a FlexVol is allocated in its containing aggregate.
When you create a FlexVol with a volume guarantee of type volume you are thickly provisioning the volume,
pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will
never run out of space unless the volume reaches 100% capacity. When you create a FlexVol with a volume
guarantee of none you are thinly provisioning the volume, only allocating space for it on the containing
aggregate at the time and in the quantity that the volume actually requires the space to store the data.
This latter configuration allows you to increase your overall space utilization, and even to oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed
volumes reached their full size. However, if an oversubscribed aggregate does fill up, then all of its volumes will
run out of space before they reach their maximum volume size, so oversubscribed deployments generally
require a greater degree of administrative vigilance around space utilization.
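A minimal CLI sketch of the difference between the two guarantees is shown below; the volume names are illustrative only and are not volumes this lab creates.

cluster1::> volume create -vserver svm1 -volume thick_demo -aggregate aggr1_cluster1_01 -size 10g -space-guarantee volume
cluster1::> volume create -vserver svm1 -volume thin_demo -aggregate aggr1_cluster1_01 -size 10g -space-guarantee none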
Figure 4-59:
Figure 4-60:
The Create Volume window closes, and focus returns to the Volumes pane in System Manager.
5. The newly created engineering volume now appears in the Volumes list. Notice that the volume is 10 GB
in size, and is thin provisioned.
Figure 4-61:
6. Click Namespace.
7. Notice that ONTAP automatically junctioned in the engineering volume under the root of the SVM's
namespace, and that this volume has inherited the default NFS Export Policy.
Figure 4-62:
Since you have already configured the access rules for the default policy, the volume is instantly
accessible to NFS clients. As you can see in the preceding screen shot, the engineering volume was
junctioned as /engineering, meaning that any client that had mapped a share to \\svm1\nsroot or NFS
mounted svm1:/ would now instantly see the engineering directory in the respective share and NFS
mount.
Now create a second volume.
8. Click the Volumes button again.
9. Click Create to launch the Create Volume wizard.
Figure 4-63:
Figure 4-64:
The Create Volume window closes, and focus returns again to the Volumes pane in System
Manager. The newly created eng_users volume should now appear in the Volumes list.
12. Select the eng_users volume in the volumes list, and examine the details for this volume in the
General box at the bottom of the pane. Specifically, note that this volume has a Junction Path value of
/eng_users.
Figure 4-65:
You do have more options for junctioning than just placing your volumes into the root of your
namespace. In the case of the eng_users volume, you will re-junction that volume underneath the
engineering volume, and shorten the junction name to take advantage of an already intuitive context.
13. Click Namespace.
14. In the Namespace pane, select the eng_users junction point.
15. Click Unmount.
Figure 4-66:
The Unmount Volume window opens asking for confirmation that you really want to unmount the
volume from the namespace.
16. Click Unmount.
Figure 4-67:
The Unmount Volume window closes, and focus returns to the NameSpace pane in System
Manager. The eng_users volume no longer appears in the junction list for the namespace, and since
it is no longer junctioned in the namespace, clients can no longer access it or even see it. Now you will
junction the volume in at another location in the namespace.
17. Click Mount.
Figure 4-68:
Figure 4-69:
Figure 4-70:
The Browse For Junction Path window closes, and focus returns to the Mount Volume window.
22. The fields in the Mount Volume window should now all contain values as follows:
Volume Name: eng_users.
Junction Name: users.
Junction Path: /engineering.
23. When ready, click Mount.
Figure 4-71:
The Mount Volume window closes, and focus returns to the Namespace pane in System Manager.
Figure 4-72:
You can also create a junction within user created directories. For example, from a CIFS or NFS client
you could create a folder named Projects inside the engineering volume, and then create a widgets
volume that junctions in under the projects folder. In that scenario, the namespace path to the widgets
volume contents would be /engineering/projects/widgets.
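From the CLI, re-junctioning a volume is simply an unmount followed by a mount at the new path. The following sketch uses the volumes from this exercise; the options are assumptions rather than the guide's exact commands.

cluster1::> volume unmount -vserver svm1 -volume eng_users
cluster1::> volume mount -vserver svm1 -volume eng_users -junction-path /engineering/users
cluster1::> volume show -vserver svm1 -fields junction-path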
Now you will create a couple of qtrees within the eng_users volume, one for each of the users bob
and susan.
25. Click Qtrees.
26. Click Create to launch the Create Qtree wizard.
Figure 4-73:
Figure 4-74:
Figure 4-75:
The Select a Volume window closes, and focus returns to the Create Qtree window.
31. The Volume field is now populated with eng_users.
32. Select the Quota tab.
Figure 4-76:
The Quota tab is where you define the space usage limits you want to apply to the qtree. You will not
actually be implementing any quota limits in this lab.
33. Click the Create button to finish creating the qtree.
Figure 4-77:
The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager.
34. The new bob qtree is now present in the qtrees list.
35. Now create a qtree for the user account susan by clicking the Create button.
Figure 4-78:
Figure 4-79:
The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager.
38. At this point you should see both the bob and susan qtrees in System Manager.
Figure 4-80:
The svm1 SVM is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows
host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access
the volume and its files.
This part of the lab demonstrates connecting the Windows client Jumphost to the CIFS share \\svm1\nsroot
using the Windows GUI.
1. On the Windows host Jumphost, open Windows Explorer by clicking on the folder icon on the task bar.
Figure 4-81:
Figure 4-82:
Figure 4-83:
Figure 4-84:
File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to
confirm that you can write to it.
8. Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
9. Right-click in the empty space in the right pane of File Explorer.
10. In the context menu, select New > Text Document, and name the resulting file cifs.txt.
Figure 4-85:
11. Double-click the cifs.txt file you just created to open it with Notepad.
Tip: If you do not see file extensions in your lab, you can enable that by going to the View
menu at the top of Windows Explorer and checking the File Name Extensions check box.
12. In Notepad, enter some text. Ensure that you put a carriage return at the end of the line, otherwise
when you later view the contents of this file on Linux the command shell prompt will appear on the
same line as the file contents.
13. Use the File > Save menu in Notepad to save the file's updated contents to the share. If write access
is working properly then the save operation will complete silently (i.e., you will not receive an error
message).
Figure 4-86:
Close Notepad and the File Explorer windows to finish this exercise.
This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.
1. Follow the instructions in the Accessing the Command Line section at the beginning of this lab guide to
open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.
2. Verify that there are no NFS volumes currently mounted on rhel1.
[root@rhel1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root 11877388 4962504 6311544 45% /
tmpfs 444612 76 444536 1% /dev/shm
/dev/sda1 495844 40084 430160 9% /boot
[root@rhel1 ~]#
3. Create the /svm1 directory to serve as a mount point for the NFS volume you will be shortly mounting.
[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#
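The directory creation and mount commands themselves are not shown in this copy of the guide. On a typical RHEL host, the sequence implied by steps 3 through 5 would look roughly like the following sketch (assumed, not captured lab output):

[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]# mount /svm1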
5. Verify the fstab file contains the new entry you just created.
[root@rhel1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root 11877388 4962508 6311540 45% /
tmpfs 444612 76 444536 1% /dev/shm
/dev/sda1 495844 40084 430160 9% /boot
svm1:/ 19456 128 19328 1% /svm1
[root@rhel1 ~]#
9. Notice that you can see the engineering volume that you previously junctioned into the SVM's
namespace.
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#
11. Display the contents of the cifs.txt file you created earlier.
Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output then that indicates that you forgot to include a newline at the end of the file when you
created the file on Windows.
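The command for step 11 is not shown in this copy of the guide; it amounts to something like the following sketch, with the path following from the junctions created earlier:

[root@rhel1 svm1]# cat engineering/cifs.txt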
ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains how to configure qtree
exports, and demonstrates how to set different export rules for a given qtree. For this exercise you will work with
the qtrees you created in the previous section.
Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still
exist in cluster mode, but their purpose is essentially now limited to just quota management, with most other 7-mode qtree capabilities no longer available.
Figure 4-87:
3. Click Qtrees.
4. Select the entry for the susan qtree.
5. Click the Change Export Policy button.
Figure 4-88:
Figure 4-89:
Figure 4-90:
Figure 4-91:
The Create Export Rule window closes, and focus returns to the Create Export Policy window.
12. The new access rule is now present in the rules window, and the rule's Access Protocols
entry indicates that there are no protocol restrictions. If you had selected all the available protocol
checkboxes when creating this rule, then each of those selected protocols would have been explicitly
listed here.
13. Click Create.
Figure 4-92:
The Create Export Policy window closes, and focus returns to the Export Policy window.
14. The Export Policy: textbox now displays rhel1-only.
15. Click Save.
Figure 4-93:
The Export Policy window closes, and focus returns to the Export Policies pane in System Manager.
Figure 4-94:
17. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan
is working as expected. If you still have an active PuTTY session open to the Linux host rhel1, then
bring that window up now, otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
18. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
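The validation commands are not reproduced in this copy of the guide. A sketch of the kind of checks steps 17 and 18 describe is shown below; the paths follow from the namespace built earlier, and the exact error text is an assumption.

[root@rhel1 ~]# ls /svm1/engineering/users/susan
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# ls /svm1/engineering/users/bob
[root@rhel2 ~]# ls /svm1/engineering/users/susan
ls: cannot open directory /svm1/engineering/users/susan: Permission denied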
In this section you will create a new SVM named svmluns on the cluster. You will create the SVM, configure it
for iSCSI, and create four data LIFs to support LUN access to the SVM (two on each cluster node).
Return to the System Manager window, and start the procedure to create a new storage virtual machine.
1. On the command bar in System Manager, click SVMs.
Figure 4-95:
Figure 4-96:
Figure 4-97:
Figure 4-98:
The Add Details window closes, and focus returns to the Configure iSCSI Protocol step in the
Storage Virtual Machine (SVM) Setup window.
9. The Provision a LUN for iSCSI Storage (Optional) section shows how to quickly create a LUN when
first creating an SVM. This lab guide does not use that method, but instead shows you the much more
common activity of adding a new volume and LUN to an existing SVM in a later step.
Figure 4-99:
Once you check the Review or modify LIF configuration check box, the Configure iSCSI Protocol
window changes to include a list of the LIFs that the wizard plans to create.
11. Take note of the LIF interface names and home ports that the wizard has chosen to create.
12. Since this lab utilizes a cluster that only has two nodes, and those nodes are configured as an HA pair,
there is no need to create a portset as ONTAP's automatically configured Selective LUN Mapping is
more than sufficient for this lab. In other words, leave Number of portsets at 0.
13. Click Submit & Continue.
Figure 4-100:
The wizard advances to the SVM Administration step. Unlike data LIFs for NAS protocols, which
automatically support both data and management functionality, iSCSI LIFs only support data protocols
and so you must create a dedicated management LIF for this new SVM.
14. Set the fields in the window as follows:
Password: netapp123
Confirm Password: netapp123
15. Set the Assign IP Address dropdown to Using a subnet.
Figure 4-101:
Figure 4-102:
The Add Details window closes, and focus returns to the SVM Administration step of the Storage
Virtual Machine (SVM) Setup wizard.
Figure 4-103:
Figure 4-104:
The Select Network Port or Adapter window closes, and focus returns to the SVM Administration
step of the Storage Virtual Machine (SVM) Setup wizard.
20. Click Submit & Continue.
Figure 4-105:
The wizard advances to the New Storage Virtual Machine (SVM) Summary step. Review the contents
of this window, taking note of the names, IP addresses, and port assignments for the 4 iSCSI LIFs, and
the management LIF that the wizard created for you.
21. Click OK to close the window.
Figure 4-106:
The New Storage Virtual Machine (SVM) Summary window closes, and focus returns to System
Manager, which now displays a summary view for the new svmluns SVM.
22. Observe that the Protocols listing under the Details pane lists iSCSI with a green background, indicating
that iSCSI is running.
Figure 4-107:
In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Windows:
Gather the iSCSI Initiator Name of the Windows client.
Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume,
and map the LUN so it can be accessed by the Windows client.
Mount the LUN on a Windows client leveraging multi-pathing.
You must complete all of the subsections of this section in order to use the LUN from the Windows client.
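For context, the storage-side portion of this workflow has CLI equivalents, sketched below. The names follow the lab's conventions where they are known (the igroup winigrp and the LUN windows.lun appear later in this section), but the volume name, LUN size, and the <jumphost-iqn> placeholder are assumptions rather than the wizard's actual output.

cluster1::> lun igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows -initiator <jumphost-iqn>
cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01 -size 10g -space-guarantee none
cluster1::> lun create -vserver svmluns -path /vol/winluns/windows.lun -size 10g -ostype windows_2008 -space-reserve disabled
cluster1::> lun map -vserver svmluns -path /vol/winluns/windows.lun -igroup winigrp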
Figure 4-108:
Figure 4-109:
Figure 4-110:
Figure 4-111:
The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open because you will need to access other tools later in the lab.
Figure 4-112:
Figure 4-113:
Figure 4-114:
Figure 4-115:
Figure 4-116:
Figure 4-117:
Figure 4-118:
Figure 4-119:
Figure 4-120:
The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the
Create LUN wizard.
16. Click the checkbox under the map column next to the winigrp initiator group.
Caution: This is a critical step because this is where you actually map the new LUN to the new
igroup.
17. Click Next to continue.
Figure 4-121:
The wizard advances to the Storage Quality of Service Properties step. You will not be creating any
QoS policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for
Advanced Concepts for NetApp ONTAP.
18. Click Next to continue.
Figure 4-122:
The wizard advances to the LUN Summary step, where you can review your selections before
proceeding with creating the LUN.
19. If everything looks correct, click Next.
Figure 4-123:
The wizard begins the task of creating the volume that contains the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step, the wizard displays a green check mark in
the window next to that step.
20. Click the Finish button to terminate the wizard.
Figure 4-124:
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager.
21. The new LUN windows.lun now shows up in the LUNs view, and if you select it you can review its
details in the bottom pane.
Figure 4-125:
ONTAP 8.2 introduced a space reclamation feature that allows ONTAP to reclaim space from a thin
provisioned LUN when the client deletes data from it, and also allows ONTAP to notify the client when
the LUN cannot accept writes due to lack of space on the volume. This feature is supported by VMware
ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. Jumphost is
running Windows 2012R2 and so you will enable the space reclamation feature for your Windows LUN.
You can only enable space reclamation through the ONTAP command line.
22. In the cluster1 CLI, view whether space reclamation is enabled for the LUN.
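The specific command is not reproduced here. A sketch of how you might view, and if desired enable, space reclamation from the clustershell follows; the LUN path is an assumption based on the names used in this section, and changing the setting may require the LUN to be taken offline first, depending on the ONTAP release.

cluster1::> lun show -vserver svmluns -path /vol/winluns/windows.lun -fields space-allocation
cluster1::> lun modify -vserver svmluns -path /vol/winluns/windows.lun -space-allocation enabled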
Figure 4-126:
Figure 4-127:
The MPIO Properties window closes and focus returns to the Administrative Tools window for
Jumphost. Now you need to begin the process of connecting Jumphost to the LUN.
5. In Administrative Tools, double-click the iSCSI Initiator tool.
Figure 4-128:
Figure 4-129:
The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define
a target portal to scan. You are going to manually add a target portal to Jumphost.
9. Click the Discover Portal button.
Figure 4-130:
The Discover Target Portal window opens. Here you will specify the first of the IP addresses that the
ONTAP Create LUN wizard assigned your iSCSI LIFs when you created the svmluns SVM. Recall that
the wizard assigned your LIFs IP addresses in the range 192.168.0.133-192.168.0.136.
10. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs.
11. Click OK.
Figure 4-131:
The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties
window.
12. The Target Portals list now contains an entry for the IP address you entered in the previous step.
13. Click on the Targets tab.
Figure 4-132:
The Targets tab opens to show you the list of discovered targets.
14. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive,
because although you have discovered it you have not yet connected to it. Also note that the Name of
the discovered target in your lab will have a different value than what you see in this guide; that name
string is uniquely generated for each instance of the lab.
Note: Make a mental note of that string value as you will see it a lot as you continue to
configure iSCSI in later steps of this procedure.
15. Click the Connect button.
Figure 4-133:
Figure 4-134:
Figure 4-135:
The Advanced Setting window closes, and focus returns to the Connect to Target window.
20. Click OK.
Figure 4-136:
The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
21. Notice that the status of the listed discovered target has changed from Inactive to Connected.
Figure 4-137:
Up to this point you have added a single path to your iSCSI LUN, using the address for the
cluster1-01_iscsi_lif_1 LIF the Create LUN wizard created on the node cluster1-01 for the svmluns
SVM. Now you are going to add each of the other SAN LIFs present on the svmluns SVM. To begin this
procedure you must first edit the properties of your existing connection.
22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.
Figure 4-138:
The Properties window opens. From this window you will start to connect alternate paths for your newly
connected LUN. You will repeat this procedure 3 times, once for each of the remaining LIFs that are
present on the svmluns SVM.
Figure 4-139:
Figure 4-140:
Figure 4-141:
The Advanced Settings window closes, and focus returns to the Connect to Target window.
30. Click OK.
Figure 4-142:
The Connect to Target window closes, and focus returns to the Properties window where there are
now 2 entries shown in the identifier list.
Repeat steps 24 - 30 for each of the last two remaining LIF IP addresses. When you have finished
adding all the additional paths the Identifiers list in the Properties window should contain 4 entries.
31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions,
one for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
32. Click OK.
Figure 4-143:
The Properties window closes, and focus returns to the iSCSI Properties window.
33. Click OK.
Figure 4-144:
The iSCSI Properties window closes, and focus returns to the desktop of Jumphost. If the
Administrative Tools window is not still open on your desktop, open it again now.
If all went well, the Jumphost is now connected to the LUN using multi-pathing, so it is time to format
your LUN and build a filesystem on it.
34. In Administrative Tools, double-click the Computer Management tool.
Figure 4-145:
Figure 4-146:
36. When you launch Disk Management, an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it.
Note: If you see more than one disk listed, then MPIO has not correctly recognized that the
multiple paths you set up are all for the same LUN. If this occurs, you need to cancel the
Initialize Disk dialog, quit Computer Management, and go back to the iSCSI Initiator tool to review
your iSCSI session configuration.
Figure 4-147:
The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
37. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.
38. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu), and select New Simple Volume from the context menu.
Figure 4-148:
Figure 4-149:
Figure 4-150:
Figure 4-151:
Figure 4-152:
The wizard advances to the Completing the New Simple Volume Wizard step.
45. Click Finish.
Figure 4-153:
The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of
the Computer Management window.
46. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window,
indicating that the new LUN is mounted and ready to use.
47. Before you complete this section of the lab, take a look at the MPIO configuration for this LUN by right-
clicking inside the box for the WINLUN volume. From the context menu select Properties.
Figure 4-154:
Figure 4-155:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.
51. Click the MPIO tab.
52. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM.
We recommend using the Data ONTAP DSM software, as it is the most full-featured option available,
although the Microsoft DSM is also supported.
53. The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available, but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the More information about MPIO policies link at the bottom
of the dialog window for details about all the available policies.
54. The top two paths show both a Path State and TPG State as Active/Optimized. These paths are
connected to the node cluster1-01, and the Least Queue Depth policy makes active use of both paths
to this node. Conversely, the bottom two paths show a Path State of Unavailable, and a TPG State
of Active/Unoptimized. These paths are connected to the node cluster1-02, and only enter a Path
State of Active/Optimized if the node cluster1-01 becomes unavailable, or if the volume hosting the
LUN migrates over to the node cluster1-02.
55. When you finish reviewing the information in this dialog, click OK to exit. If you changed any of the
values in this dialog you should consider using the Cancel button to discard those changes.
Figure 4-156:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
56. Click OK.
Figure 4-157:
Figure 4-158:
You may see a pop-up message from Microsoft Windows stating that you must format the disk in drive
E: before you can use it. (This window might be obscured by one of the other windows on the desktop,
but do not close the Administrative tools window as you will be using it again shortly.) As you may
recall, you did format the LUN during the New Simple Volume Wizard, meaning this is an erroneous
disk format message.
58. Click Cancel to ignore the format request.
Figure 4-159:
Finally, verify that Windows has detected that the new LUN supports space reclamation. Remember
that only Windows 2012 and newer OSs support this feature, and you must have NetApp Windows
Unified Host Utilities v6.0.2 or later installed. Jumphost meets these criteria.
59. In the Administrative Tools window, double-click Defragment and Optimize drives.
Figure 4-160:
Figure 4-161:
Figure 4-162:
Feel free to open Windows Explorer on Jumphost, and verify that you can create a file on the E: drive.
This completes this exercise.
In an earlier section you created a new SVM, and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Linux:
Gather the iSCSI Initiator Name of the Linux client.
Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within
that volume, and map the LUN to the Linux client.
Mount the LUN on the Linux client.
You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you
are not required to complete the Windows LUN section before starting this section of the lab guide, but the screen
shots and command line output shown here assume that you have. If you did not complete the Windows LUN
section, the differences will not affect your ability to create and mount the Linux LUN.
Figure 4-163:
Figure 4-164:
Figure 4-165:
Figure 4-166:
Figure 4-167:
Figure 4-168:
Figure 4-169:
15. When you finish entering the value, click the Create button.
Figure 4-170:
Figure 4-171:
The Initiator-Group Summary window closes, and focus returns to the Initiators Mapping step of the
Create LUN wizard.
17. Click the checkbox under the Map column next to the linigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
18. Click Next to continue.
Figure 4-172:
The wizard advances to the Storage Quality of Service Properties step. You will not create any QoS
policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for
Advanced Concepts for NetApp ONTAP lab.
19. Click Next to continue.
Figure 4-173:
The wizard advances to the LUN Summary step, where you can review your selections before
proceeding to create the LUN.
20. If everything looks correct, click Next.
Figure 4-174:
The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green check mark in
the window next to that step.
21. Click Finish to terminate the wizard.
Figure 4-175:
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager.
22. The new LUN linux.lun now shows up in the LUNs view, and if you select it you can review its details
in the bottom pane.
Figure 4-176:
The new Linux LUN now exists, and is mapped to your rhel1 client.
ONTAP 8.2 introduced a space reclamation feature that allows ONTAP to reclaim space from a thin
provisioned LUN when the client deletes data from it, and also allows ONTAP to notify the client when
the LUN cannot accept writes due to lack of space on the volume. This feature is supported by VMware
ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The RHEL
clients used in this lab are running version 6.7 and so you will enable the space reclamation feature for
your Linux LUN. You can only enable space reclamation through the ONTAP command line.
23. In the cluster1 CLI, view whether space reclamation is enabled for the LUN.
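The command and its output for this step are not reproduced here. A representative way to check the
setting from the cluster shell, sketched under the assumption that the LUN path follows the
/vol/<volume>/<lun> convention, is:
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
If the space-allocation field reports disabled, it can be enabled with lun modify -space-allocation
enabled on the same path (some ONTAP releases require the LUN to be offline before this setting can
be changed).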
4. You will find that the Red Hat Linux hosts in the lab have pre-installed the DM-Multipath packages and
a /etc/multipath.conf file pre-configured to support multi-pathing so that the RHEL host can access the
LUN using all of the SAN LIFs you created for the svmluns SVM.
5. You now need to start the iSCSI software service on rhel1, and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on the host.
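On the RHEL 6.x clients in this lab those actions map to the service and chkconfig commands. A sketch,
assuming the stock iscsi-initiator-utils packages:
[root@rhel1 ~]# service iscsid force-start
[root@rhel1 ~]# chkconfig iscsid on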
6. Next discover the available targets using the iscsiadm command. Note that the exact values used
for the node paths may differ in your lab from what is shown in this example, and that after running
this command there will still not yet be active iSCSI sessions because you have not yet created the
necessary device files.
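A representative discovery command, using the first SAN LIF address (192.168.0.133) as the target
portal (any of the four LIF addresses should work):
[root@rhel1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.0.133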
7. Create the devices necessary to support the discovered nodes, after which the sessions become active.
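Logging in to all of the discovered nodes creates those device files and brings the sessions up. A sketch:
[root@rhel1 ~]# iscsiadm -m node -l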
8. At this point the Linux client sees the LUN over all four paths, but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]#
9. Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd
service to handle the multiple path management and configure it to start automatically at boot time.
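A sketch of those two actions on RHEL 6.x:
[root@rhel1 ~]# service multipathd start
[root@rhel1 ~]# chkconfig multipathd on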
10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll command
displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that
you use to access the multipathed LUN (in order to create a filesystem on it and to mount it). The
first line of output from the multipath -ll command lists the name of that device file (in this example
3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ
in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command which
shows information about the ONTAP path of the LUN, the LUN's size, its device file name under /dev/
mapper, the multipath policy, and also information about the various device paths themselves.
You can see even more detail about the configuration of multipath and the LUN as a whole by issuing
the multipath -v3 -d -ll or iscsiadm -m session -P 3 commands. Because the output of these
commands is rather lengthy, it is omitted here, but you are welcome to run these commands in your lab.
11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use
the LUN on the Linux host is to create a filesystem and mount it. When you run the following commands
in your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that
string from the output of ls -l /dev/mapper).
Note: You can use bash tab completion when entering the multipath file name to save
yourself some tedious typing.
The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
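The following sketch reuses the example device name from the text (yours will differ); the ext4
filesystem type and the /linuxlun mount point are assumptions made here for illustration only:
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun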
12. To have RHEL automatically mount the LUN's filesystem at boot time, run the following command
(modified to reflect the multipath device path being used in your instance of the lab) to add the mount
information to the /etc/fstab file. Enter the following command as a single line.
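For example (entered as one line; the device name matches the earlier illustration, and the mount point
and the _netdev option are assumptions, so substitute the values used in your lab):
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 defaults,discard,_netdev 0 0' >> /etc/fstab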
8.3 Prerequisites
This lab introduces NetApp ONTAP, and makes no assumptions that the user has previous experience with
ONTAP. The lab does assume some basic familiarity with storage system related concepts such as RAID, CIFS,
NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that
the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from
the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working
knowledge of a text editor such as vi may be useful, but is not required.
Figure 8-1:
Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the ONTAP cluster named cluster1.
2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screen shot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You can
find the correct username and password for the host in the Lab Host Credentials table found in the Lab
Environment section of this guide.
Figure 8-2:
Figure 9-1:
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration steps
outlined in this lab guide, you are free to deviate from this guide and experiment with other ONTAP features that
interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the same functionality
as physical storage controllers, they are not capable of providing the same performance as a physical controller,
which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP address.
Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.
Hostname        Description
JUMPHOST        Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit
                v7.0.0, NetApp PowerShell Toolkit v4.2.0
RHEL1, RHEL2    Linux Unified Host Utilities Kit v7.0
When using tab completion, if the ONTAP command interpreter is unable to identify a unique expansion it will
display a list of potential matches similar to what using the ? character does.
cluster1::> cluster s
Error: Ambiguous command. Possible matches include:
cluster show
cluster statistics
cluster1::>
ONTAP commands are structured hierarchically. When you log in you are placed at the root of that command
hierarchy, but you can step into a lower branch of the hierarchy by entering one of the base commands. For
example, when you first log in to the cluster, enter the ? command to see the list of available base commands, as
follows:
cluster1::> ?
up Go up one directory
cluster> Manage clusters
dashboard> (DEPRECATED)-Display dashboards
event> Manage system events
exit Quit the CLI session
export-policy Manage export policies and rules
history Show the history of commands for this CLI session
job> Manage jobs and job schedules
lun> Manage LUNs
man Display the on-line manual pages
metrocluster> Manage MetroCluster
network> Manage physical and virtual network connections
qos> QoS settings
redo Execute a previous command
rows Show/Set the rows for this CLI session
run Run interactive or non-interactive commands in
the nodeshell
security> The security directory
set Display/Set CLI session settings
snapmirror> Manage SnapMirror
statistics> Display operational statistics
storage> Manage physical storage, including disks,
aggregates, and failover
system> The system directory
top Go to the top-level directory
volume> Manage virtual storage, including volumes,
snapshots, and mirrors
vserver> Manage Vservers
cluster1::>
cluster1::> vserver
cluster1::vserver> ?
active-directory> Manage Active Directory
add-aggregates Add aggregates to the Vserver
add-protocols Add protocols to the Vserver
audit> Manage auditing of protocol requests that the
Vserver services
check> The check directory
cifs> Manage the CIFS configuration of a Vserver
context Set Vserver context
create Create a Vserver
dashboard> The dashboard directory
data-policy> Manage data policy
delete Delete a Vserver
export-policy> Manage export policies and rules
fcp> Manage the FCP service on a Vserver
fpolicy> Manage FPolicy
group-mapping> The group-mapping directory
iscsi> Manage the iSCSI services on a Vserver
locks> Manage Client Locks
modify Modify a Vserver
name-mapping> The name-mapping directory
nfs> Manage the NFS configuration of a Vserver
peer> Create and manage Vserver peer relationships
remove-aggregates Remove aggregates from the Vserver
remove-protocols Remove protocols from the Vserver
rename Rename a Vserver
security> Manage ontap security
services> The services directory
show Display Vservers
show-protocols Show protocols for Vserver
smtape> The smtape directory
start Start a Vserver
stop Stop a Vserver
vscan> Manage Vscan
cluster1::vserver>
Notice how the prompt changes to reflect that you are now in the vserver sub-hierarchy, and that some of the
subcommands have sub-hierarchies of their own. To return to the root of the hierarchy issue the top command;
you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>
The ONTAP command interpreter supports command history. By repeatedly hitting the up arrow key you can step
through the series of commands you ran earlier, and you can selectively execute a given command again when
you find it by hitting the Enter key. You can also use the left and right arrow keys to edit the command before you
run it again.
11.1 Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving
data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its
work across the member nodes. Communication and data transfer between member nodes (such as when a
client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-
interconnect network to which all the nodes are connected, while management and client data traffic passes over
separate management and data networks configured on the member nodes.
Clusters typically consist of one, or more, NetApp storage controller High Availability (HA) pairs. Both controllers
in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities
in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having
multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-
disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means
that cluster expansion and technology refreshes can take place while the cluster remains fully online, and serving
data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even
number of controller nodes. There is one exception to this rule, the single node cluster, which is a special
cluster configuration that supports small storage deployments using a single physical controller head. The primary
difference between single node and standard clusters, besides the number of nodes, is that a single node cluster
does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, at
which point they become subject to all the standard cluster requirements like the need to utilize an even number
of nodes consisting of HA pairs. This lab does not contain a single node cluster, so does not discuss them further.
ONTAP 9 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit
can be lower depending on the model of FAS controller in use. ONTAP 9 clusters that also host iSCSI and FC
can scale up to a maximum of 8 nodes, but once again the limit may be lower depending on the FAS controller
model.
Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage
in ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e., cabling) to a given
controller head.
ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of
disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's
local disks that host the node's ONTAP operating system. A node's root aggregate is automatically created
during ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks
(1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of
the node cluster1-01 is named aggr0_cluster1_01, and the root aggregate of the node cluster1-02 is named
aggr0_cluster1_02.
On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's root
aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root aggregate disk
overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity,
NetApp introduced Advanced Drive Partitioning in ONTAP 8.3, which divides the Hard Disk Drives (HDDs) on
nodes that have this feature enabled into two partitions; a small root partition, and a much larger data partition.
ONTAP allocates the root partitions to the node root aggregate, and the data partitions for data aggregates. Each
partition behaves like a virtual disk, so in terms of RAID, ONTAP treats these partitions just like physical disks
when creating aggregates. The key benefit is that a much higher percentage of the node's overall disk capacity is
now available to host user data.
ONTAP only supports HDD partitioning on FAS22xx and FAS25xx systems, and only for drives installed in the internal shelf
on those models. Advanced Drive Partitioning can only be enabled at system installation time. To convert an
existing system to use Advanced Drive Partitioning you must completely evacuate the affected drives and re-
install ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The
capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces
SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab.
In this section, you use the CLI to determine if a cluster node is utilizing Advanced Drive Partitioning.
If you do not already have a PuTTY session established to cluster1, launch PuTTY as described in the Accessing
the Command Line section at the beginning of this guide, and connect to the host cluster1 using the username
admin and the password Netapp1!.
1. List all of the physical disks attached to the cluster:
cluster1::>
The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster.
The container type for all the disks is shared, which indicates that the disks are partitioned. For disks
that are not partitioned, you would typically see values like spare, data, parity, and dparity. The
Owner field indicates which node the disk is assigned to, and the Container Name field indicates which
aggregate the disk is assigned to. Notice that two disks for each node do not have a Container Name
listed; these are spare disks that ONTAP can use as replacements in the event of a disk failure.
2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the
aggregates that exist on the cluster:
cluster1::>
3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command
that you would ordinarily use to display that information for an aggregate that is not using partitioned
disks.
Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
One or more aggregates queried for use shared disks. Use "storage
aggregate show-status" to get correct set of disks associated with these
aggregates.
cluster1::>
4. As you can see, in this instance the preceding command is not able to produce a list of disks because
this aggregate is using shared disks. Instead it refers you to the storage aggregate show-status command
to query the aggregate for a list of its assigned disk partitions.
cluster1::>
The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 14.24 GB,
and you know that the aggregate is using the listed disk's root partitions because aggr0_cluster1_01 is a
root aggregate.
For a FAS controller that uses Advanced Drive Partitioning, ONTAP automatically determines the size
of the root and data disk partitions at system installation time based on the quantity and size of the
available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and
the spare disks listed here reflect the available capacity of the data partitions, which as you can see each
have approximately 14 GB of available space. (You may have noticed that this is less than 50% of each
disk's 32 GB physical capacity. This is due to the relatively small size of the simulator disks used in this
lab. When using disks that are hundreds of GB or larger, then the root partition will consume a much
smaller percentage of each disk's total capacity.)
5. The ONTAP CLI includes a diagnostic level command that provides a more comprehensive single view
of a system's partitioned disks. The following command shows the partitioned disks that belong to the
node cluster1-01.
The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate
should not be used to host user data, so in this section you will create a new aggregate on each of the nodes in
cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will create later in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will host. When you create a Storage Virtual Machine (SVM) you assign it to use one or
more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use the same aggregate,
which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a single SVM
provides greater workload isolation.
For this lab, you will be creating a single user data aggregate on each node in the cluster.
1. Display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option
to display a list of the disks in the entire cluster.)
Note: By default the PuTTY window may wrap output lines because the window is too small;
if this is the case for you then simply expand the window by selecting its edge and dragging it
wider, after which any subsequent output will utilize the visible width of the window.
Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
VMw-1.25 28.44GB - 0 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.26 28.44GB - 1 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.27 28.44GB - 2 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.28 28.44GB - 3 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.29 28.44GB - 4 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.30 28.44GB - 5 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.31 28.44GB - 6 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.32 28.44GB - 8 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.33 28.44GB - 9 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.34 28.44GB - 10 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.35 28.44GB - 11 VMDISK shared - cluster1-01
VMw-1.36 28.44GB - 12 VMDISK shared - cluster1-01
VMw-1.37 28.44GB - 0 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.38 28.44GB - 1 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.39 28.44GB - 2 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.40 28.44GB - 3 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.41 28.44GB - 4 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.42 28.44GB - 5 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.43 28.44GB - 6 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.44 28.44GB - 8 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.45 28.44GB - 9 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.46 28.44GB - 10 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.47 28.44GB - 11 VMDISK shared - cluster1-02
VMw-1.48 28.44GB - 12 VMDISK shared - cluster1-02
24 entries were displayed.
cluster1::>
cluster1::>
First Plex
cluster1::>
First Plex
cluster1::>
cluster1::>
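The aggregate creation commands and their output are not reproduced above. A representative sketch of
creating one data aggregate per node is shown below; the disk counts are placeholders, and the name of
the second aggregate is an assumption based on the naming pattern used for the first:
cluster1::> storage aggregate create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
cluster1::> storage aggregate create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5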
11.1.3 Networks
This section discusses the network components that ONTAP provides to manage your cluster.
cluster1::>
2. Display a list of the cluster's broadcast domains. Remember that broadcast domains are scoped to
a single IPspace. The e0a ports on the cluster nodes are part of the Cluster broadcast domain in the
Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::>
cluster1::>
4. ONTAP does not include a default subnet, so you will need to create a subnet now. The specific
command you use depends on which sections of this lab guide you plan to complete, because you need
to correctly align the IP address pool in your lab with the IP addresses used in those sections.
If you plan to complete the NAS portion of this lab, enter the following command. Use this
command as well if you plan to complete both the NAS and SAN portions of this lab.
If you only plan to complete the SAN portion of this lab, then enter the following command
instead.
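As a rough illustration of the NAS plus SAN case, the command takes a form like the following; the
subnet, gateway, and address range shown here are placeholders, so substitute the values listed for
your lab:
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139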
5. Re-display the list of the cluster's subnets. This example assumes you plan to complete the whole lab.
cluster1::>
6. If you want to see a list of all of the network ports on your cluster, use the following command.
Node: cluster1-01
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0a Cluster Cluster up 1500 auto/1000 healthy
e0b Cluster Cluster up 1500 auto/1000 healthy
e0c Default Default up 1500 auto/1000 healthy
e0d Default Default up 1500 auto/1000 healthy
e0e Default Default up 1500 auto/1000 healthy
e0f Default Default up 1500 auto/1000 healthy
e0g Default Default up 1500 auto/1000 healthy
Node: cluster1-02
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0a Cluster Cluster up 1500 auto/1000 healthy
cluster1::>
In this section you create a new SVM named svm1 on the cluster, and configure it to serve out a volume over
NFS and CIFS. You will configure two NAS data LIFs on the SVM, one per node in the cluster.
Start by creating the storage virtual machine.
If you do not already have a PuTTY connection open to cluster1 then open one now following the directions in
the Accessing the Command Line section at the beginning of this lab guide. The username is admin and the
password is Netapp1!.
1. Create the SVM named svm1. Notice that the ONTAP command line syntax refers to storage virtual
machines as vservers.
cluster1::>
Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::>
3. Remove the FCP, iSCSI, and NDMP protocols from the SVM svm1, leaving only CIFS and NFS.
Vserver: svm1
Protocols: nfs, cifs
cluster1::>
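The output above reflects a protocol removal command along these lines (a sketch, since the step itself
is not reproduced in this guide):
cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp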
cluster1::>
cluster1::>
7. Notice that there are not any LIFs defined for the SVM svm1 yet. Create the svm1_cifs_nfs_lif1 data
LIF for svm1.
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>
cluster1::>
11. Configure the DNS domain and nameservers for the svm1 SVM.
Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done
this as part of the network interface create commands, but we opted to perform it separately here so
you could see how to modify an existing LIF.
13. Configure lif1 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.
14. Configure lif2 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.
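Steps 13 and 14 each modify one LIF. A representative form of the command, assuming the delegation is
configured through the LIF's -dns-zone parameter, is:
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1
-dns-zone svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2
-dns-zone svm1.demo.netapp.com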
16. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username root and password Netapp1!) and executing the following commands. If the delegation is
working correctly you should see IP addresses returned for the host svm1.demo.netapp.com, and if you
run the command several times you will eventually see that the responses vary the returned address
between the SVM's two LIFs.
17. This completes the planned LIF configuration changes for svm1, so now display a detailed configuration
report for the LIF svm1_cifs_nfs_lif1.
cluster1::>
When you issued the vserver create command to create svm1 you included an option to enable CIFS,
but that command did not actually create a CIFS server for the SVM. Now it is time to create that CIFS
server.
18. Display the status of the cluster's CIFS servers.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.
cluster1::>
As with CIFS, when you created svm1 you included an option to enable NFS, but that command did not
actually create the NFS server. Now it is time to create that NFS server.
21. Display the status of the NFS server for svm1.
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::>
23. Display the status of the NFS server for svm1 again.
ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM in the previous
section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that clients
cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any volumes
on the SVM, but also because you have not told the SVM what you want to share, and who you want to share it
with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a directory
hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root volume
(svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS
and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root volume, or within other
volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified,
centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned
volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been
junctioned into the namespace.
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared
at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole
namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can
create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a
junctioned volume.
ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model that dictates
the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports the root of its
namespace and automatically associates that export with the SVM's default export policy. But that default policy
is initially empty, and until it is populated with access rules no NFS clients will be able to access the namespace.
The SVM's default export policy applies to the root volume, and also to any volumes that an administrator
junctions into the namespace, but an administrator can optionally create additional export policies in order to
implement different access rules within the namespace. You can apply export policies to a volume as a whole
and to individual qtrees within a volume, but a given volume or qtree can only have one associated export policy.
While you cannot create NFS exports at any other directory level in the namespace, NFS clients can mount from
any level in the namespace by leveraging the namespace's root export.
In this section of the lab, you configure a default export policy for your SVM so that any volumes you junction into
its namespace will automatically pick up the same NFS export rules. You will also create a single CIFS share
at the top of the namespace so that all the volumes you junction into that namespace are accessible through
that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will set up name
mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to the volumes and
files in the namespace.
When you create an SVM, ONTAP automatically creates a root volume to hold that SVM's namespace. An SVM
always has a root volume, whether or not it is configured to support NAS protocols.
1. Verify that CIFS is running by default for the SVM svm1.
Vserver: svm1
General NFS Access: true
NFS v3: enabled
NFS v4.0: disabled
UDP Protocol: enabled
TCP Protocol: enabled
Default Windows User: -
NFSv4.0 ACL Support: disabled
NFSv4.0 Read Delegation Support: disabled
NFSv4.0 Write Delegation Support: disabled
NFSv4 ID Mapping Domain: defaultv4iddomain.com
NFSv4 Grace Timeout Value (in secs): 45
Preserves and Modifies NFSv4 ACL (and NTFS File Permissions in Unified Security Style):
enabled
NFSv4.1 Minor Version Support: disabled
Rquota Enable: disabled
NFSv4.1 Parallel NFS Support: enabled
NFSv4.1 ACL Support: disabled
NFS vStorage Support: disabled
NFSv4 Support for Numeric Owner IDs: enabled
Default Windows Group: -
NFSv4.1 Read Delegation Support: disabled
NFSv4.1 Write Delegation Support: disabled
NFS Mount Root Only: enabled
NFS Root Only: disabled
Permitted Kerberos Encryption Types: des, des3, aes-128, aes-256
Showmount Enabled: disabled
Set the Protocol Used for Name Services Lookups for Exports: udp
NFSv3 MS-DOS Client Support: disabled
cluster1::>
cluster1::>
Vserver: svm1
Policy Name: default
Rule Index: 1
Access Protocol: cifs, nfs
List of Client Match Hostnames, IP Addresses, Netgroups, or Domains: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>
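A rule like the one displayed above could be created with a command along these lines (the anonymous
user mapping of 65534 is the command's default, so it is omitted from this sketch):
cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-ruleindex 1 -protocol cifs,nfs -clientmatch 0.0.0.0/0 -rorule any -rwrule any
-superuser any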
cluster1::>
10. Create a share at the root of the namespace for the SVM svm1.
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::>
cluster1::>
Set up CIFS <-> NFS user name mapping for the SVM svm1.
12. Display a list of the current name mappings.
Vserver: svm1
Direction: win-unix
Position Hostname IP Address/Mask
-------- ---------------- ----------------
1 - - Pattern: demo\\administrator
Replacement: root
Vserver: svm1
Direction: unix-win
Position Hostname IP Address/Mask
-------- ---------------- ----------------
1 - - Pattern: root
Replacement: demo\\administrator
2 entries were displayed.
cluster1::>
11.2.3 Create a Volume and Map It to the Namespace Using the CLI
Volumes, or FlexVols, are the dynamically sized containers used by ONTAP to store data. A volume only resides
in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate, which
can associate with multiple SVMs, a volume can only associate with a single SVM. The maximum size of a volume
can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols
(varies based on controller model), which means that there is an effective limit on the total number of volumes
that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (for example, aggr0_<nodename>) that contains the node's
ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user data; always
create additional aggregates and volumes for that purpose.
ONTAP FlexVols support a number of storage efficiency features including thin provisioning, deduplication, and
compression. One specific storage efficiency feature you will see in this section of the lab is thin provisioning,
which dictates how space for a FlexVol is allocated in its containing aggregate.
When you create a FlexVol with a volume guarantee of type volume you are thickly provisioning the volume,
pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will
never run out of space unless the volume reaches 100% capacity. When you create a FlexVol with a volume
guarantee of none you are thinly provisioning the volume, only allocating space for it on the containing
aggregate at the time and in the quantity that the volume actually requires the space to store the data.
This latter configuration allows you to increase your overall space utilization, and even oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed
volumes reached their full size. However, if an oversubscribed aggregate does fill up, then all of its volumes will run
out of space before they reach their maximum volume size; oversubscription deployments therefore generally
require a greater degree of administrative vigilance around space utilization.
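As an example, a thin provisioned volume like the engineering volume used later in this section could be
created with a command of roughly this form; the aggregate name is an assumption for this sketch:
cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01
-size 1GB -space-guarantee none -junction-path /engineering
Specifying -space-guarantee volume instead would thickly provision the same volume.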
cluster1::>
cluster1::>
cluster1::>
cluster1::>
cluster1::>
cluster1::>
cluster1::>
8. Display detailed information about the volume engineering. Notice here that the volume is reporting as
thin provisioned (Space Guarantee Style is set to none), and that the Export Policy is set to default.
cluster1::>
9. View how much disk space this volume is actually consuming in its containing aggregate. The Total
Footprint value represents the volume's total consumption. The value here is so small because this
volume is thin provisioned, and you have not yet added any data to it. If you had thick provisioned the
volume, then the footprint here would have been 1 GB, the full size of the volume.
Vserver : svm1
Volume : engineering
cluster1::>
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::>
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::>
cluster1::>
13. Produce a detailed report of the configuration for the qtree bob.
cluster1::>
The svm1 SVM is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows
host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access
the volume and its files.
This part of the lab demonstrates connecting the Windows client Jumphost to the CIFS share \\svm1\nsroot
using the Windows GUI.
1. On the Windows host Jumphost, open Windows Explorer by clicking on the folder icon on the task bar.
Figure 11-1:
Figure 11-2:
Figure 11-3:
Figure 11-4:
File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to
confirm that you can write to it.
8. Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
9. Right-click in the empty space in the right pane of File Explorer.
10. In the context menu, select New > Text Document, and name the resulting file cifs.txt.
Figure 11-5:
11. Double-click the cifs.txt file you just created to open it with Notepad.
Tip: If you do not see file extensions in your lab, you can enable that by going to the View
menu at the top of Windows Explorer and checking the File Name Extensions check box.
12. In Notepad, enter some text. Ensure that you put a carriage return at the end of the line, otherwise
when you later view the contents of this file on Linux the command shell prompt will appear on the
same line as the file contents.
13. Use the File > Save menu in Notepad to save the file's updated contents to the share. If write access
is working properly then the save operation will complete silently (i.e., you will not receive an error
message).
Figure 11-6:
Close Notepad and the File Explorer windows to finish this exercise.
This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.
1. Follow the instructions in the Accessing the Command Line section at the beginning of this lab guide to
open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.
2. Verify that there are no NFS volumes currently mounted on rhel1.
[root@rhel1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root 11877388 4962504 6311544 45% /
tmpfs 444612 76 444536 1% /dev/shm
/dev/sda1 495844 40084 430160 9% /boot
[root@rhel1 ~]#
3. Create the /svm1 directory to serve as a mount point for the NFS volume you will be shortly mounting.
[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#
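The mount point creation and mount commands themselves are not reproduced above. Because the fstab
entry now exists, a sketch like the following should suffice:
[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]# mount /svm1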
5. Verify the fstab file contains the new entry you just created.
[root@rhel1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root 11877388 4962508 6311540 45% /
tmpfs 444612 76 444536 1% /dev/shm
/dev/sda1 495844 40084 430160 9% /boot
svm1:/ 19456 128 19328 1% /svm1
[root@rhel1 ~]#
9. Notice that you can see the engineering volume that you previously junctioned into the SVM's
namespace.
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#
11. Display the contents of the cifs.txt file you created earlier.
Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output then that indicates that you forgot to include a newline at the end of the file when you
created the file on Windows.
ONTAP 8.2.1 introduced the ability to export qtrees over NFS. This optional section explains how to configure qtree
exports and will demonstrate how to set different export rules for a given qtree. For this exercise you will be
working with the qtrees you created in the previous section.
Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still
exist in ONTAP, but their purpose is essentially now limited to just quota management, with most other 7-mode
cluster1::>
5. Add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::>
cluster1::>
Vserver: svm1
Policy Name: rhel1-only
Rule Index: 1
Access Protocol: any
List of Client Match Hostnames, IP Addresses, Netgroups, or Domains: 192.168.0.61
cluster1::>
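The policy and rule displayed above would have been created with commands along these lines; the
rorule, rwrule, and superuser values are assumptions in this sketch because the full rule output is not
shown:
cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only
cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-ruleindex 1 -protocol any -clientmatch 192.168.0.61 -rorule any -rwrule any
-superuser any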
cluster1::>
cluster1::>
cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>
11. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is
using the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
cluster1::>
12. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::>
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]#
Next validate that rhel2 has different access rights to the qtree. This host should be able to access
all the volumes and qtrees in the svm1 namespace *except* susan, which should give a permission
denied error because that qtree's associated export policy only grants access to the host rhel1.
Note: Open a PuTTY connection to the Linux host rhel2 (again, username = root and password
= Netapp1!).
18. Create a mount point for the svm1 NFS volume.
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]#
If you do not already have a PuTTY session open to cluster1, open one now following the instructions in the
Accessing the Command Line section at the beginning of this lab guide and enter the following commands.
cluster1::>
2. Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the ONTAP command line syntax
still refers to storage virtual machines as vservers.
cluster1::>
Vserver: svmluns
Vserver Type: data
Vserver Subtype: default
Vserver UUID: fe75684a-61c8-11e6-b805-005056986697
Root Volume: svmluns_root
Aggregate: aggr1_cluster1_01
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: running
cluster1::>
8. Create 4 SAN LIFs for the SVM svmluns, 2 per node. To save some typing, remember that you can use
the up arrow to recall previous commands that you can then edit and execute.
cluster1::> network interface create -vserver svmluns -lif svmluns_admin_lif1 -role data
-data-protocol none -home-node cluster1-01 -home-port e0c -subnet-name Demo
-failover-policy system-defined -firewall-policy mgmt
cluster1::>
cluster1::>
cluster1::>
12. Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::>
In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Windows:
Gather the iSCSI Initiator Name of the Windows client.
Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume,
and map the LUN so it can be accessed by the Windows client.
Mount the LUN on a Windows client leveraging multi-pathing.
You must complete all of the subsections of this section in order to use the LUN from the Windows client.
Figure 11-7:
Figure 11-8:
Figure 11-9:
Figure 11-10:
The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open because you will need to access other tools later in the lab.
Warning: The export-policy "default" has no rules in it. The volume will therefore be
inaccessible.
Do you want to continue? {y|n}: y
[Job 53] Job is queued: Create winluns.
[Job 53] Job succeeded: Successful
cluster1::>
Note: Remember that export policies are only applicable for NAS protocols. You can ignore the
warning that the default policy has no rules since the svmluns SVM is only configured for the iscsi
protocol.
3. Display a list of the volumes on the cluster.
cluster1::>
cluster1::>
cluster1::>
8. Create a new igroup named winigrp that you will use to manage access to the new LUN, and add
Jumphost's initiator to the group.
cluster1::>
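A sketch of that igroup creation; the initiator name shown is a placeholder, so substitute the iSCSI
Initiator Name you gathered from Jumphost earlier:
cluster1::> lun igroup create -vserver svmluns -igroup winigrp -protocol iscsi
-ostype windows -initiator <iqn-of-jumphost>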
cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp
cluster1::>
cluster1::>
cluster1::>
cluster1::>
ONTAP 8.2 introduced a space reclamation feature that allows ONTAP to reclaim space from a thin
provisioned LUN when the client deletes data from it, and also allows ONTAP to notify the client when
the LUN cannot accept writes due to lack of space on the volume. This feature is supported by VMware
ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. Jumphost is
cluster1::>
cluster1::>
cluster1::>
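As a sketch, the space reclamation setting for the Windows LUN can be inspected and enabled with
commands like the following; note that some ONTAP releases require the LUN to be offline before this
setting can be changed:
cluster1::> lun show -vserver svmluns -path /vol/winluns/windows.lun -fields space-allocation
cluster1::> lun modify -vserver svmluns -path /vol/winluns/windows.lun -space-allocation enabled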
Figure 11-11:
Figure 11-12:
The MPIO Properties window closes and focus returns to the Administrative Tools window for
Jumphost. Now you need to begin the process of connecting Jumphost to the LUN.
5. In Administrative Tools, double-click the iSCSI Initiator tool.
Figure 11-13:
Figure 11-14:
The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define
a target portal to scan. You are going to manually add a target portal to Jumphost.
9. Click the Discover Portal button.
Figure 11-15:
The Discover Target Portal window opens. Here you will specify the first of the IP addresses that the
ONTAP Create LUN wizard assigned your iSCSI LIFs when you created the svmluns SVM. Recall that
the wizard assigned your LIFs IP addresses in the range 192.168.0.133-192.168.0.136.
10. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs.
11. Click OK.
Figure 11-16:
The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties
window.
12. The Target Portals list now contains an entry for the IP address you entered in the previous step.
13. Click on the Targets tab.
Figure 11-17:
The Targets tab opens to show you the list of discovered targets.
14. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive,
because although you have discovered it you have not yet connected to it. Also note that the Name of
the discovered target in your lab will have a different value than what you see in this guide; that name
string is uniquely generated for each instance of the lab.
Note: Make a mental note of that string value as you will see it a lot as you continue to
configure iSCSI in later steps of this procedure.
15. Click the Connect button.
Figure 11-18:
Figure 11-19:
Figure 11-20:
The Advanced Setting window closes, and focus returns to the Connect to Target window.
20. Click OK.
Figure 11-21:
The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
21. Notice that the status of the listed discovered target has changed from Inactive to Connected.
Figure 11-22:
Up to this point you have added a single path to your iSCSI LUN, using the address for the
cluster1-01_iscsi_lif_1 LIF the Create LUN wizard created on the node cluster1-01 for the svmluns
SVM. Now you are going to add each of the other SAN LIFs present on the svmluns SVM. To begin this
procedure you must first edit the properties of your existing connection.
22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.
Figure 11-23:
The Properties window opens. From this window you will start to connect alternate paths for your newly
connected LUN. You will repeat this procedure 3 times, once for each of the remaining LIFs that are
present on the svmluns SVM.
Figure 11-24:
Figure 11-25:
Figure 11-26:
The Advanced Settings window closes, and focus returns to the Connect to Target window.
30. Click OK.
Figure 11-27:
The Connect to Target window closes, and focus returns to the Properties window where there are
now 2 entries shown in the identifier list.
Repeat steps 24 - 30 for each of the last two remaining LIF IP addresses. When you have finished
adding all the additional paths the Identifiers list in the Properties window should contain 4 entries.
31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions,
one for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
32. Click OK.
Figure 11-28:
The Properties window closes, and focus returns to the iSCSI Properties window.
33. Click OK.
Figure 11-29:
The iSCSI Properties window closes, and focus returns to the desktop of Jumphost. If the
Administrative Tools window is not still open on your desktop, open it again now.
If all went well, the Jumphost is now connected to the LUN using multi-pathing, so it is time to format
your LUN and build a filesystem on it.
34. In Administrative Tools, double-click the Computer Management tool.
Figure 11-30:
Figure 11-31:
36. When you launch Disk Management, an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it.
Note: If you see more than one disk listed, then MPIO has not correctly recognized that the
multiple paths you set up are all for the same LUN. If this occurs, you need to cancel the
Initialize Disk dialog, quit Computer Manager, and go back to the iSCSI Initiator tool to review
Figure 11-32:
The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
37. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.
38. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu), and select New Simple Volume from the context menu.
Figure 11-33:
Figure 11-34:
Figure 11-35:
Figure 11-36:
Figure 11-37:
The wizard advances to the Completing the New Simple Volume Wizard step.
45. Click Finish.
Figure 11-38:
The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of
the Computer Management window.
46. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window,
indicating that the new LUN is mounted and ready to use.
47. Before you complete this section of the lab, take a look at the MPIO configuration for this LUN by right-
clicking inside the box for the WINLUN volume. From the context menu select Properties.
Figure 11-39:
Figure 11-40:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.
51. Click the MPIO tab.
52. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM.
We recommend using the Data ONTAP DSM software, as it is the most full-featured option available,
although the Microsoft DSM is also supported.
53. The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available, but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the More information about MPIO policies link at the bottom
of the dialog window for details about all the available policies.
54. The top two paths show both a Path State and TPG State as Active/Optimized. These paths are
connected to the node cluster1-01, and the Least Queue Depth policy makes active use of both paths
to this node. Conversely, the bottom two paths show a Path State of Unavailable, and a TPG State
of Active/Unoptimized. These paths are connected to the node cluster1-02, and only enter a Path
State of Active/Optimized if the node cluster1-01 becomes unavailable, or if the volume hosting the
LUN migrates over to the node cluster1-02.
55. When you finish reviewing the information in this dialog, click OK to exit. If you changed any of the
values in this dialog you should consider using the Cancel button to discard those changes.
Figure 11-41:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
56. Click OK.
Figure 11-42:
Figure 11-43:
You may see a pop-up message from Microsoft Windows stating that you must format the disk in drive
E: before you can use it. (This window might be obscured by one of the other windows on the desktop,
but do not close the Administrative Tools window, as you will be using it again shortly.) As you may
recall, you already formatted the LUN during the New Simple Volume Wizard, so this disk format
message is erroneous.
58. Click Cancel to ignore the format request.
Figure 11-44:
Finally, verify that Windows has detected that the new LUN supports space reclamation. Remember
that only Windows 2012 and newer operating systems support this feature, and that NetApp Windows
Unified Host Utilities 6.0.2 or later must be installed. Jumphost meets these criteria.
59. In the Administrative Tools window, double-click Defragment and Optimize drives.
Figure 11-45:
Figure 11-46:
Figure 11-47:
Feel free to open Windows Explorer on Jumphost, and verify that you can create a file on the E: drive.
This completes this exercise.
In an earlier section you created a new SVM, and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Linux:
• Gather the iSCSI Initiator Name of the Linux client.
• Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within
that volume, and map the LUN to the Linux client.
• Mount the LUN on the Linux client.
You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you
are not required to complete the Windows LUN section before starting this section of the lab guide, but the
screenshots and command line output shown here assume that you have. If you did not complete the Windows LUN
section, the differences will not affect your ability to create and mount the Linux LUN.
5. Create the thin provisioned Linux LUN linux.lun on the volume linluns.
cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB
-ostype linux -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::>
9. Map the LUN linux.lun to the igroup linigrp, which grants the Linux host rhel1 access to the LUN.
cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp
cluster1::>
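For reference, the igroup used by the lun map command above must already exist. If it does not yet exist in your
copy of the lab, a minimal sketch of the command that could create it is shown below; the initiator value is a
placeholder that you must replace with the iSCSI Initiator Name you gathered from the Linux client earlier.
cluster1::> lun igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype linux
-initiator <linux-client-iqn>
cluster1::>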
Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space
from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to
notify the client when the LUN cannot accept writes due to lack of space on the volume. This feature
is supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft
Windows 2012. The RHEL clients used in this lab are running version 6.7 and so you will enable the
space reclamation feature for your Linux LUN.
17. Display the space reclamation setting for the LUN linux.lun.
19. Display the new space reclamation setting for the LUN linux.lun.
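Space reclamation is controlled by the LUN's space-allocation attribute. A minimal sketch of the commands
involved is shown below: the lun show form displays the current setting (steps 17 and 19), and the lun modify
form enables it in between. Depending on your ONTAP version, you may need to take the LUN offline before
changing this setting.
cluster1::> lun show -vserver svmluns -volume linluns -lun linux.lun -fields space-allocation
cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -space-allocation enabled
cluster1::> lun show -vserver svmluns -volume linluns -lun linux.lun -fields space-allocation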
4. The Red Hat Linux hosts in the lab have the DM-Multipath packages pre-installed and a
/etc/multipath.conf file pre-configured for multi-pathing, so the RHEL host can access the
LUN using all of the SAN LIFs you created for the svmluns SVM.
5. You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.
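On RHEL 6 this is typically done with the service and chkconfig commands; a minimal sketch, assuming the
stock iscsid and iscsi init scripts, follows.
[root@rhel1 ~]# service iscsid force-start
[root@rhel1 ~]# chkconfig iscsid on
[root@rhel1 ~]# chkconfig iscsi on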
6. Next, discover the available targets using the iscsiadm command. Note that the exact values used
for the node paths may differ in your lab from what is shown in this example, and that after running
this command there will not yet be any active iSCSI sessions because you have not yet created the
necessary device files.
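A sketch of the discovery command follows; the portal value is a placeholder for the IP address of one of the
iSCSI LIFs you created on the svmluns SVM.
[root@rhel1 ~]# iscsiadm --mode discovery --type sendtargets --portal <svmluns-lif-ip>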
7. Create the devices necessary to support the discovered nodes, after which the sessions become active.
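A sketch of the login command, which creates the device files and establishes an iSCSI session for each
discovered node record:
[root@rhel1 ~]# iscsiadm --mode node --login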
8. At this point the Linux client sees the LUN over all four paths, but it does not yet understand that all four
paths represent the same LUN.
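One way to see this, assuming the NetApp Linux Host Utilities are installed on rhel1 (they supply the sanlun
command used later in this section), is to list the LUN paths; each of the four paths appears as a separate
SCSI device.
[root@rhel1 ~]# sanlun lun show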
9. Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd
service to handle the multiple path management and configure it to start automatically at boot time.
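A minimal sketch of those two operations on RHEL 6:
[root@rhel1 ~]# service multipathd start
[root@rhel1 ~]# chkconfig multipathd on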
10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll command
displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that
you use to access the multipathed LUN (in order to create a filesystem on it and to mount it). The
first line of output from the multipath -ll command lists the name of that device file (in this example
3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ
in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command which
shows information about the ONTAP path of the LUN, the LUN's size, its device file name under /dev/
mapper, the multipath policy, and also information about the various device paths themselves.
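For reference, the two commands discussed in this step are:
[root@rhel1 ~]# multipath -ll
[root@rhel1 ~]# sanlun lun show -p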
You can see even more detail about the configuration of multipath and the LUN as a whole by issuing
the multipath -v3 -d -ll or iscsiadm -m session -P 3 commands. Because the output of these
commands is rather lengthy, it is omitted here, but you are welcome to run these commands in your lab.
11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use
the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands
in your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that
string from the output of ls -l /dev/mapper).
Note: You can use bash tab completion when entering the multipath file name to save
yourself some tedious typing.
The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
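A minimal sketch of those commands, reusing the example device file name shown above (yours will differ) and a
placeholder mount point of /linuxlun:
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun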
12. To have RHEL automatically mount the LUN's filesystem at boot time, run the following command
(modified to reflect the multipath device path being used in your instance of the lab) to add the mount
information to the /etc/fstab file. Enter the following command as a single line.
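A sketch of that command, again reusing the example device file name and the /linuxlun placeholder mount point
from the previous step; the _netdev option tells RHEL to wait for the network and iSCSI services before
attempting the mount at boot.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab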
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be obtained
by the use of the information or observance of any recommendations provided herein. The information in this
document is distributed AS IS, and the use of this information or the implementation of any recommendations or
techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. This document and the information contained herein may be
used solely in connection with the NetApp products discussed in this document.
© 2016 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written
consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Data ONTAP,
ONTAP, OnCommand, SANtricity, FlexPod, SnapCenter, and SolidFire are trademarks or registered trademarks of
NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.