Clustered Data ONTAP 8.3 MetroCluster Installation
Contents
MetroCluster documentation ...................................................................... 8
Preparing for the MetroCluster installation ............................................ 10
Differences between 7-Mode and clustered Data ONTAP MetroCluster configurations ................ 10
Differences between the clustered Data ONTAP MetroCluster configurations ........ 11
Considerations for configuring cluster peering ......................................................... 12
Prerequisites for cluster peering .................................................................... 12
Considerations when using dedicated ports .................................................. 13
Considerations when sharing data ports ........................................................ 13
Considerations for MetroCluster configurations with native disk shelves or array LUNs ............ 14
Considerations when transitioning from 7-Mode to clustered Data ONTAP ........... 14
Configuration of new MetroCluster systems ............................................................. 15
Preconfigured component passwords ............................................................ 15
Hardware setup checklist .............................................................................. 15
Software setup checklist ................................................................................ 17
Choosing the correct installation procedure for your configuration ..... 20
Cabling a four-node or two-node fabric-attached MetroCluster configuration ............................ 22
Parts of a fabric MetroCluster configuration ............................................................. 23
Local HA pairs .............................................................................................. 27
Redundant FC-to-SAS bridges ...................................................................... 27
Redundant FC switch fabrics ........................................................................ 28
The cluster peering network .......................................................................... 28
Required MetroCluster components and naming guidelines for fabric configurations ................. 29
Worksheet for FC switches and FC-to-SAS bridges ................................................. 31
Installing and cabling MetroCluster components ...................................................... 33
Racking the hardware components ............................................................... 33
Cabling the HBA adapters to the FC switches .............................................. 34
Cabling the ISLs between MetroCluster sites ............................................... 37
Recommended port assignments for FC switches ......................................... 38
Cabling the cluster interconnect in four-node configurations ....................... 41
Cabling the cluster peering connections ........................................................ 41
Cabling the HA interconnect, if necessary .................................................... 42
Cabling the management and data connections ............................................ 43
Installing FC-to-SAS bridges and SAS disk shelves ................................................ 43
Preparing for the installation ......................................................................... 44
Installing the FC-to-SAS bridge and SAS shelves ........................................ 45
Configuring the FC switches ..................................................................................... 49
Configuring the FC switches by running a configuration file ....................... 49
MetroCluster documentation
There are a number of documents that can help you configure, operate, and monitor a MetroCluster
configuration.
Library                                                  Content
NetApp Documentation: MetroCluster in clustered          All MetroCluster guides
Data ONTAP
NetApp Documentation: Clustered Data ONTAP               All Data ONTAP express guides
Express Guides
NetApp Documentation: Data ONTAP 8                       All Data ONTAP guides
(current releases)
Guide                                                    Content
NetApp Technical Report 4375: MetroCluster               A technical overview of the MetroCluster
for Data ONTAP Version 8.3 Overview and                  configuration and operation.
Best Practices                                           Best practices for MetroCluster
                                                         configuration.
Clustered Data ONTAP 8.3 MetroCluster                    How to install a MetroCluster system that
Installation Express Guide                               has been received from the factory. You
                                                         should use this guide only if the following
                                                         is true:
MetroCluster Service Guide                               Guidelines for maintenance in a
                                                         MetroCluster configuration
                                                         Disaster recovery
Clustered Data ONTAP 8.3 Data Protection                 How mirrored aggregates work
Guide                                                    SyncMirror
                                                         SnapMirror
                                                         SnapVault
Related concepts
Parts of a fabric MetroCluster configuration on page 23
Related concepts
Parts of a fabric MetroCluster configuration on page 23
Cluster configuration
In all configurations, each of the two MetroCluster sites is configured as a Data ONTAP cluster. In a
two-node MetroCluster configuration, each node is configured as a single-node cluster.
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Connectivity requirements
The subnet used in each cluster for intercluster communication must meet the following
requirements:
• The subnet must belong to the broadcast domain that contains the ports used for intercluster
  communication.
• IP addresses used for intercluster LIFs do not need to be in the same subnet, but having them
  in the same subnet is a simpler configuration.
• You must have considered whether the subnet will be dedicated to intercluster communication
  or shared with data communication.
• The intercluster network must be configured so that cluster peers have pair-wise full-mesh
  connectivity within the applicable IPspace, which means that each pair of clusters in a cluster
  peer relationship has connectivity among all of their intercluster LIFs.
• A cluster's intercluster LIFs must use the same IP address version: all IPv4 addresses or all
  IPv6 addresses. Similarly, all of the intercluster LIFs of the peered clusters must use the same
  IP addressing version.
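For example, an intercluster LIF can be created on each node with the network interface create
command. A minimal sketch, assuming a port named e0e is used for intercluster communication;
the LIF name and addresses shown here are hypothetical:

cluster_A::> network interface create -vserver cluster_A -lif intercluster_1
-role intercluster -home-node controller_A_1 -home-port e0e
-address 192.168.1.201 -netmask 255.255.255.0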
Port requirements
The ports that will be used for intercluster communication must meet the following requirements:
• The broadcast domain that is used for intercluster communication must include at least two
  ports per node so that intercluster communication can fail over from one port to another.
• The ports added to a broadcast domain can be physical network ports, VLANs, or interface
  groups (ifgrps).
• You must have considered whether the ports used for intercluster communication will be
  shared with data communication.
Firewall requirements
Firewalls and the intercluster firewall policy must allow the following:
• ICMP service
• TCP to the IP addresses of all of the intercluster LIFs over all of the following ports: 10000,
  11104, and 11105
• HTTPS
The default intercluster firewall policy allows access through the HTTPS protocol and from all
IP addresses (0.0.0.0/0), but the policy can be altered or replaced.
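For example, you can review the default intercluster firewall policy and apply it to an intercluster
LIF. A minimal sketch; the LIF name is hypothetical:

cluster_A::> system services firewall policy show -policy intercluster
cluster_A::> network interface modify -vserver cluster_A -lif intercluster_1
-firewall-policy intercluster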
Cluster requirements
Clusters must meet the following requirements:
• Each cluster must have a unique name.
  You cannot create a cluster peering relationship with any cluster that has the same name or is
  in a peer relationship with a cluster of the same name.
• The time on the clusters in a cluster peering relationship must be synchronized within 300
  seconds (5 minutes).
  Cluster peers can be in different time zones.
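For example, pointing both clusters at a common NTP server keeps them within the 300-second
limit. A minimal sketch; the server name is hypothetical:

cluster_A::> cluster time-service ntp server create -server ntp.example.com
cluster_B::> cluster time-service ntp server create -server ntp.example.com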
Considerations when using dedicated ports
• If the amount of available WAN bandwidth is similar to that of the LAN ports and the
  replication interval is such that replication occurs while regular client activity exists, you
  should dedicate Ethernet ports for intercluster replication to avoid contention between
  replication and the data protocols.
• If the network utilization generated by the data protocols (CIFS, NFS, and iSCSI) is above 50
  percent, you should dedicate ports for replication to allow for nondegraded performance if a
  node failover occurs.
• When physical 10-GbE ports are used for data and replication, you can create VLAN ports for
  replication and dedicate the logical ports for intercluster replication, as shown in the sketch
  after this list.
  The bandwidth of the port is shared between all VLANs and the base port.
• Consider the data change rate and replication interval, and whether the amount of data that
  must be replicated on each interval requires enough bandwidth that it might cause contention
  with data protocols if sharing data ports.
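For example, a VLAN port for replication can be created on a physical 10-GbE port and then used
as the home port of an intercluster LIF. A minimal sketch; the node, base port, VLAN ID, and
addresses are hypothetical:

cluster_A::> network port vlan create -node controller_A_1 -vlan-name e0e-200
cluster_A::> network interface create -vserver cluster_A -lif intercluster_1
-role intercluster -home-node controller_A_1 -home-port e0e-200
-address 192.168.2.201 -netmask 255.255.255.0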
Considerations when sharing data ports
• For a high-speed network, such as a 10-Gigabit Ethernet (10-GbE) network, a sufficient
  amount of local LAN bandwidth might be available to perform replication on the same 10-GbE
  ports that are used for data access.
  In many cases, the available WAN bandwidth is far less than the 10-GbE LAN bandwidth.
• All nodes in the cluster might have to replicate data and share the available WAN bandwidth,
  making data port sharing more acceptable.
• Sharing ports for data and replication eliminates the extra port counts required to dedicate
  ports for replication.
• The maximum transmission unit (MTU) size of the replication network will be the same size
  as that used on the data network.
• Consider the data change rate and replication interval, and whether the amount of data that
  must be replicated on each interval requires enough bandwidth that it might cause contention
  with data protocols if sharing data ports.
• When data ports for intercluster replication are shared, the intercluster LIFs can be migrated to
  any other intercluster-capable port on the same node to control the specific data port that is
  used for replication, as shown in the sketch after this list.
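For example, an intercluster LIF that shares a data port can be moved to a different
intercluster-capable port without breaking the peer relationship. A minimal sketch; the LIF and
port names are hypothetical:

cluster_A::> network interface migrate -vserver cluster_A -lif intercluster_1
-destination-node controller_A_1 -destination-port e0f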
Related concepts
Planning and installing a MetroCluster configuration with array LUNs on page 154
Related tasks
Cabling a four-node or two-node fabric-attached MetroCluster configuration on page 22
Related information
FlexArray Virtualization Installation Requirements and Reference Guide
1. PDUs
3. FC switches, if applicable
4. Nodes
Steps
1. Parts of a fabric MetroCluster configuration on page 23
2. Required MetroCluster components and naming guidelines for fabric configurations on page 29
3. Worksheet for FC switches and FC-to-SAS bridges on page 31
4. Installing and cabling MetroCluster components on page 33
5. Installing FC-to-SAS bridges and SAS disk shelves on page 43
6. Configuring the FC switches on page 49
FC-to-SAS bridges
The FC-to-SAS bridges connect the SAS storage stacks to the FC switches, providing bridging
between the two protocols.
FC switches
The FC switches provide the long-haul backbone ISLs between the two sites. The switches
provide the two storage fabrics that allow data mirroring to the remote storage pools.
[Figure: A four-node fabric MetroCluster configuration. Each cluster (cluster_A and cluster_B)
contains two controllers joined by an HA interconnect and a cluster interconnect. Each controller
connects through FC to the local FC switches (FC_switch_x_1 and FC_switch_x_2), the switches
are joined across sites by long-haul ISLs, and FC-to-SAS bridges (FC_bridge_x_1 and
FC_bridge_x_2) attach the SAS stacks.]
The configuration consists of two clusters, one at each geographically separated site.
The HA pairs are configured as switchless clusters, without cluster interconnect switches.
A switched configuration is supported but not shown.
The following illustration shows a more detailed view of the connectivity in a single MetroCluster
cluster (both clusters have the same configuration):
• FC connections from each controller's HBAs and FC-VI adapters to each of the FC switches
• SAS connections between each SAS shelf and from the top and bottom of each stack to an
  FC-to-SAS bridge
• Ethernet connections from the controllers to the customer-provided network used for cluster
  peering
  SVM configuration is replicated over the cluster peering network.
[Figure: A two-node fabric MetroCluster configuration. Each cluster (cluster_A and cluster_B)
has a single controller that connects through FC to the local FC switches (FC_switch_x_1 and
FC_switch_x_2); the switches are joined across sites by long-haul ISLs, and FC-to-SAS bridges
(FC_bridge_x_1 and FC_bridge_x_2) attach the storage.]
The configuration consists of two clusters, one at each geographically separated site.
Note: In the two-node configuration, the nodes are not configured as an HA pair.
The following illustration shows a more detailed view of the connectivity in a single MetroCluster
cluster (both clusters have the same configuration):
[Figure: Detailed view of a single site. controller_A_1 connects through Fibre Channel to
FC_switch_A_1 and FC_switch_A_2, each of which has a long-haul FC ISL to the partner site;
FC_bridge_A_1 and FC_bridge_A_2 connect a stack of three SAS-attached shelves to the
switches, and controller_A_1 also has Ethernet connections.]
• FC connections from each controller module's HBAs to the FC-to-SAS bridge for each SAS
  shelf stack
• SAS connections between each SAS shelf and from the top and bottom of each stack to an
  FC-to-SAS bridge
• Ethernet connections from the controllers to the customer-provided network used for cluster
  peering
  SVM configuration is replicated over the cluster peering network.
Local HA pairs
In a four-node MetroCluster configuration, each site consists of two storage controllers configured as
an HA pair. This allows local redundancy so that if one storage controller fails, its local HA partner
can take over. Such failures can be handled without a MetroCluster switchover operation.
Local HA failover and giveback operations are performed with the storage failover commands,
in the same manner as in a non-MetroCluster configuration.
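For example, a local takeover and giveback use the standard commands; the node name here
follows the naming used in this guide:

cluster_A::> storage failover show
cluster_A::> storage failover takeover -ofnode controller_A_1
cluster_A::> storage failover giveback -ofnode controller_A_1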
[Figure: A four-node MetroCluster configuration showing the local HA pairs. Within each cluster
the two controllers are joined by an HA interconnect and a cluster interconnect. Each controller
connects through FC to FC_switch_x_1 and FC_switch_x_2; one to four long-haul ISLs join the
switches across sites, and FC-to-SAS bridges (FC_bridge_x_1 and FC_bridge_x_2) attach the
SAS stacks.]
Related information
Clustered Data ONTAP 8.3 High-Availability Configuration Guide
[Figure: FC fabric 1 — the first of the two switch fabrics, formed by FC_switch_A_1 and
FC_switch_B_1 with one or two long-haul ISLs between them. The controllers' FC connections
to this fabric, the FC-to-SAS bridges, the SAS stacks, and the HA and cluster interconnects are
shown, along with the cluster peering network joining cluster_A and cluster_B.]
[Figure: FC fabric 2 — the second switch fabric, formed by FC_switch_A_2 and FC_switch_B_2
with one or two long-haul ISLs between them.]
[Figure: The cluster peering network connecting cluster_A and cluster_B.]
Related concepts
Considerations for configuring cluster peering on page 12
Related tasks
Cabling the cluster peering connections on page 41
Peering the clusters on page 135
Related information
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Steps
1. Racking the hardware components on page 33
2. Cabling the HBA adapters to the FC switches on page 34
3. Cabling the ISLs between MetroCluster sites on page 37
4. Recommended port assignments for FC switches on page 38
5. Cabling the cluster interconnect in four-node configurations on page 41
6. Cabling the cluster peering connections on page 41
7. Cabling the HA interconnect, if necessary on page 42
8. Cabling the management and data connections on page 43
Steps
5. Install the disk shelves, power them on, and set the shelf IDs.
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
You must power-cycle each disk shelf.
Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration
(including both sites).
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Step
[Figures: Cabling of the Controller 1 and Controller 2 HBA ports (a, b, c, and d) to switch_A_1
and switch_A_2, shown for both Brocade switches (ports numbered from 0) and Cisco switches
(ports numbered from 1), and for both dual and single connections. Single Brocade connections
are supported on 8020 systems only.]
Note: The Brocade and Cisco switches use different port numbering:
The following tables show both Brocade and Cisco port numbering:
Cabling to FC_switch_x_1

Single or dual FC connection    Connect this site              To this port on FC_switch_x_1...
to each switch?                 component and port...          Brocade        Cisco
Dual                            controller_x_1 HBA port a      1              2
                                controller_x_1 HBA port c      2              3
                                controller_x_2 HBA port a      4              5
                                controller_x_2 HBA port c      5              6
Single                          controller_x_1 HBA port c      1              2
                                controller_x_2 HBA port c      4              5

Cabling to FC_switch_x_2

Single or dual FC connection    Connect this site              To this port on FC_switch_x_2...
to each switch?                 component and port...          Brocade        Cisco
Dual                            controller_x_1 HBA port b      1              2
                                controller_x_1 HBA port d      2              3
                                controller_x_2 HBA port b      4              5
                                controller_x_2 HBA port d      5              6
Single                          controller_x_1 HBA port d      1              2
                                controller_x_2 HBA port d      4              5
Related concepts
Port assignments for FC switches in a four-node configuration on page 38
Step
Related concepts
Port assignments for FC switches in a four-node configuration on page 38
Port assignments for FC switches in a two-node configuration on page 40
Component and port                 FC_switch_x_1                    FC_switch_x_2
                                   Brocade  Brocade  Cisco 9148     Brocade  Brocade  Cisco 9148
                                   6505     6510     or 9148S       6505     6510     or 9148S
controller_x_1 FC-VI port a        0        0        1              -        -        -
controller_x_1 FC-VI port b        -        -        -              0        0        1
controller_x_1 HBA port a          1        1        2              -        -        -
controller_x_1 HBA port b          -        -        -              1        1        2
controller_x_1 HBA port c          2        2        3              -        -        -
controller_x_1 HBA port d          -        -        -              2        2        3
controller_x_2 FC-VI port a        3        3        4              -        -        -
controller_x_2 FC-VI port b        -        -        -              3        3        4
controller_x_2 HBA port a          4        4        5              -        -        -
controller_x_2 HBA port b          -        -        -              4        4        5
controller_x_2 HBA port c          5        5        6              -        -        -
controller_x_2 HBA port d          -        -        -              5        5        6
bridge_x_1_port-number port 1      6        6        7              6        6        7
bridge_x_1_port-number port 1      7        7        8              7        7        8
bridge_x_1_port-number port 1      12       8        9              12       8        9
bridge_x_1_port-number port 1      13       9        10             13       9        10
ISL port 1                         8        20       36             8        20       36
ISL port 2                         9        21       40             9        21       40
ISL port 3                         10       22       44             10       22       44
ISL port 4                         11       23       48             11       23       48
Related tasks
Cabling the HBA adapters to the FC switches on page 34
Cabling the ISLs between MetroCluster sites on page 37
Note: The cabling is the same for each FC switch in the switch fabric.
Component and port                 FC_switch_x_1                    FC_switch_x_2
                                   Brocade  Brocade  Cisco 9148     Brocade  Brocade  Cisco 9148
                                   6505     6510     or 9148S       6505     6510     or 9148S
controller_x_1 FC-VI port a        0        0        1              -        -        -
controller_x_1 FC-VI port b        -        -        -              0        0        1
controller_x_1 HBA port a          1        1        2              -        -        -
controller_x_1 HBA port b          -        -        -              1        1        2
controller_x_1 HBA port c          2        2        3              -        -        -
controller_x_1 HBA port d          -        -        -              2        2        3
bridge_x_1_port-number port 1      6        6        7              6        6        7
bridge_x_1_port-number port 1      7        7        8              7        7        8
bridge_x_1_port-number port 1      12       8        9              12       8        9
bridge_x_1_port-number port 1      13       9        10             13       9        10
ISL port 1                         8        20       36             8        20       36
ISL port 2                         9        21       40             9        21       40
ISL port 3                         10       22       44             10       22       44
ISL port 4                         11       23       48             11       23       48
Step
1. Cable the cluster interconnect from one controller to the other, or, if cluster interconnect switches
are used, from each controller to the switches.
Related information
Clustered Data ONTAP 8.3 Network Management Guide
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
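When the clusters are peered later in this procedure, these connections carry the peering traffic. A
minimal sketch of the eventual commands; the addresses are hypothetical:

cluster_A::> cluster peer create -peer-addrs 192.168.1.203,192.168.1.204
cluster_A::> cluster peer show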
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
The HA interconnect must be cabled only if the storage controllers in the HA pair are in separate
chassis.
Some storage controller models support two controllers in a single chassis, in which case they use
an internal HA interconnect.
Steps
b. Connect port ib0b on the first controller in the HA pair to port ib0b on the other controller.

62xx
a. Connect port 2a (top port of NVRAM8 card in vertical slot 2) on the first controller in the HA
   pair to port 2a on the other controller.

32xx
a. Connect port c0a on the first controller in the HA pair to port c0a on the other controller.
b. Connect port c0b on the first controller in the HA pair to port c0b on the other controller.
Related information
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Steps
1. Preparing for the installation on page 44
2. Installing the FC-to-SAS bridge and SAS shelves on page 45
Related concepts
Example of a four-node MetroCluster configuration with disks and array LUNs on page 194
• Your system must already be installed in a rack if it was not shipped in a system cabinet.
• Your configuration must be using supported hardware models and software versions.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and Data ONTAP version to refine
  your search. You can click Show Results to display the list of supported configurations that
  match the criteria.
• Each FC switch must have one FC port available for one bridge to connect to it.
• The computer you are using to set up the bridges must be running an ATTO-supported web
  browser to use the ATTO ExpressNAV GUI.
  The ATTO-supported web browsers are Internet Explorer 8 and 9, and Mozilla Firefox 3.
The ATTO Product Release Notes have an up-to-date list of supported web browsers. You can
access this document from the ATTO web site as described in the following steps.
Steps
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
2. Download content from the ATTO web site and from the NetApp web site:
a. From NetApp Support, navigate to the ATTO FibreBridge Description page by clicking
Software, scrolling to Protocol Bridge, and choosing ATTO FibreBridge from the drop-down
menu.
c. Access the ATTO web site using the link provided and download the following:
d. Navigate to the ATTO FibreBridge 6500N Firmware Download page by clicking Continue
at the end of the ATTO FibreBridge Description page.
Download the bridge firmware file using Steps 1 through 3 of that procedure.
You update the firmware on each bridge later in this procedure.
Make a copy of the ATTO FibreBridge 6500N Firmware Download page and release notes
for reference when you are instructed to update the firmware on each bridge.
3. Gather the hardware and information needed to use the recommended bridge management
interfaces, the ATTO ExpressNAV GUI, and the ATTO QuickNAV utility:
b. Determine a non-default user name and password for accessing the bridges.
You should change the default user name and password.
c. Obtain an IP address, subnet mask, and gateway information for the Ethernet management 1
port on each bridge.
d. Disable VPN clients on the computer you are using for setup.
Active VPN clients cause the QuickNAV scan for bridges to fail.
Steps
1. Connect the Ethernet management 1 port on each bridge to your network using an Ethernet cable.
Note: The Ethernet management 1 port enables you to quickly download the bridge firmware
(using ATTO ExpressNAV or FTP management interfaces) and to retrieve core files and extract
logs.
2. Configure the Ethernet management 1 port for each bridge by following the procedure in the
ATTO FibreBridge 6500N Installation and Operation Manual, section 2.0.
Note: When running QuickNAV to configure an Ethernet management port, only the Ethernet
management port that is connected by the Ethernet cable is configured. For example, if you
also wanted to configure the Ethernet management 2 port, you would need to connect the
Ethernet cable to port 2 and run QuickNAV.
c. Configure the connection mode that the bridges use to communicate across the FC network.
You must set the bridge connection mode to ptp (point-to-point).
For example, if you were to use the command line interface (CLI) to set the bridge FC 1 port's
basic required configuration, you would enter the following commands; the last command saves
the configuration changes:
set ipaddress <ip-address>
set subnet <subnet-mask>
set ipgateway <gateway-address>
set FCDataRate 1 8Gb
set FCConnMode 1 ptp
set SNMP enabled
set bridgename <bridge-name>
SaveConfiguration
Note: To set the IP address without the Quicknav utility, you need to have a serial connection
to the FibreBridge.
4. Update the firmware on each bridge to the latest version by following the instructions, starting
with Step 4, on the ATTO FibreBridge 6500N Firmware Download page.
5. Cable the disk shelves to the bridges by completing the following substeps:
Note: Do not force a connector into a port. The SAS cable QSFP connectors are keyed; when
oriented correctly into a SAS port, the QSFP connector clicks into place and the disk shelf SAS
port LNK LED illuminates green. For disk shelves, you insert a SAS cable connector with the
pull tab oriented down (on the underside of the connector).
c. For each stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
Each bridge has one path to its stack of disk shelves; bridge A connects to the A-side of the stack
through the first shelf, and bridge B connects to the B-side of the stack through the last shelf.
Note: The bridge SAS port B is disabled.
The following illustration shows a set of bridges cabled to a stack of three disk shelves:
[Figure: Two bridges, each using its Ethernet management port (M1) and SAS port A, cabled to a
stack of three disk shelves; one bridge connects to the first shelf and the other to the last shelf.]
6. Verify that each bridge can detect all disk drives and disk shelves it is connected to.
b. Click the link and enter your user name and the password that you
designated when you configured the bridge.
The ATTO FibreBridge 6500N status page appears with a menu to
the left.
Example
The output shows the devices (disks and disk shelves) that the bridge is connected to. Output lines
are sequentially numbered so you can quickly count the devices. For example, the following
output shows that 10 disks are connected.
Note: If the text response truncated appears at the beginning of the output, you can use
Telnet to connect to the bridge and enter the same command to see all the output.
7. Verify the command output shows the bridge is connected to all disks and disk shelves in the
stack that it is supposed to be connected to.
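If you are checking from a Telnet or serial session rather than from the ExpressNAV GUI, the same
verification can be run at the bridge command line. A minimal sketch, assuming the ATTO
sastargets CLI command; it prints one numbered line per attached device, which you can compare
against the expected number of disks and shelves:

sastargets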
Choices
Configuring the FC switches by running a configuration file on page 49
Configuring the Cisco or Brocade FC switches manually on page 50
Choices
Configuring Brocade FC switches with configuration files on page 49
Configuring the Cisco FC switches with configuration files on page 50
Steps
3. On the Fibre Channel Switch for Brocade page, click View & Download.
4. On the Fibre Channel Switch - Brocade page, click the MetroCluster link.
5. Follow the directions on the MetroCluster Configuration Files for Brocade Switches
description page to download and run the files.
Steps
3. On the Fibre Channel Switch for Cisco page, click the View & Download button.
4. On the Fibre Channel Switch - Cisco page, click the MetroCluster link.
5. Follow the directions on the MetroCluster Configuration Files for Cisco Switches Description
page to download and run the files.
Choices
Configuring the Brocade FC switches on page 50
Configuring the Cisco FC switches on page 68
You must have a PC or UNIX workstation with Telnet or SSH access to the FC switches.
You must be using four supported Brocade switches of the same model with the same Brocade
Fabric Operating System (FOS) version and licensing.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
• The MetroCluster configuration requires four switches.
  The four switches must be connected to two fabrics of two switches each, with each fabric
  spanning both sites.
• Two initiator ports must be connected from each storage controller to each fabric.
  Each storage controller must have four initiator ports available to connect to the switch fabrics.
• The ISLs in one fabric must all have the same length and the same speed.
  Different lengths can be used in the different fabrics. The same speed must be used in all
  fabrics.
• Metro-E and TDM (SONET/SDH) are not supported; any non-FC native framing or signaling
  is not supported.
  Metro-E means Ethernet framing or signaling occurs either natively over a Metro distance or
  through some TDM, MPLS, or WDM.
• TDMs, FCR (native FC Routing), and FCIP extensions are not supported for the MetroCluster
  FC switch fabric.
• Third-party encryption devices are not supported on any link in the MetroCluster FC switch
  fabric, including the ISL links across the WAN.
  Certain switches in the MetroCluster FC switch fabric support encryption or compression, and
  sometimes support both.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
Steps
1. Reviewing Brocade license requirements on page 52
2. Setting the Brocade FC switch values to factory defaults on page 52
3. Configuring the basic switch settings on page 55
4. Configuring the E-ports on a Brocade FC switch on page 57
5. Configuring the non-E-ports on the Brocade switch on page 61
6. Configuring zoning on Brocade FC switches on page 62
7. Setting ISL encryption on Brocade 6510 switches on page 65
Related information
NetApp Interoperability Matrix Tool
Trunking license for systems using more than one ISL, as recommended.
You can verify that the licenses are installed by using the licenseshow command. If you do not
have these licenses, contact your sales representative before proceeding.
Steps
This ensures the switch will remain disabled after a reboot or fastboot. If this command is not
available, use the switchdisable command.
Example
The following example shows the command on BrocadeSwitchA and BrocadeSwitchB:
BrocadeSwitchA:admin> switchcfgpersistentdisable
BrocadeSwitchB:admin> switchcfgpersistentdisable
Example
The following example shows the command on BrocadeSwitchA:
4. Set all ports to their default values by issuing the following command for each port:
portcfgdefault
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> portcfgdefault 0
FC_switch_A_1:admin> portcfgdefault 1
...
FC_switch_A_1:admin> portcfgdefault 39
FC_switch_B_1:admin> portcfgdefault 0
FC_switch_B_1:admin> portcfgdefault 1
...
FC_switch_B_1:admin> portcfgdefault 39
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> cfgdisable
FC_switch_A_1:admin> cfgclear
FC_switch_A_1:admin> cfgsave
FC_switch_B_1:admin> cfgdisable
FC_switch_B_1:admin> cfgclear
FC_switch_B_1:admin> cfgsave
Example
The following example shows the command on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> configdefault
FC_switch_B_1:admin> configdefault
Example
The following example shows the command on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> switchcfgtrunk 0
FC_switch_B_1:admin> switchcfgtrunk 0
8. On Brocade 6510 switches, disable the Brocade Virtual Fabrics (VF) feature:
fosconfig --disable vf
Example
The following example shows the command on FC_switch_A_1:
Example
The following example shows the commands on FC_switch_A_1:
Example
The following example shows the command on FC_switch_A_1:
FC_switch_A_1:admin> reboot
FC_switch_B_1:admin> reboot
Using that example, domain IDs 5 and 7 form fabric_1 and domain IDs 6 and 8 form fabric_2.
Steps
b. Press Enter in response to the prompts until you get to RSCN Transmission Mode, and then
set that value to y.
Example
FC_switch_A_1:admin> configure
Fabric parameters = y
Domain_id = 5
.
.
RSCN Transmission Mode (yes, y, no, n): [no] y
3. If you are using two or more ISLs per fabric, configure in-order-delivery of frames:
These steps must be performed on each switch fabric.
a. Enable in-order-delivery:
iodset
d. Verify the IOD settings by using the iodshow, aptpolicy and dlsshow commands.
Example
For example, issue the following commands on FC_switch_A_1:
FC_switch_A_1:admin> iodshow
IOD is set
FC_switch_A_1:admin> aptpolicy
Current Policy: 1 0(ap)
FC_switch_A_1:admin> dlsshow
DLS is not set
4. Enable the trap for T11-FC-ZONE-SERVER-MIB to provide successful health monitoring of the
switches in Data ONTAP:
Example
On FC_switch_A_1:
FC_switch_A_1:admin> reboot
On FC_switch_B_1:
FC_switch_B_1:admin> reboot
Example
On FC_switch_A_1:
FC_switch_A_1:admin> switchcfgpersistentenable
On FC_switch_B_1:
FC_switch_B_1:admin> switchcfgpersistentenable
• All ISLs in an FC switch fabric must be configured with the same speed and distance.
• The combination of the switch port and SFP must support the speed.
• The ISL must be using one of the supported speeds: 4 Gbps, 8 Gbps, or 16 Gbps.
• The distance can be as far as 200 km but must be supported by the FC switch model.
  NetApp Interoperability Matrix Tool
  After opening the Interoperability Matrix, you can use the Storage Solution field to select your
  MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
  You use the Component Explorer to select the components and Data ONTAP version to refine
  your search. You can click Show Results to display the list of supported configurations that
  match the criteria.
• The ISL link must have a dedicated lambda, and the link must be supported by Brocade for the
  distance, switch type, and FOS.
Steps
You must use the highest common speed supported by the components in the path.
Example
In the following example, there is one ISL for each fabric:
FC_switch_A_1:admin> portcfgspeed 10 16
FC_switch_B_1:admin> portcfgspeed 10 16
In the following example, there are two ISLs for each fabric:
FC_switch_A_1:admin> portcfgspeed 10 16
FC_switch_A_1:admin> portcfgspeed 11 16
FC_switch_B_1:admin> portcfgspeed 10 16
FC_switch_B_1:admin> portcfgspeed 11 16
2. If more than one ISL for each fabric is used, enable trunking for each ISL port:
portcfgtrunkport port-number 1
Example
FC_switch_A_1:admin> portcfgtrunkport 10 1
FC_switch_A_1:admin> portcfgtrunkport 11 1
FC_switch_B_1:admin> portcfgtrunkport 10 1
FC_switch_B_1:admin> portcfgtrunkport 11 1
Example
In the following example, there is one ISL per switch fabric:
Example
In the following example, there are two ISLs per switch fabric:
Example
The following example shows the output for a configuration that uses two ISLs cabled to port 10
and port 11:
Ports of Slot 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------+---+---+---+---+-----+---+---+---+----+---+---+---+-----+---+---+---
Speed AN AN AN AN AN AN 8G AN AN AN 16G 16G AN AN AN AN
Fill Word 0 0 0 0 0 0 3 0 0 0 3 3 3 0 0 0
AL_PA Offset 13 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Trunk Port .. .. .. .. .. .. .. .. .. .. ON ON .. .. .. ..
Long Distance .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
VC Link Init .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked L_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked G_Port .. .. .. .. .. .. ON .. .. .. .. .. .. .. .. ..
Disabled E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
ISL R_RDY Mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
RSCN Suppressed .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Persistent Disable.. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
LOS TOV enable .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
NPIV capability ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON
NPIV PP Limit 126 126 126 126 126 126 126 126 126 126 126 126 126 126 126 126
QOS E_Port AE AE AE AE AE AE AE AE AE AE AE AE AE AE AE AE
Mirror Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Rate Limit .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Credit Recovery ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON
Fport Buffers .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Port Auto Disable .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
CSCTL mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Fault Delay 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Example
If the distance is 3 km, then 1.5 × 3 km = 4.5 km. This is lower than 10 km, so the ISL must be set
to the LE distance level.
Example
If the distance is 20 km, then 1.5 × 20 km = 30 km. The ISL distance must be set to 30 km, using
the LS distance level.
A vc_link_init value of 1 uses the ARB fill word (default). A value of 0 uses IDLE. The
required value might depend on the link being used. The commands must be repeated for each
ISL port.
Example
For an ISL distance of 3 km, as given in the example in the previous step, the setting is 4.5 with
the default vc_link_init value of 1:
Example
For an ISL distance of 20 km, as given in the example in the previous step, the setting is 30 with
the default vc_link_init value of 1:
FC_switch_A_1:admin> portcfglongdistance 10 LS 1 30
FC_switch_B_1:admin> portcfglongdistance 10 LS 1 30
Example
The following example shows the output for a configuration that uses ISLs on port 10 and port 11:
FC_switch_A_1:admin> portbuffershow
Example
The following example shows the output for a configuration that uses ISLs on port 10 and port
11:
FC_switch_A_1:admin> switchshow
switchName: FC_switch_A_1
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Subordinate
switchDomain: 5
switchId: fffc01
switchWwn: 10:00:00:05:33:86:89:cb
zoning: OFF
switchBeacon: OFF
FC_switch_B_1:admin> switchshow
switchName: FC_switch_B_1
switchType: 71.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 7
switchId: fffc03
switchWwn: 10:00:00:05:33:8c:2e:9a
zoning: OFF
switchBeacon: OFF
Example
FC_switch_A_1:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
-----------------------------------------------------------------
1: fffc01 10:00:00:05:33:86:89:cb 10.10.10.55 0.0.0.0 "FC_switch_A_1"
3: fffc03 10:00:00:05:33:8c:2e:9a 10.10.10.65 0.0.0.0 >"FC_switch_B_1"
FC_switch_B_1:admin> fabricshow
Switch ID Worldwide Name Enet IP Addr FC IP Addr Name
----------------------------------------------------------------
1: fffc01 10:00:00:05:33:86:89:cb 10.10.10.55 0.0.0.0 "FC_switch_A_1"
10. Repeat the previous steps for the second FC switch fabric.
Related concepts
Port assignments for FC switches in a four-node configuration on page 38
Steps
You should use the highest common speed, which is the highest speed supported by all
components in the data path: the SFP, the switch port that the SFP is installed on, and the
connected device (HBA, bridge, etc).
For example, the components might have the following supported speeds:
The highest common speed in this case is 4 Gbps, so the port should be configured for a speed of
4 Gbps.
Example
FC_switch_A_1:admin> portcfgspeed 6 4
FC_switch_B_1:admin> portcfgspeed 6 4
Example
FC_switch_A_1:admin> portcfgshow
FC_switch_B_1:admin> portcfgshow
Speed is set to 4G
Ports of Slot 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
-----------------+---+---+---+---+-----+---+---+---+-----+---+---+---+-----+---+---+---
Speed AN AN AN AN AN AN 4G AN AN AN AN AN AN AN AN AN
Fill Word 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0
AL_PA Offset 13 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Trunk Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Long Distance .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
VC Link Init .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked L_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked G_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Disabled E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Locked E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
ISL R_RDY Mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
RSCN Suppressed .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Persistent Disable .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
LOS TOV enable .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
NPIV capability ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON
NPIV PP Limit 126 126 126 126 126 126 126 126 126 126 126 126 126 126 126 126
QOS E_Port AE AE AE AE AE AE AE AE AE AE AE AE AE AE AE AE
Mirror Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Rate Limit .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Credit Recovery ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON
Fport Buffers .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Port Auto Disable .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
CSCTL mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..
Fault Delay 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
One port connecting to an FC-to-SAS bridge.
Note: Zoning for the fabric can be configured from one switch in the fabric. In this example, it is
configured on Switch_A_1.
The examples in the following steps use these ports and zones:
Steps
Example
In this example, ports 0 and 3 of domain 5 (Switch_A_1) and ports 0 and 3 of domain 7
(Switch_B_1) are members of the FC-VI zone.
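A minimal sketch of the corresponding command, using the standard Brocade zonecreate syntax
and the domain,port members shown in the effective configuration later in this section:

Switch_A_1:admin> zonecreate "QOSH1_FCVI_1", "5,0; 5,3; 7,0; 7,3"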
Example
In the following example, ports 1, 2, 4 and 5 of domain 5 (Switch_A_1) connect with the HBAs
on the storage controllers. Port 6 in domain 5 (Switch_A_1) connects to an FC-to-SAS bridge.
Note: You should give each zone a descriptive name. In this example, it is "STOR_A_1_6" that
identifies the zone as a storage zone for the target port 6 at Site_A.
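A minimal sketch of the corresponding command, again using standard zonecreate syntax with the
members listed in the effective configuration below:

Switch_A_1:admin> zonecreate "STOR_A_1_6", "5,1; 5,2; 5,4; 5,5; 5,6; 7,1; 7,2; 7,4; 7,5"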
Example
4. Enter the cfgadd config_name zone;zone... command if you want to add more zones to the
configuration.
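For example, the zones can be collected into a configuration and the configuration extended; a
minimal sketch using standard FOS syntax and the configuration name shown in the output below:

Switch_A_1:admin> cfgcreate "CFG_1", "QOSH1_FCVI_1; STOR_A_1_6"
Switch_A_1:admin> cfgadd "CFG_1", "STOR_B_1_6"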
Example
Switch_A_1:admin> cfgsave
Example
Effective configuration:
cfg: CFG_1
zone: QOSH1_FCVI_1
5,0
7,0
5,3
7,3
zone: STOR_A_1_6
5,1
5,2
5,4
5,5
7,1
7,2
7,4
7,5
5,6
zone: STOR_B_1_6
5,1
5,2
5,4
5,5
7,1
7,2
7,4
7,5
7,6
------------------------------------
~ - Invalid configuration
* - Member does not exist
# - Invalid usage of broadcast zone
You must have selected two switches from the same fabric.
Step
1. Disable the virtual fabric by entering the following command at the switch console:
fosconfig --disable vf
Steps
b. Set the other parameters such as Domain, WWN Based persistent PID and so on.
Steps
2. Set the authentication policy on the switch to on by entering the following command:
authUtil --policy -sw on
a. Provide the worldwide name (WWN) of the other switch in the fabric for the parameter
Enter peer WWN, Domain, or switch name.
b. Provide the peer secret for the parameter Enter peer secret.
c. Provide the local secret for the parameter Enter local secret.
d. Enter the following value for the parameter Are you done.
Y
Example
The following is an example of setting authentication secret:
Enter peer WWN, Domain, or switch name (Leave blank when done):
10:00:00:05:33:76:2e:99
Enter peer secret: <hidden>
Re-enter peer secret: <hidden>
Enter local secret: <hidden>
Re-enter local secret: <hidden>
Enter peer WWN, Domain, or switch name (Leave blank when done):
Are you done? (yes, y, no, n): [no] yes
Saving data to key store... Done.
Steps
Example
In the following example, the encryption is enabled on ports 8 and 12:
portCfgEncrypt --enable 8
portCfgEncrypt --enable 12
Example
The following example shows the encryption is enabled on ports 8 and 12:
        User        Encryption
Port    configured  Active
----    ----------  ------
8       yes         yes
9       No          No
10      No          No
11      No          No
12      yes         yes
You must be using four supported Cisco switches of the same model with the same NX-OS
version and licensing.
Not all switches are supported for connectivity to the ATTO FibreBridge model.
Encryption and compression in the Cisco FC storage fabric is not supported in the MetroCluster
configuration.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray). You
use the Component Explorer to select the components and Data ONTAP version to refine your
search. You can click Show Results to display the list of supported configurations that match the
criteria.
The following requirements apply to the Inter-Switch Link (ISL) connections:
Fibre Channel over IP (FCIP) is not supported for ISL connections in a MetroCluster
environment.
ISLs of different speeds and lengths are supported between switches in the same fabric.
Steps
1. Reviewing Cisco license requirements on page 69
2. Setting the Cisco FC switch to factory defaults on page 70
3. Configure the Cisco FC switch basic settings and community string on page 70
4. Acquiring licenses for ports on page 71
5. Enabling ports in a Cisco MDS 9148 or 9148S switch on page 72
6. Configuring the F-ports on a Cisco FC switch on page 73
7. Assigning buffer-to-buffer credits to F-Ports in the same port group as the ISL on page 74
8. Creating and configuring the VSANs on Cisco FC switches on page 76
9. Configuring the E-ports on the Cisco FC switch on page 80
10. Configuring zoning on a Cisco FC switch on page 83
11. Ensuring the FC switch configuration is saved on page 86
Related information
NetApp Interoperability Matrix Tool
ENTERPRISE_PKG
This enables you to use the QoS feature in Cisco switches.
PORT_ACTIVATION_PKG
You can use this license for Cisco 9148 switches. This license enables you to activate or
deactivate ports on the switches as long as only 16 ports are active at any given time. By default,
16 ports are enabled in Cisco MDS 9148 switches.
FM_SERVER_PKG
This enables you to manage fabrics simultaneously and to manage switches through a web
browser.
The FM_SERVER_PKG license also enables performance management features such as
performance thresholds, threshold monitoring, and so on. For more information about this license,
see the Cisco Fabric Manager Server Package.
You can verify that the licenses are installed by using the show license usage command. If you
do not have these licenses, contact your sales representative before proceeding.
Steps
1. Make a console connection and log in to both switches in the same fabric.
2. Issue the following command to set the switch back to its default settings:
write erase
You can respond y when prompted to confirm the command. This erases all licenses and
configuration information on the switch.
3. Reboot the switch:
reload
4. Repeat the write erase and reload commands on the other switch.
After issuing the reload command, the switch reboots and then prompts with setup questions. At
that point, proceed to the next section.
The following example shows the process on a fabric consisting of FC_switch_A_1 and
FC_switch_B_1.
Steps
1. If the switch does not display the setup questions, issue the following command to configure the
basic switch settings:
setup
2. Accept the default responses to the setup questions until you are prompted for the SNMP
community string.
3. Set the community string to public (all lowercase) to allow access from the Data ONTAP Health
Monitors.
Example
The following example shows the commands on FC_switch_A_1:
FC_switch_A_1# setup
Configure read-only SNMP community string (yes/no) [n]: y
SNMP community string : public
Note: Please set the SNMP community string to "Public" or
another value of your choosing.
Configure default switchport interface state (shut/noshut)
[shut]: noshut
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: yes
FC_switch_B_1# setup
Configure read-only SNMP community string (yes/no) [n]: y
SNMP community string : public
Note: Please set the SNMP community string to "Public" or
another value of your choosing.
Configure default switchport interface state (shut/noshut)
[shut]: noshut
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: yes
Steps
1. Issue the following command to show license usage for a switch fabric:
show port-resources module 1
Determine which ports require licenses. If some of those ports are unlicensed, determine if you
have extra licensed ports and consider removing the licenses from them.
b. Remove the license from the port using the following command:
no port-license acquire
b. Make the port eligible to acquire a license using the "port license" command:
port-license
Switch_A_1# conf t
Switch_A_1(config)# interface fc1/2
Switch_A_1(config-if)# shut
Switch_A_1(config-if)# no port-license acquire
Switch_A_1(config-if)# exit
Switch_A_1(config)# interface fc1/1
Switch_A_1(config-if)# port-license
Switch_A_1(config-if)# port-license acquire
Switch_A_1(config-if)# no shut
Switch_A_1(config-if)# end
Switch_A_1# copy running-config startup-config
Switch_B_1# conf t
Switch_B_1(config)# interface fc1/2
Switch_B_1(config-if)# shut
Switch_B_1(config-if)# no port-license acquire
Switch_B_1(config-if)# exit
Switch_B_1(config)# interface fc1/1
Switch_B_1(config-if)# port-license
Switch_B_1(config-if)# port-license acquire
Switch_B_1(config-if)# no shut
Switch_B_1(config-if)# end
Switch_B_1# copy running-config startup-config
You can manually enable 16 ports in a Cisco MDS 9148 or 9148S switch.
The Cisco switches enable you to apply the POD license on random ports, as opposed to applying
them in sequence.
Cisco switches require that you use one port from each port group, unless you need more than 12
ports.
Steps
2. License and acquire the required port in a port group by entering the following commands in
sequence:
config t
interface port_number
shut
port-license acquire
no shut
Example
For example, the following command licenses and acquires Port fc 1/45:
switch# config t
switch(config)#
switch(config)# interface fc 1/45
switch(config-if)#
switch(config-if)# shut
switch(config-if)# port-license acquire
switch(config-if)# no shut
switch(config-if)# end
Steps
6. Set the rate mode of the switch port to dedicated by issuing the following command:
switchport rate-mode dedicated
FC_switch_A_1# config t
FC_switch_A_1(config)# interface fc 1/1
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport mode F
FC_switch_A_1(config-if)# switchport speed 8000
FC_switch_A_1(config-if)# switchport rate-mode dedicated
FC_switch_A_1(config-if)# no shutdown
FC_switch_A_1(config-if)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# config t
FC_switch_B_1(config)# interface fc 1/1
FC_switch_B_1(config-if)# switchport mode F
FC_switch_B_1(config-if)# switchport speed 8000
FC_switch_B_1(config-if)# switchport rate-mode dedicated
FC_switch_B_1(config-if)# no shutdown
FC_switch_B_1(config-if)# end
FC_switch_B_1# copy running-config startup-config
Assigning buffer-to-buffer credits to F-Ports in the same port group as the ISL
You must assign the buffer-to-buffer credits to the F-ports if they are in the same port group as the
ISL. If the ports do not have the required buffer-to-buffer credits, the ISL could be inoperative. This
task is not required if the F-ports are not in the same port group as the ISL port.
Steps
2. Enter the following command to set the interface configuration mode for the port:
interface port-ID
4. If the port is not already in F mode, set the port to F mode by entering the following command:
switchport mode F
5. Set the buffer-to-buffer credit of the non-E ports to 1 by using the following command:
switchport fcrxbbcredit 1
9. Verify the buffer-to-buffer credit assigned to a port by entering the following commands:
show port-resources module 1
In this example, port fc1/40 is the ISL. Ports fc1/37, fc1/38 and fc1/39 are in the same port
group and must be configured.
The following commands show the port range being configured for fc1/37 through fc1/39:
FC_switch_A_1# conf t
FC_switch_A_1(config)# interface fc1/37-39
FC_switch_A_1(config-if)# shut
FC_switch_A_1(config-if)# switchport mode F
FC_switch_A_1(config-if)# switchport fcrxbbcredit 1
FC_switch_A_1(config-if)# no shut
FC_switch_A_1(config-if)# exit
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# interface fc1/37-39
FC_switch_B_1(config-if)# shut
FC_switch_B_1(config-if)# switchport mode F
FC_switch_B_1(config-if)# switchport fcrxbbcredit 1
FC_switch_B_1(config-if)# no shut
FC_switch_B_1(config-if)# exit
FC_switch_B_1# copy running-config startup-config
The following commands and system output show that the settings are properly applied:
--------------------------------------------------------------------
Interfaces in the Port-Group    B2B Credit    Bandwidth    Rate Mode
                                Buffers       (Gbps)
--------------------------------------------------------------------
fc1/37                          32            8.0          dedicated
fc1/38                          1             8.0          dedicated
fc1/39                          1             8.0          dedicated
...
--------------------------------------------------------------------
Interfaces in the Port-Group    B2B Credit    Bandwidth    Rate Mode
                                Buffers       (Gbps)
--------------------------------------------------------------------
fc1/37                          32            8.0          dedicated
fc1/38                          1             8.0          dedicated
fc1/39                          1             8.0          dedicated
...
Steps
a. Issue the following command to enter configuration mode if you have not done so already:
config t
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10
FC_switch_A_1(config-vsan-db)# vsan 10 name FCVI_1_10
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 10
FC_switch_B_1(config-vsan-db)# vsan 10 name FCVI_1_10
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
For the FC-VI VSAN, the ports connecting the two local FC-VI ports must be added. In the
following example, the ports are fc1/1 and fc1/13:
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10 interface fc1/1
FC_switch_A_1(config-vsan-db)# vsan 10 interface fc1/13
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 10 interface fc1/1
FC_switch_B_1(config-vsan-db)# vsan 10 interface fc1/13
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
a. Enable the in-order-guarantee of exchanges for the VSAN by entering the following
command:
in-order-guarantee vsan vsan-ID
b. Enable load balancing for the VSAN by entering the following command:
vsan vsan-ID loadbalancing src-dst-id
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1# config t
FC_switch_A_1(config)# in-order-guarantee vsan 10
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# config t
FC_switch_B_1(config)# in-order-guarantee vsan 10
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 10 loadbalancing src-dst-id
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
b. Enable the QoS and create a class map by entering the following commands in sequence:
qos enable
qos class-map class_name match-any
c. Add the class map created in a previous step to the policy map by entering the following
command:
class class_name
e. Add the VSAN to the policy map created in step 2 by entering the following command:
qos service policy policy_name vsan vsanid
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# qos enable
FC_switch_A_1(config)# qos class-map FCVI_1_10_Class match-any
FC_switch_A_1(config)# qos policy-map FCVI_1_10_Policy
FC_switch_A_1(config-pmap)# class FCVI_1_10_Class
FC_switch_A_1(config-pmap-c)# priority high
FC_switch_A_1(config-pmap-c)# exit
FC_switch_A_1(config-pmap)# exit
FC_switch_A_1(config)# qos service policy FCVI_1_10_Policy vsan 10
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# qos enable
FC_switch_B_1(config)# qos class-map FCVI_1_10_Class match-any
FC_switch_B_1(config)# qos policy-map FCVI_1_10_Policy
FC_switch_B_1(config-pmap)# class FCVI_1_10_Class
FC_switch_B_1(config-pmap-c)# priority high
FC_switch_B_1(config-pmap-c)# exit
FC_switch_B_1(config-pmap)# exit
FC_switch_B_1(config)# qos service policy FCVI_1_10_Policy vsan 10
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 20
FC_switch_A_1(config-vsan-db)# vsan 20 name STOR_1_20
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 20
FC_switch_B_1(config-vsan-db)# vsan 20 name STOR_1_20
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
Example
The following example shows the commands on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1# conf t
FC_switch_A_1(config)# vsan database
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/5
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/9
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/17
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/21
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/25
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/29
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/33
FC_switch_A_1(config-vsan-db)# vsan 20 interface fc1/37
FC_switch_A_1(config-vsan-db)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# vsan database
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/5
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/9
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/17
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/21
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/25
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/29
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/33
FC_switch_B_1(config-vsan-db)# vsan 20 interface fc1/37
FC_switch_B_1(config-vsan-db)# end
FC_switch_B_1# copy running-config startup-config
Steps
1. Use the following table to determine the adjusted required BBCs per kilometer for possible port
speeds.
To determine the correct number of BBCs, you multiply the Adjusted BBCs required
(determined from the table below) by the distance in kilometers between the
switches. The adjustment factor of 1.5 is required to account for FC-VI framing behavior.
Speed in Gbps   BBCs required per kilometer   Adjusted BBCs required (BBCs per km x 1.5)
1               0.5                           0.75
2               1                             1.5
4               2                             3
8               4                             6
16              8                             12
Example
For example, to compute the required number of credits for a distance of 30 km on a 4-Gbps link,
make the following calculation:
The speed is 4 Gbps, so the adjusted BBC value from the table is 3.
3 x 30 = 90
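A minimal sketch of how the computed value would then be applied to the ISL port, assuming the port is already in interface configuration mode (the value is carried into the configuration steps that follow):
FC_switch_A_1(config-if)# switchport fcrxbbcredit 90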
3. Specify the port you are configuring by entering the following command:
interface port-name
12. Repeat the previous steps for the matching ISL port on the partner switch in the fabric.
Example
The following example shows port fc1/36 configured for a distance of 30 km and 8 Gbps:
FC_switch_A_1# conf t
FC_switch_A_1(config)# interface fc1/36
FC_switch_A_1(config-if)# shutdown
FC_switch_A_1(config-if)# switchport rate-mode dedicated
FC_switch_A_1(config-if)# switchport speed 8000
FC_switch_A_1(config-if)# switchport fcrxbbcredit 60
FC_switch_A_1(config-if)# switchport mode E
FC_switch_A_1(config-if)# switchport trunk mode on
FC_switch_A_1(config-if)# switchport trunk allowed vsan 10
FC_switch_A_1(config-if)# switchport trunk allowed vsan add 20
FC_switch_A_1(config-if)# channel-group 1
fc1/36 added to port-channel 1 and disabled
FC_switch_B_1# conf t
FC_switch_B_1(config)# interface fc1/36
FC_switch_B_1(config-if)# shutdown
FC_switch_B_1(config-if)# switchport rate-mode dedicated
FC_switch_B_1(config-if)# switchport speed 8000
FC_switch_B_1(config-if)# switchport fcrxbbcredit 60
FC_switch_B_1(config-if)# switchport mode E
FC_switch_B_1(config-if)# switchport trunk mode on
FC_switch_B_1(config-if)# switchport trunk allowed vsan 10
FC_switch_B_1(config-if)# switchport trunk allowed vsan add 20
FC_switch_B_1(config-if)# channel-group 1
fc1/36 added to port-channel 1 and disabled
13. Issue the following command on both switches to restart the ports:
no shutdown
14. Repeat the previous steps for the other ISL ports in the fabric.
15. Add the native VSAN to the port-channel interface on both switches in the same fabric:
interface port-channel number
switchport trunk allowed vsan add native_san_id
Trunk vsans (admin allowed and active) shows all the allowed VSANs.
The member list shows all the ISL ports that were added to the port-channel.
The port VSAN number should be the same as the VSAN that contains the ISLs (usually native
VSAN 1).
18. Copy the updated configuration to the startup configuration on both fabrics:
copy running-config startup-config
Example
FC_switch_A_1(config-if)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1(config-if)# end
FC_switch_B_1# copy running-config startup-config
Related concepts
Port assignments for FC switches in a four-node configuration on page 38
Steps
Example
The following example shows two zone sets being disabled.
Example
The following example shows the default zoning settings being configured on both switches:
FC_switch_A_1# conf t
FC_switch_A_1(config)# no system default zone default-zone permit
FC_switch_A_1(config)# system default zone distribute full
FC_switch_A_1(config)# no zone default-zone permit 10
FC_switch_A_1(config)# no zone default-zone permit 20
FC_switch_A_1(config)# zoneset distribute full vsan 10
FC_switch_A_1(config)# zoneset distribute full vsan 20
FC_switch_A_1(config)# end
FC_switch_A_1# copy running-config startup-config
FC_switch_B_1# conf t
FC_switch_B_1(config)# no system default zone default-zone permit
FC_switch_B_1(config)# system default zone distribute full
FC_switch_B_1(config)# no zone default-zone permit 10
FC_switch_B_1(config)# no zone default-zone permit 20
FC_switch_B_1(config)# zoneset distribute full vsan 10
FC_switch_B_1(config)# zoneset distribute full vsan 20
FC_switch_B_1(config)# end
FC_switch_B_1# copy running-config startup-config
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# zone name STOR_Zone_1_20_25 vsan 20
FC_switch_A_1(config-zone)# member interface fc1/5 swwn 20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/9 swwn 20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/17 swwn 20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/21 swwn 20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# member interface fc1/5 swwn 20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/9 swwn 20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/17 swwn 20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/21 swwn 20:00:00:05:9b:24:12:99
FC_switch_A_1(config-zone)# member interface fc1/25 swwn 20:00:00:05:9b:24:cb:78
FC_switch_A_1(config-zone)# end
FC_switch_A_1# copy running-config startup-config
5. Create an FCVI zone set and add the FCVI ports to it:
These steps only need to be performed on one switch in the fabric.
b. Add FCVI zones to the zone set by entering the following command:
member FCVI_zonename
Example
FC_switch_A_1# conf t
FC_switch_A_1(config)# zoneset name FCVI_Zoneset_1_20 vsan 20
FC_switch_A_1(config-zoneset)# member FCVI_Zone_1_20_25
FC_switch_A_1(config-zoneset)# member FCVI_Zone_1_20_29
...
FC_switch_A_1(config-zoneset)# exit
FC_switch_A_1(config)# zoneset activate name FCVI_Zoneset_1_20 vsan 20
FC_switch_A_1(config)# exit
FC_switch_A_1# copy running-config startup-config
Storage controllers
The storage controllers connect directly to the storage using SAS cables.
Each storage controller is configured as a DR partner to a storage controller on the partner site.
When the MetroCluster is enabled, the system automatically pairs the two nodes with the lowest
system IDs in each of the two clusters as DR partners.
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Note: FlexArray systems support array LUNs and have different storage requirements.
Requirements for a MetroCluster configuration with
array LUNs on page 155
If you are using SAS optical single-mode breakout cables, the following rules apply:
The point-to-point (QSFP-to-QSFP) path of a single single-mode cable cannot exceed 500
meters.
The total end-to-end path (sum of point-to-point paths from the controller to the last shelf)
cannot exceed 510 meters.
The total path includes the set of breakout cables, patch panels, and inter-panel cables.
You must connect all eight (four pairs) of the SC, LC, or MTRJ breakout connectors to the
patch panel.
The SAS cables can be SAS copper, SAS optical, or a mix, depending on whether your system
meets the requirements for using the type of cable.
If you are using a mix of SAS copper cables and SAS optical cables, the following rules apply:
Shelf-to-shelf connections in a stack must be all SAS copper cables or all SAS optical cables.
If the shelf-to-shelf connections are SAS optical cables, the shelf-to-controller connections to
that stack must also be SAS optical cables.
If the shelf-to-shelf connections are SAS copper cables, the shelf-to-controller connections to
that stack can be SAS optical cables or SAS copper cables.
All components must be supported.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
Disk shelves connected with SAS optical cables require a version of disk shelf firmware that
supports SAS optical cables.
Best practice is to update all disk shelves in the storage system with the latest version of disk
shelf firmware.
Note: Do not revert disk shelf firmware to a version that does not support SAS optical cables.
The cable QSFP connector end connects to a disk shelf or a SAS port on a controller.
The QSFP connectors are keyed; when oriented correctly into a SAS port the QSFP connector
clicks into place and the disk shelf SAS port link LED, labeled LNK (Link Activity), illuminates
green. Do not force a connector into a port.
Choices
Racking the hardware components on page 90
Cabling the controllers to each other and the storage shelves on page 91
Cabling the cluster peering connections on page 92
Cabling the management and data connections on page 92
Steps
Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration
(including both sites).
Steps
[Figure: cabling diagram showing Controller 1 and Controller 2 (FC-VI ports a and b; slot 1 SAS ports A, B, C, and D) cabled to Stack 1 and Stack 2, with ACP and SAS connections to IOM A on the first shelf and IOM B on the last shelf.]
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Required components
Because of the hardware redundancy in the MetroCluster configuration, there are two of each
component at each site. The sites are arbitrarily assigned the letters A and B and the individual
components are arbitrarily assigned the numbers 1 and 2.
The MetroCluster configuration also includes SAS storage shelves that connect to the FC-to-SAS
bridges.
Note: FlexArray systems support array LUNs and have different storage requirements.
Requirements for a MetroCluster configuration with
array LUNs on page 155
Storage controllers
The storage controllers are not connected directly to the storage but connect to FC-to-SAS
bridges.
Each storage controller is configured as a DR partner to a storage controller on the partner site.
When the MetroCluster is enabled, the system automatically pairs the two nodes with the lowest
system IDs in each of the two clusters as DR partners.
The storage controllers are connected to each other by FC cables between each controller's FC-VI
adapters.
FC-to-SAS bridges
The FC-to-SAS bridges connect the SAS storage stacks to the FC switches, providing bridging
between the two protocols.
The following illustration shows a simplified view of the MetroCluster configuration. For some
connections, a single line represents multiple, redundant connections between the components. Data
and management network connections are not shown.
[Figure: cluster_A connected to cluster_B through FC_bridge_A_1 and FC_bridge_A_2 at site A and FC_bridge_B_1 and FC_bridge_B_2 at site B.]
Related information
NetApp Interoperability Matrix Tool
Steps
1. Racking the hardware components on page 98
2. Cabling the controllers to each other on page 99
3. Cabling the cluster peering connections on page 99
4. Cabling the management and data connections on page 99
Steps
Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration
(including both sites).
a. Secure the L brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
The openings in the bridge L brackets are compliant with rack standard EIA-310 for 19-
inch (482.6 mm) racks.
For more information and an illustration of the installation, see the ATTO FibreBridge 6500N
Installation and Operation Manual.
b. Connect each bridge to a power source that provides a proper ground.
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Step
[Figure: FC-VI cabling between the two controllers: port fc-vi a to fc-vi a and port fc-vi b to fc-vi b.]
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
You can connect the controller and cluster switch management ports to existing switches in your
network or to new dedicated network switches such as NetApp CN1601 cluster management
switches.
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Steps
1. Preparing for the installation on page 101
2. Installing the FC-to-SAS bridge and SAS shelves in a fabric-attached configuration on page 102
3. Cabling the FC-to-SAS bridges to the controller module in a two-node direct-attached
configuration on page 106
Related concepts
Example of a four-node MetroCluster configuration with disks and array LUNs on page 194
Your system must already be installed in a rack if it was not shipped in a system cabinet.
Your configuration must be using supported hardware models and software versions.
NetApp Interoperability Matrix Tool
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
Each FC switch must have one FC port available for one bridge to connect to it.
The computer you are using to set up the bridges must be running an ATTO-supported web
browser to use the ATTO ExpressNAV GUI.
The ATTO-supported web browsers are Internet Explorer 8 and 9, and Mozilla Firefox 3.
The ATTO Product Release Notes have an up-to-date list of supported web browsers. You can
access this document from the ATTO web site as described in the following steps.
Steps
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
2. Download content from the ATTO web site and from the NetApp web site:
a. From NetApp Support, navigate to the ATTO FibreBridge Description page by clicking
Software, scrolling to Protocol Bridge and choosing ATTO FibreBridge from the drop-
down menu.
c. Access the ATTO web site using the link provided and download the following:
d. Navigate to the ATTO FibreBridge 6500N Firmware Download page by clicking Continue
at the end of the ATTO FibreBridge Description page.
Download the bridge firmware file using Steps 1 through 3 of that procedure.
You update the firmware on each bridge later in this procedure.
Make a copy of the ATTO FibreBridge 6500N Firmware Download page and release notes
for reference when you are instructed to update the firmware on each bridge.
3. Gather the hardware and information needed to use the recommended bridge management
interfaces, the ATTO ExpressNAV GUI, and the ATTO QuickNAV utility:
b. Determine a non-default user name and password for accessing the bridges.
You should change the default user name and password.
c. Obtain an IP address, subnet mask, and gateway information for the Ethernet management 1
port on each bridge.
d. Disable VPN clients on the computer you are using for setup.
Active VPN clients cause the QuickNAV scan for bridges to fail.
The system connectivity requirements for maximum distances for disk shelves, FC switches, and
backup tape devices using 50-micron, multimode fiber-optic cables, also apply to FibreBridge
bridges.
The Site Requirements Guide has detailed information about system connectivity requirements.
Steps
1. Connect the Ethernet management 1 port on each bridge to your network using an Ethernet cable.
Note: The Ethernet management 1 port enables you to quickly download the bridge firmware
(using ATTO ExpressNAV or FTP management interfaces) and to retrieve core files and extract
logs.
2. Configure the Ethernet management 1 port for each bridge by following the procedure in the
ATTO FibreBridge 6500N Installation and Operation Manual, section 2.0.
Note: When running QuickNAV to configure an Ethernet management port, only the Ethernet
management port that is connected by the Ethernet cable is configured. For example, if you
also wanted to configure the Ethernet management 2 port, you would need to connect the
Ethernet cable to port 2 and run QuickNAV.
c. Configure the connection mode that the bridges use to communicate across the FC network.
You must set the bridge connection mode to ptp (point-to-point).
For example, if you were to use the command line interface (CLI) to set the bridge FC 1 port's
basic required configuration, you would enter the following commands; the last command saves
the configuration changes:
set ipaddress
set subnet
set ipgateway
set FCDataRate 1 8Gb
set FCConnMode 1 ptp
set SNMP enabled
set bridgename
SaveConfiguration
Note: To set the IP address without the Quicknav utility, you need to have a serial connection
to the FibreBridge.
The ATTO FibreBridge 6500N Installation and Operation Manual has the most current
information on available commands and how to use them.
4. Update the firmware on each bridge to the latest version by following the instructions, starting
with Step 4, on the FibreBridge 6500N Download page.
c. For each stack of disk shelves, cable IOM B circle port of the last shelf to SAS port A on
FibreBridge B.
Each bridge has one path to its stack of disk shelves; bridge A connects to the A-side of the stack
through the first shelf, and bridge B connects to the B-side of the stack through the last shelf.
Note: The bridge SAS port B is disabled.
The following illustration shows a set of bridges cabled to a stack of three disk shelves:
[Figure: FibreBridge A and FibreBridge B, each showing the Ethernet management 1 (M1) and SAS A ports, cabled to a stack of three disk shelves; bridge B connects through the last shelf.]
6. Verify that each bridge can detect all disk drives and disk shelves it is connected to.
b. Click the link, and then enter your user name and the password that
you designated when you configured the bridge.
The ATTO FibreBridge 6500N status page appears with a menu to
the left.
Example
The output shows the devices (disks and disk shelves) that the bridge is connected to. Output lines
are sequentially numbered so that you can quickly count the devices. For example, the following
output shows that 10 disks are connected:
Note: If the text response truncated appears at the beginning of the output, you can use
Telnet to connect to the bridge and enter the same command to see all of the output.
7. Verify that the command output shows that the bridge is connected to all disks and disk shelves in
the stack that it is supposed to be connected to.
Steps
a. Cable FC port 1 of the bridge to an 8-Gb or 4-Gb FC port on the controller in cluster_A.
b. Cable FC port 2 of the bridge to an FC port of the same speed on the controller in cluster_A.
2. Repeat the previous step on the other bridges until all bridges have been cabled.
The 7-Mode fabric MetroCluster configuration must be using SAS storage shelves only.
If the existing configuration includes FC storage shelves (such as the DS14mk4 FC), FC switch
fabric sharing is not supported.
The SFPs on the switch ports used by the new, clustered MetroCluster configuration must support
16-Gbps rates.
The existing 7-Mode fabric MetroCluster can remain connected to ports using 8-Gbps or 16-Gbps
SFPs.
On each of the four Brocade 6510 switches, ports 24 through 45 must be available to connect the
ports of the new MetroCluster components.
You should ensure that the existing Inter-Switch Links (ISLs) are on ports 46 and 47.
The Brocade 6510 switches must be running a FOS firmware version that is supported on both
the 7-Mode fabric MetroCluster and clustered Data ONTAP MetroCluster configuration.
Steps
1. Reviewing Brocade license requirements on page 108
2. Racking the hardware components on page 108
3. Cabling the new MetroCluster controllers to the existing FC fabrics on page 109
4. Configuring switch fabrics sharing between the 7-Mode and clustered MetroCluster configuration
on page 110
Related information
7-Mode Transition Tool 2.1 Data and Configuration Transition Guide
Trunking license for systems using more than one ISL, as recommended.
You can verify that the licenses are installed by using the licenseshow command. If you do not
have these licenses, contact your sales representative before proceeding.
Steps
5. Install the disk shelves, power them on, and set the shelf IDs.
SAS Disk Shelves Installation and Service Guide for DS4243, DS2246, DS4486, and DS4246
You must power-cycle each disk shelf.
Shelf IDs must be unique for each SAS disk shelf within the entire MetroCluster configuration
(including both sites).
a. Secure the L brackets on the front of the bridge to the front of the rack (flush-mount) with
the four screws.
The openings in the bridge L brackets are compliant with rack standard EIA-310 for 19-
inch (482.6 mm) racks.
For more information and an illustration of the installation, see the ATTO FibreBridge 6500N
Installation and Operation Manual.
b. Connect each bridge to a power source that provides a proper ground.
The bridge Ready LED might take up to 30 seconds to illuminate, indicating that the bridge
has completed its power-on self test sequence.
Steps
1. Cable the FC-VI and HBA ports according to the following table:
Site A                                                 Site B
Connect this component and port...   FC_switch_A_1 port...   Connect this component and port...   FC_switch_B_1 port...
controller_A_1 FC-VI port 1          32                      controller_B_1 FC-VI port 1          32
controller_A_1 HBA port 1            33                      controller_B_1 HBA port 1            33
controller_A_1 HBA port 2            34                      controller_B_1 HBA port 2            34
controller_A_2 FC-VI port 1          35                      controller_B_2 FC-VI port 1          35
controller_A_2 HBA 1                 36                      controller_B_2 HBA 1                 36
controller_A_2 HBA 2                 37                      controller_B_2 HBA 2                 37
2. Cable each FC-SAS bridge in the first switch fabric to the FC switches.
The number of bridges varies depending on the number of SAS storage stacks.
Site A                                         Site B
Cable this Site A bridge...   FC_switch_A_1 port...   Cable this Site B bridge...   FC_switch_B_1 port...
bridge_A_1_38                 38                      bridge_B_1_38                 38
bridge_A_1_39                 39                      bridge_B_1_39                 39
bridge_A_1_40                 40                      bridge_B_1_40                 40
bridge_A_1_41                 41                      bridge_B_1_41                 41
bridge_A_1_42                 42                      bridge_B_1_42                 42
bridge_A_1_43                 43                      bridge_B_1_43                 43
bridge_A_1_44                 44                      bridge_B_1_44                 44
bridge_A_1_45                 45                      bridge_B_1_45                 45
3. Cable each FC-to-SAS bridge in the second switch fabric to the FC switches:
Site A                                         Site B
Cable this Site A bridge...   FC_switch_A_2 port...   Cable this Site B bridge...   FC_switch_B_2 port...
bridge_A_2_38                 38                      bridge_B_2_38                 38
bridge_A_2_39                 39                      bridge_B_2_39                 39
bridge_A_2_40                 40                      bridge_B_2_40                 40
bridge_A_2_41                 41                      bridge_B_2_41                 41
bridge_A_2_42                 42                      bridge_B_2_42                 42
bridge_A_2_43                 43                      bridge_B_2_43                 43
bridge_A_2_44                 44                      bridge_B_2_44                 44
bridge_A_2_45                 45                      bridge_B_2_45                 45
Steps
Example
The following example shows the command issued on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> switchCfgPersistentDisable
FC_switch_B_1:admin> switchCfgPersistentDisable
2. Ensure that the 7-Mode MetroCluster configuration is functioning correctly using the redundant
fabric:
Example
node_A> cf status
Controller Failover enabled, node_A is up.
VIA Interconnect is up (link 0 down, link 1 up).
Steps
Example
The following example shows the zone FCVI_TI_FAB_2.
Example
The following example shows the deletion of zone FCVI_TI_FAB_2.
Example
The output should be similar to the following:
5. Enable in-order-delivery:
iodset
6. Select Advanced Performance Tuning (APT) policy 1, the Port Based Routing Policy:
aptpolicy 1
Example
The output should be similar to the following:
Brocade-6510:admin> iodshow
IOD is set
Brocade-6510:admin> aptpolicy
Current Policy: 1
3 : Default Policy
1: Port Based Routing Policy
2: Device Based Routing Policy (FICON support only)
3: Exchange Based Routing Policy
Brocade-6510:admin> dlsshow
Ensuring ISLs are in the same port group and configuring zoning
You must make sure that the Inter-Switch Links (ISLs) are in the same port group and configure
zoning for the MetroCluster configurations to successfully share the switch fabrics.
Steps
1. If the ISLs are not in the same port group, move one of the ISL ports to the same port group as the
other one.
You can use any available port except 32 through 45, which are used by the new MetroCluster
configuration. The recommended ISL ports are 46 and 47.
2. Follow the steps in Configuring zoning on a Brocade FC switch on page 62 to enable trunking
and the QoS zone.
The port numbers used when sharing fabrics are different from those shown in that section. When
sharing, use ports 46 and 47 for the ISL ports. If you moved your ISL ports, you need to use the
procedure in Configuring the E-ports (ISL ports) on a Brocade FC switch on page 57 to configure
the ports.
3. Follow the steps in Configuring the non-E ports on the Brocade switch on page 61 to configure
the non-E ports.
4. Do not delete the zones or zone sets that already exist in the backend switches (for the 7-Mode
fabric MetroCluster) except the Traffic Isolation (TI) zones in Step 3.
5. Follow the steps in Configuring the E-ports (ISL ports) on a Brocade FC switch on page 57 to add
the zones required by the new MetroCluster to the existing zone sets.
Example
The following example shows the commands and system output for creating the zones:
Brocade-6510-2K0GG:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
Do you want to save the Defined zoning configuration only? (yes, y,
no, n): [no] yes
Nothing changed: nothing to save, returning ...
Brocade-6510-2K0GG:admin>
Steps
Example
The following example shows the command issued on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> switchCfgPersistentEnable
FC_switch_B_1:admin> switchCfgPersistentEnable
2. Verify that the switches are online and all devices are properly logged in:
switchShow
Example
The following example shows the command issued on FC_switch_A_1 and FC_switch_B_1:
FC_switch_A_1:admin> switchShow
FC_switch_B_1:admin> switchShow
3. Run the fmc_dc utility to ensure that the 7-Mode fabric MetroCluster is functioning correctly.
You can ignore errors related to Traffic Isolation (TI) zoning and trunking.
Steps
1. Gathering required information and reviewing the workflow on page 117
2. Similarities and differences between regular cluster and MetroCluster configurations on page 122
3. Setting a previously used controller module to system defaults in Maintenance mode on page 122
4. Verifying disk assignment in Maintenance mode in a four-node configuration on page 123
5. Verifying disk assignment in Maintenance mode in a two-node configuration on page 128
6. Verifying the HA state of components is mcc or mcc-2n in Maintenance mode on page 129
7. Setting up Data ONTAP on page 130
8. Configuring the clusters into a MetroCluster configuration on page 135
9. Checking for MetroCluster configuration errors with Config Advisor on page 151
Site A switch information (if not using two-node switchless cluster configuration or
two-node MetroCluster configuration)
When you cable the system, you need a host name and management IP address for each cluster
switch. This information is not needed if you are using a two-node switchless cluster or a
two-node MetroCluster configuration (one node at each site).
Site B switch information (if not using two-node switchless cluster configuration or
two-node MetroCluster configuration)
When you cable the system, you need a host name and management IP address for each cluster
switch. This information is not needed if you are using a two-node switchless cluster or a
two-node MetroCluster configuration (one node at each site).
Steps
After you issue the command, wait until the system stops at the LOADER prompt.
4. Boot the node back into Maintenance mode to enable the configuration changes to take effect.
Steps
2. If necessary, explicitly assign disks on the attached disk shelves to the appropriate pool with the
disk assign command.
Using wildcards in the command enables you to assign all the disks on a disk shelf with one
command.
3. Show the disk shelf IDs and bays for each disk:
storage show disk -x
Steps
1. If you have not done so, boot each system into Maintenance mode.
2. Assign the disk shelves to the nodes located at the first site (site A):
Disk shelves at the same site as the node are assigned to pool 0 and disk shelves located at the
partner site are assigned to pool 1.
You should assign an equal number of shelves to each pool.
a. On the first node, systematically assign the local disk shelves to pool 0 and the remote disk
shelves to pool 1:
disk assign -shelf local-switch-name:shelf-name.port -p pool
Example
If storage controller Controller_A_1 has four shelves, you issue the following commands:
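A minimal sketch, assuming hypothetical shelf names shelf1 and shelf2 (local, assigned to pool 0) and shelf3 and shelf4 (remote, assigned to pool 1), reached through FC_switch_A_1 and FC_switch_B_1; substitute the switch, shelf, and port identifiers from your own cabling:
*> disk assign -shelf FC_switch_A_1:shelf1.port -p 0
*> disk assign -shelf FC_switch_A_1:shelf2.port -p 0
*> disk assign -shelf FC_switch_B_1:shelf3.port -p 1
*> disk assign -shelf FC_switch_B_1:shelf4.port -p 1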
b. Repeat the process for the second node at the local site, systematically assigning the local disk
shelves to pool 0 and the remote disk shelves to pool 1:
disk assign -shelf local-switch-name:shelf-name.port -p pool
Example
If storage controller Controller_A_2 has four shelves, you issue the following commands:
3. Assign the disk shelves to the nodes located at the second site (site B):
Disk shelves at the same site as the node are assigned to pool 0 and disk shelves located at the
partner site are assigned to pool 1.
You should assign an equal number of shelves to each pool.
a. On the first node at the remote site, systematically assign its local disk shelves to pool 0 and
its remote disk shelves to pool 1:
disk assign -shelf local-switch-name:shelf-name.port -p pool
Example
If storage controller Controller_B_1 has four shelves, you issue the following commands:
b. Repeat the process for the second node at the remote site, systematically assigning its local
disk shelves to pool 0 and its remote disk shelves to pool 1:
disk assign -shelf local-switch-name:shelf-name.port -p pool
Example
If storage controller Controller_B_2 has four shelves, you issue the following commands:
Note: Pool 0 always contains the disks that are found at the same site as the storage system that
owns them.
Pool 1 always contains the disks that are remote to the storage system that owns them.
Steps
1. If you have not done so, boot each system into Maintenance mode.
2. Assign the disks to the nodes located at the first site (site A):
You should assign an equal number of disks to each pool.
a. On the first node, systematically assign half the disks on each shelf to pool 0 and the other
half to the HA partner's pool 0:
disk assign -disk disk-name -p pool -n number-of-disks
Example
If storage controller Controller_A_1 has four shelves, each with 8 SSDs, you issue the
following commands:
b. Repeat the process for the second node at the local site, systematically assigning half the disks
on each shelf to pool 1 and the other half to the HA partner's pool 1:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_A_1 has four shelves, each with 8 SSDs, you issue the
following commands:
3. Assign the disks to the nodes located at the second site (site B):
You should assign an equal number of disks to each pool.
a. On the first node at the remote site, systematically assign half the disks on each shelf to pool 0
and the other half to the HA partner's pool 0:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_B_1 has four shelves, each with 8 SSDs, you issue the
following commands:
b. Repeat the process for the second node at the remote site, systematically assigning half the
disks on each shelf to pool 1 and the other half to the HA partner's pool 1:
disk assign -disk disk-name -p pool
Example
If storage controller Controller_B_2 has four shelves, each with 8 SSDs, you issue the
following commands:
Steps
2. If necessary, you can explicitly assign disks on the attached disk shelves to the appropriate pool
with the disk assign command.
Using wildcards in the command enables you to assign all the disks on a disk shelf with one
command.
3. Show the disk shelf IDs and bays for each disk:
storage show disk -x
Steps
1. In Maintenance mode, display the HA state of the controller module and chassis:
ha-config show
The correct HA state depends on whether you have a four-node or two-node MetroCluster
configuration:
Number of controllers in the MetroCluster configuration   HA state for all components should be...
Four                                                      mcc
Two                                                       mcc-2n
2. If the displayed system state of the controller is not mcc, set the HA state for the controller
module to mcc or mcc-2n:
3. If the displayed system state of the chassis is not mcc, set the HA state for the chassis to mcc or
mcc-2n:
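A minimal sketch of the Maintenance mode commands for Steps 2 and 3, assuming a four-node configuration (substitute mcc-2n for mcc in a two-node configuration):
*> ha-config modify controller mcc
*> ha-config modify chassis mcc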
Choices
Running System Setup in a four-node MetroCluster configuration on page 130
Setting up the clusters in a two-node MetroCluster configuration on page 133
You must not have configured the Service Processor prior to performing this task.
Steps
1. If you have not already done so, power up each node and let them boot up.
If the system is in Maintenance mode, issue the halt command to exit Maintenance mode, and
then issue the following command from the LOADER prompt:
boot_ontap
Example
The output should be similar to the following:
2. Enable the AutoSupport tool by following the directions provided by the system.
Example
The prompts are similar to the following:
4. If you have a four-node MetroCluster configuration, confirm that nodes are configured in high-
availability mode:
storage failover show -fields mode
If not, you must issue the following command on each node and reboot the node:
storage failover modify -mode ha -node localhost
This command configures high availability mode but does not enable storage failover. Storage
failover is automatically enabled when the MetroCluster configuration is performed later in the
configuration process.
Example
The following example shows output for two controllers in cluster_A. If it is a two-node
MetroCluster configuration, the output will show only one node.
6. If you are creating a two-node switchless cluster (a cluster without cluster interconnect switches),
enable the switchless-cluster networking mode:
You can respond y when prompted to continue into advanced mode. The advanced mode
prompt appears (*>).
7. Launch the System Setup tool as directed by the information that appears on the system console
after the initial boot.
8. Use the System Setup tool to configure each node and create the cluster, but do not create
aggregates.
Note: You create mirrored aggregates in later tasks.
Steps
This system will send event messages and weekly reports to NetApp Technical
Support. To disable this feature, enter "autosupport modify -support disable"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system. For
further information on AutoSupport, see:
http://support.netapp.com/autosupport/
2. Because you are using the CLI to set up the cluster, exit the Node Setup wizard:
exit
The Node Setup wizard would otherwise be used to configure the node's node management
interface for use with System Setup.
The Node Setup wizard exits, and a login prompt appears, warning that you have not completed
the setup tasks:
Exiting the node setup wizard. Any changes you made have been saved.
Warning: You have exited the node setup wizard before completing all
of the tasks. The node is not configured. You can complete node setup
by typing
"node setup" in the command line interface.
login:
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
6. Accept the system defaults by pressing Enter, or enter your own values by typing no and then
pressing Enter.
7. Follow the prompts to complete the Cluster Setup wizard, pressing Enter to accept the default
values or typing your own values and then pressing Enter.
The default values are determined automatically based on your platform and network
configuration.
8. After you complete the Cluster Setup wizard and it exits, verify that the cluster is active and the
first node is healthy:
cluster show
Example
The following example shows a cluster in which the first node (cluster1-01) is healthy and
eligible to participate:
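The output would resemble the following sketch (the node name depends on your configuration):
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true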
If it becomes necessary to change any of the settings you entered for the admin SVM or node
SVM, you can access the Cluster Setup wizard by using the cluster setup command.
Steps
1. Reviewing the preconfigured SVMs and LIFs on page 135
2. Manually peering the clusters on page 137
Related concepts
Considerations when using dedicated ports on page 13
Considerations when sharing data ports on page 13
Related references
Prerequisites for cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
For 32xx systems with two controllers in the chassis, or a controller and a blank, slot 1 is used.
Choices
Configuring intercluster LIFs to use dedicated intercluster ports on page 137
Configuring intercluster LIFs to share data ports on page 141
In the following procedures, replace the ports, networks, IP addresses, subnet masks, and subnets
with those specific to your environment.
Steps
1. List the ports in the cluster by using the network port show command.
Example
2. Determine whether any of the LIFs are using ports that are dedicated for replication by using the
network interface show command.
Example
Ports e0e and e0f do not appear in the following output; therefore, they do not have any LIFs
located on them:
3. If a LIF is using a port that you want dedicated to intercluster connectivity, migrate the LIF to a
different port.
a. Migrate the LIF to another port by using the network interface migrate command.
Example
The following example assumes that the data LIF named cluster01_data01 uses port e0e and
you want only an intercluster LIF to use that port:
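A minimal sketch of the migrate command for this example (the destination node and port are assumptions):
cluster01::> network interface migrate -vserver cluster01 -lif cluster01_data01 -destination-node cluster01-01 -destination-port e0d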
b. You might need to modify the migrated LIF home port to reflect the new port where the LIF
should reside by using the network interface modify command:
Example
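A minimal sketch, assuming the LIF should now be homed on port e0d:
cluster01::> network interface modify -vserver cluster01 -lif cluster01_data01 -home-node cluster01-01 -home-port e0d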
4. Group the ports that you will use for the intercluster LIFs by using the network interface
failover-groups create command.
Example
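A minimal sketch, assuming ports e0e and e0f on both nodes make up the group and that the group name intercluster01 is chosen for this example:
cluster01::> network interface failover-groups create -vserver cluster01 -failover-group intercluster01 -targets cluster01-01:e0e,cluster01-01:e0f,cluster01-02:e0e,cluster01-02:e0f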
5. Display the failover group that you created by using the network interface failover-groups show command.
Example
6. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:
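A minimal sketch, using the addresses shown in the verification output that follows (the netmask is an assumption):
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -role intercluster -home-node cluster01-01 -home-port e0e -address 192.168.1.201 -netmask 255.255.255.0
cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -role intercluster -home-node cluster01-02 -home-port e0e -address 192.168.1.202 -netmask 255.255.255.0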
7. Verify that the intercluster LIFs were created properly by using the network interface show
command.
Example
cluster01_icl01     up/up    192.168.1.201/24    cluster01-01    e0e    true
cluster01_icl02     up/up    192.168.1.202/24    cluster01-02    e0e    true
cluster01-01_mgmt1  up/up    192.168.0.xxx/24    cluster01-01    e0c    true
cluster01-02_mgmt1  up/up    192.168.0.xxx/24    cluster01-02    e0c    true
8. Verify that the intercluster LIFs are configured for redundancy by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0e home port on each node. If the e0e port fails, the
LIF can fail over to the e0f port.
9. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
10. If communication between intercluster LIFs in different clusters requires routing, create an
intercluster route by using the network route create command.
The gateway of the new route should be on the same subnet as the intercluster LIF.
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
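A minimal sketch of the route creation for this example:
cluster01::> network route create -vserver cluster01 -destination 0.0.0.0/0 -gateway 192.168.1.1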
11. Verify that you created the routes correctly by using the network route show command.
Example
0.0.0.0/0 192.168.0.1 20
cluster01
0.0.0.0/0 192.168.0.1 10
0.0.0.0/0 192.168.1.1 40
12. Repeat these steps to configure intercluster networking in the peer cluster.
13. Verify that the ports have access to the proper subnets, VLANs, and so on.
Dedicating ports for replication in one cluster does not require dedicating ports in all clusters; one
cluster might use dedicated ports, while the other cluster shares data ports for intercluster
replication.
Related concepts
Considerations when using dedicated ports on page 13
Steps
1. List the ports in the cluster by using the network port show command:
Example
2. Create an intercluster LIF on the admin SVM cluster01 by using the network interface
create command.
Example
This example uses the LIF naming convention adminSVMname_icl# for the intercluster LIF:
3. Verify that the intercluster LIFs were created properly by using the network interface show
command with the -role intercluster parameter:
Example
4. Verify that the intercluster LIFs are configured to be redundant by using the network
interface show command with the -role intercluster and -failover parameters.
Example
The LIFs in this example are assigned the e0c port on each node. If the e0c port fails, the LIF can
fail over to the e0d port.
5. Display the routes in the cluster by using the network route show command to determine
whether intercluster routes are available or you must create them.
Creating a route is required only if the intercluster addresses in both clusters are not on the same
subnet and a specific route is needed for communication between the clusters.
Example
In this example, no intercluster routes are available:
Example
In this example, 192.168.1.1 is the gateway address for the 192.168.1.0/24 network. If the
destination is specified as 0.0.0.0/0, then it becomes a default route for the intercluster network.
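A minimal sketch of the route creation, as in the dedicated-port procedure:
cluster01::> network route create -vserver cluster01 -destination 0.0.0.0/0 -gateway 192.168.1.1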
7. Verify that you created the routes correctly by using the network route show command.
Example
Related concepts
Considerations when sharing data ports on page 13
Intercluster LIFs should be created in the IPspaces of both clusters you want to peer.
You should ensure that the intercluster LIFs of the clusters can route to each other.
If there are different administrators for each cluster, the passphrase used to authenticate the
cluster peer relationship should be agreed upon.
Steps
1. Create the cluster peer relationship on each cluster by using the cluster peer create
command.
The passphrase that you use is not displayed as you type it.
If you created a nondefault IPspace to designate intercluster connectivity, you use the ipspace
parameter to select that IPspace.
Example
In the following example, cluster01 is peered with a remote cluster named cluster02. Cluster01 is
a two-node cluster that has one intercluster LIF per node. The IP addresses of the intercluster
LIFs created in cluster01 are 192.168.2.201 and 192.168.2.202. Similarly, cluster02 is a two-node
cluster that has one intercluster LIF per node. The IP addresses of the intercluster LIFs created in
cluster02 are 192.168.2.203 and 192.168.2.204. These IP addresses are used to create the cluster
peer relationship.
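A minimal sketch of the commands, one per cluster, using the addresses above; each command prompts for the agreed passphrase, which is not echoed:
cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204
cluster02::> cluster peer create -peer-addrs 192.168.2.201,192.168.2.202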
If DNS is configured to resolve host names for the intercluster IP addresses, you can use host
names in the peer-addrs option. Intercluster IP addresses are not likely to change frequently;
however, using host names allows intercluster IP addresses to change without requiring
modification of the cluster peer relationship.
Example
In the following example, an IPspace called IP01A was created on cluster01 for intercluster
connectivity. The IP addresses used in the previous example are used in this example to create the
cluster peer relationship.
2. Display the cluster peer relationship by using the cluster peer show command with the -
instance parameter.
Displaying the cluster peer relationship verifies that the relationship was established successfully.
Example
3. Preview the health of the nodes in the peer cluster by using the cluster peer health show
command.
Previewing the health checks the connectivity and status of the nodes on the peer cluster.
Example
cluster02 cluster02-01
Data: interface_reachable
ICMP: interface_reachable true true true
cluster02-02
Data: interface_reachable
ICMP: interface_reachable true true true
Steps
Example
The following command mirrors the root aggregate for controller_A_1:
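A minimal sketch, assuming the root aggregate is named aggr0_controller_A_1:
controller_A_1::> storage aggregate mirror aggr0_controller_A_1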
This creates an aggregate with a local plex located at the local MetroCluster site and a remote
plex located at the remote MetroCluster site.
2. Repeat the previous steps for each node in the MetroCluster configuration.
You should know what drives or array LUNs will be used in the new aggregate.
If you have multiple drive types in your system (heterogeneous storage), you should understand
how you can ensure that the correct drive type is selected.
Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in
that aggregate must be owned by the same node, which becomes the home node for that
aggregate.
Aggregate names should conform to the naming scheme you determined when you planned your
MetroCluster configuration.
The Clustered Data ONTAP Data Protection Guide contains more information about mirroring
aggregates.
Steps
2. Create the aggregate by using the storage aggregate create -mirror true command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node,
use the -node parameter or specify drives that are owned by that node.
You can specify the following options:
Aggregate's home node (that is, the node that owns the aggregate in normal operation)
List of specific drives or array LUNs that are to be added to the aggregate
Maximum number of drives or array LUNs that can be included in a RAID group
For more information about these options, see the storage aggregate create man page.
Example
The following command creates a mirrored aggregate with 10 disks:
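A minimal sketch, assuming the aggregate is named aggr1_controller_A_1 and should be created on controller_A_1:
controller_A_1::> storage aggregate create -aggregate aggr1_controller_A_1 -node controller_A_1 -diskcount 10 -mirror true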
There must be at least two nonroot mirrored data aggregates on each cluster, and all aggregates
must be mirrored.
You can verify this with the storage aggregate show command.
Steps
1. Enter the metrocluster configure command in the following format, depending on your
configuration.
Example
The following command enables MetroCluster configuration on all nodes in the DR group that
contains controller_A_1:
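A minimal sketch of the command, using the node name from the description above:
controller_A_1::> metrocluster configure -node-name controller_A_1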
Example
3. Confirm the MetroCluster configuration from both sites in the MetroCluster configuration:
Example
Example
Steps
This command must be repeated on all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with an IP address of 10.10.10.10:
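A minimal sketch of the command:
controller_A_1::> storage switch add -address 10.10.10.10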
It might take up to 15 minutes to reflect all data due to the 15-minute polling interval.
Example
The following example shows the command used to verify that the MetroCluster FC switches are
configured:
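A minimal sketch of the verification command:
controller_A_1::> storage switch show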
If the switch's worldwide name (WWN) is shown, the Data ONTAP health monitor is able to
contact and monitor the FC switch.
Related information
Clustered Data ONTAP 8.3 System Administration Guide
Steps
Example
The command will run as a background job:
Component Result
------------------- ---------
nodes ok
lifs ok
config-replication ok
aggregates ok
clusters ok
5 entries were displayed.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy four-node MetroCluster configuration:
The following example shows the metrocluster check cluster show output for a healthy
four-node MetroCluster configuration. It indicates that the clusters are ready to perform a
negotiated switchover if necessary.
Related information
Clustered Data ONTAP 8.3 Physical Storage Management Guide
Clustered Data ONTAP 8.3 Network Management Guide
Clustered Data ONTAP 8.3 Data Protection Guide
Steps
2. After running Config Advisor, review the tool's output and follow the recommendations in the
output to address any issues discovered.
cluster_A
controller_A_1
controller_A_2
cluster_B
controller_B_1
controller_B_2
Steps
Example
The output should indicate that takeover is possible for both nodes:
You can use the storage failover show-takeover command to monitor the progress of
the takeover operation.
c. Confirm that the takeover is complete:
storage failover show
Example
The output should indicate that controller_A_1 is in takeover state, meaning that it has taken
over its HA partner:
You can use the storage failover show-giveback command to monitor the progress of
the giveback operation.
Example
The output should indicate that takeover is possible for both nodes:
f. Repeat the previous substeps, this time taking over controller_A_1 from controller_A_2.
Related information
Clustered Data ONTAP 8.3 High-Availability Configuration Guide
Step
1. Use the procedures for negotiated switchover, healing, and switchback in the Clustered Data
ONTAP 8.3 MetroCluster Management and Disaster Recovery Guide.
Step
1. Set the URL of the remote destination for the configuration backup files:
system configuration backup settings modify URL-of-destination
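A minimal sketch, assuming an FTP destination (the URL is an example):
cluster_A::> system configuration backup settings modify -destination ftp://192.0.2.98/backups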
Related information
Clustered Data ONTAP 8.3 System Administration Guide
Related concepts
Differences between the clustered Data ONTAP MetroCluster configurations on page 11
All the Data ONTAP systems in a MetroCluster configuration must be of the same model.
FC-VI adapters must be installed into the appropriate slots for each Data ONTAP system,
depending on the model.
NetApp Hardware Universe
The storage arrays in the MetroCluster configuration must be symmetric, which means the
following:
The two storage arrays must be from the same supported vendor family and have the same
firmware version installed.
FlexArray Virtualization Implementation Guide for NetApp E-Series Storage
FlexArray Virtualization Implementation Guide for Third-Party Storage
Disk types (for example, SATA, SSD, or SAS) used for mirrored storage must be the same on
both storage arrays.
The parameters for configuring storage arrays, such as RAID type and tiering, must be the
same across both sites.
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
Each Data ONTAP system must be connected to storage using redundant components so that
there is redundancy in case of device and path failures.
Your MetroCluster solution can use up to four ISLs, depending on the configuration.
For more information about basic switch configuration, ISL settings, and FC-VI configurations, see
Configuring the Cisco or Brocade FC switches manually on page 50.
SyncMirror requirements
SyncMirror is required for a MetroCluster configuration.
Two separate storage arrays, one at each site, are required for the mirrored storage.
Steps
1. Racking the hardware components in a MetroCluster configuration with array LUNs on page 157
2. Preparing a storage array for use with Data ONTAP systems on page 157
3. Switch ports required for a MetroCluster configuration with array LUNs on page 158
4. Cabling the FC-VI and HBA ports in a MetroCluster configuration with array LUNs on page 159
5. Cabling the ISLs between MetroCluster sites on page 164
6. Cabling the cluster interconnect in four-node configurations on page 165
7. Cabling the cluster peering connections on page 166
8. Cabling the HA interconnect, if necessary on page 166
9. Cabling the management and data connections on page 167
10. Cabling storage arrays to FC switches in a MetroCluster configuration on page 167
Steps
NetApp Interoperability
After opening the Interoperability Matrix, you can use the Storage Solution field to select your
MetroCluster solution (fabric MetroCluster or stretch MetroCluster, with or without FlexArray).
You use the Component Explorer to select the components and Data ONTAP version to refine
your search. You can click Show Results to display the list of supported configurations that
match the criteria.
Steps
1. Create LUNs on the storage array depending on the number of nodes in the MetroCluster
configuration.
Each node in the MetroCluster configuration requires array LUNs for the root aggregate, data
aggregate, and spares.
2. Configure parameters on the storage array that are required to work with Data ONTAP.
Switch ports required for a two-node MetroCluster configuration with array LUNs
When you are connecting Data ONTAP systems to FC switches for setting up a two-node
MetroCluster configuration with array LUNs, you must connect FC-VI and HBA ports from each
controller to specific switch ports.
If you are using both array LUNs and disks in the MetroCluster configuration, you must ensure that
the controller ports are connected to the switch ports recommended for configuration with disks, and
then use the remaining ports for configuration with array LUNs.
The following table lists the specific FC switch ports to which you must connect the different
controller ports in a two-node MetroCluster configuration with array LUNs:
Controller and port          FC_switch_x_1                     FC_switch_x_2
                             Brocade  Brocade  Cisco 9148      Brocade  Brocade  Cisco 9148
                             6505     6510     or 9148s        6505     6510     or 9148s
controller_x FC-VI port a    0        0        1               -        -        -
controller_x FC-VI port b    -        -        -               0        0        1
controller_x HBA port a      1        1        2               -        -        -
controller_x HBA port b      -        -        -               1        1        2
controller_x HBA port c      2        2        3               -        -        -
controller_x HBA port d      -        -        -               2        2        3
Note: The connections between the FC-VI ports and the FC switch ports are applicable only in the
case of a fabric-attached MetroCluster configuration.
Note: The connections between the FC-VI ports and the FC switch ports are applicable only in the
case of a fabric-attached MetroCluster configuration.
Switch ports required for a four-node MetroCluster configuration with array LUNs
When you are connecting Data ONTAP systems to FC switches for setting up a four-node
MetroCluster configuration with array LUNs, you must connect FC-VI and HBA ports from each
controller to specific switch ports.
If you are using both array LUNs and disks in the MetroCluster configuration, you must ensure that
the controller ports are connected to the switch ports recommended for configuration with disks, and
then use the remaining ports for configuration with array LUNs.
The following table lists the specific FC switch ports to which you must connect the different
controller ports in a four-node MetroCluster configuration with array LUNs:
Controller and port            FC_switch_x_1                     FC_switch_x_2
                               Brocade  Brocade  Cisco 9148      Brocade  Brocade  Cisco 9148
                               6505     6510     or 9148s        6505     6510     or 9148s
controller_x_1 FC-VI port a    0        0        1               -        -        -
controller_x_1 FC-VI port b    -        -        -               0        0        1
controller_x_1 HBA port a      1        1        2               -        -        -
controller_x_1 HBA port b      -        -        -               1        1        2
controller_x_1 HBA port c      2        2        3               -        -        -
controller_x_1 HBA port d      -        -        -               2        2        3
controller_x_2 FC-VI port a    3        3        4               -        -        -
controller_x_2 FC-VI port b    -        -        -               3        3        4
controller_x_2 HBA port a      4        4        5               -        -        -
controller_x_2 HBA port b      -        -        -               4        4        5
controller_x_2 HBA port c      5        5        6               -        -        -
controller_x_2 HBA port d      -        -        -               5        5        6
Cabling the FC-VI and HBA ports in a MetroCluster configuration with array
LUNs
For a MetroCluster configuration with array LUNs, you must cable the FC-VI ports and the HBA
ports from the controllers based on the type of configuration.
In a stretch configuration, you must cable the FC-VI ports across controllers and the HBA ports to
the FC switches; in a fabric-attached configuration, you must cable the FC-VI ports and the HBA
ports to the FC switches.
You must connect the controllers in a MetroCluster configuration to the storage arrays through FC
switches. Direct connectivity between the controllers and storage arrays is not supported.
Choices
Cabling the FC-VI and HBA ports in a stretch MetroCluster configuration with array LUNs on
page 160
Cabling the FC-VI and HBA ports in a two-node fabric-attached MetroCluster configuration with
array LUNs on page 161
Cabling the FC-VI and HBA ports in a four-node fabric-attached MetroCluster configuration with
array LUNs on page 162
Cabling the FC-VI and HBA ports in a stretch MetroCluster configuration with array LUNs
If you are setting up a two-node stretch MetroCluster configuration with array LUNs, you must cable
the FC-VI ports for direct connectivity between the controllers. In addition, you must cable each
controller's HBA ports to switch ports on the corresponding FC switches.
Steps
1. Cable the FC-VI ports directly between the two controllers.
2. Cable each controller's HBA ports to ports on the corresponding FC switches.
Example
The following example shows the connections between HBA ports on Controller A and ports on
FC_switch_A_1 and FC_switch_A_2:
The following table lists the connections between the HBA ports and the FC switch ports in the
example illustration:
Related references
Switch ports required for a two-node MetroCluster configuration with array LUNs on page 158
Cabling the FC-VI and HBA ports in a two-node fabric-attached MetroCluster configuration
with array LUNs
If you are setting up a two-node fabric-attached MetroCluster configuration with array LUNs, you
must cable the FC-VI ports and the HBA ports to the switch ports.
You must repeat this task for each controller at both the MetroCluster sites.
If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you must
use HBA ports and switch ports specified for configuration with disks.
Recommended port assignments for FC switches on page 38
Steps
1. Cable the FC-VI ports from the controller to alternate switch ports.
Example
The following example shows two FC-VI ports from Controller A cabled to switch ports on
alternate switches FC_switch_A_1 and FC_switch_A_2:
2. Cable the HBA ports from the controller to ports on the FC switches.
You must ensure redundancy in the connections from the controller to the switches. Therefore, for
each controller at a site, ensure that both HBA ports in the same port pair are connected to
alternate FC switches.
Example
The following example shows the connections between HBA ports on Controller A and ports on
FC_switch_A_1 and FC_switch_A_2:
The following table lists the connections between the HBA ports and the FC switch ports in the
illustration:
Related references
Switch ports required for a two-node MetroCluster configuration with array LUNs on page 158
Cabling the FC-VI and HBA ports in a four-node fabric-attached MetroCluster configuration
with array LUNs
If you are setting up a four-node fabric-attached MetroCluster configuration with array LUNs, you
must cable the FC-VI ports and the HBA ports to the switch ports.
You must repeat this task for each controller at both the MetroCluster sites.
If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you must
use HBA ports and switch ports specified for configuration with disks.
Recommended port assignments for FC switches on page 38
Steps
1. Cable the FC-VI ports from each controller to ports on alternate FC switches.
Example
The following example shows the connections between FC-VI ports and switch ports at Site A:
2. Cable the HBA ports from each controller to ports on alternate FC switches.
Example
The following example shows the connections between HBA ports and switch ports at Site A:
The following table lists the connections between the HBA ports on controller_A_1 and the FC
switch ports in the illustration:
The following table lists the connections between the HBA ports on controller_A_2 and the FC
switch ports in the illustration:
Related references
Switch ports required for a four-node MetroCluster configuration with array LUNs on page 158
Step
1. Connect the FC switches at each site to the ISL or ISLs.
Related concepts
Port assignments for FC switches in a four-node configuration on page 38
Port assignments for FC switches in a two-node configuration on page 40
Step
1. Cable the cluster interconnect from one controller to the other, or, if cluster interconnect switches
are used, from each controller to the switches.
Related information
Clustered Data ONTAP 8.3 Network Management Guide
Step
1. Identify and cable at least two ports for cluster peering and ensure they have network connectivity
with the partner cluster.
Cluster peering can be done on dedicated ports or on data ports. Using dedicated ports ensures
higher throughput for the cluster peering traffic.
Related concepts
Considerations for configuring cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Clustered Data ONTAP 8.3 Cluster Peering Express Guide
The HA interconnect must be cabled only if the storage controllers in the HA pair are in separate
chassis.
Some storage controller models support two controllers in a single chassis, in which case they use
an internal HA interconnect.
Steps
1. Cable the HA interconnect, based on the storage controller model:
62xx
a. Connect port ib0a on the first controller in the HA pair to port ib0a
on the other controller.
b. Connect port ib0b on the first controller in the HA pair to port ib0b
on the other controller.
32xx
a. Connect port c0a on the first controller in the HA pair to port c0a on
the other controller.
b. Connect port c0b on the first controller in the HA pair to port c0b on
the other controller.
Related information
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
Step
1. Cable the controller's management and data ports to the management and data networks at the
local site.
Installation and Setup Instructions FAS8040/FAS8060 Systems
Installation and Setup Instructions FAS80xx Systems with I/O Expansion Modules
Installation and Setup Instructions FAS8020 Systems
Installation and Setup Instructions 62xx Systems
Installation and Setup Instructions 32xx Systems
The storage arrays must be set up to present array LUNs to Data ONTAP.
The ISLs must be cabled between the FC switches across the MetroCluster sites.
You must repeat this task for each storage array at both the MetroCluster sites.
You must connect the controllers in a MetroCluster configuration to the storage arrays through FC
switches. Direct connectivity between the controllers and storage arrays is not supported.
Step
1. Connect the storage array ports to the FC switch ports.
Related concepts
Switch zoning in a MetroCluster configuration with array LUNs on page 171
The connections between storage array ports and FC switch ports are similar for both stretch and
fabric-attached variants of two-node MetroCluster configurations with array LUNs.
Note: If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you
must use the switch ports specified for the configuration with disks.
Recommended port assignments for FC switches on page 38
In the illustration, the redundant array port pairs for both the sites are as follows:
Storage array at Site A:
Ports 1A and 2A
Ports 1B and 2B
Storage array at Site B:
Ports 1A' and 2A'
Ports 1B' and 2B'
FC_switch_A_1 at Site A and FC_switch_B_1 at Site B are connected to form fabric_1. Similarly,
FC_switch_A_2 at Site A and FC_switch_B_2 at Site B are connected to form fabric_2.
The following table lists the connections between the storage array ports and the FC switches for the
example MetroCluster illustration:
Note: If you plan to use disks in addition to array LUNs in your MetroCluster configuration, you
must use the switch ports specified for the configuration with disks.
Recommended port assignments for FC switches on page 38
In the illustration, the redundant array port pairs for both the sites are as follows:
Storage array at Site A:
Ports 1A and 2A
Ports 1B and 2B
Ports 1C and 2C
Ports 1D and 2D
Storage array at Site B:
Ports 1A' and 2A'
Ports 1B' and 2B'
Ports 1C' and 2C'
Ports 1D' and 2D'
FC_switch_A_1 at Site A and FC_switch_B_1 at Site B are connected to form fabric_1. Similarly,
FC_switch_A_2 at Site A and FC_switch_B_2 at Site B are connected to form fabric_2.
The following table lists the connections between the storage array ports and the FC switches for the
MetroCluster illustration:
Related concepts
Example of switch zoning in a two-node MetroCluster configuration with array LUNs on page
172
Example of switch zoning in a four-node MetroCluster configuration with array LUNs on page
173
The MetroCluster configuration must follow the single-initiator to single-target zoning scheme.
Single-initiator to single-target zoning limits each zone to a single FC initiator port and a single
target port.
Sharing of multiple initiator ports with a single target port is not supported in MetroCluster
configurations.
Similarly, sharing of multiple target ports with a single initiator port is also not supported.
You must have performed a basic configuration of the FC switches used in the MetroCluster
configuration.
Configuring the Cisco or Brocade FC switches manually on page 50
The following illustration shows zoning for a two-node stretch MetroCluster configuration with array
LUNs:
The examples show single-initiator to single-target zoning for the MetroCluster configurations. The
lines in the examples represent zones rather than connections; each line is labeled with its zone
number.
In the two illustrations, array LUNs are allocated on each storage array. LUNs of equal size are
provisioned on the storage arrays at both sites, which is a SyncMirror requirement. Each Data
ONTAP system has two paths to array LUNs. The ports on the storage array are redundant.
The redundant array port pairs for both the sites are as follows:
Ports 1A and 2A
Ports 1B and 2B
The redundant port pairs on each storage array form alternate paths. Therefore, both the ports of the
port pairs can access the LUNs on the respective storage arrays.
The following table shows the zones for the illustrations:
Zone Data ONTAP controller and initiator port Storage array port
FC_switch_A_1
z1 Controller A: Port 0a Port 1A
z3 Controller A: Port 0c Port 1A'
FC_switch_A_2
z2 Controller A: Port 0b Port 2A'
z4 Controller A: Port 0d Port 2A
FC_switch_B_1
z5 Controller B: Port 0a Port 1B'
z7 Controller B: Port 0c Port 1B
FC_switch_B_2
z6 Controller B: Port 0b Port 2B
z8 Controller B: Port 0d Port 2B'
The following table shows the zones for the FC-VI connections in the fabric-attached configuration:
In the illustration, array LUNs are allocated on each storage array for the MetroCluster configuration.
LUNs of equal size are provisioned on the storage arrays at both sites, which is a SyncMirror
requirement. Each Data ONTAP system has two paths to array LUNs. The ports on the storage array
are redundant.
In the illustration, the redundant array port pairs for both the sites are as follows:
Storage array at Site A:
Ports 1A and 2A
Ports 1B and 2B
Ports 1C and 2C
Ports 1D and 2D
Storage array at Site B:
Ports 1A' and 2A'
Ports 1B' and 2B'
Ports 1C' and 2C'
Ports 1D' and 2D'
The redundant port pairs on each storage array form alternate paths. Therefore, both the ports of the
port pairs can access the LUNs on the respective storage arrays.
The following table shows the zones for this example:
Zone Data ONTAP controller and initiator port Storage array port
FC_switch_A_1
z1 Controller_A_1: Port 0a Port 1A
z3 Controller_A_1: Port 0c Port 1A'
z5 Controller_A_2: Port 0a Port 1B
z7 Controller_A_2: Port 0c Port 1B'
FC_switch_A_2
z2 Controller_A_1: Port 0b Port 2A'
z4 Controller_A_1: Port 0d Port 2A
z6 Controller_A_2: Port 0b Port 2B'
z8 Controller_A_2: Port 0d Port 2B
FC_switch_B_1
z9 Controller_B_1: Port 0a Port 1C'
z11 Controller_B_1: Port 0c Port 1C
z13 Controller_B_2: Port 0a Port 1D'
z15 Controller_B_2: Port 0c Port 1D
FC_switch_B_2
z10 Controller_B_1: Port 0b Port 2C
z12 Controller_B_1: Port 0d Port 2C'
z14 Controller_B_2: Port 0b Port 2D
z16 Controller_B_2: Port 0d Port 2D'
The following table shows the zones for the FC-VI connections at Site A and Site B:
Steps
1. Verifying the HA state of components is mcc or mcc-2n in Maintenance mode on page 176
2. Configuring Data ONTAP on a system that uses only array LUNs on page 177
3. Setting up the cluster on page 179
4. Installing the license for using array LUNs in a MetroCluster configuration on page 180
Steps
1. In Maintenance mode, display the HA state of the controller module and chassis:
ha-config show
The correct HA state depends on whether you have a four-node or two-node MetroCluster
configuration:
Number of controllers in the       HA state for all components
MetroCluster configuration         should be...
Four                               mcc
Two                                mcc-2n
2. If the displayed system state of the controller is not mcc, set the HA state for the controller
module to mcc or mcc-2n:
3. If the displayed system state of the chassis is not mcc, set the HA state for the chassis to mcc or
mcc-2n:
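As a sketch, on a four-node configuration the Maintenance mode commands would be as follows (the *> prompt is illustrative; use mcc-2n instead of mcc on a two-node configuration):
*> ha-config modify controller mcc
*> ha-config modify chassis mcc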
The storage array administrator must have created LUNs and presented them to Data ONTAP.
Steps
1. Power on the primary node and interrupt the boot process by pressing Ctrl-C when you see the
following message on the console:
Press CTRL-C for special boot menu
2. Select option 4 (Clean configuration and initialize all disks) on the boot menu.
The list of array LUNs made available to Data ONTAP is displayed. In addition, the array LUN
size required for root volume creation is also specified. The size required for root volume creation
differs from one Data ONTAP system to another.
Example
If no array LUNs were previously assigned, Data ONTAP detects and displays the available
array LUNs, as shown in the following example:
**********************************************************************
* No disks or array LUNs are owned by this node. *
* You can use the following information to verify connectivity from *
* HBAs to switch ports. If the connectivity of HBAs to switch ports *
* does not match your expectations, configure your SAN and rescan. *
* You can rescan by entering 'r' at the prompt for selecting *
* array LUNs below. *
**********************************************************************
The array LUNs visible to the system are listed below. Select one array LUN to be used to
create the root aggregate and root volume. The root volume requires 45.0 GB of space.
Warning: The contents of the array LUN you select will be erased by Data ONTAP prior to their use.
Index Array LUN Name Model Vendor Size Owner Checksum Serial Number
----- ----------------- ---------- -------- -------- ------ -------- ------------------------
0 switch0:5.183L1 SYMMETRIX EMC 266.1 GB Block 600604803436313734316631
1 switch0:5.183L3 SYMMETRIX EMC 266.1 GB Block 600604803436316333353837
2 switch0:5.183L31 SYMMETRIX EMC 266.1 GB Block 600604803436313237643666
If array LUNs were previously assigned (for example, through Maintenance mode), they are
marked either local or partner in the list of available array LUNs, depending on whether the
array LUNs were selected from the node on which you are installing Data ONTAP or from its
HA partner.
**********************************************************************
* No disks are owned by this node, but array LUNs are assigned. *
* You can use the following information to verify connectivity from *
* HBAs to switch ports. If the connectivity of HBAs to switch ports *
* does not match your expectations, configure your SAN and rescan. *
* You can rescan by entering 'r' at the prompt for selecting *
* array LUNs below. *
**********************************************************************
HBA HBA WWPN Switch port Switch port WWPN
--- -------- ----------- ----------------
0e 500a098001baf8e0 vgbr6510s203:25 20190027f88948dd
0f 500a098101baf8e0 vgci9710s202:1-17 2011547feeead680
0g 500a098201baf8e0 vgbr6510s203:27 201b0027f88948dd
0h 500a098301baf8e0 vgci9710s202:1-18 2012547feeead680
The array LUNs visible to the system are listed below. Select one array LUN to be used to
create the root aggregate and root volume. The root volume requires 350.0 GB of space.
Warning: The contents of the array LUN you select will be erased by Data ONTAP prior to their use.
Index Array LUN Name Model Vendor Size Owner Checksum Serial Number
----- ----------------------- ------ ------ -------- ------ -------- ------------------------
0 vgci9710s202:2-24.0L19 RAID5 DGC 217.3 GB Block 6006016083402B0048E576D7
1 vgbr6510s203:30.126L20 RAID5 DGC 217.3 GB Block 6006016083402B0049E576D7
2 vgci9710s202:2-24.0L21 RAID5 DGC 217.3 GB Block 6006016083402B004AE576D7
3 vgbr6510s203:30.126L22 RAID5 DGC 405.4 GB local Block 6006016083402B004BE576D7
4 vgci9710s202:2-24.0L23 RAID5 DGC 217.3 GB Block 6006016083402B004CE576D7
5 vgbr6510s203:30.126L24 RAID5 DGC 217.3 GB Block 6006016083402B004DE576D7
6 vgbr6510s203:30.126L25 RAID5 DGC 423.5 GB local Block 6006016083402B003CF93694
7 vgci9710s202:2-24.0L26 RAID5 DGC 423.5 GB Block 6006016083402B003DF93694
In this example, array LUNs with index numbers 3 and 6 are marked local, because they had
been previously assigned from this particular node.
3. Select the index number corresponding to the array LUN you want to assign as the root volume.
Ensure that you select an array LUN of sufficient size to create the root volume.
The array LUN selected for root volume creation is marked local (root).
Example
In the following example, the array LUN with index number 3 is marked for root volume
creation.
Data ONTAP requires that 11.0 GB of space be reserved for use in diagnostic and recovery
operations. Select one array LUN to be used as spare for diagnostic and recovery operations.
Index Array LUN Name Model Vendor Size Owner Checksum Serial Number
----- ----------------- ---------- ------ -------- -------------- -------- ------------------------
0 switch0:5.183L1 SYMMETRIX EMC 266.1 GB Block 600604803436313734316631
1 switch0:5.183L3 SYMMETRIX EMC 266.1 GB Block 600604803436316333353837
2 switch0:5.183L31 SYMMETRIX EMC 266.1 GB Block 600604803436313237643666
3 switch0:5.183L33 SYMMETRIX EMC 658.3 GB local (root) Block 600604803436316263613066
4 switch0:7.183L0 SYMMETRIX EMC 173.6 GB Block 600604803436313261356235
5 switch0:7.183L2 SYMMETRIX EMC 173.6 GB Block 600604803436313438396431
6 switch0:7.183L4 SYMMETRIX EMC 658.3 GB Block 600604803436313161663031
7 switch0:7.183L30 SYMMETRIX EMC 173.6 GB Block 600604803436316538353834
8 switch0:7.183L32 SYMMETRIX EMC 266.1 GB Block 600604803436313237353738
9 switch0:7.183L34 SYMMETRIX EMC 658.3 GB Block 600604803436313737333662
4. Select the index number corresponding to the array LUN you want to assign for use in diagnostic
and recovery operations.
Ensure that you select an array LUN with sufficient size for use in diagnostic and recovery
operations. If required, you can select multiple array LUNs with a combined size greater than or
equal to the specified size.
Note: To select multiple entries, you must enter comma-separated values of all of the index
numbers corresponding to the array LUNs that you want to select for diagnostic and recovery
operations.
The following example shows a list of array LUNs selected for root volume creation and for
diagnostic and recovery operations:
5. Enter y when prompted by the system to continue with the installation process.
The root aggregate and the root volume are created and the rest of the installation process
continues.
Example
The following example shows the node management interface screen with a message confirming
the creation of the node management interface:
Welcome to node setup.
This node has its management address assigned and is ready for cluster setup.
Related information
FlexArray Virtualization Installation Requirements and Reference Guide
Related information
Clustered Data ONTAP 8.3 Software Setup Guide
You must have the license key for the V_StorageAttach license.
Steps
1. Use the system license add command to install the V_StorageAttach license.
Repeat this step for each cluster node on which you want to install the license.
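A sketch of the command (the license key shown is a placeholder, not a real key):
cluster_A::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA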
Clustered Data ONTAP 8.3.1 man page: system license add - Add one or more licenses
2. Use the system license show command to verify whether the V_StorageAttach license is
installed on all the required nodes in a cluster.
Example
The following example output shows that the V_StorageAttach license is installed on the nodes of
cluster_A:
Clustered Data ONTAP 8.3.1 man page: system license show - Display licenses
Back-end configuration testing (testing of the connectivity and configuration of devices behind
the Data ONTAP systems) must be completed.
Array LUNs that you want to assign must be presented to the Data ONTAP systems.
Data ONTAP issues an error message if you try to assign ownership of an array LUN with back-end
configuration errors that would interfere with the Data ONTAP system and the storage array
operating together; for example, an array LUN that is smaller or larger than the size that Data
ONTAP supports. You must fix such errors before you can proceed with array LUN assignment.
Data ONTAP alerts you if you try to assign an array LUN with a redundancy error: for example, all
paths to this array LUN are connected to the same controller or only one path to the array LUN. You
can fix a redundancy error before or after assigning ownership of the LUN.
Steps
1. Enter the following command to see the array LUNs that have not yet been assigned to a node:
storage disk show -container-type unassigned
2. Assign each unassigned array LUN to a node:
storage disk assign -disk disk-name -owner owner-name
If you want to fix a redundancy error after disk assignment instead of before, you must use the
-force parameter with the storage disk assign command.
Related information
FlexArray Virtualization Installation Requirements and Reference Guide
Related concepts
Considerations when using dedicated ports on page 13
Considerations when sharing data ports on page 13
Related references
Prerequisites for cluster peering on page 12
Related information
Clustered Data ONTAP 8.3 Data Protection Guide
Intercluster LIFs should be created in the IPspaces of both clusters you want to peer.
You should ensure that the intercluster LIFs of the clusters can route to each other.
If there are different administrators for each cluster, the passphrase used to authenticate the
cluster peer relationship should be agreed upon.
Steps
1. Create the cluster peer relationship on each cluster by using the cluster peer create
command.
The passphrase that you use is not displayed as you type it.
If you created a nondefault IPspace to designate intercluster connectivity, you use the ipspace
parameter to select that IPspace.
Example
In the following example, cluster01 is peered with a remote cluster named cluster02. Cluster01 is
a two-node cluster that has one intercluster LIF per node. The IP addresses of the intercluster
LIFs created in cluster01 are 192.168.2.201 and 192.168.2.202. Similarly, cluster02 is a two-node
cluster that has one intercluster LIF per node. The IP addresses of the intercluster LIFs created in
cluster02 are 192.168.2.203 and 192.168.2.204. These IP addresses are used to create the cluster
peer relationship.
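A sketch of the commands, using the addresses above (the prompts are illustrative; each cluster supplies the partner cluster's intercluster LIF addresses):
cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204
cluster02::> cluster peer create -peer-addrs 192.168.2.201,192.168.2.202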
If DNS is configured to resolve host names for the intercluster IP addresses, you can use host
names in the -peer-addrs option. Intercluster IP addresses are unlikely to change often;
however, using host names allows intercluster IP addresses to change without requiring you to
modify the cluster peer relationship.
Example
In the following example, an IPspace called IP01A was created on cluster01 for intercluster
connectivity. The IP addresses used in the previous example are used in this example to create the
cluster peer relationship.
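A sketch of the same command with the nondefault IPspace from the example (the -ipspace parameter selects it):
cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204 -ipspace IP01A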
2. Display the cluster peer relationship by using the cluster peer show command with the -
instance parameter.
Displaying the cluster peer relationship verifies that the relationship was established successfully.
Example
3. Preview the health of the nodes in the peer cluster by using the cluster peer health show
command.
Previewing the health checks the connectivity and status of the nodes on the peer cluster.
Example
Step
1. Use the storage aggregate mirror command to mirror the unmirrored root aggregate.
Example
The following command mirrors the root aggregate for controller_A_1:
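A sketch of the command, assuming the root aggregate is named aggr0_controller_A_1:
controller_A_1::> storage aggregate mirror -aggregate aggr0_controller_A_1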
You should know what drives or array LUNs will be used in the new aggregate.
If you have multiple drive types in your system (heterogeneous storage), you should understand
how you can ensure that the correct drive type is selected.
Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in
that aggregate must be owned by the same node, which becomes the home node for that
aggregate.
Aggregate names should conform to the naming scheme you determined when you planned your
MetroCluster configuration.
The Clustered Data ONTAP Data Protection Guide contains more information about mirroring
aggregates.
Steps
2. Create the aggregate by using the storage aggregate create -mirror true command.
If you are logged in to the cluster on the cluster management interface, you can create an
aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node,
use the -node parameter or specify drives that are owned by that node.
You can specify the following options:
Aggregate's home node (that is, the node that owns the aggregate in normal operation)
List of specific drives or array LUNs that are to be added to the aggregate
Maximum number of drives or array LUNs that can be included in a RAID group
For more information about these options, see the storage aggregate create man page.
Example
The following command creates a mirrored aggregate with 10 disks:
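A sketch of the command, with an assumed aggregate name and home node:
controller_A_1::> storage aggregate create -aggregate aggr1_controller_A_1 -node controller_A_1 -diskcount 10 -mirror true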
There must be at least two nonroot mirrored data aggregates on each cluster, and all aggregates
must be mirrored.
You can verify this with the storage aggregate show command.
Steps
1. Enter the metrocluster configure command to enable the MetroCluster configuration.
Example
The following command enables MetroCluster configuration on all nodes in the DR group that
contains controller_A_1:
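A minimal sketch of the command (the prompt is illustrative; the node name follows the document's example):
controller_A_1::> metrocluster configure -node-name controller_A_1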
3. Confirm the MetroCluster configuration from both sites by using the metrocluster show
command.
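For example, from either cluster (the prompt is illustrative):
cluster_A::> metrocluster show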
Steps
1. Add the switch for health monitoring:
storage switch add -address ip-address
This command must be repeated on all four switches in the MetroCluster configuration.
Example
The following example shows the command to add a switch with the IP address 10.10.10.10:
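A sketch of the command as it might appear at the cluster prompt (the prompt is illustrative):
controller_A_1::> storage switch add -address 10.10.10.10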
Because of the 15-minute polling interval, it can take up to 15 minutes for all of the data to be reflected.
Example
The following example shows the command used to verify that the MetroCluster FC switches are
configured:
controller_A_1::> storage switch show
If the switch's worldwide name (WWN) is shown, the Data ONTAP health monitor is able to
contact and monitor the FC switch.
Related information
Clustered Data ONTAP 8.3 System Administration Guide
Steps
1. Check the MetroCluster configuration:
metrocluster check run
Example
The command runs as a background job:
Component Result
------------------- ---------
nodes ok
lifs ok
config-replication ok
aggregates ok
clusters ok
5 entries were displayed.
2. Display more detailed results from the most recent metrocluster check run command:
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
The metrocluster check show commands show the results of the most recent
metrocluster check run command. You should always run the metrocluster check
run command prior to using the metrocluster check show commands to ensure that the
information displayed is current.
Example
The following example shows the metrocluster check aggregate show output for a
healthy four-node MetroCluster configuration:
The following example shows the metrocluster check cluster show output for a healthy
four-node MetroCluster configuration. It indicates that the clusters are ready to perform a
negotiated switchover if necessary.
Related information
Clustered Data ONTAP 8.3 Physical Storage Management Guide
Clustered Data ONTAP 8.3 Network Management Guide
Clustered Data ONTAP 8.3 Data Protection Guide
Related concepts
Example of a four-node MetroCluster configuration with disks and array LUNs on page 194
Related references
Examples of two-node stretch MetroCluster configurations with disks and array LUNs on page
191
Example of a two-node fabric-attached MetroCluster configuration with disks and array LUNs on
page 193
Consideration: Order of setting up access to the storage
Guideline: You can set up access to either disks or array LUNs first. You must complete all setup
for that type of storage and verify that it is set up correctly before setting up the other type
of storage.

Consideration: Location of the root aggregate
Guideline: If you are setting up a new MetroCluster deployment with both disks and array LUNs,
you must create the root aggregate on native disks. When doing this, ensure that at least one
disk shelf (with 24 disk drives) is set up at each of the sites.

Consideration: Using switches and FC-to-SAS bridges
Guideline: FC-to-SAS bridges are required in four-node configurations and two-node
fabric-attached configurations to connect the Data ONTAP systems to the disk shelves through the
switches. You must use the same switches to connect to the storage arrays and the FC-to-SAS
bridges.

Consideration: Using FC initiator ports
Guideline: The initiator ports used to connect to an FC-to-SAS bridge must be different from the
ports used to connect to the switches, which connect to the storage arrays. A minimum of eight
initiator ports is required to connect a Data ONTAP system to both disks and array LUNs.
Related concepts
Example of switch zoning in a four-node MetroCluster configuration with array LUNs on page
173
Related tasks
Configuring the Cisco or Brocade FC switches manually on page 50
Installing FC-to-SAS bridges and SAS disk shelves on page 43
Related information
NetApp Hardware Universe
The following illustration shows a two-node stretch MetroCluster configuration in which the native
disks are connected to the Data ONTAP systems through FC-to-SAS bridges:
Note: If required, you can also use FC switches to connect native disks to the controllers in a
stretch MetroCluster configuration.
The following illustration shows a two-node stretch MetroCluster configuration with the array LUN
connections:
In the following illustration showing the connectivity between Data ONTAP systems and array
LUNs, the HBA ports 0a through 0d are used for connectivity with array LUNs because ports 1a
through 1d are used for connectivity with disks:
In the following illustration that shows the connectivity between Data ONTAP systems and array
LUNs, the HBA ports 0a through 0d are used for connectivity with array LUNs because ports 1a
through 1d are used for connectivity with disks:
Related information
NetApp Documentation: OnCommand Unified Manager Core Package (current releases)
NetApp Documentation: OnCommand System Manager (current releases)
You cannot modify the time zone settings for a failed node or the partner node after a takeover
occurs.
Each cluster in the MetroCluster configuration should have its own separate NTP server or servers
used by the nodes, FC switches and FC-to-SAS bridges at that MetroCluster site.
If you are using the MetroCluster Tiebreaker software, it should also have its own separate NTP
server.
Steps
3. In the navigation pane, click Configuration > System Tools > DateTime.
4. Click Edit.
6. Specify the IP addresses of the time servers, and then click Add.
You must add an NTP server to the list of time servers. The domain controller can be an
authoritative server.
7. Click OK.
8. Verify the changes you made to the date and time settings in the Date and Time window.
IPspace configuration
IPspace names must match between the two sites.
IPspace objects must be manually replicated to the partner cluster. Any SVMs created and assigned
to an IPspace before the IPspace is replicated will not be replicated to the partner cluster.
IPv6 configuration
If IPv6 is configured on one site, it must be configured on the other site.
LIF creation
You can confirm the successful creation of a LIF in a MetroCluster configuration by running the
metrocluster check lif show command. If there are issues, you can use the
metrocluster check lif repair-placement command.
Duplicate LIFs
You should not create duplicate LIFs (multiple LIFs with the same IP address) within the same
IPspace.
Intercluster LIFs
Intercluster LIFs are limited to the default IPspace that is owned by the admin SVM.
1. DR partner availability
The system attempts to place the replicated LIF on the DR partner of the node on which it was
created.
2. Connectivity
For IP or iSCSI LIFs, the system places them on a reachable subnet.
For FCP LIFs, the system attempts to place them on a reachable FC fabric.
3. Port attributes
The system attempts to place the LIF on a port with the desired VLAN, adapter type, and speed
attributes.
Related information
Clustered Data ONTAP 8.3 Network Management Guide
If Data ONTAP detects no NVRAM errors...
File service starts normally.

If Data ONTAP detects NVRAM errors...
Data ONTAP returns a stale file handle (ESTALE) error to NFS clients trying to access the
database, causing the application to stop responding, crash, or shut down. Data ONTAP then sends
an error message to the system console and log file.

If Data ONTAP detects NVRAM errors on a volume that contains LUNs...
LUNs in that volume are brought offline. The in-nvfailed-state option on the volume must then
be cleared, and the NVFAIL attribute on the LUNs must be cleared by bringing each LUN in the
affected volume online. You can perform the steps to check the integrity of the LUNs and recover
each LUN from a Snapshot copy or backup as necessary. After all the LUNs in the volume are
recovered, the in-nvfailed-state option on the affected volume is cleared.
See the man page for each command for more information.
Step
1. Recover the volume by using the volume modify command with the -in-nvfailed-state
parameter set to false.
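A sketch of the command, with assumed SVM and volume names:
cluster_A::> volume modify -vserver vs0 -volume vol1 -in-nvfailed-state false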
Steps
1. Clear the NVFAIL state on the affected volume that hosts the LUNs by resetting the
-in-nvfailed-state parameter of the volume modify command.
2. Bring each LUN in the affected volume online.
3. Examine the LUNs for any data inconsistencies and resolve them.
This might involve host-based recovery or recovery done on the storage controller using
SnapRestore.
HA pair
In Data ONTAP 8.x, a pair of nodes whose controllers are configured to serve data for each
other if one of the two nodes stops functioning. In the Data ONTAP 7.3 and 7.2 release
families, this functionality is referred to as an active/active configuration.
HA partner
A node's partner within the local HA pair. The node mirrors its HA partner's NVRAM or
NVMEM cache.
high availability (HA)
In Data ONTAP 8.x, the recovery capability provided by a pair of nodes (storage systems),
called an HA pair, that are configured to serve data for each other if one of the two nodes
stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is
referred to as an active/active configuration.
healing
The two required MetroCluster operations that prepare the storage located at the DR site
for switchback. The first heal operation resynchronizes the mirrored plexes. The second
heal operation returns ownership of root aggregates to the DR nodes.
LIF (logical interface)
A logical network interface, representing a network access point to a node. LIFs currently
correspond to IP addresses, but could be implemented by any interconnect. A LIF is
generally bound to a physical network port; that is, an Ethernet port. LIFs can fail over to
other physical ports (potentially on other nodes) based on policies interpreted by the LIF
manager.
NVRAM
nonvolatile random-access memory.
NVRAM cache
Nonvolatile RAM in a storage system, used for logging incoming write data and NFS
requests. Improves system performance and prevents loss of data in case of a storage
system or power failure.
NVRAM mirror
A synchronously updated copy of the contents of the storage system NVRAM (nonvolatile
random-access memory) kept on the partner storage system.
node
In Data ONTAP, one of the systems in a cluster or an HA pair.
To distinguish between the two nodes in an HA pair, one node is sometimes called the
local node and the other node is sometimes called the partner node or remote node.
In Protection Manager and Provisioning Manager, the set of storage containers
(storage systems, aggregates, volumes, or qtrees) that are assigned to a dataset and
designated either primary data (primary node), secondary data (secondary node), or
tertiary data (tertiary node).
A dataset node refers to any of the nodes configured for a dataset.
A backup node refers to either a secondary or tertiary node that is the destination of a
backup or mirror operation.
A disaster recovery node refers to the dataset node that is the destination of a failover
operation.
remote storage
The storage that is accessible to the local node, but is at the location of the remote node.
root volume
A special volume on each Data ONTAP system. The root volume contains system files
and configuration information, and can also contain data. It is required for the system to
be able to boot and to function properly. Core dump files, which are important for
troubleshooting, are written to the root volume if there is enough space.
switchback
The MetroCluster operation that restores service back to one of the MetroCluster sites.
switchover
The MetroCluster operation that transfers service from one MetroCluster site to the other.
A forced switchover immediately transfers service; the shutdown of the target site
might not be clean.
Copyright information
Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means
(graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system) without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer
Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the NetApp logo, Go Further, Faster, AltaVault, ASUP, AutoSupport, Campaign Express,
Cloud ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash
Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale,
FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster,
MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity,
SecureShare, Simplicity, Simulate ONTAP, Snap Creator, SnapCenter, SnapCopy, SnapDrive,
SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL and
other names are trademarks or registered trademarks of NetApp, Inc., in the United States, and/or
other countries. All other brands or products are trademarks or registered trademarks of their
respective holders and should be treated as such. A current list of NetApp trademarks is available on
the web at http://www.netapp.com/us/legal/netapptmlist.aspx.