
HP StorageWorks B-Series remote replication solution best practices guide

Part number: 5697-6731
First edition: June 2007

Legal and notice information

© Copyright 2007 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft, Windows, Windows XP, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. Java is a US trademark of Sun Microsystems, Inc. Oracle is a registered US trademark of Oracle Corporation, Redwood City, California. UNIX is a registered trademark of The Open Group.

Printed in the US

Contents

About this guide  7
    Intended audience  7
    Related documentation  7
    Document conventions and symbols  7
    Rack stability  8
    HP websites  8
    Documentation feedback  9

1 Overview  11

2 Hardware and software considerations  13
    HP StorageWorks storage arrays  13
    Required components  13
        HP software  13
        HP hardware  14
    Optional components  14
    Solution components requirements and considerations  15
        HP StorageWorks 400 Multi-Protocol Router/MP Router blade  15
        HP B-Series SAN infrastructure (switches/directors)  15
        HP StorageWorks 4x00/6x00/8x00 EVA family  15
        HP StorageWorks Continuous Access software  16

3 Solution setup overview  17
    Solution configuration concepts  17
        B-Series Fibre Channel-to-Fibre Channel routing  17
        Meta SANs  17
        Backbone fabric  18
        LSAN zoning  18
        Edge fabric view of a Meta SAN  19
        Meta SAN design considerations  20
        Fibre Channel over Internet Protocol (FCIP)  20
        Backbone fabric limitations  21
    Sample topologies and configurations  22
        400 MP Router and MP Router blade fabric architecture  22
        Continuous Access EVA configurations  23
    Overall setup plan  27

4 Solution setup  29
    Configuring FCIP on the HP StorageWorks 400 MP Router/MP Router blade  29
    Configuring Fibre Channel routing (FCR)  32
    HP StorageWorks Continuous Access configuration procedure  33
        Disk group  33
        DR groups  33
        DR group log  34
    HP cluster technologies for disaster-tolerant solutions  34
        HP Serviceguard  34
        HP StorageWorks Cluster Extension EVA  34
        HP Metrocluster/HP Continentalclusters  35

5 Tools for managing and monitoring the replication network  37
    HP StorageWorks Continuous Access EVA Performance Estimator  37
    Solution testing with EVA workloads  38
    HP StorageWorks B-Series tools  38
        WAN analysis tools  39
        Procedure description  39

6 Related information  45
    Dark Fiber/WDM  45
    XP array family and Continuous Access XP  45

7 Appendix  47
    Configuring FCIP interfaces and tunnels on the HP StorageWorks 400 MP Router using CLI commands  47
    HP StorageWorks Fabric Manager  50
        HP StorageWorks Fabric Manager share devices wizard  50
        Viewing LSANs using Fabric Manager  52

Figures
    1   Simplest Meta SAN  18
    2   Meta SAN with four edge fabrics  19
    3   Edge fabric #1's logical view of Meta SAN  19
    4   Simplest Meta SAN with FCIP  20
    5   Dedicated backbone fabric Meta SAN  21
    6   Dedicated backbone fabric with common devices  22
    7   Continuous Access EVA two-fabric FCIP-router configuration  23
    8   Continuous Access EVA four-fabric FCIP-router configuration  24
    9   Continuous Access EVA five-fabric FCIP-router configuration  25
    10  Continuous Access EVA six-fabric FCIP-router configuration  26
    11  Continuous Access EVA six-fabric FCIP-router with dedicated backbone fabrics configuration  27
    12  HP StorageWorks Continuous Access EVA Replication Performance Estimator V3  38

Tables
    1   Document conventions  7
    2   Required software  13
    3   Required hardware  14
    4   Optional components  14
    5   Inter-site link parameters  16
    6   Backbone fabric vs. edge fabric limitations  21
    7   FCIP configuration  30
    8   Port mapping  31

About this guide


This guide provides information about:
- Deployment of the HP disaster recovery/remote replication B-Series solution across the HP StorageWorks Enterprise Virtual Array (EVA) family
- Design considerations specific to the HP StorageWorks B-Series SAN infrastructure and HP StorageWorks Continuous Access EVA
- Deployment of the HP StorageWorks 400 Multi-Protocol Router and/or Multi-Protocol Router blade in the HP StorageWorks SAN Director 4/256
- Tools for managing and monitoring the replication network

Intended audience
This guide is intended for:
- Technical personnel involved in the deployment of the disaster recovery/remote replication solution
- Channel partners and resellers

Related documentation
The following documents and websites provide related information:
- HP StorageWorks Enterprise Virtual Array homepage: http://www.hp.com/go/eva
- HP StorageWorks Continuous Access EVA software homepage: http://www.hp.com/go/caeva
- HP StorageWorks SAN design reference guide: http://www.hp.com/go/sandesignguide
- HP StorageWorks B-Series documentation: http://www.hp.com/go/san
- HP StorageWorks B-Series remote replication solution white paper: http://www.hp.com/go/bseriesreplication

Document conventions and symbols


Table 1 Document conventions

- Blue text (for example, Table 1): cross-reference links and e-mail addresses
- Blue, underlined text (for example, http://www.hp.com): website addresses
- Bold text: keys that are pressed; text typed into a GUI element, such as a box; GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes
- Italic text: text emphasis
- Monospace text: file and directory names; system output; code; commands, their arguments, and argument values
- Monospace, italic text: code variables; command variables
- Monospace, bold text: emphasized monospace text

WARNING! Indicates that failure to follow directions could result in bodily harm or death.

CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.

IMPORTANT: Provides clarifying information or specic instructions.

NOTE: Provides additional information.

TIP: Provides helpful hints and shortcuts.

Rack stability
Rack stability protects personnel and equipment.

WARNING! To reduce the risk of personal injury or damage to equipment:
- Extend leveling jacks to the floor.
- Ensure that the full weight of the rack rests on the leveling jacks.
- Install stabilizing feet on the rack.
- In multiple-rack installations, fasten racks together securely.
- Extend only one rack component at a time. Racks can become unstable if more than one component is extended.

HP websites
For additional information, see the following HP websites:
- http://www.hp.com
- http://www.hp.com/go/storage
- http://www.hp.com/service_locator
- http://www.hp.com/support/manuals
- http://www.hp.com/support/downloads

Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, please send a message to [email protected]. All submissions become the property of HP.


1 Overview
This document describes how to deploy an HP disaster recovery/remote replication B-Series solution. It focuses on a replication solution using Continuous Access EVA and FCIP, and on the deployment of the HP StorageWorks 400 Multi-Protocol Router and/or Multi-Protocol Router blade in the HP StorageWorks SAN Director 4/256.

The document includes sections on:
- Hardware and software considerations, page 13
- Solution setup overview, page 17
- Solution setup, page 29
- Tools for managing and monitoring the replication network, page 37
- Related information, page 45
- Appendix, page 47

For information on the need for disaster recovery, and on the design considerations and factors that influence the design and implementation of a disaster recovery/remote replication solution, see the HP StorageWorks B-Series remote replication solution white paper: http://www.hp.com/go/bseriesreplication.

For detailed information on the deployment of Continuous Access EVA, see the HP StorageWorks Continuous Access EVA planning guide: http://www.hp.com/go/caeva.

For more information on SAN design, see the SAN design reference guide: http://www.hp.com/go/sandesignguide.

NOTE: Similar replication solutions are available using the HP XP array family and Continuous Access XP, and inter-site link technologies other than FCIP.


2 Hardware and software considerations


HP StorageWorks storage arrays
The HP StorageWorks Enterprise Virtual Array (EVA) family is designed for customers in the business-critical enterprise marketplace, offering a high-performance, high-availability virtual array storage solution. The EVA is designed for the data center, where there is a critical need for improved storage utilization and scalability. Remote replication (both synchronous and asynchronous) is supported for the HP StorageWorks EVA family, and replication can be performed between the same or different models of the EVA. Enhanced asynchronous replication, which increases support for long-distance replication, is available for the EVA 4x00/6x00/8x00 family running XCS v6.x or later firmware.

Required components
HP software
Table 2 Required software

- HP StorageWorks Continuous Access EVA software (separate license): an array-based application that provides disaster-tolerant remote replication for the entire EVA product family.
- HP StorageWorks Replication Solutions Manager (RSM): Continuous Access EVA software uses the RSM interface to create, manage, and configure remote replication on the full HP StorageWorks Enterprise Virtual Array product family. RSM integrates both local and remote replication into a simple-to-use GUI and CLI, and removes almost all of the complexity of configuring and managing replication environments through easy-to-use wizards and automation of common tasks.
- HP StorageWorks Command View EVA software: a comprehensive software suite designed to simplify array management of the HP StorageWorks Enterprise Virtual Array (EVA) family of storage array products. The suite allows the administrator to discover, monitor, and configure HP StorageWorks EVA disk arrays from a single web-based console, providing maximum control over the HP storage environment.
- B-Series licenses: FCIP SAN Services license (enables FCIP tunneling).

HP hardware
Table 3 Required hardware

- HP StorageWorks B-Series SAN infrastructure (switches/directors): the HP StorageWorks 400 Multi-Protocol Router/MP Router blade is compatible with all HP-supported B-Series SAN switches/directors, as specified in the HP StorageWorks SAN design reference guide.
- HP StorageWorks 400 Multi-Protocol Router/MP Router blade: the HP StorageWorks 400 Multi-Protocol Router is a fixed-configuration 16-port 4-Gb FC switch with two additional ports for FCIP connectivity. The HP StorageWorks B-Series Multi-Protocol Router blade is an option for the HP StorageWorks SAN Director 4/256; the blade has 18 ports (16 Fibre Channel and 2 Gigabit Ethernet) and, as an option to the director, inherits the high-availability attributes of the director platform. Both provide the following SAN services: FC switching, FC-to-FC SAN routing (FCR), and FCIP tunneling.

Optional components
Table 4 Optional components

- HP StorageWorks Cluster Extension EVA software: offers protection against application downtime from fault, failure, or site disaster by extending a local cluster between data centers over metropolitan distances. CLX EVA integrates with open-system clustering software (Microsoft Cluster Service in Windows 2003 environments and HP Serviceguard for Linux in Red Hat and SUSE Linux environments) and HP StorageWorks Continuous Access EVA to automate failover and failback between sites.
- HP Metrocluster/Continentalclusters with Continuous Access EVA: disaster-tolerant solutions for HP-UX 11i. Both solutions are built upon HP Serviceguard, HP-UX's foundation clustering software, and are integrated with Continuous Access EVA. Both provide automatic, hands-free failover/failback of application services.
- HP StorageWorks Fabric Manager: a highly scalable, Java-based application that manages multiple B-Series switches and fabrics in real time, providing the essential functions for efficiently configuring, monitoring, dynamically provisioning, and managing SAN fabrics on a daily basis. Fabric Manager includes wizards for deploying the B-Series Multi-Protocol Router.
- HP StorageWorks 400 Multi-Protocol Router Power Pack/HP StorageWorks 4/256 SAN Director Power Pack: the Power Pack models of the 400 MP Router and the SAN Director provide additional B-Series software licenses. The MP Router blade inherits these licenses from the director, if installed. If FCIP functionality is required, an additional license (listed in the required components above) is needed; again, if the director chassis is licensed, the blade inherits the license.


Solution components requirements and considerations


This section highlights information to consider when building the data replication solution.

HP StorageWorks 400 Multi-Protocol Router/MP Router blade


- Use the MP Router blade if there are available slots in the HP StorageWorks 4/256 SAN Director; otherwise, use the standalone 400 Multi-Protocol Router. See the HP StorageWorks SAN design reference guide at http://www.hp.com/go/sandesignguide for additional information on the 400 MP Router and MP Router blade fabric rules.
- A minimum of two routers (standalone or blade) that communicate with the FCIP protocol must be installed at each site for path redundancy over the WAN.
- A 400 MP Router/MP Router blade installed for use with Continuous Access EVA must have the exchange-based routing feature disabled. Follow the steps below to disable the exchange-based routing default setting:
  1. Open a telnet session.
  2. Enter aptpolicy. The output shows the default:
       Current Policy: 3
       3: Default Policy
       1: Port Based Routing Policy
       3: Exchange Based Routing Policy
  3. Enter switchdisable.
  4. Enter aptpolicy 1. The router responds: Policy updated successfully.
  5. Enter switchenable.
  6. Enter aptpolicy to verify: Current Policy: 1
- The 400 MP Router or MP Router blade must NOT be configured for FastWrite acceleration.
- In-order delivery (IOD) of frames during a fabric topology change should be enabled (the default). To verify the setting, enter iodshow. If IOD is not set, enter iodset.
- Dynamic Load Sharing (DLS) must be disabled. Use dlsreset to disable DLS; enter dlsshow to verify the setting.
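When auditing many switches, captured command output can be checked programmatically. The sketch below is illustrative only (it is not an HP or Brocade tool): it parses saved aptpolicy output and confirms the routing policy is Port Based (1), as the steps above require; the function names are our own.

```python
# Hypothetical audit helper: parse captured `aptpolicy` output and confirm
# the Port Based Routing Policy (1) is active, as Continuous Access EVA
# replication traffic requires.

def routing_policy(aptpolicy_output):
    """Extract the current routing policy number from aptpolicy output."""
    for line in aptpolicy_output.splitlines():
        line = line.strip()
        if line.startswith("Current Policy:"):
            return int(line.split(":")[1])
    raise ValueError("no 'Current Policy' line found")

def port_based(aptpolicy_output):
    """True only when the switch uses the Port Based Routing Policy."""
    return routing_policy(aptpolicy_output) == 1
```

Running this against the transcripts collected in step 2 and step 6 above would flag any router still on the exchange-based default.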

HP B-Series SAN infrastructure (switches/directors)


- The default routing policy for B-Series switches is the Exchange Based Routing Policy. Change the routing policy to the Port Based Routing Policy for all switches on the SAN used by Continuous Access EVA for replication traffic, using the command aptpolicy 1 (see the example under "HP StorageWorks 400 Multi-Protocol Router/MP Router blade").
- Dynamic Load Sharing (DLS) must be disabled for all switches on the SAN used by Continuous Access EVA for replication traffic. Use dlsreset to disable DLS.
- In-order delivery of frames during a fabric topology change should be enabled (the default).

HP StorageWorks 4x00/6x00/8x00 EVA family


- For enhanced asynchronous remote replication, both arrays must be running XCS firmware v6.x. With the enhanced asynchronous mode, Continuous Access EVA uses a buffer-to-disk implementation that increases support for long-distance replication.
- Replication between all EVA family models (EVA3000/4x00/5000/6x00/8x00) is possible, but enhanced asynchronous replication is available only on the EVA4x00/6x00/8x00 family arrays.
- For low-bandwidth inter-site links, using separate EVA ports for host I/O and replication I/O is highly recommended.


HP StorageWorks Continuous Access software


HP StorageWorks Continuous Access mandates certain inter-site link parameters in order to achieve optimal performance.

Table 5 Inter-site link parameters

- IP bandwidth (1): must be dedicated to the Continuous Access storage replication function. See the tables in the HP StorageWorks SAN design reference guide for the minimum supported bandwidth based on the average packet-loss ratio and one-way inter-site latency.
- Maximum number of DR groups (1): see the HP StorageWorks SAN design reference guide for the maximum allowable number of DR groups based on bandwidth and latency.
- Maximum transmission unit (MTU) of the IP network: 1500 bytes/2250 bytes
- Maximum latency: 100 ms IP network delay one-way, or 200 ms round-trip
- Average packet-loss ratio (2): low-loss network: 0.0012% average over 24 hours; high-loss network: 0.02% average over 24 hours, and must not exceed 0.05% for more than 5 minutes in a 2-hour window
- Latency jitter (3): must not exceed 10 ms over 24 hours

(1) Pre-existing condition.
(2) A high packet-loss ratio indicates the need to retransmit data across the inter-site link. Each retransmission delays transmissions queued behind the current packet, thus increasing the time to complete pending transactions and lowering performance predictability.
(3) Latency jitter is the difference between the minimum and maximum latency values, and indicates how stable or predictable the network delay is. The greater the jitter, the greater the variance in the delay, which lowers performance predictability.

See the SAN extensions section in the HP StorageWorks SAN design reference guide (http://www.hp.com/go/sandesignguide) for details on inter-site link parameters and the maximum allowable number of DR groups based on different bandwidths and latencies.
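The Table 5 thresholds lend themselves to an automated pre-qualification check of measured WAN statistics. The following is a minimal sketch under stated assumptions — the function and parameter names are ours, not part of any HP tool, and it encodes only the latency, packet-loss, and jitter limits from the table above:

```python
# Illustrative link qualifier based on Table 5 (not an HP utility).
# Measurements are assumed to be 24-hour figures, matching the table.

def check_link(one_way_latency_ms, avg_loss_pct, jitter_ms, high_loss_network=False):
    """Return a list of threshold violations; an empty list means the link qualifies."""
    problems = []
    if one_way_latency_ms > 100:                        # 100 ms one-way maximum
        problems.append("one-way latency exceeds 100 ms")
    # 24-hour average loss limit: 0.0012% (low-loss) or 0.02% (high-loss) network
    loss_limit = 0.02 if high_loss_network else 0.0012
    if avg_loss_pct > loss_limit:
        problems.append(f"average packet loss exceeds {loss_limit}%")
    if jitter_ms > 10:                                  # jitter limit over 24 hours
        problems.append("latency jitter exceeds 10 ms")
    return problems
```

A link measuring 20 ms one-way, 0.001% loss, and 5 ms jitter passes all three checks; raising latency to 120 ms would report a single violation.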


3 Solution setup overview


Solution configuration concepts

B-Series Fibre Channel-to-Fibre Channel routing

FC-to-FC routing is a technology that logically connects physically separate fabrics (SAN islands) to enable selective shared access to resources from any fabric, while retaining the administration and fault-isolation benefits of separately managed fabrics. FC routing does not merge the fabrics, so the issues associated with merging fabrics, such as Domain ID overlaps and zoning conflicts, do not apply. FC routing provides selective device connectivity via Logical Storage Area Networks (LSANs), but prevents Fibre Channel services (for example, the FC Name Server) from propagating between fabrics. As a result, disruptions to a fabric are contained and have less impact than they would without FC routing. Another benefit of FC routing is that it allows each fabric to maintain independent administration, preventing fabric administrator errors from propagating between fabrics in addition to isolating failures in hardware or software. For example, if an administrator at one site accidentally deletes the fabric-wide zoning configuration, with FC routing in place this will not propagate to other sites. FC routing also means that the scalability of one edge fabric does not affect another; fabric reconfigurations do not propagate between edge fabrics, and faults in fabric services are contained. Furthermore, if FC routing is used with legacy switches, the overall network size can vastly exceed the hardware and software capabilities of those switches. None of these properties hold with one large merged fabric.

A number of new terms associated with FC routing need to be defined in order to understand the design concepts of using FC routing between FC SANs.

Meta SANs
When FC switches are connected directly to each other via Inter-Switch Links (ISLs), they form a fabric. A dual-redundant SAN is an unconnected pair of redundantly configured fabrics. When multiple fabrics are connected through an FC-to-FC router, they form a different kind of SAN. The resulting FC-routed storage area network sits a level above the common definition of a SAN and is called a Meta SAN, or sometimes a routed FC SAN. At a minimum, a Meta SAN consists of exactly one backbone fabric and one or more edge fabrics, and the backbone fabric must contain at least one MP Router. For high availability, you may configure a dual-redundant Meta SAN just as you would configure a dual-redundant SAN.


Figure 1 Simplest Meta SAN

Backbone fabric
The backbone fabric is the functional element that makes the logical connection between edge fabrics, or between the backbone fabric and an edge fabric. The backbone fabric consists, at a minimum, of at least one MP Router, but it can contain additional MP Routers and standard FC switches. MP Routers and FC switches in the backbone fabric are linked via standard ISLs, no different from any other FC fabric. The key factor that distinguishes the backbone fabric is its connection to the edge fabrics. This connection between the backbone fabric and the edge fabric(s) is via an Inter-Fabric Link (IFL). The backbone fabric side of the link is configured as an EX_Port, while the edge fabric side of the link is configured as an E_Port. The EX_Port is a version of the E_Port, and to the edge fabric it appears like an ordinary connection to any other FC switch. The main difference between an E_Port and an EX_Port is that the EX_Port makes the backbone fabric appear to be a single FC switch to the edge fabric, regardless of the number of MP Routers and FC switches in the backbone fabric. The EX_Port is always the boundary point between the backbone fabric and an edge fabric and is, in theory, part of the edge fabric. Figure 1 shows that the edge fabric includes all the FC switches and the IFL up to and including the EX_Port of the backbone fabric. With the 400 MP Router and the MP Router blade, a limited number of hosts and/or storage devices may be connected to the backbone fabric; these devices can be routed to the edge fabrics. Host and storage devices connected to a backbone fabric that contains a 2-Gb MP Router are not routable to any edge fabric.

LSAN zoning
LSAN zoning is the methodology that allows a subset of devices from separate fabrics to communicate. An LSAN zone looks like any other zone in a fabric, with the exception that the zone name must begin with LSAN_ (case insensitive). Day-to-day administration of LSAN zones is performed using the zoning tools within each fabric. This allows existing tools to work as usual and minimizes the need to retrain SAN administrators. If this specially named LSAN zone is created on two or more fabrics to which an MP Router has access, the MP Router automatically creates the Logical Storage Area Networks (LSANs) between those fabrics. If an LSAN spans two fabrics, there are two LSAN zones that define it, one on each fabric. An LSAN that spans three fabrics would have three LSAN zones, and so on. The name of the LSAN zone in the separate fabrics does not need to be the same, but the WWN of the devices from the local and remote fabrics must be in an LSAN zone in both fabrics.

TIP: While LSAN zones are not required to use the same name, it is recommended that you use the same or similar names to aid in troubleshooting.

LSAN zoning supports only WWN zoning, because Port Identifiers (PIDs) are not unique across fabrics; in fact, two devices with exactly the same PID in two different fabrics can communicate via an LSAN zone.
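The matching rule above — a zone name starting with LSAN_ (case insensitive), plus the same WWN present in an LSAN zone on both fabrics — can be sketched as a small model. This is illustrative only (our names, not a Brocade or HP API), showing how a router could derive the set of exported devices:

```python
# Illustrative model of LSAN matching, not vendor code.

def is_lsan_zone(name):
    """A zone is an LSAN zone when its name starts with 'LSAN_' (case insensitive)."""
    return name.lower().startswith("lsan_")

def shared_wwns(fabric_a_zones, fabric_b_zones):
    """Each argument maps zone name -> set of member WWNs for one fabric.
    A device is routed between the fabrics only if its WWN appears in an
    LSAN zone on BOTH fabrics."""
    def lsan_members(zones):
        members = set()
        for name, wwns in zones.items():
            if is_lsan_zone(name):
                members |= wwns
        return members
    return lsan_members(fabric_a_zones) & lsan_members(fabric_b_zones)
```

Note that the zone names on the two fabrics need not match (lsan_repl_dr on one fabric pairs with LSAN_repl on another); only the WWN membership matters, mirroring the text above.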


NOTE: LSAN zones in the backbone fabric with a 2 Gb MP Router are not supported.

Edge fabric view of a Meta SAN


To the edge fabric, the MP Router's EX_Port looks like any other FC switch in the edge fabric. However, the EX_Port blocks the edge fabric from seeing the real topology of the backbone fabric or any switches in other edge fabrics. To the edge fabric, the backbone fabric appears to be one switch, regardless of the number of switches in the backbone fabric or the number of IFLs between the edge fabric and the backbone fabric. In addition to this single logical switch that represents the backbone fabric, the backbone fabric presents one other logical switch for each remote edge fabric that has devices shared between edge fabrics. Regardless of the number of switches and shared devices in a remote edge fabric, the local edge fabric sees that entire remote edge fabric as a single logical FC switch.

Figure 2 Meta SAN with four edge fabrics

Using the Meta SAN depicted in Figure 2 as an example, and assuming that LSAN zones are deployed to share devices between three of the edge fabrics (edge fabric #1, edge fabric #2, and edge fabric #4), edge fabric #1's logical view of the Meta SAN appears as in Figure 3.

Figure 3 Edge fabric #1's logical view of the Meta SAN

In Figure 3, edge fabric #1 sees the backbone fabric as a single FC switch, regardless of the number of MP Routers, FC switches, IFLs, or shared devices in the backbone fabric. Likewise, each remote edge fabric with devices routed to edge fabric #1 logically appears to be a single FC switch connected to the logical switch that represents the backbone fabric.


NOTE: Edge fabric #3 is not presented to edge fabric #1 because, in this example, no LSAN zones exist for that fabric.

For this example, if a fabricshow command is executed on any FC switch in edge fabric #1, it lists all physical FC switches in edge fabric #1 plus three other logical switches: one logical switch for the backbone fabric and one for each edge fabric that has devices routed to edge fabric #1. The names of the logical switches in the fabricshow output are FCR_FD_# for the backbone fabric and FCR_XD_# for each of the remote edge fabrics, where # represents the domain identifier assigned to the logical switch. This applies to the backbone fabric and to any remote edge fabric that has devices routed to edge fabric #1. Any device routed to edge fabric #1 appears connected to the logical switch that represents its fabric, and has a Port Identifier (PID) based on the domain identifier of that logical switch.

NOTE: Only devices properly configured in LSAN zones are presented to edge fabric #1.
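The FCR_FD_#/FCR_XD_# naming convention described above makes logical switches easy to distinguish from physical ones when post-processing fabricshow output. A minimal sketch, assuming those two name patterns (the parser itself and its function name are ours, not part of Fabric OS):

```python
# Illustrative classifier for switch names as they appear in fabricshow
# output on an edge fabric. FCR_FD_# = backbone fabric front domain,
# FCR_XD_# = a remote edge fabric's translate domain; # is the domain ID.
import re

def classify_switch(name):
    """Return ('backbone', domain), ('remote_edge', domain), or ('physical', None)."""
    m = re.fullmatch(r"FCR_FD_(\d+)", name)
    if m:
        return ("backbone", int(m.group(1)))
    m = re.fullmatch(r"FCR_XD_(\d+)", name)
    if m:
        return ("remote_edge", int(m.group(1)))
    return ("physical", None)
```

For the Figure 3 example, edge fabric #1 would list its physical switches plus one "backbone" entry and one "remote_edge" entry per fabric with devices routed to it.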

Meta SAN design considerations


When designing a Meta SAN, two key decisions are:
• The SAN extension technology to be used to connect the fabrics
• Whether an existing fabric or a new dedicated fabric will be used for the backbone fabric

Fibre Channel over Internet Protocol (FCIP)


Two additional port terms need defining when Fibre Channel over Internet Protocol (FCIP) is used between MP Routers:
• VE_Port: A Virtual E_Port is an IP/Ethernet interface on an MP Router configured as an E_Port. It is equivalent to an FC E_Port, except that the protocol on the link is FCIP.
• VEX_Port: A Virtual EX_Port is an IP/Ethernet interface on an MP Router configured as an EX_Port. It is equivalent to an MP Router EX_Port, except that the protocol on the link is FCIP.

Figure 4 Simplest Meta SAN with FCIP

The EX_Port or VEX_Port for FCIP is the boundary point between the backbone fabric and an edge fabric and, in theory, is part of the edge fabric. This has little impact on selecting the backbone fabric when using native Fibre Channel or WDM technologies as the SAN extension medium; with FCIP, however, it adds another consideration to the design of a Meta SAN. In Figure 4, the edge fabric includes all the FC switches in the edge fabric and the IFL/FCIP link up to the VEX_Port. Because the VEX_Port is part of the edge fabric, the IP network also becomes part of the edge fabric. Because the IP network is typically the least stable component in the configuration, a fabric that contains the IP network will see more disruptions than a fabric without one.


Solution setup overview

Backbone fabric limitations


The backbone fabric is the functional element that makes the logical connection between edge fabrics, or between the backbone fabric and an edge fabric. There are some limitations on the number of FC Routers, FC switches, and devices that are supported in the backbone fabric.

Table 6 Backbone fabric vs. edge fabric limitations

                                                      FOS 5.1   FOS 5.2
Max # of device ports per backbone fabric               256       512
Max # of device ports per edge fabric                  1000      1200
Max # of fibre channel switches per backbone fabric      10        10
Max # of fibre channel switches per edge fabric          26        26

NOTE: The 2 Gb MP Router does not support FC routing from devices in the backbone fabric to any edge fabric.

NOTE: When you install the MP Router blade in the 4/256 SAN Director and create an EX_Port/VEX_Port, that Director and the fabric attached to it immediately become part of the backbone fabric. That is, all the non-router blades in the chassis become part of the backbone fabric, and the fabric containing the Director becomes limited in scalability and port connectivity.

Given the scaling limitations of the backbone fabric, it might seem intuitive to select the smaller site as the backbone fabric; but if FCIP is used as the SAN extension technology, the larger site (edge fabric) will then have the IP network as part of its fabric. Because the IP network is the least stable part of this configuration, the fabric that contains the IP network(s) will see more disruptions than a fabric without it. HP recommends not including the IP network as part of the primary site, while also not limiting the scalability of either site's fabric(s). To do this, implement the type of configuration referred to as a dedicated backbone fabric, as described in Sample topologies and configurations, page 22.


Figure 5 Dedicated backbone fabric Meta SAN

Utilizing a dedicated backbone fabric as shown in Figure 5 solves both the backbone fabric scaling and the FCIP IP network issues. By implementing a dedicated backbone fabric, no existing production fabric has the backbone fabric scaling limitations imposed on it, and generally the number of FC switches and devices on this dedicated fabric is smaller and within the current limitations. In addition to eliminating the scaling restrictions, when FCIP is used as the SAN extension technology the IP network is part of the backbone fabric rather than any of the individual edge fabrics. Each site has a piece of the backbone fabric and, if the IP network goes down, the edge fabrics see only a Registered State Change Notification (RSCN) rather than the fabric transition they would see if the IP network were part of the edge fabric. The MP Router portion of the backbone fabric at each site can also contain common devices shared between multiple edge fabrics at that site. For example, if site #1 had several edge fabrics and wanted to share a tape library system, the tape subsystem could be placed on the MP Router that is the interface to the backbone fabric for the site's edge fabrics.

Figure 6 Dedicated backbone fabric with common devices

Sample topologies and configurations


The following sections include configurations that use the HP StorageWorks 400 MP Router and/or the MP Router blade for implementing an FCIP disaster recovery solution. The configuration concepts from the previous section are discussed in the context of these configurations.

400 MP Router and MP Router blade fabric architecture


There are four proven SAN extension solutions that can be implemented with various director/switch/router combinations, depending on the size of the SAN and the availability level required:
• 2-fabric architecture: used for smaller SANs with lower throughput and connectivity levels
• 4-fabric architecture: used for smaller SANs, providing higher availability than the 2-fabric solution
• 5-fabric architecture: used when larger scalability is required, and for I/O write-intensive situations


• 6-fabric architecture: provides the highest availability level

By using the capabilities of the B-Series directors and standalone switches, and the combination of Fibre Channel routing, switching, and SAN extension, the physical configuration can be reduced to fewer elements to drive additional simplification and ease of maintenance.

Continuous Access EVA configurations


The 400 MP Router and MP Router blade support the HP standard Continuous Access EVA replication configurations. This includes the two-fabric, four-fabric, five-fabric, six-fabric, and six-fabric with dedicated backbone fabric implementations shown in Figure 7, Figure 8, Figure 9, Figure 10, and Figure 11. These five configuration examples are all drawn showing 400 MP Routers; however, they could all be implemented using the MP Router blade or a combination of both.

Figure 7 shows a typical 2-port EVA 6000/5000/4000/3000 configuration of a Continuous Access two-fabric LSAN replication zone solution. In this configuration, zoning is used to separate host traffic from replication traffic in the fabric. Standard B-Series zoning is used for one port from each EVA controller for local host access, as in any other fabric. The other EVA port on each controller is dedicated to replication traffic, and LSAN zoning is used to enable the Fibre Channel routing feature of the MP Router, allowing these devices to communicate across the FCIP link as if they were in the same fabric. This is the lowest-cost configuration. However, no path redundancy is provided, so this configuration offers no protection in the event of a fabric or WAN failure. In addition, the IP network is part of the remote site fabric, and the local site fabric is the backbone fabric.

Figure 7 Continuous Access EVA two-fabric FCIP-router configuration

Figure 8 shows a typical 4-port EVA 8000 configuration of a Continuous Access four-fabric LSAN replication zone solution. Like the previous configuration, zoning is used to separate host traffic from replication traffic in each fabric. In each fabric, standard B-Series zoning is used for two ports from each EVA controller for local host access, as in any other fabric. The other EVA ports on each controller are dedicated to replication traffic, and LSAN zoning is used to enable the Fibre Channel routing feature of the MP Router, allowing these devices to communicate across the FCIP link as if they were in the same fabric. This configuration is a better solution than the Continuous Access two-fabric LSAN replication zone solution because there is No Single Point of Failure (NSPOF): both the local and remote sites implement dual redundant fabrics and a separate FCIP link for each fabric pair. Still, the IP networks are part of the remote site fabrics, and the backbone fabrics are at the local site.


Figure 8 Continuous Access EVA four-fabric FCIP-router configuration

The design of a Meta SAN using FCIP needs to balance the scaling limitations of the backbone fabric against the disruptions an IP network can cause in an edge fabric. There are two alternative solutions to these two issues. The traditional solution has been to create a dedicated replication fabric or fabrics, as shown in the five-fabric and six-fabric implementations in Figure 9 and Figure 10. The other possible solution is to use a dedicated backbone fabric, as shown in the six-fabric with dedicated backbone fabrics implementation in Figure 11. That configuration is a modification of the four-fabric solution, where the functionality of the backbone fabric is removed from both the local and remote site fabrics and a new dedicated fabric is created for the backbone. Both solutions eliminate the scaling limitations at both sites, and the IP network affects only the replication traffic. The dedicated backbone fabric has two advantages over the dedicated replication fabric solution: it is more scalable, and it allows devices other than the EVA storage ports (which are the only devices connected to a dedicated replication fabric) to communicate if needed.

Figure 9 shows a typical 2-port EVA 6000/5000/4000/3000 configuration of a Continuous Access five-fabric solution. In this configuration, a dedicated fabric is implemented for replication traffic, eliminating both the fabric merging issue of a traditional FCIP implementation and the scaling limitation of an integrated backbone fabric solution. Half the EVA controller ports are connected to a dedicated replication fabric. Because the only devices in this replication fabric are the replication ports of the local and remote sites, Fibre Channel routing is not required. The benefit of the MP Router over traditional FCIP gateways is that it eliminates the need for a separate Fibre Channel switch, as the MP Router has both capabilities in one device.


Figure 9 Continuous Access EVA five-fabric FCIP-router configuration

Figure 10 shows a combination of a 4-port EVA 8000 and a 2-port EVA 6000/5000/4000/3000 configuration of a Continuous Access six-fabric solution. In this configuration, two dedicated fabrics are implemented for replication traffic. This eliminates both the fabric merging issue of a traditional FCIP implementation and the scaling limitation of an integrated backbone fabric solution. Half the EVA controller ports are connected to each dedicated replication fabric. Because the only devices in these replication fabrics are the replication ports of the local and remote sites, Fibre Channel routing is not required. The benefit of the MP Router over traditional FCIP gateways is that it eliminates the need for a separate Fibre Channel switch, as the MP Router has both capabilities in one device.

NOTE: HP recommends that the primary or local site in this configuration be a 4-port EVA 8000; the secondary or remote site can be either a 4-port EVA 8000 or a 2-port EVA 6000/5000/4000/3000.


Figure 10 Continuous Access EVA six-fabric FCIP-router configuration

Figure 11 shows a typical 4-port EVA 8000 configuration of a Continuous Access six-fabric with dedicated backbone fabrics LSAN replication zones solution. In this configuration, half the EVA controller ports are connected to each of the dual redundant fabrics at each site. This configuration, which uses two dedicated backbone fabrics for Fibre Channel routing traffic between edge fabrics, solves all the issues of a traditional FCIP implementation and the scaling limitations of an integrated backbone fabric solution. Fibre Channel routing solves the issues associated with the merger of two physically separate fabrics. Because the backbone fabric is no longer used for local traffic, the scaling issues in terms of the number of devices and switches in the fabric are mitigated. Also, because the IP network is now part of the backbone fabric and not part of the local or remote fabrics, fabric disruptions are minimized on those fabrics. This solution allows more than two sites to be connected to the dedicated backbone fabrics, so the same backbone fabric can be used for a primary site to connect to multiple secondary sites. This configuration also has the advantage of being able to share more devices than the dedicated replication fabrics, which have only the EVA ports connected to them, so that only the storage controllers can communicate over those fabrics. This enables a server to connect to storage resources on both the local and remote fabrics while maintaining the ease of management and fault isolation that two smaller, physically separate fabrics have over one large merged fabric.


Figure 11 Continuous Access EVA six-fabric FCIP-router with dedicated backbone fabrics configuration

If your topology does not fit the sample configurations in this section, see the HP StorageWorks SAN design reference guide at http://www.hp.com/go/sandesignguide for configuration requirements and design considerations.

Overall Setup Plan

To set up Continuous Access replication from one B-Series fabric to another (FOS 5.2.1b or later) using Fibre Channel routing and FCIP across a wide area connection:

1. Obtain both local and remote site IP addresses for the FCIP connection, and ensure that a static route is defined across the WAN.


2. Use B-Series WANTOOLS facilities to discover the operational characteristics of the IP network you intend to use for Continuous Access replication. See Tools for managing and monitoring the replication network, page 37, for more information. Document the characteristics of the WAN connection, including:
   • Bandwidth available
   • PMTU (Path Maximum Transmission Unit), and whether jumbo frames are supported across the WAN (the PMTU must be larger than 1500)

NOTE: If you input the wrong MTU size (too large), I/O takes longer than necessary because the IP transport (WAN link) makes no effort to optimize the transfer of data.

3. Install both local and remote B-Series MP Routers (Director blade or standalone).
4. Enable and configure the B-Series MP Router ports for both FC connectivity to the local fabrics and GbE/FCIP operation across the wide area connection.
5. Activate the FCIP link.
6. Define LSAN zones in both fabrics.
7. Configure B-Series 400 MP Router or MP Router blade ports for FC-FC routing by assigning the appropriate operational mode to the FC ports (either E_Port or EX_Port mode) and the GbE ports (either VE_Port or VEX_Port). There is no single right answer for this; each installation has its own operational requirements and considerations.
8. Connect the local and remote fabrics to the B-Series MP Router(s).
9. Run Continuous Access software to configure local and remote storage and replicate data.


4 Solution setup
Configuring FCIP on the HP StorageWorks 400 MP Router/MP Router blade

Use the Web Tools interface to configure the FCIP connection between two geographically separate sites on a wide area network (WAN). You can also use the command line interface through either a direct console (serial) connection to the switch or a telnet session (see Configuring FCIP interfaces and tunnels on the HP StorageWorks 400 MP Router using CLI commands, page 47, in the Appendix). Refer to the FOS 5.1.x (or later) administrator's guide for additional information.

NOTE: It is assumed that the initial switch configuration setup steps are done, including the definition of the switch name, IP/subnet/gateway (for switch management), switch domain ID, time zone, date/time, and so forth for all switches and directors in both the local and remote fabrics. See the HP StorageWorks Fabric OS administrator's guide for specifics on setting up the switch configuration.

Follow these steps to configure FCIP on the B-Series 400 MP Router:
1. Start a browser.
2. In the address area, enter the IP address of the 400 MP Router to configure. The switch login applet launches.
3. Enter the username and password. The Web Tools interface displays.

4. Click ge0 on the GigE port. The Port Administration Services window displays.


5. Ensure that port ge0 is selected and click Edit Configuration to launch the GigE Port Configuration wizard.
6. Click Next. The GigE Port # 0 Configuration window displays.
7. Click Add Interface, and enter the IP address and MTU size. Valid entries are 1500 and 2250.

NOTE: Your network administrator will provide the MTU size.

Table 7 FCIP configuration examples

               GbE Port   IP Address (local)   IP Address (remote)   Tunnel
First Tunnel   ge0        10.0.10.1            10.0.10.2             0

8. Select Close > Next. The Configure IP Routes window displays. A default route is added automatically.
9. Click Add Route and enter a new IP route.
10. Select Add > Close > Next. The Select Tunnel window displays.
11. Select the tunnel to configure and click Next.


The FCIP Tunnel Configuration window displays.
12. Set the desired configuration parameters and click Next.

NOTE: If you want to enable compression across the FCIP link, click the Enable Compression checkbox in the Tunnel Configurations window. Do not select the FastWrite checkbox; HP Continuous Access EVA uses other means to achieve this. Because the compression is hardware-enabled, HP recommends enabling it. HP also recommends using committed tunnels.

13. Confirm your selections and click Finish to create the tunnel. The Confirmation and Save window displays. You have successfully created a tunnel.
14. Click Close.
15. Open another browser window, and then open the Web Tools Switch View on the remote 400 MP Router.
16. Complete steps 4 through 14 to configure the 400 MP Router at the remote site. The FCIP link setup is complete.

NOTE: When configuring the second MP Router, the local and remote IP addresses are relative to the machine you are on. You will need to reverse them when configuring the second router.

17. Enable the FCIP ports after both the local and remote FCIP configurations are complete.
a. From the Port Administration window (it should still be open), click FC Ports.
b. Select the port to enable, and then select Persistent Enable.

NOTE: Each GbE port on the router can have up to eight FCIP tunnels and up to eight logical VE_Ports or eight logical VEX_Ports, which correspond to logical FC ports. Table 8 shows the port mapping.

Table 8 Port mapping

Physical port   Tunnels       Virtual ports
ge0             0 through 7   16 through 23
ge1             0 through 7   24 through 31

c. Enable the FCIP port on the remote router.

NOTE: It can take Web Tools a while to update the FCIP changes. You can monitor the FCIP Tunnels tab on the Port Administration Services window for link status.

NOTE: When you set up an FCIP tunnel (for FCR Fibre Channel routing), ensure that the FC ports (0 through 15) are disabled. This prevents unintended fabric merges.
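The same interface, route, and tunnel setup can also be scripted from the Fabric OS CLI, which the appendix covers in detail. The following is a sketch only: the local/remote addresses come from the Table 7 example, while the netmasks, gateway, remote subnet, and committed rate (in Kb/s) are hypothetical values, and option flags vary by FOS release, so verify the syntax against the Fabric OS command reference for your version:

```
local:admin> portcfg ipif ge0 create 10.0.10.1 255.255.255.0 1500
local:admin> portcfg iproute ge0 create 10.0.20.0 255.255.255.0 10.0.10.254 1
local:admin> portcfg fciptunnel ge0 create 0 10.0.10.2 10.0.10.1 100000
local:admin> portcfgpersistentenable 16
```

Tunnel 0 on ge0 corresponds to virtual port 16 (see Table 8), which is why that port is the one persistently enabled. The same commands are repeated on the remote router with the two tunnel IP addresses reversed.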


NOTE: Although not covered here, the procedures for the MP Router blades are identical, but some screen shots will differ slightly given the blade orientation in the director chassis. To start this procedure on a blade router, point your browser to the IP address of the Director.

Configuring Fibre Channel routing (FCR)


To configure Fibre Channel routing using Web Tools:
1. Launch the FC Routing module via the FCR button in the Switch View on the router.

A tabbed interface displays, from which you can access the functions required to enable FCR on the switch you are connected to.
2. Ensure that the backbone fabric ID of the switch is the same as that of the other FC Routers in the backbone fabric. Click Set Fabric ID and either enter the FID value or select it from the drop-down menu.
3. Enable Fibre Channel routing (FCR):
a. Click Enable FCR.
b. Click Yes to confirm.
4. Ensure that the ports to configure as EX_Ports are not connected or are disabled.
5. Configure the EX_Ports:
a. Click the EX-Ports tab.
b. Select New to launch the EX-Port Configuration wizard.
c. Select the port to be configured. For this example, select port 0 and click Next to continue.
d. Set the Port Parameters (FID) and Interop mode; set the FC operational parameters.
e. Confirm the settings.
6. Connect the EX_Ports to the proper edge fabric if they are not already connected:
a. Connect the cables. Do not enable the ports yet.
b. Close the FCR Admin window.
7. Configure LSAN zones on the fabrics that share devices, using the Zone Administration module of Web Tools:
a. Use the browser to launch the Switch View on the edge fabric switch.
b. Click Zone Admin to launch the Zone Administration module.


NOTE: LSANs are set up the same way as other zones with the exception that the zone name starts with LSAN_.

NOTE: The Share Devices wizard in Fabric Manager simplifies the process of setting up LSANs. See HP StorageWorks Fabric Manager, page 50, for details.

8. The ports need to be enabled to see the LSANs. From the router, launch Web Tools and select the port to open the Port Administration window:
a. Select the port and click Persistent Enable.
b. Close the Port Administration window.
9. Click the FCR button on the Switch View to bring up the FCR module. Use the tabs to view the EX-Ports, LSAN Fabrics, LSAN Zones, and LSAN Devices, and verify that the configuration succeeded.
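For reference, an LSAN zone can also be created from the CLI on a switch in each edge fabric using the standard zoning commands. The zone and configuration names and the WWPNs below are hypothetical placeholders for the EVA replication ports; substitute your own values:

```
edge1_sw1:admin> zonecreate "LSAN_CA_EVA", "50:00:1f:e1:00:ab:cd:e8; 50:00:1f:e1:00:12:34:58"
edge1_sw1:admin> cfgadd "edge1_cfg", "LSAN_CA_EVA"
edge1_sw1:admin> cfgenable "edge1_cfg"
```

The zone name must begin with the LSAN_ prefix, and an equivalent zone containing the same WWPNs must be defined and enabled in the other edge fabric before the router presents the shared devices.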

HP StorageWorks Continuous Access configuration procedure


HP StorageWorks Continuous Access EVA software is an array-based application that uses a powerful, simple graphical user interface to create, manage, and configure remote replication on the entire EVA product family. Continuous Access EVA shares an integrated management interface, called Replication Solutions Manager (RSM), with HP StorageWorks Business Copy, offering a unique, unified replication management approach. Refer to the online Continuous Access EVA reference manuals at http://www.hp.com/go/caeva for detailed planning and configuration information. The following are some of the array-specific factors to be aware of while configuring the Continuous Access replication solution:

Disk group
When data is synchronously replicated remotely, application performance is not necessarily improved by increasing the number of disks in a disk group, because response time for application writes includes the time for replication. In addition, sequential access (read or write) is limited by per-disk performance rather than by the number of disks in the disk group. Additional disks improve response time only when an application has a high percentage of random reads compared to writes. Analyze the I/O profile of your applications and consider the following:
• If the application I/O stream is dominated by a mix of simultaneous sequential and random transfers, determine how these streams can be directed to specific virtual disks. Put virtual disks with similar tasks in the same disk group. In general, separate sequential I/O stream data (database logs, rich content) from random I/O streams (database information store, file shares).
• Transfer profiles that differ over time are not a concern. A virtual disk that receives sequential transfers for part of the day and random accesses for the rest of the day will operate well in both cases. The issue to consider is accommodating simultaneous sequential and random streams.

DR groups
A data replication (DR) group is a logical group of virtual disks in a remote replication relationship with a corresponding group on another array. Hosts write data to the virtual disks in the source DR group, and the array copies the data to the virtual disks in the destination DR group. Virtual disks in a DR group fail over together, share a write history log (DR group log), and preserve write order within the group. Virtual disks that contain data for one application must be in one DR group. For optimum failover performance, limit the virtual disks in a DR group to as few as possible, and do not group virtual disks that are assigned to separate applications. The maximum number of virtual disks in a DR group and the maximum number of DR groups per array vary with controller software version. For currently supported limits, see the latest HP StorageWorks EVA replication software consolidated release notes.

DR group log
Plan for the additional disk space required for DR group logs. A DR group stores data in the DR group log when:
• A problem occurs with the intersite link
• The DR group is in suspended mode
• The DR group is using enhanced asynchronous write mode

The log requires Vraid1 space ranging from 136 MB to 2 TB, with the maximum value depending on the size of the DR group members and the version of controller software. (In later versions, you can specify the maximum DR group log size.) For version support, see the HP StorageWorks EVA replication software consolidated release notes at the manuals location mentioned above. If XCS 6.0 or later is being used, create the DR group log on an online drive. Constant writes to the DR group log in enhanced asynchronous mode significantly shorten the expected lifetime of near-online drives.

HP cluster technologies for disaster-tolerant solutions


HP offers a comprehensive portfolio of disaster-tolerant solutions to protect customers' data in the event of disaster or system failure. Clustering technologies can be deployed to provide automatic application failover and failback capabilities. This section provides an overview of the available technologies. All are built on the foundation of HP's clustering software, HP Serviceguard.

HP Serviceguard
HP Serviceguard is a high availability (HA) clustering solution that leverages the strength of HP's experience in the HA business, bringing best-in-class mission-critical HP-UX technologies to the Linux environment and to ProLiant and Integrity servers. The cluster kit includes the high availability software that enterprise customers require for 24x7 business operations. It is designed to:
• Protect applications from a wide variety of software and hardware failures
• Monitor the health of each server (node)
• Quickly respond to failures, including system processes, system memory, LAN media and adapters, and application processes
• Enable customers to cluster HP ProLiant and Integrity server families with a choice of shared storage, depending on requirements

For more information, see http://www.hp.com/go/ha.

HP StorageWorks Cluster Extension EVA


HP StorageWorks Cluster Extension EVA offers protection against application downtime from fault, failure, or site disaster by extending a local cluster between data centers over metropolitan distances. CLX EVA integrates with open-system clustering software (Microsoft Cluster Service in Windows 2003 environments, and HP Serviceguard for Linux in Red Hat and SUSE Linux environments) and HP StorageWorks Continuous Access EVA to automate failover and failback between sites.

HP StorageWorks Cluster Extension EVA software is an integrated solution that provides protection against system downtime with automatic failover of application services and read/write enabling of remotely mirrored mid-range storage over metropolitan distances. Cluster Extension EVA adapts in real time to real-life situations, providing protection through rapid site recovery. Cluster Extension EVA delivers true hands-free failover/failback decision making, even when the storage administrator is unaware of the outage, unable to respond, or not present, because Cluster Extension EVA requires no server reboots or LUN presentation/mapping changes during failover. Cluster Extension EVA integration provides efficiency that preserves operations and delivers investment protection, as it monitors and recovers disk pair synchronization at the application level and offloads data replication tasks from the host. Cluster Extension EVA supports the entire EVA family of arrays at either the primary or secondary site. Implementation of a Cluster Extension EVA solution assures the highest standards in data integrity by maximizing the advantages of its integration with Continuous Access EVA software, with HP Serviceguard for Linux in Red Hat and SUSE Linux environments, and with Microsoft Cluster Service in Windows 2003 environments. For more information, see http://www.hp.com/go/clxeva.

HP Metrocluster/HP Continentalclusters
HP Metrocluster and HP Continentalclusters are disaster-tolerant solutions for the HP-UX 11i environment. Both are built on HP's foundation clustering software, HP Serviceguard, providing the highest level of availability and business continuity for enterprise data centers, and both are integrated with Continuous Access EVA.

HP Metrocluster is a business continuity solution with automatic site failover for up to 16 HP Integrity or HP 9000 servers connected to array-based storage. It provides automatic and bidirectional failover of business-critical data and applications located at data centers up to 300 km apart, so both data centers can be active, protected, and capable of handling application failover for each other.

HP Continentalclusters is HP's disaster-tolerant solution for unlimited distances. Because there are no limitations on how far apart the data centers can be located, Continentalclusters gives customers flexibility in where to locate their computing resources. It provides both manual and bidirectional failover of business-critical data and applications over unlimited distance. For more information, see http://www.hp.com/go/dt.



5 Tools for managing and monitoring the replication network


Sizing link bandwidth requirements for remote replication is critical to meeting customers' expectations. This section describes tools that can be used for estimating the effects of inter-site latency and for determining the characteristics of the inter-site link. For additional information about calculating latency and estimating bandwidth requirements, see the HP StorageWorks Continuous Access planning guide and the HP StorageWorks Continuous Access EVA Performance Estimator user guide. Both the planning guide and the estimator user guide can be found via the Technical Documents link on the Continuous Access EVA homepage at http://www.hp.com/go/caeva.

HP StorageWorks Continuous Access EVA Performance Estimator


HP StorageWorks Continuous Access EVA Performance Estimator (the estimator) is an interactive spreadsheet that calculates the approximate effect of inter-site latency and available bandwidth on replication throughput for specific link technologies and application write sizes. You supply the latency and application write size, and the estimator determines the average replication I/Os per second (IOPS) and throughput. You can use the estimator to evaluate:
• The performance capabilities of specific inter-site links
• The link requirements for specific levels of performance
• The bandwidth and time required for a full copy of the source disks

Downloading the estimator
1. Browse to the HP StorageWorks Continuous Access EVA website http://www.hp.com/go/caeva.
2. Click Related Information. The related information links are displayed.
3. Click HP StorageWorks Continuous Access EVA Performance Estimator.
4. In the File Download dialog box, click Save and select the destination for the file.
5. To use the estimator, open the file in Microsoft Excel. The first time you open the Excel file, the default values display. Values in the boxes with a white background can be changed.

B-Series remote replication solution


Figure 12 HP StorageWorks Continuous Access EVA Replication Performance Estimator-V3

Solution testing with EVA workloads


During acceptance testing and after installation, it is very important to monitor the network under a variety of representative Continuous Access EVA workloads before placing the system in production. Some or all of these tests should be run concurrently. Suggested workloads include the following:
- Full DR group copies (normalization of volumes); running at least four concurrently is recommended
- Local I/O between host initiators and EVA storage arrays
- Replication
- Tape backup

Tools that can be used to gather information include:
- Host tools: PERFMON, IOSTAT, or a similar performance tool
- EVA tools: Continuous Access EVA Performance Estimator, EVA-PERF

HP StorageWorks B-Series tools


A basic Ethernet MTU is 1518 bytes, which forces a Fibre Channel frame to be broken into two Ethernet packets when traveling via FCIP. A 2250-byte jumbo packet can accept an entire standard FC frame (2148 bytes), leading to improved FCIP performance and more efficient network utilization. Many IP service providers can support jumbo packets throughout their network. It is critical that the user understands the operational characteristics of the IP link intended to be used for FCIP communications. WAN analysis tools are designed to estimate the end-to-end IP path performance characteristics between a pair of FCIP port endpoints. WAN tools include the following commands and options:
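The arithmetic behind the jumbo-frame recommendation can be illustrated with a short sketch. The header sizes used below (18 bytes of Ethernet framing, 40 bytes of IP plus TCP headers, no TCP options) are nominal assumptions for illustration, not values taken from this guide:

```python
import math

# Why jumbo frames help FCIP: how many Ethernet packets does one
# full-size FC frame (2148 bytes) need at a given IP MTU?
FC_FRAME = 2148
ETH_OVERHEAD = 18    # Ethernet header + FCS (nominal)
IP_TCP_OVERHEAD = 40 # 20-byte IP + 20-byte TCP headers, no options

def packets_per_fc_frame(mtu):
    """Ethernet packets needed to carry one FC frame at this IP MTU."""
    payload = mtu - IP_TCP_OVERHEAD  # TCP payload per packet
    return math.ceil(FC_FRAME / payload)

def wire_efficiency(mtu):
    """Fraction of bytes on the wire that are FC frame payload."""
    pkts = packets_per_fc_frame(mtu)
    wire_bytes = FC_FRAME + pkts * (ETH_OVERHEAD + IP_TCP_OVERHEAD)
    return FC_FRAME / wire_bytes

print(packets_per_fc_frame(1500))  # 2 packets at the standard MTU
print(packets_per_fc_frame(2250))  # 1 packet with a 2250-byte jumbo MTU
```

Beyond the modest per-byte overhead saved, halving the packet count also halves the per-packet processing load and removes the reassembly step at the far end.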


portcmd --ipperf: Characterizes end-to-end IP path performance between a pair of B-Series FCIP ports. The path characterization elements include:
- Bandwidth: Total packets and bytes sent. The bytes/second estimate is maintained as a weighted average with a 30-second sampling frequency, and also as an average rate over the entire test run.
- Loss: The estimate is based on the number of TCP retransmits (the assumption is that the number of spurious retransmits is minimal). The loss rate (percentage) is calculated from the rate of retransmissions within the last display interval.
- Delay/Round-trip time (RTT): TCP smoothed RTT and variance estimate, in milliseconds.
- Path MTU (PMTU): The largest IP-layer datagram that can be transmitted over the end-to-end path without fragmentation. This value is measured in bytes and includes the IP header and payload. There is limited support for black hole PMTU discovery. If the jumbo PMTU (anything over 1500) does not work, ipperf tries 1500 bytes (the minimum PMTU supported for FCIP tunnels). If a 1500-byte PMTU fails, ipperf gives up. There is no support for aging. PMTU detection is not supported for active tunnels. During black hole PMTU discovery, the BW, Loss, and PMTU values printed might not be accurate.

portshow fciptunnel: Displays performance statistics generated from the WAN analysis.
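As an illustration of how such statistics can be derived from raw counters, the sketch below computes a loss percentage from retransmit counts and applies the standard TCP smoothed-RTT estimator (RFC 6298 style, gain 1/8). The B-Series firmware's exact constants are not documented here, so treat this as a generic model:

```python
# Generic derivation of ipperf-style statistics from raw counters.
# The RTT smoothing gain of 1/8 is the textbook TCP value; the actual
# B-Series firmware may use different constants.

def loss_rate_pct(retransmits, packets_sent):
    """Loss estimate (%) from TCP retransmits within a display interval."""
    return 100.0 * retransmits / packets_sent if packets_sent else 0.0

def smoothed_rtt(samples_ms, alpha=0.125):
    """Exponentially weighted RTT average over successive measurements."""
    srtt = samples_ms[0]
    for rtt in samples_ms[1:]:
        srtt = (1 - alpha) * srtt + alpha * rtt
    return srtt

print(loss_rate_pct(5, 10000))               # 0.05 (percent)
print(round(smoothed_rtt([20, 20, 28]), 2))  # 21.0 (ms)
```

The smoothing explains why a single delayed packet barely moves the reported RTT: each new sample contributes only one eighth of its weight.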

WAN analysis tools


Typically, you start the WAN tool before setting up a new FCIP tunnel between the two sites. You can configure and use the ipperf option immediately after installing the IP configuration on the FCIP port. Once basic IP addressing and IP connectivity are established between the two sites, you can configure ipperf with parameters similar to those that will be used when the FCIP tunnel is configured. The traffic stream generated by a WAN tool ipperf session can be used to:
- Validate a service provider Service Level Agreement (SLA) for throughput, loss, and delay characteristics
- Validate the end-to-end PMTU, especially if you are trying to eliminate TCP segmentation of large Fibre Channel frames
- Study the effects and impact FCIP tunnel traffic may have on any other applications sharing network resources

Procedure description
1. A telnet session is used to connect to the local and remote B-Series routers and define the IP interface address to use with the GbE ports on each router. If more than one FCIP tunnel is used, it is necessary to define a local and remote address for each tunnel. This is accomplished using the FOS command portcfg. These addresses must be routable across the client IP network and would have been provided by the client IP network administrator.
2. After the IP address interfaces have been defined, the user can verify that the IP addresses are routable and reachable by using the B-Series FOS command portcmd with the command control parameter --ping.
3. With a routable connection established and tested with the ping step, the user connects to the remote B-Series router and uses the FOS command portcmd with the command control parameter --ipperf to prepare the remote device for the WAN tools diagnostic process. This is known as specifying the sink mode to accept the new connection, and is accomplished with the -R parameter of the ipperf command. Additional parameters are required, including -s and -d, to specify the IP addresses of the source and destination routers.
4. Next, the user returns to the local B-Series router and uses portcmd with the command control parameter --ipperf to specify the local machine as the source mode to initiate the TCP connection. The source endpoint generates a traffic stream and reports the end-to-end IP path characteristics from this endpoint toward the receiver endpoint (sink). This step is accomplished with the ipperf command parameter -S. Additional parameters are required, including -s and -d to specify the IP addresses of the source and destination routers, plus a test duration -t (time to run the test; the default is forever) and a display/refresh interval -i (default: 30), both specified in seconds.


5. Using the output from these commands, the user has data that can be evaluated and used to guide the configuration of the FCIP tunnel.
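One way to evaluate that data is the Mathis approximation, a widely used rule of thumb that relates measured RTT and loss rate to the best throughput a single TCP stream can sustain. This is a generic TCP model applied to the ipperf measurements, not a calculation the B-Series tools perform for you:

```python
import math

# Mathis et al. approximation for single-stream TCP throughput:
#   throughput <= (MSS / RTT) * (1 / sqrt(loss))
# Useful as a sanity check on the RTT and loss values reported by a
# WAN analysis run before sizing the FCIP tunnel's committed rate.

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Upper bound on single-stream TCP throughput, in Mb/s."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = (mss_bytes / rtt_s) * (1.0 / math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6

# Example: 1460-byte MSS, 20 ms RTT, 0.1% loss as measured by ipperf:
print(round(mathis_throughput_mbps(1460, 20.0, 0.001), 1))  # 18.5
```

If this bound comes out well below the link's committed rate, reducing loss or RTT (or running multiple tunnels) matters more than buying bandwidth.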

CLI syntax
portcmd
--ping: Pings a destination IP address from one of the source IP interfaces on the GbE port.

Usage: portcmd --ping [slot/]port args

Arguments:
-s src_ip
Specifies the local IP address to use for sourcing the probe packets.
-d dst_ip
Specifies the destination IP address to probe the IP router path.

Optional arguments:
-n num_requests
Specifies the number of ping requests. The default is 4.
-q service_type
Specifies the type of service in the ping request. The default is 0, and service_type must be an integer from 0 to 244.
-t ttl
Specifies the time to live. The default is 100.
-w wait_time
Specifies the time to wait for the response to each ping request. The default is 5000 milliseconds, and the maximum wait time is 9000.
-z size
Specifies the default packet size, in bytes. The default is 64 bytes. The total size, including ICMP/IP headers (28 bytes without IP options), cannot be greater than the IP MTU configured on the interface.

--ipperf: Determines the path characteristics to the remote host.

Usage: portcmd --ipperf [slot/]geport args [optional_args]

Arguments:
-s src_ip
Specifies the local IP address to use for sourcing the probe packets. ipperf will not start if an IPSec-enabled tunnel using the same source IP address exists.
-d dst_ip
Specifies the destination IP address to probe the IP router path.
-S
Specifies the source mode to initiate the TCP connection.
-R
Specifies the sink mode to accept the new connection. The end-to-end path characteristics are not reported.

Optional arguments:
-r committed_rate
Specifies the committed rate for the data stream, in kilobits/sec (kbps).
-i interval
Specifies the display/refresh interval, in seconds. Default: 30 sec.
-p port
Specifies the TCP port number for the listener endpoint. Default: 3227.
-t running_time
Specifies how long to run the test, in seconds. Default: run forever.
-z size
Specifies the default packet size, in bytes. Default: 64 bytes.

Examples
1. To verify whether packets can be sent to the destination IP address, enter:
portcmd --ping 4/ge0 -s 192.168.100.50 -d 192.168.100.40
2. To prepare the remote B-Series router GbE port to receive (sink), enter:
portcmd --ipperf ge0 -s 192.168.100.40 -d 192.168.100.50 -R
3. To execute the WAN tools procedure from the local B-Series router GbE port to the remote GbE port, collect data for a period of 60 seconds, and sample at an interval of 5 seconds, enter:
portcmd --ipperf ge0 -s 192.168.100.40 -d 192.168.100.50 -S -i 5 -t 60

NOTE:
1. In the above examples, the source and destination IP addresses are specified in the context of where the command is executed. When connected via a telnet session to the local site router, the source (-s) is the IP address of the interface of the local router and the destination (-d) is the FCIP interface address on the remote router.
2. There are optional parameters for the --ipperf command that can be used to further characterize the link and test other IP ports. Only a sampling of simple usage is shown.

CLI syntax:
portshow
fciptunnel: Views the performance statistics and monitors the behavior of an online FCIP tunnel.

Usage: portshow fciptunnel [slot/]geport args [optional_args]

Arguments:
all | tunnel_id
Shows all FCIP tunnels, or the specified tunnel_id, on this GbE port.

Optional arguments:
-perf
Shows additional performance information.
-params
Shows connection parameter information.

Examples
Executing portshow fciptunnel 8/ge0 0 -perf displays the performance statistics for tunnel 0.


Executing portshow fciptunnel 8/ge0 0 -params displays the connection parameters for tunnel 0.


6 Related information
This document covers FCIP disaster recovery solutions using the B-Series components. However, additional technologies are also supported, including native Fibre Channel with WDM technology and Continuous Access XP. Both of these technologies are supported with the B-Series components.

Dark Fiber/WDM
Besides extension over FCIP, the 400 MP Router and MP Router Blade are supported with other types of SAN extension technology:
- Native Fibre Channel: This choice is based on direct Fibre Channel extension using short- or long-wavelength small form-factor pluggable (SFP) transceivers and dark fiber. Distance support depends on the SFP range, but is usually below 50 km. With this option, performance is the best possible, both in terms of latency and throughput. Cost varies from low, for short runs using multi-mode fiber optic (MMF) cables and short-wavelength (SWL) media, to high, using single-mode fiber optic (SMF) cables and long-wavelength (LWL and ELWL) transceivers. Compared to other extension options, LWL transceivers are inexpensive and easy to deploy. This SAN extension option does have two drawbacks: the limited distances supported and the cost of fiber runs. A dedicated dark fiber link is used for every port of bandwidth required, and the dark fiber cables themselves can be expensive.
- Native Fibre Channel with Wavelength Division Multiplexing (WDM): SAN extension utilizing WDM (either DWDM or CWDM) through an existing optical metropolitan area network (MAN), which can handle multiple protocols. Distances range from 50 km up to hundreds of kilometers. CWDM is lower cost than DWDM, which offers longer distances. Both CWDM (8 wavelengths) and DWDM (32 wavelengths) offer the benefit of multiple wavelengths of light over a single fiber optic cable for increased link capacity.

For more details on these SAN extension technologies with B-Series fabrics, see the SAN extensions section of the HP StorageWorks SAN design reference guide at http://www.hp.com/go/sandesignguide.

XP array family and Continuous Access XP


The 400 MP Router and MP Router Blade can also be used in a disaster recovery solution using Continuous Access XP and the XP family of storage arrays (XP10000/XP12000).

The HP StorageWorks XP10000 Disk Array, an entry-level high-end array, delivers always-on availability for mission-critical IT environments. Boot-once scalability and heterogeneous connectivity in a compact footprint increase business agility and decrease the stress of running applications where downtime is not an option. The HP StorageWorks XP12000 Disk Array is an enterprise-class storage system that delivers state-of-the-art reliability and always-on availability for mission-critical applications where downtime is not an option. Both the XP10000 and the XP12000 are designed for organizations that demand the most from their storage. Complete redundancy throughout the architecture provides no single point of failure, and non-disruptive online upgrades ensure that data is always available.

HP StorageWorks Continuous Access XP software products are high-availability data and disaster recovery solutions that deliver host-independent, real-time, remote data mirroring between XP disk arrays. With seamless integration into a full spectrum of remote-mirroring-based solutions, these products can be deployed in solutions ranging from data migration to high-availability server clustering. For more information on Continuous Access XP software, see http://www.hp.com/go/storageworks/software. For more information on XP disk arrays, see http://www.hp.com/go/storage/xparrays.



7 Appendix: Configuring FCIP interfaces and tunnels on the HP StorageWorks 400 MP Router using CLI commands
This section is for expert users.

Command procedure description


1. Use a telnet session to connect (log in) to the local and remote B-Series routers.
2. Ensure that both the FC and FCIP ports are disabled during configuration operations. This prevents unintended fabric merges between the local and remote fabrics.
3. By default, virtual ports are created as VE_Ports. If a VEX_Port is desired, use portcfgvexport to configure the port.
4. Use the command portcfg to configure the GbE port, IP interfaces on the GbE port, static routes on the IP interface, and FCIP tunnels:
a. Define the IP interface of each virtual port.
b. Add IP routes on a GbE port (that is, create the static IP route).
c. Configure the FCIP tunnel.

CLI syntax
portcfgvexport
Configures VEX_Ports.

Usage: portcfgvexport [slot/]port args

Arguments:
-a admin
Specify 1 to enable or 2 to disable the admin.
-f fabricid
Specify 1 to 128 for the fabric ID.
-d domainid
Specify 1 to 239 for the preferred domain ID.
-p pidformat
Specify 1 for core, 2 for extended edge, and 3 for native port ID format.
-t fabric_parameter
Specify 1 to enable or 2 to disable negotiation of fabric parameters.


portcfg: Configures the FCIP interfaces and tunnels.


ipif: Configures IP interface entries.

Usage: portcfg ipif [slot/][ge]port args

Arguments:
create ipaddr netmask mtu_size
Creates IP interfaces.
delete ipaddr
Deletes IP interfaces.

iproute: Configures IP route entries.

Usage: portcfg iproute [slot/][ge]port args

Arguments:
create ipaddr netmask gateway_router metric
Creates IP routes.
delete ipaddr netmask
Deletes IP routes.

fciptunnel: Configures FCIP tunnels.

Usage: portcfg fciptunnel [slot/][ge]port args [optional_args]

Arguments:
create tunnel_id remote_ipaddr local_ipaddr comm_rate
Creates FCIP tunnels.

Optional arguments:
-c
Enables compression on the tunnel specified.
-f fastwrite
Enables fastwrite on the tunnel specified. This argument cannot be used together with -ike, -ipsec, or -key (IPSec).
-ike policy
Specifies the IKE policy number used on the tunnel specified. This argument must be used with -ipsec and -key. It cannot be used together with -f (fastwrite) or -t (tape pipelining).
-ipsec policy
Specifies the IPSec policy number to be used on the tunnel specified. This argument must be used together with -ike and -key. It cannot be used together with -f (fastwrite) or -t (tape pipelining).
-k timeout
Specifies the keep alive timeout, in seconds. Timeout values are 8 to 7,200; the default is 10. If tape pipelining is enabled, the minimum value supported is 80.
-key preshared-key
Specifies the preshared key to be used during IKE authentication. The maximum length is 32 bytes. It must be a double-quoted string of alphanumeric characters, and the value of this key must be at least 12 bytes. This argument must be used together with -ike and -ipsec. It cannot be used together with -f (fastwrite) or -t (tape pipelining).
-m time
Specifies the minimum retransmit time, in milliseconds. Time values are 20 to 5,000; the default is 100.
-n remote_wwn
Specifies the remote-side FC entity WWN.


-r retransmissions
Specifies the maximum number of retransmissions. Retransmission values are 1 to 8; the default is 8. If tape pipelining is enabled, the default value is calculated based on the minimum retransmit time to ensure that the TCP connection does not time out before the host times out. If the user changes this value, the value specified must be greater than the calculated value.
-s
Disables selective acknowledgement (SACK) on the tunnel specified.
-t
Enables tape pipelining on the tunnel specified (requires fastwrite to be turned on). This argument cannot be used together with -ike, -ipsec, or -key (IPSec).

NOTE: Only a single IPSec-enabled tunnel can be configured on a port. No other tunnels (IPSec or otherwise) can be configured on the same port. Jumbo frames are not supported on secure tunnels. Only a single route is supported on an interface with a secure tunnel.
Arguments:
modify tunnel_id
Modifies the properties of the existing FCIP tunnel. This disrupts the traffic on the FCIP tunnel specified for a brief period of time. If the FCIP tunnel has IPSec enabled, it cannot be modified; to change it, you must delete it and recreate it.

Optional arguments:
-b comm_rate
Specifies the desired committed rate for the existing tunnel.
-c 0|1
Disables (0) or enables (1) compression on the existing tunnel.
-f 0|1
Disables (0) or enables (1) fastwrite on the existing tunnel.
-k timeout
Specifies the keep alive timeout, in seconds, for the existing tunnel. Timeout values are 8 to 7,200; the default is 10. If tape pipelining is enabled, the minimum value is 80.
-m time
Specifies the minimum retransmit time, in milliseconds, for the existing tunnel. Time values are 20 to 5,000; the default is 100.
-r retransmissions
Specifies the maximum number of retransmissions for the existing tunnel. Retransmission values are 1 to 16; the default is 8. If tape pipelining is enabled, the default value is calculated based on the minimum retransmit time to ensure that the TCP connection does not time out before the host times out. If the user changes this value, the value specified must be greater than the calculated value.
-s 0|1
Disables (0) or enables (1) selective acknowledgement (SACK) on the existing tunnel.
-t 0|1
Disables (0) or enables (1) tape pipelining.

NOTE: Some of the optional portcfg command settings are not compatible with the HP StorageWorks Continuous Access EVA product.

Examples
1. To configure a VEX_Port, enter:
portcfgvexport 8/18 -a 1 -f 2 -d 220
2. To create an IP interface, enter:
portcfg ipif ge0 create 192.168.100.50 255.255.255.0 1500
Verify the created IP interface:
portshow ipif ge0
3. To create a static IP route, enter:
portcfg iproute ge0 create 192.168.11.0 255.255.255.0 192.168.100.1 1
Verify that the route has been successfully created:
portshow iproute ge0
4. To create an FCIP tunnel, enter:
portcfg fciptunnel ge0 create 2 192.168.100.40 192.168.100.50 10000

HP StorageWorks Fabric Manager


HP StorageWorks Fabric Manager is a highly scalable, Java-based application that manages multiple B-Series switches and fabrics in real time. In particular, Fabric Manager provides the essential functions for efficiently configuring, monitoring, dynamically provisioning, and managing SAN fabrics on a daily basis. It is an optional software package for the B-Series SAN products. An evaluation copy is available for download from the HP website. To download an evaluation copy:
1. Browse to the HP StorageWorks storage networking site at http://www.hp.com/go/san.
2. Select B-Series Switches > B-Series Software > HP StorageWorks Fabric Manager Software.
3. Download the evaluation version appropriate for your operating system.

HP StorageWorks Fabric Manager share devices wizard


The following is assumed:
- Knowledge of Fabric Manager
- Fabric Manager is installed
- SAN fabrics in the enterprise are discovered

For the Multi-Protocol Router, this wizard walks the user through a series of steps to share devices between fabrics, and includes steps to connect the edge fabric to the FC Router. Without the wizard, sharing devices would be a manual operation across multiple different fabrics. The wizard collates information from multiple sources, analyzes information from different fabrics to present a comprehensive view, and provides information on the FC Router, EX_Ports, and LSANs. Follow the steps below to set up LSANs using Fabric Manager:
1. Open Fabric Manager and select an edge fabric that will be part of the LSAN.


2. Select Tasks > Device Sharing and Troubleshooting > Share Devices to launch the Share Devices wizard.
3. Click Next. The Select Devices to Share window is displayed.
4. Enter the logical SAN (LSAN) name. In this example, the LSAN name is LSAN_CA_Replication; the local managed fabric Switch1 15 is where the first aliases containing device port WWNs are selected. The ports include the host port where Command View is run, and port number 4 from the EVA 8000 top and bottom controllers. The use of zoning aliases makes it easy to recognize the devices, rather than having to read device World Wide Names.
5. Select the devices to share from the Available Devices list and click the right arrow to move them to the Selected Devices list. In this example, the EVA4000 devices are selected from the remote fabric named Switch37.

NOTE: The remote fabric called Switch37 is visible to Fabric Manager, allowing easy selection of the devices that need to be available to Continuous Access EVA for the replication process.

6. Click Next.


7. Select Finish > Yes > Yes. The LSAN is created. Fabric Manager distributes the LSAN definitions to both fabrics. This process may take a few minutes, depending on whether the routed fabrics are local or across the WAN.
8. Select OK > Close to close the wizard.

Viewing LSANs using Fabric Manager


Choose an edge fabric and select Tasks > Device Sharing and Troubleshooting > Show LSAN view. The devices in the LSAN LSAN_CA_Replication are displayed.

