
ActiveCluster Requirements and Best Practices

Internal Information: Do not share externally.

All upgrades to Purity 5.0.0 and ActiveCluster installs must adhere to the requirements described in the
internal KB ActiveCluster Install and Upgrade Requirements Checklist and follow the ActiveCluster
Upgrade and Activation Approval Process.

FlashArray Requirements
• Purity 5.0.0 or greater must be installed
• ActiveCluster supports FlashArray models FA450, m20, m50, m70, and x70; FlashArray m10 is supported for PoC only.
• Replication network round-trip latency cannot exceed 5ms
• Four IP addresses are required for the replication interfaces per array
• Both management interfaces on each controller (ct0.eth0, ct0.eth1, ct1.eth0, and ct1.eth1) must be configured on
both arrays, with links enabled and active.
• Five management IP addresses are required per array (vir0, ct0.eth0, ct0.eth1, ct1.eth0, ct1.eth1).
• Hosts connected to ActiveCluster must be using HBA firmware and HBA drivers that are no more than 2 years old
• It is strongly recommended that arrays be no more than 80% full before enabling ActiveCluster; please contact
support if the array exceeds this threshold (see the capacity check sketched after this list).
• If upgrading to Purity 5.0.0 and using VMware Site Recovery Manager (SRM) or VMware vRealize Operations
Manager (vROPS) with FlashArray, the following plugins must be upgraded to at least these minimum versions. Until
the plugins are upgraded, both the SRM and vROPS products may be unable to function with the Pure Storage
FlashArray.
◦ vROPS plugin: 1.0.152
◦ SRA/SRM plugin: 2.0.8
• If in a Non-Uniform configuration, you must enable the ESXi Host Personality if available (introduced in Purity 5.0.7
and 5.1.0)
• ActiveCluster is not compatible with VVOLs at this time
• ActiveCluster is not compatible with WFS at this time
• ActiveCluster is not compatible with Purity Run at this time
• ActiveCluster is not compatible with QoS at this time
• Upgrades from Purity 4.8.x to 5.x will require several additional hours to complete; there is no additional delay with
upgrades from 4.10 to 5.x
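
Before enabling ActiveCluster, capacity utilization can be confirmed from the Purity CLI. This is a minimal sketch; the
exact space columns shown vary slightly between Purity releases:

# Review overall space utilization; stay under the recommended 80% threshold.
pureuser@arrayA> purearray list --space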

Host Connectivity
Configuring an array for synchronous replication and transparent failover is very similar to deploying any Pure Storage
array that performs replication: power on the array, cable the array to the local SAN, then connect and configure the
replication ports.

What does differ is the notion of uniform and non-uniform connectivity.

Uniform or Non-Uniform Storage Configuration


Uniform connectivity is when a host at either site has storage connectivity to the arrays at both sites. Non-uniform
connectivity is when each host has storage connectivity only to the array local to its site.

Uniform Storage Configuration (with Preferred Arrays enabled)

The image below represents the logical paths that exist between the hosts and arrays, and the replication connection
between the two arrays in a uniform access model. In uniform environments, a setting called preferred-array can be
applied to the hosts to ensure the host uses the most optimal paths in the environment. This type of configuration is
used when there is some latency between the arrays as it prevents hosts from performing reads over the longer
distance connections between sites.

Uniform Storage Configuration (without Preferred Arrays enabled)

The image below represents a uniform storage configuration; however, the preferred arrays option has not been set.
This means that the hosts will treat all paths equally, or according to whatever path selection policy the host is
configured to use. This type of configuration may be used when the latency between the arrays is negligible, or the
distance is short enough that any impact to performance is tolerable, such as between two racks in the same datacenter.

Non-Uniform Storage Configuration

For a non-uniform environment, such as the image below represents, configuration changes should be minimal. Each
host has connections only to the local array at that host's location. In this environment, the preferred-array setting is not
necessary. This configuration is used when host-to-array connectivity does not exist between sites.

Host FC and iSCSI Networks


iSCSI and Fibre Channel are supported, and the traditional best practices still apply: crossed connections from each
controller through two fabrics.

In uniform environments, the hosts will have long-distance connections to the remote array. A number of long-distance
technologies exist to transport iSCSI or Fibre Channel over those distances. It's important to emphasize that redundant
paths remain necessary for long-distance connections.

For high-availability rack deployments, long-distance connections, and certainly long-distance technologies such as
DWDM, are not relevant. In these configurations, where only a single switch might be used, the common best practice of
crisscrossing host connections between controllers still applies even though no long-distance technology is involved.

Uniform Storage Configuration

Note that each fabric switch carries its own long-distance link in this uniform example:

Non-Uniform Storage Configuration

In a non-uniform environment, the zoning and connections for hosts remain redundant without the long distance links:

Host Configuration Requirements
In uniform environments, Preferred Arrays should be enabled on the FlashArray, in the host object details panel for each
host, to avoid unnecessary read latency.

The preferred arrays setting specifies which array a host should use to perform read and write operations by setting the
ALUA path properties to Active/Non-Optimized for paths to the remote array. This keeps reads on local paths for better
performance.
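
The same setting can also be applied from the Purity CLI. A minimal sketch using hypothetical array and host names;
the --preferred-array flag is assumed from the Purity 5.0.x CLI, so confirm it against your release's CLI reference:

# Mark arrayA as the preferred (local) array for host esx-01, so paths to the
# remote array are presented as ALUA Active/Non-Optimized.
pureuser@arrayA> purehost setattr --preferred-array arrayA esx-01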

As ActiveCluster takes advantage of multipathing, it's critical to adhere to Pure Storage Best Practices for your specific
operating system. Each supported operating system section below links to its Best Practices KB, and any significant
notes regarding multipathing are called out in that section.

VMware
• VMware ESXi versions 5.5, 6.0, and 6.5 are supported for use with ActiveCluster.
• In addition to the ActiveCluster requirements and best practices below, all of the standard VMware Best Practices
should be followed.
• The VMware Path Selection Policy should be Round Robin (a CLI sketch follows this section).
◦ ESXi 6.5 Update 1 will automatically configure devices with the Round Robin path policy.
• If in a Non-Uniform configuration, you must enable the ESXi Host Personality.

• For VMware vSphere HA clusters, the response for Datastore with All Paths Down (APD) and Permanent Device
Loss (PDL) must be set to "Power off and restart VMs". This assures that VMs are recovered after any offline events
that might occur with ActiveCluster-enabled arrays.
• See Implementing vSphere Metro Storage Cluster with ActiveCluster for more detail.

Note: If using VMware RDM devices with MSCS clustered VMs, the RDM device may fail to connect to an MSCS
cluster node and the MSCS configuration validation wizard may show an error for the SCSI-3 validation test.
To resolve this issue, see VMware KB: Adding an RDM LUN to the second node in the MSCS cluster fails.
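
A hedged sketch of the related host-side and array-side commands, using hypothetical host and array names. The
esxcli SATP rule is one commonly used way to default new Pure Storage devices to Round Robin on ESXi releases
prior to 6.5 Update 1, and purehost setattr --personality is assumed to be the Purity 5.0.7+/5.1.0+ form for enabling the
ESXi Host Personality; verify both against the referenced KBs:

# On each ESXi host prior to 6.5 U1 (later builds apply Round Robin automatically):
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" \
    -P "VMW_PSP_RR" -O "iops=1" -e "FlashArray ActiveCluster rule"

# On the FlashArray, for Non-Uniform configurations:
pureuser@arrayA> purehost setattr --personality esxi esx-01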

VMware - VM Cloning and VAAI with ActiveCluster


ActiveCluster supports VMware VAAI Copy Offload for efficient VM cloning and storage vMotion operations if both the
source and target of the operation are in the same pod. VAAI copy offload operations across pod boundaries will act as
normal copies.

If performing frequent VM clones, such as automated provisioning of desktops from golden images, then the clone
source VM should be stored in the same pod as the target VMs.

Windows and Hyper-V
• In addition to the ActiveCluster requirements and best practices below, all of the standard Windows Best
Practices should be followed.

◦ Configure the path verification retry period to 30 seconds in the DSM details on each LUN, as noted in
the Configuring Multipath-IO KB article.

◦ Path verification must be enabled with a Path Verify Period of 30 seconds to assure rediscovery of active/
non-optimized paths.
• Microsoft DSM version 6.3.9600.18592 is required for Windows Server 2012 R2.
◦ See MS HotFix https://support.microsoft.com/en-us/help/3046101

Linux
• In addition to the ActiveCluster requirements and best practices below, all of the standard Linux Best
Practices should be followed.
• Use the configuration settings described in Linux Recommended Settings with the following changes:
◦ The path_grouping_policy attribute is set to "multibus" in the standard recommendations; change it to
group_by_prio as shown in the multipath.conf example below. This configures the host to group paths based on
ALUA priority.
◦ Add attribute hardware_handler "1 alua" - configures the host to use ALUA for the storage device.
◦ Add attribute prio alua - configures the host to recognize path priorities based on ALUA.
◦ Add attribute failback immediate - configures the host to automatically move IO back to optimized paths
when they become available.

An example of an ActiveCluster-compliant multipath.conf file is below, incorporating the changes listed above.

defaults {
    polling_interval      10
}

devices {
    device {
        vendor                "PURE"
        path_selector         "queue-length 0"
        path_grouping_policy  group_by_prio
        path_checker          tur
        fast_io_fail_tmo      10
        dev_loss_tmo          60
        no_path_retry         0
        hardware_handler      "1 alua"
        prio                  alua
        failback              immediate
    }
}

Remember that changes to the multipath.conf file require a restart of the multipathd service:

service multipathd restart
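
After the restart, a quick check that the new settings took effect is to inspect the multipath topology; a generic sketch
(device names and WWIDs will differ in your environment):

# Paths to a stretched volume should now appear in two priority groups:
# a local Active/Optimized group and a remote Active/Non-Optimized group.
multipath -ll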

AIX
• In addition to the ActiveCluster requirements and best practices below, all of the standard AIX Best
Practices should be followed.
• Download and install the Pure Storage ODM for any AIX hosts connected to ActiveCluster arrays.

Management Network

Management Network and Mediator Access


Due to the introduction of the Mediator for ActiveCluster, general guidelines for FlashArray management interfaces have
changed.

Before ActiveCluster, we recommended a single management interface on each controller. This provided resiliency
while still minimizing the deployment overhead for network configuration.

For ActiveCluster, we require a fully redundant network configuration for eth0 and eth1 - including cross connections to
multiple switches, as illustrated below. This ensures that any single network failure does not prevent the array from
contacting the mediator in the event of a significant failure at the remote site.

• Use both management ports so each controller is resilient to unexpected management network incidents
◦ It is not required that vir1 be configured.
• Connect each management port to one of two network switches
• Verify that eth0 connects to Switch A and eth1 connects to Switch B on CT0. On CT1, verify that eth0 connects to
Switch B and eth1 connects to Switch A. (A verification sketch follows this list.)
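
To confirm that all four management interfaces are enabled and cabled as intended, review the interface listing from the
Purity CLI on each array; a minimal sketch:

# ct0.eth0, ct0.eth1, ct1.eth0, and ct1.eth1 should all show as enabled with an address.
pureuser@arrayA> purenetwork list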

A note about backhauled internet designs:

To ensure that both arrays have access to the mediator in every failure scenario, each site should have
independent network access to the Pure1 cloud. A backhauled internet design is one where a site's access to
the internet routes through, and depends on, the link to and through the other site. With this design, a
complete "site failure" of the site providing internet access would prevent the FlashArray in the other site from
reaching the mediator.

Management Network Firewall Changes


In addition to increased management network resiliency, accessing the Pure1 Mediator also requires the following
firewall exceptions:
• Outbound access on port 443 for HTTPS
• Allow access to addresses 52.40.255.224 - 52.40.255.255
◦ Can be entered into firewall rules as a CIDR block: 52.40.255.224/27
• Hostnames: *.cloud-support.purestorage.com
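
One way to sanity-check these rules from a machine on the same management network (the array CLI itself only offers
ICMP tests via purenetwork ping) is a simple outbound TCP probe; a sketch using one address from the documented
range, assuming nc is available:

# Succeeds only if outbound TCP 443 toward the Pure1 mediator range is permitted.
nc -zv -w 5 52.40.255.224 443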

Replication Network

Replication Network Requirements


The replication network is used for the initial asynchronous transfer of data to stretch a pod, to synchronously transfer
data and configuration information between the arrays, and to resynchronize a pod.
• 5ms maximum round-trip latency between arrays for ActiveCluster
• Adequate bandwidth between arrays to support bi-directional synchronous writes plus resynchronization traffic; the
amount needed depends on the write rate of the hosts at both sites.
• 10Gb Twinax or optical cabling for four replication interfaces per array
• Four IP addresses per array, one for each array replication interface
• Arrays cannot be directly connected; switching is required between arrays
• The default MTU is 1500
• When connecting a remote array for replication, the replication addresses are auto-discovered. You only need to
enter the Replication Addresses when the remote array's replication addresses are being translated via Network
Address Translation (NAT), in which case you must enter all the external NAT addresses here (a connect-command
sketch follows the path listing below).

• Check that there is no more than 5ms round-trip latency between arrays for ActiveCluster
◦ Assuming that ICMP is enabled on your WAN, test with purenetwork ping --interface eth2 <IP
address of remote replication interface>:

pureuser@tmefa05> purenetwork ping --count 5 --interface eth2 192.168.11.44


PING 192.168.11.44 (192.168.11.44) from 192.168.11.45 eth2: 56(84) bytes of data.
64 bytes from 192.168.11.44: icmp_seq=1 ttl=64 time=1.145 ms
64 bytes from 192.168.11.44: icmp_seq=2 ttl=64 time=1.166 ms
64 bytes from 192.168.11.44: icmp_seq=3 ttl=64 time=1.134 ms
64 bytes from 192.168.11.44: icmp_seq=4 ttl=64 time=1.132 ms
64 bytes from 192.168.11.44: icmp_seq=5 ttl=64 time=1.141 ms

• Verify all replication paths are connected


◦ This command must be run from the array that established the connection

pureuser@tmefa04> purearray list --connect --path


Name Source Destination Status
tmefa05 192.168.11.41 192.168.11.45 connected
tmefa05 192.168.11.41 192.168.11.46 connected
tmefa05 192.168.11.41 192.168.11.47 connected
tmefa05 192.168.11.41 192.168.11.48 connected
tmefa05 192.168.11.42 192.168.11.45 connected
tmefa05 192.168.11.42 192.168.11.46 connected
tmefa05 192.168.11.42 192.168.11.47 connected
tmefa05 192.168.11.42 192.168.11.48 connected
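
For reference, a connection like the one listed above is originally established with purearray connect, supplying the
remote array's management (vir0) address, the sync-replication connection type, and the connection key generated on
the remote array. A hedged sketch; confirm the exact flags against your Purity release's CLI reference:

# On the remote array: display the connection key to be supplied during connect.
pureuser@tmefa05> purearray list --connection-key

# On the local array: connect for synchronous replication. Replication addresses
# are auto-discovered unless NAT is in use.
pureuser@tmefa04> purearray connect --type sync-replication --management-address <remote vir0 address>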

FlashArray Replication Firewall Requirements


In order to replicate between two FlashArrays, the arrays must be able to communicate via the following ports.

Service/Port Type    Firewall Port
Management ports     443
Replication ports    8117

Replication Network Topology
The replication network should adhere to the following topology.

ActiveCluster requires that every replication port on one array has network access to every replication port on the other
array. This can be done by ensuring that redundant switches are interconnected, by routing, or by ensuring that the
long-distance Ethernet infrastructure allows a switch in one site to connect to both switches in the other site.

ActiveCluster Planning
ActiveCluster in Purity 5.0.x supports up to six pods. Some consideration and planning is recommended to make the
best use of the six available. For starters, only unstretched pods can have volumes moved into (or out of) them, so
avoid stretching the pods until the final decisions are made as to how many pods will be used and what they will
contain.
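
A hedged sketch of the workflow this implies, using hypothetical pod, volume, and array names and the Purity 5.0.x CLI
forms (purepod, purevol move); confirm the syntax against your release's CLI reference:

# Create a pod and move existing volumes into it while the pod is still unstretched.
pureuser@arrayA> purepod create pod1
pureuser@arrayA> purevol move vol1 pod1

# Only after the pod's contents are final, stretch it to the second array.
pureuser@arrayA> purepod add --array arrayB pod1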

When planning your pods, keep in mind that each pod performs its own baseline copy of data at the initial stretch, and
each pod performs its own resync in the event of any replication disruption. Therefore the pods with the least amount of
data will baseline the soonest and recover the quickest; all volumes on the offline side are unavailable to hosts until the
entire pod has completed its resync. Consider devoting an entire pod to just one workload, the most important workload,
to minimize the time needed for recovery.

When planning, also keep in mind that for Purity versions 5.0.x and 5.1.x there is an array limit of 1500 stretched volumes.
There is no per-pod limit; the 1500 volumes can be distributed in any way across the six pods.

Additional ActiveCluster Resources


ActiveCluster Solution Overview

Synchronous Replication for FlashArrays TR-170101

Business Continuity Re-Imagined - Accelerate 2017 Session on ActiveCluster

Lightboard Explanation of ActiveCluster

ActiveCluster Demo - long version

ActiveCluster Demo - short version

Simplifying Oracle® High Availability and Disaster Recovery with Pure Storage® Purity ActiveCluster

©2018 Copyright Pure Storage. All rights reserved.
