ActiveCluster Requirements and Best Practices
All upgrades to Purity 5.0.0 and ActiveCluster installs must adhere to the requirements described in the
internal KB ActiveCluster Install and Upgrade Requirements Checklist and follow the ActiveCluster
Upgrade and Activation Approval Process.
FlashArray Requirements
• Purity 5.0.0 or greater must be installed
• ActiveCluster supports FlashArray models FA450, m20, m50, m70, and x70; FlashArray m10 is supported for PoC only.
• Replication network round-trip latency cannot exceed 5 ms
• Four IP addresses are required for the replication interfaces per array
• Both management interfaces on each controller (ct0.eth0, ct0.eth1, ct1.eth0, and ct1.eth1) must be configured on
both arrays, with links enabled and active.
• 5 management IP addresses are required (vir0, ct0.eth0, ct0.eth1, ct1.eth0, ct1.eth1) per array.
• Hosts connected to ActiveCluster must be using HBA firmware and HBA drivers that are no more than 2 years old
• It is strongly recommended that arrays be no more than 80% full before enabling ActiveCluster; contact support if
an array exceeds this threshold (see the CLI pre-check sketch after this list).
• If upgrading to Purity 5.0.0 and using VMware Site Recovery Manager (SRM) or VMware vRealize Operations
Manager (vROPS) with FlashArray, the following plugins must be upgraded to these minimum versions. Until the
plugins are upgraded, both the SRM and vROPS products may be unable to function with the Pure Storage FlashArray.
◦ vROPS plugin: 1.0.152
◦ SRA/SRM plugin: 2.0.8
• If in a Non-Uniform configuration, you must enable the ESXi Host Personality if available (introduced in Purity 5.0.7
and 5.1.0)
• ActiveCluster is not compatible with VVOLs at this time
• ActiveCluster is not compatible with WFS at this time
• ActiveCluster is not compatible with Purity Run at this time
• ActiveCluster is not compatible with QOS at this time
• Upgrades from Purity 4.8.x to 5.x will require several additional hours to complete; there is no additional delay with
upgrades from 4.10 to 5.x
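As a pre-check before enabling ActiveCluster, capacity utilization and existing array connections can be reviewed from
the Purity CLI. The following is a minimal sketch, with command forms taken from the Purity 5.0 CLI:

# Review overall space utilization; the array should be below roughly 80% full
purearray list --space
# Review existing array connections, including connection type and status
purearray list --connect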
The image below represents the logical paths that exist between the hosts and arrays, and the replication connection
between the two arrays in a uniform access model. In uniform environments, a setting called preferred-array can be
applied to the hosts to ensure the host uses the most optimal paths in the environment. This type of configuration is
used when there is some latency between the arrays as it prevents hosts from performing reads over the longer
distance connections between sites.
The image below represents a uniform storage configuration in which the preferred-array option has not been set.
This means that the hosts will treat all paths equally, or according to whatever path selection policy the host is
configured to use. This type of configuration may be used when the latency between the arrays is negligible, or the
distance is short enough that any impact to performance is tolerable, such as between two racks in the same datacenter.
For a non-uniform environment, such as the one represented in the image below, configuration changes should be
minimal. Each host has connections only to the local array at that host's location. In this environment, the
preferred-array setting is not necessary. This configuration is used when host-to-array connectivity does not exist
between sites.
In uniform environments, the hosts will have long distance connections between arrays. A number of long-distance
technologies exist to transport iSCSI or Fibre Channel over those distances. It's important to emphasize that redundant
paths remain essential for long distance connections.
For high availability rack deployments, long distance connections and long-distance technologies such as DWDM are
not relevant. In these configurations, where only a single switch might be used, the common best practice of
crisscrossing host connections between controllers still applies despite the absence of long-distance technologies.
Note that each fabric switch carries its own long-distance link in this uniform example:
In a non-uniform environment, the zoning and connections for hosts remain redundant without the long distance links:
The preferred-array setting specifies which array a host should use to perform read and write operations by setting the
ALUA path state to Active/Non-Optimized for paths to the remote array. This keeps reads on local paths for better
performance.
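The preferred array is set per host object on the array. A minimal Purity CLI sketch, assuming a host object named
host01 and a local array named arrayA (both names are illustrative):

# Mark arrayA as the preferred array for host01; paths presented to this host
# from the other array become ALUA Active/Non-Optimized
purehost setattr --preferred-array arrayA host01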
As ActiveCluster takes advantage of multipathing, it's critical to adhere to Pure Storage Best Practices for your specific
operating system. Each supported operating system section below links to the relevant Best Practices KB, and any
significant notes regarding multipath are called out in each section.
VMware
• VMware ESXi versions 5.5, 6.0, and 6.5 are supported for use with ActiveCluster.
• In addition to the ActiveCluster requirements and best practices below, all of the standard VMware Best Practices
should be followed.
• VMware Path Selection Policy should be Round Robin (see the CLI sketch at the end of this section).
◦ ESXi 6.5 Update 1 will automatically configure devices with the Round Robin path policy.
• If in a Non-Uniform configuration, you must enable the ESXi Host Personality.
If using VMware RDM devices with MSCS clustered VMs, the RDM device may fail to connect to an MSCS
cluster node and the MSCS configuration validation wizard may show an error for the SCSI-3 validation test.
To resolve this issue, see VMware KB: Adding an RDM LUN to the second node in the MSCS cluster fails.
• If performing frequent VM clones, such as automated provisioning of desktops from golden images, then the clone
source VM should be stored in the same pod as the target VMs.
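A minimal sketch of how the Round Robin path policy and the ESXi host personality noted above might be applied,
using the ESXi shell and the Purity CLI. The host object name host01 is illustrative, and the iops=1 option is the
commonly cited FlashArray setting; confirm both against the VMware Best Practices KB referenced above:

# ESXi: claim rule so newly presented Pure FlashArray devices default to Round Robin
# (not needed on ESXi 6.5 Update 1 and later, which applies Round Robin automatically)
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

# Purity CLI: enable the ESXi host personality for a non-uniform configuration
# (available in Purity 5.0.7 and 5.1.0 or later)
purehost setattr --personality esxi host01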
Windows Server
• Microsoft DSM version 6.3.9600.18592 is required for Windows Server 2012 R2.
◦ See MS HotFix https://support.microsoft.com/en-us/help/3046101
• Path verification must be enabled with a Path Verify Period of 30 seconds to assure rediscovery of active/
non-optimized paths (see the sketch below).
◦ Configure the path verification retry period to 30 seconds in the DSM details on each LUN as noted in
the Configuring Multipath-IO KB article.
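One way to apply the path verification settings is with the Windows MPIO PowerShell module; the values below mirror
the requirements above and are a sketch, not a substitute for the Configuring Multipath-IO KB article:

# Enable path verification and set the verification period to 30 seconds
Set-MPIOSetting -NewPathVerificationState Enabled -NewPathVerificationPeriod 30
# Confirm the resulting MPIO settings
Get-MPIOSetting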
Linux
• In addition to the ActiveCluster requirements and best practices below, all of the standard Linux Best
Practices should be followed.
• Use the configuration settings mentioned in Linux Recommended Settings, with the following changes:
◦ The path_grouping_policy attribute is set to "multibus" in the standard recommendations; change it to
group_by_prio as seen in the multipath.conf example below. This configures the host to group paths based on
ALUA priority.
◦ Add attribute hardware_handler "1 alua" - configures the host to use ALUA for the storage device.
◦ Add attribute prio alua - configures the host to recognize path priorities based on ALUA.
◦ Add attribute failback immediate - configures the host to automatically move IO back to optimized paths
when they become available.
defaults {
polling_interval 10
}
devices {
device {
vendor "PURE"
path_selector "queue-length 0"
path_grouping_policy group_by_prio
path_checker tur
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0
hardware_handler "1 alua"
prio alua
failback immediate
}
}
Remember that changes to the multipath.conf file require a restart of the multipathd service:
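For example, on a systemd-based distribution:

systemctl restart multipathd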
Management Network
Before ActiveCluster, we recommended a single management interface on each controller. This provided resiliency
while still minimizing the deployment overhead for network configuration.
For ActiveCluster, we require a fully redundant network configuration for eth0 and eth1 - including cross connections to
multiple switches, as illustrated below. This ensures that any single network failure does not prevent the array from
contacting the mediator in the event of a significant failure at the remote site.
• Use both management ports so each controller is resilient to unexpected management network incidents.
◦ It is not required that vir1 be configured.
• Connect each management port to one of two network switches.
• Verify that eth0 connects to Switch A and eth1 connects to Switch B on CT0. On CT1, verify that eth0 connects to
Switch B and eth1 connects to Switch A (see the CLI sketch below).
• Hostnames: *.cloud-support.purestorage.com (used to reach the cloud mediator).
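A minimal Purity CLI sketch for reviewing and enabling the management interfaces described above (the IP values are
placeholders, and the flag names are taken from the Purity 5.0 CLI):

# Review current interface configuration and link state
purenetwork list
# Configure and enable the secondary management port on controller 0 (example values)
purenetwork setattr --address 192.0.2.11 --netmask 255.255.255.0 --gateway 192.0.2.1 ct0.eth1
purenetwork enable ct0.eth1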
Replication Network
ActiveCluster requires that every replication port on one array has network access to every replication port on the other
array. This can be done by ensuring that redundant switches are interconnected, by routing, or by ensuring that the
long-distance Ethernet infrastructure allows a switch in one site to connect to both switches in the other site.
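Once the replication ports can all reach one another, the arrays are connected for synchronous replication. A minimal
Purity CLI sketch; the peer management address is a placeholder and the exact flag spellings are assumptions taken
from the Purity 5.0 CLI reference:

# On the peer array, display the connection key to be supplied during the connect step
purearray list --connection-key
# On the local array, connect to the peer for synchronous replication
# (the connection key is requested interactively)
purearray connect --management-address 192.0.2.20 --type sync-replication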
ActiveCluster Planning
ActiveCluster in Purity 5.0.x supports up to 6 pods. Some consideration and planning are recommended to make the
best use of the six available pods. For starters, only unstretched pods can have volumes moved into (or out of) them,
so avoid stretching the pods until the final decisions are made as to how many pods are to be used and what they will
contain (a CLI sketch of this workflow follows).
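A minimal Purity CLI sketch of that workflow, assuming a pod named pod1, an existing volume named vol1, and a peer
array named arrayB (all names are illustrative; syntax from the Purity 5.0 CLI):

# Create an unstretched pod and move an existing volume into it
purepod create pod1
purevol move vol1 pod1
# Only after the pod contents are finalized, stretch the pod to the peer array
purepod add --array arrayB pod1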
When planning your pods, keep in mind that each pod performs its own baseline of data at the initial stretch, and each
pod performs its own resync in the event of any replication disruption. Therefore, the pods with the least amount of
data will baseline the soonest and recover the quickest; all volumes on the offline side are unavailable to hosts until the
entire pod has completed resync. Consider devoting an entire pod to just one workload, the most important workload,
to minimize the time needed for recovery.
When planning, keep in mind that for Purity versions 5.0.x and 5.1.x, there is an array limit of 1,500 stretched volumes.
There is no per-pod limit; the 1,500 stretched volumes can be distributed in any way across the six pods.