
Dell EMC VMAX All Flash Product Guide

VMAX 250F, 450F, 850F, 950F with HYPERMAX OS

February 2021
Rev. 16
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2018 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents

Figures..........................................................................................................................................6

Tables........................................................................................................................................... 7
Preface.........................................................................................................................................................................................8
Revision history.................................................................................................................................................................. 14

Chapter 1: VMAX All Flash with HYPERMAX OS........................................................................... 16


Introduction to VMAX All Flash with HYPERMAX OS.............................................................................................. 16
VMAX All Flash hardware specifications................................................................................................................ 18
Software packages ...........................................................................................................................................................19
HYPERMAX OS................................................................................................................................................................. 20
HYPERMAX OS emulations....................................................................................................................................... 21
Container applications ...............................................................................................................................................22
Data protection and integrity................................................................................................................................... 25
Inline compression....................................................................................................................................................... 30

Chapter 2: Management Interfaces.............................................................................................. 31


Management interface versions..................................................................................................................................... 31
Unisphere for VMAX......................................................................................................................................................... 31
Workload Planner.........................................................................................................................................................32
FAST Array Advisor.....................................................................................................................................................32
Unisphere 360.................................................................................................................................................................... 32
Solutions Enabler............................................................................................................................................................... 32
Mainframe Enablers.......................................................................................................................................................... 33
Geographically Dispersed Disaster Restart (GDDR).................................................................................................33
SMI-S Provider.................................................................................................................................................................. 34
VASA Provider................................................................................................................................................................... 34
eNAS management interface ........................................................................................................................................ 34
Storage Resource Management (SRM)...................................................................................................................... 34
vStorage APIs for Array Integration............................................................................................................................. 35
SRDF Adapter for VMware vCenter Site Recovery Manager................................................................................35
SRDF/Cluster Enabler .................................................................................................................................................... 35
Product Suite for z/TPF................................................................................................................................................. 35
SRDF/TimeFinder Manager for IBM i.......................................................................................................................... 36
AppSync...............................................................................................................................................................................36

Chapter 3: Open Systems Features............................................................................................. 38


HYPERMAX OS support for open systems.................................................................................................................38
Backup and restore using PowerProtect Storage Direct and Data Domain....................................................... 38
Backup............................................................................................................................................................................38
Restore...........................................................................................................................................................................39
Storage Direct agents................................................................................................................................................ 40
Features used for Storage Direct backup and restore.......................................................................................40
Storage Direct and traditional backup....................................................................................................................40
More information......................................................................................................................................................... 40

VMware Virtual Volumes.................................................................................................................................................. 41
vVol components..........................................................................................................................................................41
vVol scalability...............................................................................................................................................................41
vVol workflow...............................................................................................................................................................42

Chapter 4: Mainframe Features................................................................................................... 43


HYPERMAX OS support for mainframe.......................................................................................................................43
IBM Z Systems functionality support........................................................................................................................... 43
Global Mirror support................................................................................................................................................. 44
Transparent Cloud Tiering support......................................................................................................................... 44
IBM 2107 support.............................................................................................................................................................. 46
Logical control unit capabilities......................................................................................................................................46
Disk drive emulations........................................................................................................................................................46
Cascading configurations................................................................................................................................................ 46

Chapter 5: Provisioning............................................................................................................... 48
Thin provisioning................................................................................................................................................................48
Pre-configuration for thin provisioning.................................................................................................................. 49
Thin devices (TDEVs).................................................................................................................................................49
Thin device oversubscription....................................................................................................................................49
Open Systems-specific provisioning.......................................................................................................................50
Multi-array provisioning....................................................................................................................................................51

Chapter 6: Native local replication with TimeFinder.................................................................... 53


About TimeFinder.............................................................................................................................................................. 53
Interoperability with legacy TimeFinder products............................................................................................... 54
Targetless snapshots..................................................................................................................................................54
Secure snaps................................................................................................................................................................ 54
Provision multiple environments from a linked target........................................................................................ 55
Cascading snapshots..................................................................................................................................................55
Accessing point-in-time copies................................................................................................................................56
Mainframe SnapVX and zDP.......................................................................................................................................... 56
Snapshot policy..................................................................................................................................................................57

Chapter 7: Remote replication.....................................................................................................59


Native remote replication with SRDF...........................................................................................................................59
SRDF 2-site solutions.................................................................................................................................................60
SRDF multi-site solutions.......................................................................................................................................... 62
Interfamily compatibility.............................................................................................................................................63
SRDF device pairs....................................................................................................................................................... 63
Dynamic device personalities....................................................................................................................................67
SRDF modes of operation......................................................................................................................................... 67
SRDF groups.................................................................................................................................................................68
Director boards, links, and ports..............................................................................................................................68
SRDF consistency....................................................................................................................................................... 69
Data migration..............................................................................................................................................................69
More information......................................................................................................................................................... 70
SRDF/Metro........................................................................................................................................................................71
Deployment options..................................................................................................................................................... 71

SRDF/Metro Resilience.............................................................................................................................................. 71
Disaster recovery facilities........................................................................................................................................ 73
More information......................................................................................................................................................... 75
RecoverPoint......................................................................................................................................................................75
Remote replication using eNAS..................................................................................................................................... 75

Chapter 8: Blended local and remote replication.......................................................................... 76


Integration of SRDF and TimeFinder............................................................................................................................ 76
R1 and R2 devices in TimeFinder operations.............................................................................................................. 76
SRDF/AR.............................................................................................................................................................................76
SRDF/AR 2-site configurations................................................................................................................................77
SRDF/AR 3-site configurations............................................................................................................................... 78
TimeFinder and SRDF/A..................................................................................................................................................78
TimeFinder and SRDF/S..................................................................................................................................................79

Chapter 9: Data Migration........................................................................................................... 80


Overview............................................................................................................................................................................. 80
Data migration for open systems................................................................................................................................... 81
Non-Disruptive Migration overview.........................................................................................................................81
Open Replicator........................................................................................................................................................... 84
PowerPath Migration Enabler.................................................................................................................................. 85
Data migration using SRDF/Data Mobility............................................................................................................ 85
Space and zero-space reclamation......................................................................................................................... 86
Data migration for mainframe........................................................................................................................................ 86
Volume migration using z/OS Migrator................................................................................................................. 87
Dataset migration using z/OS Migrator.................................................................................................................87

Appendix A: Mainframe Error Reporting...................................................................................... 89


Error reporting to the mainframe host.........................................................................................................................89
SIM severity reporting..................................................................................................................................................... 89
Environmental errors.................................................................................................................................................. 90
Operator messages..................................................................................................................................................... 92

Appendix B: Licensing................................................................................................................. 94
eLicensing........................................................................................................................................................................... 94
Capacity measurements............................................................................................................................................ 95
Open systems licenses.....................................................................................................................................................96
License suites............................................................................................................................................................... 96
Individual licenses........................................................................................................................................................ 98
Ecosystem licenses..................................................................................................................................................... 99

Figures

1 VMAX All Flash scale up and out.......................................................................................................................... 17


2 D@RE architecture................................................................................................................................................. 26
3 Inline compression and over-subscription..........................................................................................................30
4 Data flow during a backup operation to Data Domain.................................................................................... 39
5 Two-site Global Mirror............................................................................................................................................44
6 TCT environment with PowerMax and DLm.....................................................................................................45
7 Auto-provisioning groups....................................................................................................................................... 51
8 SnapVX targetless snapshots...............................................................................................................................55
9 SnapVX cascaded snapshots................................................................................................................................55
10 zDP operation........................................................................................................................................................... 56
11 R1 and R2 devices....................................................................................................................................................64
12 R11 device in concurrent SRDF............................................................................................................................ 65
13 R21 device in cascaded SRDF.............................................................................................................................. 66
14 R22 devices in cascaded and concurrent SRDF/Star.................................................................................... 66
15 Migrating data and removing a secondary (R2) array.................................................................................... 70
16 SRDF/Metro.............................................................................................................................................................. 71
17 SRDF/Metro Smart DR.......................................................................................................................................... 73
18 Disaster recovery for SRDF/Metro..................................................................................................................... 74
19 SRDF/AR 2-site solution........................................................................................................................................ 77
20 SRDF/AR 3-site solution........................................................................................................................................78
21 Non-Disruptive Migration zoning..........................................................................................................................81
22 Open Replicator hot (or live) pull........................................................................................................................ 85
23 Open Replicator cold (or point-in-time) pull..................................................................................................... 85
24 z/OS volume migration.......................................................................................................................................... 87
25 z/OS Migrator dataset migration........................................................................................................................ 87
26 z/OS IEA480E acute alert error message format (call home failure).........................................................92
27 z/OS IEA480E service alert error message format (Disk Adapter failure)............................................... 93
28 z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against
unrelated resource)................................................................................................................................................. 93
29 z/OS IEA480E service alert error message format (mirror-2 resynchronization).................................. 93
30 z/OS IEA480E service alert error message format (mirror-1 resynchronization)................................... 93
31 eLicensing process.................................................................................................................................................. 94

Tables

1 Typographical conventions used in this content.............................................................................................. 13


2 Revision history......................................................................................................................................................... 14
3 Symbol legend for VMAX All Flash software features/software package................................................. 19
4 VMAX All Flash software features by model..................................................................................................... 19
5 HYPERMAX OS emulations....................................................................................................................................21
6 eManagement resource requirements................................................................................................................22
7 eNAS configurations by array............................................................................................................................... 24
8 Unisphere tasks.........................................................................................................................................................31
9 vVol architecture component management capability.................................................................................... 41
10 vVol-specific scalability........................................................................................................................................... 41
11 Logical control unit maximum values.................................................................................................................. 46
12 Maximum LPARs per port......................................................................................................................................46
13 RAID options............................................................................................................................................................. 49
14 RAID options............................................................................................................................................................. 49
15 SRDF 2-site solutions............................................................................................................................................. 60
16 SRDF multi-site solutions.......................................................................................................................................62
17 SIM severity alerts...................................................................................................................................................89
18 Environmental errors reported as SIM messages............................................................................................ 90
19 VMAX All Flash product title capacity types.................................................................................................... 95
20 VMAX All Flash license suites............................................................................................................................... 96
21 Individual licenses for open systems environment...........................................................................................98
22 Individual licenses for open systems environment.......................................................................................... 99

Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its software and hardware. Functions
that are described in this document may not be supported by all versions of the software or hardware. The product release
notes provide the most up-to-date information about product features.
Contact your Dell EMC representative if a product does not function properly or does not function as described in this
document.
NOTE: This document was accurate at publication time. New versions of this document might be released on Dell EMC
Online Support (https://www.dell.com/support/home). Check to ensure that you are using the latest version of this
document.

Purpose
This document introduces the features of the VMAX All Flash 250F, 450F, 850F, 950F arrays running HYPERMAX OS 5977.

Audience
This document is intended for use by customers and Dell EMC representatives.

Related documentation
The following documentation portfolios contain documents related to the hardware platform and manuals needed to manage
your software and storage system configuration. Also listed are documents for external components that interact with the
VMAX All Flash array.
Hardware platform documents:

Dell EMC VMAX All Flash Site Planning Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS: Provides planning information regarding the purchase and installation of a VMAX 250F, 450F, 850F, 950F with HYPERMAX OS.

Dell EMC VMAX Best Practices Guide for AC Power Connections: Describes the best practices to assure fault-tolerant power to a VMAX3 Family array or VMAX All Flash array.

Dell EMC VMAX Power-down/Power-up Procedure: Describes how to power down and power up a VMAX3 Family array or VMAX All Flash array.

Dell EMC VMAX Securing Kit Installation Guide: Describes how to install the securing kit on a VMAX3 Family array or VMAX All Flash array.

E-Lab™ Interoperability Navigator (ELN): Provides a web-based interoperability and solution search portal. You can find the ELN at https://elabnavigator.EMC.com.

Unisphere documents:

EMC Unisphere for VMAX Release Notes: Describes new features and any known limitations for Unisphere for VMAX.

EMC Unisphere for VMAX Installation Guide: Provides installation instructions for Unisphere for VMAX.

EMC Unisphere for VMAX Online Help: Describes the Unisphere for VMAX concepts and functions.

EMC Unisphere for VMAX Performance Viewer Installation Guide: Provides installation instructions for Unisphere for VMAX Performance Viewer.

EMC Unisphere for VMAX Database Storage Analyzer Online Help: Describes the Unisphere for VMAX Database Storage Analyzer concepts and functions.

EMC Unisphere 360 for VMAX Release Notes: Describes new features and any known limitations for Unisphere 360 for VMAX.

EMC Unisphere 360 for VMAX Installation Guide: Provides installation instructions for Unisphere 360 for VMAX.

EMC Unisphere 360 for VMAX Online Help: Describes the Unisphere 360 for VMAX concepts and functions.

Solutions Enabler documents:

Dell EMC Solutions Enabler, VSS Provider, and SMI-S Provider Release Notes: Describes new features and any known limitations.

Dell EMC Solutions Enabler Installation and Configuration Guide: Provides host-specific installation instructions.

Dell EMC Solutions Enabler CLI Reference Guide: Documents the SYMCLI commands, daemons, error codes, and option file parameters provided with the Solutions Enabler man pages.

Dell EMC Solutions Enabler Array Controls and Management CLI User Guide: Describes how to configure array control, management, and migration operations using SYMCLI commands for arrays running HYPERMAX OS and PowerMaxOS.

Dell EMC Solutions Enabler Array Controls and Management CLI User Guide: Describes how to configure array control, management, and migration operations using SYMCLI commands for arrays running Enginuity.

Dell EMC Solutions Enabler SRDF Family CLI User Guide: Describes how to configure and manage SRDF environments using SYMCLI commands.

Dell EMC Solutions Enabler SRDF Family State Tables Guide: Describes the applicable pair states for various SRDF operations.

SRDF Interfamily Connectivity Information: Defines the versions of PowerMaxOS, HYPERMAX OS, and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).

Dell EMC SRDF Introduction: Provides an overview of SRDF, its uses, configurations, and terminology.

Dell EMC Solutions Enabler TimeFinder SnapVX CLI User Guide: Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI commands.

Dell EMC Solutions Enabler TimeFinder Family (Mirror, Clone, Snap, VP Snap) Version 8.2 and higher CLI User Guide: Describes how to configure and manage TimeFinder Mirror, Clone, Snap, and VP Snap environments for Enginuity and HYPERMAX OS using SYMCLI commands.

Dell EMC Solutions Enabler SRM CLI User Guide: Provides Storage Resource Management (SRM) information that is related to various data objects and data handling facilities.

Dell EMC SRDF/Metro vWitness Configuration Guide: Describes how to install, configure, and manage SRDF/Metro using vWitness.

Dell EMC Events and Alerts for PowerMax and VMAX User Guide: Documents the SYMAPI daemon messages, asynchronous errors and message events, and SYMCLI return codes, and how to configure event logging.

Embedded NAS (eNAS) documents:

EMC VMAX Embedded NAS Release Notes: Describes the new features and identifies any known functionality restrictions and performance issues that may exist in the current version.

EMC VMAX Embedded NAS Quick Start Guide: Describes how to configure eNAS on a VMAX3 or VMAX All Flash storage system.

EMC VMAX Embedded NAS File Auto Recovery with SRDF/S: Describes how to install and use EMC File Auto Recovery with SRDF/S.

Dell EMC PowerMax eNAS CLI Reference Guide: A reference for command-line users and script programmers that provides the syntax, error codes, and parameters of all eNAS commands.

PowerProtect Storage Direct documents:

Dell EMC PowerProtect Storage Direct Solutions Guide: Provides Storage Direct information that is related to various data objects and data handling facilities.

Dell EMC File System Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct File System Agent.

Dell EMC Database Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Database Application Agent.

Dell EMC Microsoft Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Microsoft Application Agent.

NOTE: ProtectPoint has been renamed to Storage Direct and it is included in PowerProtect, Data Protection Suite for
Apps, or Data Protection Suite Enterprise Software Edition.
Mainframe Enablers documents:

Dell EMC Mainframe Enablers Installation and Customization Guide: Describes how to install and configure Mainframe Enablers software.

Dell EMC Mainframe Enablers Release Notes: Describes new features and any known limitations.

Dell EMC Mainframe Enablers Message Guide: Describes the status, warning, and error messages generated by Mainframe Enablers software.

Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide: Describes how to configure VMAX system control and management using the EMC Symmetrix Control Facility (EMCSCF).

Dell EMC Mainframe Enablers AutoSwap for z/OS Product Guide: Describes how to use AutoSwap to perform automatic workload swaps between VMAX systems when the software detects a planned or unplanned outage.

Dell EMC Mainframe Enablers Consistency Groups for z/OS Product Guide: Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of data remotely copied by SRDF in the event of a rolling disaster.

Dell EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide: Describes how to use SRDF Host Component to control and monitor remote data replication processes.

Dell EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide: Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient targetless snaps.

Dell EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide: Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control and monitor local data replication processes.

Dell EMC Mainframe Enablers TimeFinder/Mirror for z/OS Product Guide: Describes how to use TimeFinder/Mirror to create Business Continuance Volumes (BCVs) which can then be established, split, reestablished, and restored from the source logical volumes for backup, restore, decision support, or application testing.

Dell EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide: Describes how to use the TimeFinder Utility to condition volumes and devices.

Geographically Dispersed Disaster Restart (GDDR) documents:

Dell EMC GDDR for SRDF/S with ConGroup Product Guide: Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/S with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/Star Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/Star with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/SQAR with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/A Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.

Dell EMC GDDR Message Guide: Describes the status, warning, and error messages generated by GDDR.

Dell EMC GDDR Release Notes: Describes new features and any known limitations.

Dell EMC GDDR for Star-A Product Guide: Describes the basic concepts of Dell EMC Geographically Dispersed Disaster Restart (GDDR), how to install it, and how to implement its major features and facilities.

z/OS Migrator documents:

Dell EMC z/OS Migrator Product Guide: Describes how to use z/OS Migrator to perform volume mirror and migrator functions as well as logical migration functions.

Dell EMC z/OS Migrator Message Guide: Describes the status, warning, and error messages generated by z/OS Migrator.

Dell EMC z/OS Migrator Release Notes: Describes new features and any known limitations.

z/TPF documents:

Dell EMC ResourcePak for z/TPF Product Guide: Describes how to configure VMAX system control and management in the z/TPF operating environment.

Dell EMC SRDF Controls for z/TPF Product Guide: Describes how to perform remote replication operations in the z/TPF operating environment.

Dell EMC TimeFinder Controls for z/TPF Product Guide: Describes how to perform local replication operations in the z/TPF operating environment.

Dell EMC z/TPF Suite Release Notes: Describes new features and any known limitations.

Typographical conventions
Dell EMC uses the following type style conventions in this document:

Table 1. Typographical conventions used in this content

Bold: Used for names of interface elements. Examples: names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user selects or clicks).

Italic: Used for full titles of publications referenced in text.

Monospace: Used for:
● System code
● System output, such as an error message or script
● Pathnames, filenames, prompts, and syntax
● Commands and options

Monospace italic: Used for variables.

Monospace bold: Used for user input.

[ ]: Square brackets enclose optional values.

| : A vertical bar indicates alternate selections. The bar means "or".

{ }: Braces enclose content that the user must specify, such as x or y or z.

... : Ellipses indicate nonessential information that is omitted from the example.

Where to get help


Dell EMC support, product, and licensing information can be obtained as follows:

Product information: Dell EMC technical support, documentation, release notes, software updates, or information about Dell EMC products can be obtained at https://www.dell.com/support/home (registration required) or https://www.dellemc.com/en-us/documentation/vmax-all-flash-family.htm.

Technical support: To open a service request through the Dell EMC Online Support (https://www.dell.com/support/home) site, you must have a valid support agreement. Contact your Dell EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Additional support options:
● Support by Product: Dell EMC offers consolidated, product-specific information on the Web at https://support.EMC.com/products. The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to Dell EMC Live Chat.
● Dell EMC Live Chat: Open a Chat or instant message session with a Dell EMC Support Engineer.

e-Licensing support: To activate your entitlements and obtain your license files, go to the Service Center on Dell EMC Online Support (https://www.dell.com/support/home). Follow the directions on your License Authorization Code (LAC) letter that is emailed to you.
● Expected functionality may be unavailable because it is not licensed. For help with missing or incorrect entitlements after activation, contact your Dell EMC Account Representative or Authorized Reseller.
● For help with any errors applying license files through Solutions Enabler, contact the Dell EMC Customer Support Center.
● Contact the Dell EMC worldwide Licensing team if you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site: [email protected]
○ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
○ EMEA: +353 (0) 21 4879862 and follow the voice prompts.

Your comments
Your suggestions help improve the accuracy, organization, and overall quality of the documentation. Send your comments and
feedback to: [email protected]

Revision history
The following table lists the revision history of this document.

Table 2. Revision history

Revision 16 (PowerMaxOS 5978.711.711): Updated with new and changed features related to the latest release of the PowerMaxOS.

Revision 15 (PowerMaxOS 5978.669.669): Updated with new and changed features related to the latest release of the PowerMaxOS.

Revision 14 (PowerMaxOS 5978.444.444): Updated with new and changed features related to the latest release of the PowerMaxOS.

Revision 13 (PowerMaxOS 5978.444.444): Revised content: updated links.

Revision 12 (PowerMaxOS 5978.444.444): Updated with new and changed features related to the latest release of the PowerMaxOS.

Revision 11 (HYPERMAX OS 5977.1125.1125): Revised content: clarified the recommended maximum distance between arrays using SRDF/S.

Revision 10 (HYPERMAX OS 5977.1125.1125): Revised content: updated system capacities.

Revision 09 (HYPERMAX OS 5977.1125.1125): Revised content: updated the section on using ProtectPoint for backup and restore operations; added hardware compression to the table of SRDF features.

Revision 08 (HYPERMAX OS 5977.1125.1125): Revised content: updated descriptions of the All Flash arrays; added PowerMax and PowerMaxOS to the SRDF chapter.

Revision 07 (HYPERMAX OS 5977.1125.1125): New content: RecoverPoint, VMAX 950F support, secure snaps, and Data at Rest Encryption.

Revision 06 (HYPERMAX OS 5977.952.892): Revised content: power consumption and heat dissipation numbers for the VMAX 250F; SRDF/Metro array witness overview.

Revision 05 (HYPERMAX OS 5977.952.892): New content: VMAX 250F support, inline compression, mainframe support, non-disruptive migration, and Virtual Witness (vWitness).

Revision 04 (HYPERMAX OS 5977.810.784): Removed the "RPQ" requirement from third-party racking.

Revision 03 (HYPERMAX OS 5977.810.784): Updated the Licensing appendix.

Revision 02 (HYPERMAX OS 5977.691.684 + Q1 2016 Service Pack): Updated values in the power and heat dissipation specification table.

Revision 01 (HYPERMAX OS 5977.691.684 + Q1 2016 Service Pack): First release of the VMAX All Flash with EMC HYPERMAX OS 5977 for VMAX 450F, 450FX, 850F, and 850FX.

1
VMAX All Flash with HYPERMAX OS
This chapter introduces VMAX All Flash systems and the HYPERMAX OS operating environment.
Topics:
• Introduction to VMAX All Flash with HYPERMAX OS
• Software packages
• HYPERMAX OS

Introduction to VMAX All Flash with HYPERMAX OS


VMAX All Flash is a range of storage arrays that use only high-density flash drives. The range contains four models that combine
high scale, low latency and rich data services:
● VMAX 250F with a maximum capacity of 1.16 PBe (Petabytes effective)
● VMAX 450F with a maximum capacity of 2.3 PBe
● VMAX 850F with a maximum capacity of 4.4 PBe
● VMAX 950F with a maximum capacity of 4.42 PBe
Each VMAX All Flash array is made up of one or more building blocks known as V-Bricks (in an open systems array) or zBricks
(in a mainframe array). A V-Brick or zBrick consists of:
● An engine with two directors (the redundant data storage processing unit)
● Flash capacity in Drive Array Enclosures (DAEs):
○ VMAX 250F: Two 25-slot DAEs with a minimum base capacity of 13 TBu
○ VMAX 450F, VMAX 850F: Two 120-slot DAEs with a minimum base capacity of 53 TBu
○ VMAX 950F (open or mixed systems): Two 120-slot DAEs with a minimum base capacity of 53 TBu
○ VMAX 950F (mainframe systems): Two 120-slot DAEs with a minimum base capacity of 13 TBu
● Multiple software packages are available: F and FX packages for open system arrays and zF and zFX for mainframe arrays.
Customers can increase the initial configuration by adding 11 TBu (250F) or 13 TBu (450F, 850F, 950F) capacity packs that
bundle all required flash capacity and software. In open system arrays, capacity packs are known as Flash capacity packs. In
mainframe arrays, they are known as zCapacity packs. In addition, customers can also scale out the initial configuration by
adding additional V-Bricks or zBricks to increase performance, connectivity, and throughput.
● VMAX 250F All Flash arrays scale from one to two V-Bricks
● VMAX 450F All Flash arrays scale from one to four V-Bricks/zBricks
● VMAX 850F/950F All Flash arrays scale from one to eight V-Bricks/zBricks
Independent and linear scaling of both capacity and performance enables VMAX All Flash to be extremely flexible at addressing
varying workloads. The following illustrates scaling opportunities for VMAX All Flash open system arrays.



[Figure content: V-Bricks of 11/53 TBu each (depending on the VMAX model) scale out, while Flash packs of 11/13 TBu scale up. Start small, get big: linear scaling of TBs and IOPS; easy to size, configure, and order.]
Figure 1. VMAX All Flash scale up and out

The All Flash arrays:


● Use the powerful Dynamic Virtual Matrix Architecture.
● Deliver high levels of performance and scale. For example, VMAX 950F arrays deliver 6.74M IOPS (RRH) with less than 0.5 ms latency at 150 GB/s bandwidth. VMAX 250F, 450F, 850F, 950F arrays deliver consistently low response times (< 0.5 ms).
● Provide mainframe (VMAX 450F, 850F, 950F) and open systems (including IBM i) host connectivity for mission-critical storage needs.
● Use the HYPERMAX OS hypervisor to provide file system storage with eNAS and embedded management services for Unisphere. Embedded Network Attached Storage (eNAS) on page 23 and Embedded Management on page 22 have more information on these features.
● Provide data services such as:
○ SRDF remote replication technology with the latest SRDF/Metro functionality
○ SnapVX local replication services based on SnapVX infrastructure (a brief SYMCLI sketch follows this list)
○ Data protection and encryption
About TimeFinder on page 53 has more information on these features.
● Use the latest Flash drive technology in V-Bricks/zBricks and capacity packs to deliver a top-tier, diamond service level.
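As an illustration of the SnapVX service listed above, the following SYMCLI commands take a targetless snapshot and link it to a target storage group. This is a minimal sketch: the SID, storage group names, and snapshot name are hypothetical placeholders, and available options vary by Solutions Enabler version.

    # Take a targetless snapshot of a storage group (SID and names are hypothetical)
    symsnapvx -sid 0123 -sg ProdSG -name DailySnap establish

    # Present the point-in-time copy to another host by linking it to a target storage group
    symsnapvx -sid 0123 -sg ProdSG -snapshot_name DailySnap -lnsg DevSG link

    # Verify the snapshot and its link state
    symsnapvx -sid 0123 -sg ProdSG list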



VMAX All Flash hardware specifications
Detailed specifications of the VMAX All Flash hardware, including capacity, cache memory, I/O protocols, and I/O connections
are available at https://www.emc.com/collateral/specification-sheet/h16051-vmax-all-flash-250f-950f-ss.pdf



Software packages
VMAX All Flash arrays are available with multiple software packages (F/FX for open system arrays, and zF/zFX for mainframe
arrays) containing standard and optional features.

Table 3. Symbol legend for VMAX All Flash software features/software package
Standard feature with that model/software package. Optional feature with that model/software package.

Table 4. VMAX All Flash software features by model

Models and software packages: VMAX 250F (F, FX); VMAX 450F (F, FX, zF, zFX); VMAX 850F, 950F (F, FX, zF, zFX).

Software/Feature (with cross-reference):
● HYPERMAX OS: see HYPERMAX OS on page 20
● Embedded Management a: see Management Interfaces on page 31
● Mainframe Essentials Plus: see Mainframe Features on page 43
● SnapVX: see About TimeFinder on page 53
● AppSync Starter Pack: see AppSync on page 36
● Compression: see Inline compression on page 30
● Non-Disruptive Migration: see Non-Disruptive Migration overview on page 81
● SRDF: see Remote replication on page 59
● SRDF/Metro: see SRDF/Metro on page 71
● Embedded Network Attached Storage (eNAS): see Embedded Network Attached Storage (eNAS) on page 23
● Unisphere 360: see Unisphere 360 on page 32
● SRM: see Storage Resource Management (SRM) on page 34
● Data at Rest Encryption (D@RE): see Data at Rest Encryption on page 25
● PowerPath b: see PowerPath Migration Enabler on page 85
● AppSync Full Suite: see AppSync on page 36
● ProtectPoint: see Storage Direct agents on page 40
● AutoSwap and zDP: see Mainframe SnapVX and zDP on page 56
● GDDR: see Geographically Dispersed Disaster Restart (GDDR) on page 33

a. eManagement includes: embedded Unisphere, Solutions Enabler, and SMI-S.
b. The FX package includes 75 PowerPath licenses. Additional licenses are available separately.


HYPERMAX OS
This section highlights the features of the HYPERMAX OS.



HYPERMAX OS emulations
HYPERMAX OS provides emulations (executables) that perform specific data service and control functions in the HYPERMAX
environment. The following table lists the available emulations.

Table 5. HYPERMAX OS emulations

Back-end:
● DS: Back-end connection in the array that communicates with the drives; DS is also known as an internal drive controller. Protocol and speed a: SAS, 12 Gb/s (VMAX 250F); SAS, 6 Gb/s (VMAX 450F, 850F, and 950F).
● DX: Back-end connections that are not used to connect to hosts; used by ProtectPoint, which links Data Domain to the array. DX ports must be configured for the FC protocol. Protocol and speed: FC, 16 or 8 Gb/s.

Management:
● IM: Separates infrastructure tasks and emulations. By separating these tasks, emulations can focus on I/O-specific work only, while IM manages and executes common infrastructure tasks, such as environmental monitoring, Field Replacement Unit (FRU) monitoring, and vaulting. Protocol: N/A.
● ED: Middle layer used to separate front-end and back-end I/O processing. It acts as a translation layer between the front-end, which is what the host knows about, and the back-end, which is the layer that reads, writes, and communicates with physical storage in the array. Protocol: N/A.

Host connectivity:
● FA (Fibre Channel), SE (iSCSI), and EF (FICON b): Front-end emulations that receive data from the host or network and commit it to the array, and send data from the array to the host or network. Protocols and speeds: FC, 16 or 8 Gb/s; SE, 10 Gb/s; EF, 16 Gb/s.

Remote replication:
● RF (Fibre Channel): Interconnects arrays for SRDF. Speed: 8 Gb/s SRDF.
● RE (GbE): Interconnects arrays for SRDF. Speeds: 1 GbE SRDF; 10 GbE SRDF.

a. The 8 Gb/s module auto-negotiates to 2/4/8 Gb/s and the 16 Gb/s module auto-negotiates to 16/8/4 Gb/s using optical SFP and OM2/OM3/OM4 cabling.
b. Only on VMAX 450F, 850F, and 950F arrays.



Container applications
HYPERMAX OS provides an open application platform for running data services. It includes a lightweight hypervisor that enables
multiple operating environments to run as virtual machines on the storage array.
Application containers are virtual machines that provide embedded applications on the storage array. Each container virtualizes
the hardware resources that are required by the embedded application, including:
● Hardware needed to run the software and embedded application (processor, memory, PCI devices, power management)
● VM ports, to which LUNs are provisioned
● Access to necessary drives (boot, root, swap, persist, shared)

Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S, Unisphere for VMAX) on the
storage array, enabling you to manage the array without requiring a dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To manage multiple storage arrays
with a single control pane, use the traditional host-based management interfaces: Unisphere and Solutions Enabler. To this end,
eManagement allows you to link-and-launch a host-based instance of Unisphere.
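For the multi-array case, a host-based Solutions Enabler instance discovers and inventories every array its host can see. The SYMCLI sketch below assumes Solutions Enabler is installed on the management host; the SID is a hypothetical placeholder.

    # Discover all arrays visible to this management host
    symcfg discover

    # List the arrays, then query one of them in detail
    symcfg list
    symcfg list -sid 0123 -v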
eManagement is typically preconfigured and enabled at the factory. However, starting with HYPERMAX OS 5977.945.890,
eManagement can be added to arrays in the field. Contact your support representative for more information.
Embedded applications require system memory. The following table lists the amount of memory unavailable to other data
services.

Table 6. eManagement resource requirements

● VMAX 250F: 4 CPUs, 16 GB memory, 200K devices supported
● VMAX 450F: 4 CPUs, 16 GB memory, 200K devices supported
● VMAX 850F, 950F: 4 CPUs, 20 GB memory, 400K devices supported

Virtual machine ports


Virtual machine (VM) ports are associated with virtual machines to avoid contention with physical connectivity. VM ports are
addressed as ports 32-63 on each director FA emulation.
LUNs are provisioned on VM ports using the same methods as provisioning physical ports.
A VM port can be mapped to one VM only. However, a VM can be mapped to multiple ports.



Embedded Network Attached Storage (eNAS)
eNAS is fully integrated into the VMAX All Flash array. eNAS provides flexible and secure multi-protocol file sharing (NFS 2.0,
3.0, 4.0/4.1, and CIFS/SMB 3.0) and multiple file server identities (CIFS and NFS servers). eNAS enables:
● File server consolidation/multi-tenancy
● Built-in asynchronous file level remote replication (File Replicator)
● Built-in Network Data Management Protocol (NDMP)
● VDM Synchronous replication with SRDF/S and optional automatic failover manager File Auto Recovery (FAR)
● Anti-virus
eNAS provides file data services for:
● Consolidating block and file storage in one infrastructure
● Eliminating the gateway hardware, reducing complexity and costs
● Simplifying management
Consolidated block and file storage reduces costs and complexity while increasing business agility. Customers can leverage data
services across block and file storage including storage provisioning, dynamic Host I/O Limits, and Data at Rest Encryption.

eNAS solutions and implementation


eNAS runs on standard array hardware and is typically pre-configured at the factory. The factory pre-configuration includes a one-off setup of the Control Station and Data Movers, containers, control devices, and the required masking views. However, starting with HYPERMAX OS 5977.945.890, eNAS can be added to arrays in the field; additional front-end I/O modules are required to implement eNAS. Contact your support representative for more information.
eNAS uses the HYPERMAX OS hypervisor to create virtual instances of NAS Data Movers and Control Stations on
VMAX All Flash controllers. Control Stations and Data Movers are distributed within the VMAX All Flash array based upon
the number of engines and their associated mirrored pair.
By default, VMAX All Flash arrays have:
● Two Control Station virtual machines
● Two or more Data Mover virtual machines. The number of Data Movers differs for each model of the array. All configurations
include one standby Data Mover.



eNAS configurations
The storage capacity required for arrays supporting eNAS is at least 680 GB. This table lists eNAS configurations and front-end
I/O modules.

Table 7. eNAS configurations by array

Component                   Description           VMAX 250F   VMAX 450F   VMAX 850F, 950F
Data Movers a (virtual      Maximum number        4           4           8 b
machines)                   Max capacity/DM       512 TB      512 TB      512 TB
                            Logical cores c       12/24       12/24       16/32/48/64 b
                            Memory (GB) c         48/96       48/96       48/96/144/192 b
                            I/O modules (max) c   12          12 d        24 d
Control Station virtual     Logical cores         2           2           2
machines (2)                Memory (GB)           8           8           8
NAS capacity/array          Maximum               1.15 PB     1.5 PB      3.5 PB

a. Data Movers are added in pairs and must have the same configuration.
b. The 850F and 950F can be configured through Sizer with a maximum of four Data Movers. However, six and eight Data Movers can be ordered by RPQ. As the number of Data Movers increases, the maximum number of I/O cards, logical cores, memory, and maximum capacity also increases.
c. For 2, 4, 6, and 8 Data Movers, respectively.
d. A single 2-port 10 GbE optical I/O module is required by each Data Mover for initial All Flash configurations. However, that I/O module can be replaced with a different I/O module (such as a 4-port 1 GbE or 2-port 10 GbE copper) using the normal replacement capability that exists with any eNAS Data Mover I/O module. Additional I/O modules can be configured through an I/O module upgrade/add as long as standard rules are followed (no more than three I/O modules per Data Mover, and all I/O modules must occupy the same slot on each director on which a Data Mover resides).

Replication using eNAS


The replication methods available for eNAS file systems are:
● Asynchronous file system level replication using VNX Replicator for File.
● Synchronous replication with SRDF/S using File Auto Recovery (FAR) with the optional File Auto Recover Manager (FARM).
● Checkpoint (point-in-time, logical images of a production file system) creation and management using VNX SnapSure.
NOTE: SRDF/A, SRDF/Metro, and TimeFinder are not available with eNAS.



Data protection and integrity
HYPERMAX OS provides facilities to ensure data integrity and to protect data in the event of a system failure or power outage:
● RAID levels
● Data at Rest Encryption
● Data erasure
● Block CRC error checks
● Data integrity checks
● Drive monitoring and correction
● Physical memory error correction and error verification
● Drive sparing and direct member sparing
● Vault to flash

RAID levels
VMAX All Flash arrays provide the following RAID levels:
● VMAX 250F: RAID5 (7+1) (Default), RAID5 (3+1), and RAID6 (6+2)
● VMAX 450F, 850F, 950F: RAID5 (7+1), and RAID6 (14+2)

Data at Rest Encryption


Securing sensitive data is an important IT issue that has regulatory and legislative implications. Several of the most important data security threats relate to protection of the storage environment; drive loss and theft are primary risk factors. Data at Rest Encryption (D@RE) protects data by adding back-end encryption to an entire array.
D@RE provides hardware-based encryption for VMAX All Flash arrays using I/O modules that incorporate AES-XTS inline data
encryption. These modules encrypt and decrypt data as it is being written to or read from a drive. This protects your information
from unauthorized access even when drives are removed from the array.
D@RE can use either an internal embedded key manager, or one of these external, enterprise-grade key managers:
● Gemalto SafeNet KeySecure
● IBM Security Key Lifecycle Manager
D@RE accesses an external key manager using the Key Management Interoperability Protocol (KMIP). The EMC E-Lab
Interoperability Matrix (https://www.emc.com/products/interoperability/elab.htm) lists the external key managers for each
version of HYPERMAX OS.
When D@RE is active, all configured drives are encrypted, including data drives, spares, and drives with no provisioned volumes.
Vault data is encrypted on Flash I/O modules.
D@RE provides:
● Secure replacement for failed drives that cannot be erased—For some types of drive failures, data erasure is not possible.
Without D@RE, if the failed drive is repaired, data on the drive may be at risk. With D@RE, deletion of the applicable keys
makes the data on the failed drive unreadable.
● Protection against stolen drives—When a drive is removed from the array, the key stays behind, making data on the drive
unreadable.
● Faster drive sparing—The drive replacement script destroys the keys associated with the removed drive, quickly making all
data on that drive unreadable.
● Secure array retirement—Delete all copies of keys on the array, and all remaining data is unreadable.
In addition, D@RE:
● Is compatible with all array features and all supported drive types or volume emulations
● Provides encryption without degrading performance or disrupting existing applications and infrastructure

Enabling D@RE
D@RE is a licensed feature that is installed and configured at the factory. Upgrading an existing array to use D@RE is possible,
but is disruptive. The upgrade requires re-installing the array, and may involve a full data backup and restore. Before upgrading,
plan how to manage any data already on the array. Dell EMC Professional Services offers services to help you implement D@RE.



D@RE components
Embedded D@RE uses the following components, all of which reside on the primary Management Module Control Station
(MMCS):
● EMC Key Trust Platform (KTP) (embedded)—This component adds embedded key management functionality to the KMIP
Client.
● Lockbox—Hardware- and software-specific encrypted repository that securely stores passwords and other sensitive key
manager configuration information. The lockbox binds to a specific MMCS.
External D@RE uses the same components as embedded D@RE, and adds the following:
● EMC Key Trust Platform (KTP)—Also known as the KMIP Client, this component resides on the MMCS and communicates
with external key managers using the OASIS Key Management Interoperability Protocol (KMIP) to manage encryption keys.
● External Key Manager—Provides centralized encryption key management capabilities such as secure key generation,
storage, distribution, audit, and enabling Federal Information Processing Standard (FIPS) 140-2.
● Cluster/Replication Group—Multiple external key managers sharing configuration settings and encryption keys.
Configuration and key lifecycle changes made to one node are replicated to all members within the same cluster or
replication group.

Figure 2. D@RE architecture

External Key Managers


D@RE's external key management is provided by Gemalto SafeNet KeySecure and IBM Security Key Lifecycle Manager. Keys
are generated and distributed using industry standards (NIST 800-57 and ISO 11770). With D@RE, there is no need to replicate
keys across volume snapshots or remote sites. D@RE external key managers can be used with FIPS 140-2 level 1 validated
software.
Encryption keys must be highly available when they are needed, and tightly secured. Keys, and the information required to use
keys (during decryption), must be preserved for the lifetime of the data. This is critical for encrypted data that is kept for many
years.
Key accessibility is vital in high-availability environments. D@RE caches the keys locally, so a connection to the Key Manager is
necessary only for operations such as the initial installation of the array, replacement of a drive, or drive upgrades.
Lifecycle events involving keys (generation and destruction) are recorded in the array's Audit Log.

Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated password file. This password file
is secured in the Lockbox. The Lockbox is protected using MMCS-specific stable system values (SSVs) of the primary MMCS.
These are the same SSVs that protect Secure Service Credentials (SSC).
Compromising the MMCS drive or copying Lockbox and keystore files off the array causes the SSV tests to fail. Compromising
the entire MMCS only gives an attacker access if they also successfully compromise SSC.
There are no backdoor keys or passwords to bypass D@RE security.
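
The protection scheme described here follows a widely used pattern: a randomly generated password is run through a key-derivation function to produce an AES-256 key, which in turn encrypts the keystore. The following minimal Python sketch illustrates that general pattern only; the use of PBKDF2, AES-GCM, and the iteration count are assumptions for illustration, not the array's actual implementation.

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_keystore_key(password: bytes, salt: bytes) -> bytes:
        """Derive a 256-bit AES key from a password (PBKDF2 is an assumption)."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                         iterations=600_000)
        return kdf.derive(password)

    # A randomly generated "password file", as described above.
    password = os.urandom(32)
    salt = os.urandom(16)
    key = derive_keystore_key(password, salt)

    # Encrypt the keystore contents with the derived AES-256 key.
    keystore_plaintext = b"wrapped DEKs and key metadata"
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, keystore_plaintext, None)

    # Without the password file (secured in the Lockbox), the keystore is unreadable.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == keystore_plaintext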

Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each physical drive in the array, including spare drives. To
ensure that D@RE uses the correct key for a given drive:
● DEKs stored in the array include a unique key tag and key metadata, which are included with the key material when the DEK
is wrapped (encrypted) for use in the array.



● During encryption I/O, the expected key tag associated with the drive is supplied separately from the wrapped key.
● During key unwrap, the encryption hardware checks that the key unwrapped correctly and that it matches the supplied key
tag.
● Information in a reserved system LBA (Physical Information Block, or PHIB) verifies the key used to encrypt the drive and
ensures the drive is in the correct location.
● During initialization, the hardware performs self-tests to ensure that the encryption/decryption logic is intact. The self-test
prevents silent data corruption due to encryption hardware failures.

Audit logs
The audit log records major activities on an array, including:
● Host-initiated actions
● Physical component changes
● Actions on the MMCS
● D@RE key management events
● Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof, so event contents cannot be altered. Users with Auditor access can view, but not
modify, the log.

Data erasure
Dell EMC Data Erasure uses specialized software to erase information on arrays. It mitigates the risk of information
dissemination, and helps secure information at the end of the information lifecycle. Data erasure:
● Protects data from unauthorized access
● Ensures secure data migration by making data on the source array unreadable
● Supports compliance with internal policies and regulatory requirements
Data Erasure overwrites data at the lowest application-addressable level to drives. The number of overwrites is configurable
from three (the default) to seven with a combination of random patterns on the selected arrays.
An optional certification service is available to provide a certificate of erasure. Drives that fail erasure are delivered to customers
for final disposal.
For individual flash drives, Secure Erase operations erase all physical flash areas on the drive which may contain user data.
The available data erasure services are:
● Dell EMC Data Erasure for Full Arrays—Overwrites data on all drives in the system when replacing, retiring or re-purposing
an array.
● Dell EMC Data Erasure/Single Drives—Overwrites data on individual drives.
● Dell EMC Disk Retention—Enables organizations that must retain all media to retain failed drives.
● Dell EMC Assessment Service for Storage Security—Assesses your information protection policies and suggests a
comprehensive security strategy.
All erasure services are performed on-site in the security of the customer’s data center and include a Data Erasure Certificate
and report of erasure results.

Block CRC error checks


HYPERMAX OS provides:
● Industry-standard, T10 Data Integrity Field (DIF) block cyclic redundancy check (CRC) for track formats.
For open systems, this enables host-generated DIF CRCs to be stored with user data by the arrays and used for end-to-end
data integrity validation.
● Additional protections for address/control fault modes for increased levels of protection against faults. These protections
are defined in user-definable blocks supported by the T10 standard.
● Address and write status information in the extra bytes in the application tag and reference tag portion of the block CRC.
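
As a conceptual illustration of the guard-tag portion of a T10 DIF block, the following Python sketch computes the standard CRC-16/T10-DIF checksum (polynomial 0x8BB7) over a block of data. The block layout in the comments is simplified for illustration; this is not the array's internal code.

    def crc16_t10_dif(data: bytes) -> int:
        """CRC-16/T10-DIF: polynomial 0x8BB7, zero init, no reflection."""
        crc = 0x0000
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    # A 512-byte data block plus an 8-byte DIF field: a 2-byte guard tag
    # (this CRC), a 2-byte application tag, and a 4-byte reference tag.
    block = bytes(512)
    guard_tag = crc16_t10_dif(block)

    # End-to-end check: recompute the CRC on read and compare with the stored tag.
    assert crc16_t10_dif(block) == guard_tag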



Data integrity checks
HYPERMAX OS validates the integrity of data at every possible point during the lifetime of that data. From the time data
enters an array, it is continuously protected by error detection metadata. This metadata is checked by hardware and software
mechanisms any time data is moved within the array. This allows the array to provide true end-to-end integrity checking and
protection against hardware or software faults.
The protection metadata is appended to the data stream, and contains information describing the expected data location as
well as the CRC representation of the actual data contents. The expected values to be found in protection metadata are stored
persistently in an area separate from the data stream. The protection metadata is used to validate the logical correctness of
data being moved within the array any time the data transitions between protocol chips, internal buffers, internal data fabric
endpoints, system cache, and system drives.

Drive monitoring and correction


HYPERMAX OS monitors medium defects by both examining the result of each disk data transfer and proactively scanning the
entire disk during idle time. If a block on the disk is determined to be bad, the director:
1. Rebuilds the data in the physical storage, if necessary.
2. Rewrites the data in physical storage, if necessary.
The director keeps track of each bad block detected on a drive. If the number of bad blocks exceeds a predefined threshold,
the array proactively invokes a sparing operation to replace the defective drive, and then alerts Customer Support to arrange for
corrective action, if necessary. With the deferred service model, immediate action is not always required.

Physical memory error correction and error verification


HYPERMAX OS corrects single-bit errors and reports an error code once the single-bit errors reach a predefined threshold.
In the unlikely event that physical memory replacement is required, the array notifies Customer Support, and a replacement is
ordered.

Drive sparing and direct member sparing


When HYPERMAX OS 5977 detects a drive is about to fail or has failed, it starts a direct member sparing (DMS) process. Direct
member sparing looks for available spares within the same engine that are of the same block size, capacity and speed, with the
best available spare always used.
With direct member sparing, the invoked spare is added as another member of the RAID group. During a drive rebuild, the option
to directly copy the data from the failing drive to the invoked spare drive is available. The failing drive is removed only when the
copy process is complete. Direct member sparing is automatically initiated upon detection of drive-error conditions.
Direct member sparing provides the following benefits:
● The array can copy the data from the failing RAID member (if available), removing the need to read the data from all of the
members and doing the rebuild. Copying to the new RAID member is less CPU intensive.
● If a failure occurs in another member, the array can still recover the data automatically from the failing member (if available).
● More than one spare for a RAID group is supported at the same time.

Vault to flash
VMAX All Flash arrays initiate a vault operation when the system is powered down or goes offline, or when environmental
conditions occur, such as the loss of a data center due to an air conditioning failure.
Each array comes with Standby Power Supply (SPS) modules. On a power loss, the array uses the SPS power to write the
system mirrored cache to flash storage. Vaulted images are fully redundant; the contents of the system mirrored cache are
saved twice to independent flash storage.

The vault operation


When a vault operation starts:



● During the save part of the vault operation, the VMAX All Flash array stops all I/O. When the system mirrored cache
reaches a consistent state, directors write the contents to the vault devices, saving two copies of the data. The array then
completes the power down, or, if power down is not required, remains in the offline state.
● During the restore part of the operation, the array's startup program initializes the hardware and the environmental system,
and restores the system mirrored cache contents from the saved data (while checking data integrity).
The system resumes normal operation when the SPS modules have sufficient charge to complete another vault operation, if
required. If any condition is not safe, the system does not resume operation and notifies Customer Support for diagnosis and
repair. This allows Customer Support to communicate with the array and restore normal system operations.

Vault configuration considerations


● To support vault to flash, the VMAX All Flash arrays require the following number of flash I/O modules:
○ VMAX 250F two to six per engine/V-Brick
○ VMAX 450F four to eight per engine/V-Brick/zBrick
○ VMAX 850F, 950F four to eight per engine/V-Brick/zBrick
● The size of the flash module is determined by the amount of system cache and metadata required for the configuration.
● The vault space is for internal use only and cannot be used for any other purpose when the system is online.
● The total capacity of all vault flash partitions is sufficient to keep two logical copies of the persistent portion of the system
mirrored cache.



Inline compression
HYPERMAX OS 5977.945.890 introduced support for inline compression on VMAX All Flash arrays. Inline compression
compresses data as it is written to flash drives.
Inline compression is a feature of storage groups. When enabled (this is the default setting), new I/O to a storage group is
compressed when written to disk, while existing data on the storage group starts to compress in the background. After turning
off compression, new I/O is no longer compressed, and existing data remains compressed until it is written again, at which time
it decompresses.
Inline compression, deduplication, and over-subscription complement each other. Over-subscription allows presenting larger
than needed devices to hosts without having the physical drives to fully allocate the space represented by the thin devices (Thin
device oversubscription on page 49 has more information on over-subscription). Inline compression further reduces the data
footprint by increasing the effective capacity of the array.
The example in Inline compression and over-subscription on page 30 shows this. Here, 1.3 PB of host attached devices
(TDEVs) is over-provisioned to 1.0 PB of back-end devices (TDATs) that reside on 1.0 PB of Flash drives. Following data
compression, the data blocks are compressed by a ratio of 2:1, reducing the number of Flash drives by half. Basically, with
compression enabled, the array requires half as many drives to support a given front-end capacity.

[Figure: two panels. Top: 1.3 PB of front-end TDEVs over-provisioned to 1.0 PB of back-end TDATs on 1.0 PB of flash drives (over-subscription ratio 1.3:1). Bottom: with a 2:1 compression ratio, the same 1.3 PB of TDEVs and 1.0 PB of TDATs require only 0.5 PB of flash drives.]
Figure 3. Inline compression and over-subscription
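
The capacity relationships in the figure reduce to simple arithmetic, sketched below in Python: over-subscription is the ratio of front-end (TDEV) capacity to back-end (TDAT) capacity, and the compression ratio divides the physical flash needed to back the TDAT capacity.

    # Capacity arithmetic behind the figure (values in PB, taken from the figure).
    tdev_capacity = 1.3        # front-end thin devices presented to hosts
    tdat_capacity = 1.0        # back-end data devices
    compression_ratio = 2.0    # 2:1 inline compression

    oversubscription = tdev_capacity / tdat_capacity    # 1.3:1
    flash_required = tdat_capacity / compression_ratio  # 0.5 PB of flash drives

    print(f"over-subscription ratio: {oversubscription:.1f}:1")
    print(f"flash required: {flash_required:.1f} PB")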

Compression is pre-configured on new VMAX All Flash arrays at the factory. Existing VMAX All Flash arrays in the field can have
compression added to them. Contact your Support Representative for more information.
Further characteristics of compression are:
● All supported data services, such as SnapVX, SRDF, and encryption, work with compression.
● Compression is available on open systems (FBA) only (including eNAS). It is not available for CKD arrays, including those
with a mix of FBA and CKD devices. Any open systems array with compression enabled cannot have CKD devices added to it.
● ProtectPoint operations are still supported to Data Domain arrays.
● Compression is switched on and off through Solutions Enabler and Unisphere (see the sketch after this list).
● Compression efficiency can be monitored for SRPs, storage groups, and volumes.
● Activity Based Compression: the most active tracks are held in cache and not compressed until they move from cache to
disk. This feature helps improve the overall performance of the array while reducing wear on the flash drives.
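
As a sketch of driving that switch from a script, the fragment below shells out to Solutions Enabler for a hypothetical storage group MySG on a hypothetical array 000197800123. The symsg "set -compression"/"set -nocompression" options shown are assumptions for illustration; verify the exact syntax against the Solutions Enabler documentation for your release.

    import subprocess

    SID = "000197800123"   # hypothetical array serial number
    SG = "MySG"            # hypothetical storage group

    def set_compression(enable: bool) -> None:
        """Toggle inline compression on a storage group through SYMCLI.

        The '-compression'/'-nocompression' options are assumptions for
        illustration; check the Solutions Enabler documentation.
        """
        option = "-compression" if enable else "-nocompression"
        subprocess.run(["symsg", "-sid", SID, "-sg", SG, "set", option],
                       check=True)

    set_compression(True)   # new writes to MySG are compressed from here on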



2
Management Interfaces
This chapter introduces the tools for managing arrays.
Topics:
• Management interface versions
• Unisphere for VMAX
• Unisphere 360
• Solutions Enabler
• Mainframe Enablers
• Geographically Dispersed Disaster Restart (GDDR)
• SMI-S Provider
• VASA Provider
• eNAS management interface
• Storage Resource Management (SRM)
• vStorage APIs for Array Integration
• SRDF Adapter for VMware vCenter Site Recovery Manager
• SRDF/Cluster Enabler
• Product Suite for z/TPF
• SRDF/TimeFinder Manager for IBM i
• AppSync

Management interface versions


The following components provide management capabilities for HYPERMAX OS 5977.1125.1125:
● Unisphere for VMAX V8.4
● Solutions Enabler V8.4
● Mainframe Enablers V8.2
● GDDR V5.0
● SMI-S V8.4
● SRDF/CE V4.2.1
● SRA V6.3
● VASA Provider V8.4

Unisphere for VMAX


Unisphere for VMAX is a web-based application that provides provisioning, management, and monitoring of arrays.
With Unisphere you can perform the following tasks:

Table 8. Unisphere tasks

Section          Allows you to:
Home             View and manage functions such as array usage, alert settings, authentication options, system preferences, user authorizations, and link and launch client registrations.
Storage          View and manage storage groups and storage tiers.
Hosts            View and manage initiators, masking views, initiator groups, array host aliases, and port groups.
Data Protection  View and manage local replication, monitor and manage replication pools, create and view device groups, and monitor and manage migration sessions.
Performance      Monitor and manage array dashboards, perform trend analysis for future capacity planning, and analyze data. Set preferences, such as general, dashboards, charts, reports, data imports, and alerts for performance management tasks.
Databases        Troubleshoot database and storage issues, and launch Database Storage Analyzer.
System           View and display dashboards, active jobs, alerts, array attributes, and licenses.
Support          View online help for Unisphere tasks.

Unisphere also has a Representational State Transfer (REST) API. With this API you can access performance and configuration
information, and provision storage arrays. You can use the API in any programming environment that supports standard REST
clients, such as web browsers and programming platforms that can issue HTTP requests.
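
As a minimal sketch of calling that API from Python, the example below lists the arrays that a Unisphere instance manages. The host name and credentials are placeholders, and the /univmax/restapi base path, port 8443, and resource path are stated as the usual defaults rather than guarantees; check the REST API documentation for your Unisphere version.

    import requests

    UNISPHERE = "https://unisphere.example.com:8443"  # placeholder host
    BASE = f"{UNISPHERE}/univmax/restapi"

    session = requests.Session()
    session.auth = ("smc", "smc")   # placeholder credentials (HTTP basic)
    session.verify = False          # lab only; use CA-signed certificates in production

    # List the storage arrays registered with this Unisphere instance
    # (resource path is illustrative).
    resp = session.get(f"{BASE}/system/symmetrix")
    resp.raise_for_status()
    print(resp.json())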

Workload Planner
Workload Planner displays performance metrics for applications. Use Workload Planner to:
● Model the impact of migrating a workload from one storage system to another.
● Model proposed new workloads.
● Assess the impact of moving one or more workloads off of a given array running HYPERMAX OS.
● Determine current and future resource shortfalls that require action to maintain the requested workloads.

FAST Array Advisor


The FAST Array Advisor wizard guides you through the steps to determine the impact on performance of migrating a workload
from one array to another.
If the wizard determines that the target array can absorb the added workload, it automatically creates all the auto-provisioning
groups required to duplicate the source workload on the target array.

Unisphere 360
Unisphere 360 is an on-premises management solution that provides a single window across arrays running HYPERMAX OS at a
single site. Use Unisphere 360 to:
● Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of Unisphere management storage
system data.
● View the system health, capacity, alerts and capacity trends for your Data Center.
● View all storage systems from all enrolled Unisphere instances in one place.
● View details on performance and capacity.
● Link and launch to Unisphere instances running V8.2 or higher.
● Manage Unisphere 360 users and configure authentication and authorization rules.
● View details of visible storage arrays, including current and target storage.

Solutions Enabler
Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your storage environment.
SYMCLI commands are invoked from a management host, either interactively on the command line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands. Configuration and status
information is maintained in a host database file, reducing the number of enquiries from the host to the arrays.
Use SYMCLI to:

● Configure array software (For example, TimeFinder, SRDF, Open Replicator)
● Monitor device configuration and status
● Perform control operations on devices and data objects
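
For example, SYMCLI commands can be run one at a time on the command line or wrapped in a script; the short Python sketch below wraps two common inquiries (symcfg list and symdev list; the array ID is hypothetical):

    import subprocess

    SID = "000197800123"  # hypothetical array serial number

    def symcli(*args: str) -> str:
        """Run a SYMCLI command on a management host and return its output."""
        result = subprocess.run(args, capture_output=True, text=True, check=True)
        return result.stdout

    print(symcli("symcfg", "list"))               # arrays visible to this host
    print(symcli("symdev", "list", "-sid", SID))  # devices on one array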
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access performance and configuration
information, and provision storage arrays. It can be used in any programming environment that supports standard REST clients,
such as web browsers and programming platforms that can issue HTTP requests.

Mainframe Enablers
The Dell EMC Mainframe Enablers are software components that allow you to monitor and manage arrays running HYPERMAX
OS in a mainframe environment:
● ResourcePak Base for z/OS
Enables communication between mainframe-based applications (provided by Dell EMC or independent software vendors)
and PowerMax/VMAX arrays.
● SRDF Host Component for z/OS
Monitors and controls SRDF processes through commands executed from a host. SRDF maintains a real-time copy of data at
the logical volume level in multiple arrays located in physically separate sites.
● Dell EMC Consistency Groups for z/OS
Ensures the consistency of data remotely copied by SRDF feature in the event of a rolling disaster.
● AutoSwap for z/OS
Handles automatic workload swaps between arrays when an unplanned outage or problem is detected.
● TimeFinder SnapVX
With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the Storage Resource Pool (SRP)
of the source device, eliminating the concepts of target devices and source/target pairing. SnapVX point-in-time copies
are accessible to the host through a link mechanism that presents the copy on another device. TimeFinder SnapVX and
HYPERMAX OS support backward compatibility to traditional TimeFinder products, including TimeFinder/Clone, TimeFinder
VP Snap, and TimeFinder/Mirror.
● Data Protector for z Systems (zDP™)
With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a granular level of application
recovery from unintended changes to data. zDP achieves this by providing automated, consistent point-in-time copies of
data from which an application-level recovery can be conducted.
● TimeFinder/Clone Mainframe Snap Facility
Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone operations involve full volumes or
datasets where the amount of data at the source is the same as the amount of data at the target. TimeFinder VP Snap
leverages clone technology to create space-efficient snaps for thin devices.
● TimeFinder/Mirror for z/OS
Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to ESTABLISH, SPLIT, RE-ESTABLISH
and RESTORE from the source logical volumes.
● TimeFinder Utility
Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging datasets. This allows BCVs to be
mounted and used.

Geographically Dispersed Disaster Restart (GDDR)


GDDR automates business recovery following both planned outages and disaster situations, including the total loss of a data
center. Using the VMAX All Flash architecture and the foundation of SRDF and TimeFinder replication families, GDDR eliminates
any single point of failure for disaster restart plans in mainframe environments. GDDR intelligence automatically adjusts disaster
restart plans based on triggered events.

GDDR does not provide replication and recovery services itself. Rather, GDDR monitors and automates the services that other
Dell EMC products and third-party products provide that are required for continuous operations or business restart. GDDR
facilitates business continuity by generating scripts that can be run on demand. For example, scripts to restart business
applications following a major data center incident, or resume replication following unplanned link outages.
Scripts are customized when invoked by an expert system that tailors the steps based on the configuration and the event that
GDDR is managing. Through automatic event detection and end-to-end automation of managed technologies, GDDR removes
human error from the recovery process and allows it to complete in the shortest time possible.
The GDDR expert system is also invoked to automatically generate planned procedures, such as moving compute operations
from one data center to another. The ability to move from scheduled DR test weekend activities to regularly scheduled data
center swaps without disrupting application workloads is the gold standard for high-availability compute operations.

SMI-S Provider
Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management.
This initiative has developed a standard management interface that resulted in a comprehensive specification (SMI-Specification
or SMI-S).
SMI-S defines the open storage management interface, to enable the interoperability of storage management technologies from
multiple vendors. These technologies are used to monitor and control storage resources in multivendor or SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the SMI-S Provider installation.

VASA Provider
The VASA Provider enables VMAX All Flash management software to inform vCenter of how VMDK storage, including vVols,
is configured and protected. These capabilities are defined by Dell EMC and include characteristics such as disk type, type
of provisioning, storage tiering and remote replication status. This allows vSphere administrators to make quick and informed
decisions about virtual machine placement. VASA offers the ability for vSphere administrators to complement their use of
plugins and other tools to track how devices hosting vVols are configured to meet performance and availability needs. Details
about VASA Provider replication groups can be viewed on the Unisphere vVols dashboard.

eNAS management interface


You manage eNAS block and file storage using the Unisphere File Dashboard. Link and launch enables you to run the block and
file management GUI within the same session.
The configuration wizard helps you create storage groups (automatically provisioned to the Data Movers) quickly and easily.
Creating a storage group creates a storage pool in Unisphere that can be used for file level provisioning tasks.

Storage Resource Management (SRM)


SRM provides comprehensive monitoring, reporting, and analysis for heterogeneous block, file, and virtualized storage
environments.
Use SRM to:
● Visualize applications to storage dependencies
● Monitor and analyze configurations and capacity growth
● Optimize your environment to improve return on investment
Virtualization enables businesses to simplify management, control costs, and guarantee uptime. However, virtualized
environments also add layers of complexity to the IT infrastructure that reduce visibility and can complicate the management
of storage resources. SRM addresses these layers by providing visibility into the physical and virtual relationships to ensure
consistent service levels.
As you build out a cloud infrastructure, SRM helps you ensure storage service levels while optimizing IT resources — both key
attributes of a successful cloud deployment.
SRM is designed for use in heterogeneous environments containing multi-vendor networks, hosts, and storage devices. The
information it collects and the functionality it manages can reside on technologically disparate devices in geographically diverse

locations. SRM moves a step beyond storage management and provides a platform for cross-domain correlation of device
information and resource topology, and enables a broader view of your storage environment and enterprise data center.
SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net. The Watch4net dashboard
view displays information to support decisions regarding storage capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across multiple locations. It gives a quick
overview of the overall capacity status in the environment, raw capacity usage, usable capacity, used capacity by purpose,
usable capacity by pools, and service levels.

vStorage APIs for Array Integration


VMware vStorage APIs for Array Integration (VAAI) optimize server performance by offloading virtual machine operations to
arrays running HYPERMAX OS.
The storage array performs the selected storage tasks, freeing host resources for application processing and other tasks.
In VMware environments, storage arrays support the following VAAI components:
● Full Copy — (Hardware Accelerated Copy) Faster virtual machine deployments, clones, snapshots, and VMware Storage
vMotion® operations by offloading replication to the storage array.
● Block Zero — (Hardware Accelerated Zeroing) Initializes file system block and virtual drive space more rapidly.
● Hardware-Assisted Locking — (Atomic Test and Set) Enables more efficient meta data updates and assists virtual desktop
deployments.
● UNMAP — Enables more efficient space usage for virtual machines by reclaiming space on datastores that is unused and
returns it to the thin provisioning pool from which it was originally drawn.
● VMware vSphere Storage APIs for Storage Awareness (VASA).
VAAI is native in HYPERMAX OS and does not require additional software, unless eNAS is also implemented. If eNAS is
implemented on the array, support for VAAI requires the VAAI plug-in for NAS. The plug-in is available from the Dell EMC
support website.

SRDF Adapter for VMware vCenter Site Recovery Manager
Dell EMC SRDF Adapter is a Storage Replication Adapter (SRA) that extends the disaster restart management functionality of
VMware vCenter Site Recovery Manager 5.x to arrays running HYPERMAX OS.
SRA allows Site Recovery Manager to automate storage-based disaster restart operations on storage arrays in an SRDF
configuration.

SRDF/Cluster Enabler
Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover clusters functionality. Cluster Enabler
enables Windows Server 2012 (including R2) Standard and Datacenter editions running Microsoft Failover Clusters to operate
across multiple connected storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to Dell EMC Cluster Enabler for Microsoft Failover Clusters
software. The Cluster Enabler plug-in architecture consists of a CE base module component and separately available plug-in
modules, which provide your chosen storage replication technology.
SRDF/CE supports:
● Synchronous and asynchronous mode (SRDF modes of operation on page 67 summarizes these modes)
● Concurrent and cascaded SRDF configurations (SRDF multi-site solutions on page 62 summarizes these configurations)

Product Suite for z/TPF


The Dell EMC Product Suite for z/TPF is a suite of components that monitor and manage arrays running HYPERMAX OS from
a z/TPF host. z/TPF is an IBM mainframe operating system characterized by high-volume transaction rates with significant

communications content. The following software components are distributed separately and can be installed individually or in
any combination:
● SRDF Controls for z/TPF
Monitors and controls SRDF processes with functional entries entered at the z/TPF Prime CRAS (computer room agent
set).
● TimeFinder Controls for z/TPF
Provides a business continuance solution consisting of TimeFinder SnapVX, TimeFinder/Clone, and TimeFinder/Mirror.
● ResourcePak for z/TPF
Provides PowerMax and VMAX configuration and statistical reporting and extended features for SRDF Controls for z/TPF
and TimeFinder Controls for z/TPF.

SRDF/TimeFinder Manager for IBM i


Dell EMC SRDF/TimeFinder Manager for IBM i is a set of host-based utilities that provides an IBM i interface to SRDF and
TimeFinder.
This feature allows you to configure and control SRDF or TimeFinder operations on arrays attached to IBM i hosts, including:
● SRDF: Configure, establish, and split SRDF devices, including:
○ SRDF/A
○ SRDF/S
○ Concurrent SRDF/A
○ Concurrent SRDF/S
● TimeFinder:
○ Create point-in-time copies of full volumes or individual data sets.
○ Create point-in-time snapshots of images.

Extended features
SRDF/TimeFinder Manager for IBM i extended features provide support for the IBM independent ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/offline on an IBM i host
without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or TimeFinder operations on arrays
attached to IBM i hosts, including:
● Display and assign TimeFinder SnapVX devices.
● Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
● Present one or more target devices containing an IASP image to another host for business continuance (BC) processes.
Extended features control operations can be accessed:
● From the SRDF/TimeFinder Manager menu-driven interface.
● From the command line using SRDF/TimeFinder Manager commands and associated IBM i commands.

AppSync
Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft
and Oracle applications and VMware environments. After defining service plans, application owners can protect, restore, and
clone production data quickly with item-level granularity by using the underlying Dell EMC replication technologies. AppSync also
provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
● Applications—Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware vStorage VMFS and NFS datastores and File
systems.
● Replication Technologies—SRDF, SnapVX, RecoverPoint, XtremIO Snapshot, VNX Advanced Snapshots, VNXe Unified
Snapshot, and ViPR Snapshot.

NOTE: For VMAX All Flash arrays, AppSync is available in a starter bundle. The AppSync Starter Bundle provides the license
for a scale-limited, yet fully functional version of AppSync. For more information, see the AppSync Starter Bundle with
VMAX All Flash Product Brief available on the Dell EMC Online Support Website.

3
Open Systems Features
This chapter introduces the open systems features of VMAX All Flash arrays.
Topics:
• HYPERMAX OS support for open systems
• Backup and restore using PowerProtect Storage Direct and Data Domain
• VMware Virtual Volumes

HYPERMAX OS support for open systems


HYPERMAX OS provides FBA device emulations for open systems and D910 for IBM i.
Any logical device manager software installed on a host can be used with the storage devices.
HYPERMAX OS increases scalability limits from previous generations of arrays, including:
● Maximum device size is 64 TB
● Maximum host addressable devices is 64,000 for each array
● Maximum storage groups, port groups, initiator groups, and masking views is 16,000 for each object type for each array
● Maximum devices addressable through each port is 4,000
HYPERMAX OS does not support meta devices, which makes these device-count limits much more difficult to reach.

Open Systems-specific provisioning on page 50 has more information on provisioning storage in an open systems environment.
The Dell EMC Support Matrix in the E-Lab Interoperability Navigator at http://elabnavigator.emc.com has the most recent
information on HYPERMAX open systems capabilities.

Backup and restore using PowerProtect Storage Direct and Data Domain
Dell EMC Storage Direct provides data backup and restore facilities for a VMAX All Flash array. A remote Data Domain array
stores the backup copies of the data.
Storage Direct uses existing features of the VMAX All Flash and Data Domain arrays to create backup copies and to restore
backed up data if necessary. There is no need for any specialized or additional hardware and software.
This section is a high-level summary of Storage Direct backup and restore facilities. It also shows where to get detailed
information about the product, including instructions on how to configure and manage it.

Backup
A LUN is the basic unit of backup in Storage Direct. For each LUN, Storage Direct creates a backup image on the Data Domain
array. You can group backup images to create a backup set. One use of the backup set is to capture all the data for an
application as a point-in-time image.

Backup process
To create a backup of a LUN, Storage Direct:
1. Uses SnapVX to create a local snapshot of the LUN on the VMAX All Flash array (the primary storage array).



After the snapshot is created, Storage Direct and the application can proceed independently of each other, and the backup
process has no further impact on the application.
2. Copies the snapshot to a vdisk on the Data Domain array where it is deduplicated and cataloged.
On the primary storage array, the vdisk appears as a FAST.X encapsulated LUN. The copy of the snapshot to the vdisk uses
existing SnapVX link copy and VMAX All Flash destaging technologies.

When the vdisk contains all the data for the LUN, Data Domain converts the data into a static image. This image then has
metadata added to it and Data Domain catalogs the resultant backup image.

Figure 4. Data flow during a backup operation to Data Domain

Incremental data copy


The first time that Storage Direct backs up a LUN, it takes a complete copy of its contents using a SnapVX snapshot. While
taking this snapshot, the application assigned to the LUN is paused for a short time. This ensures that Storage Direct has a
copy of the LUN that is application consistent. To create the first backup image of the LUN, Storage Direct copies the entire
snapshot to the Data Domain array.
For each subsequent backup of the LUN, Storage Direct copies only those parts of the LUN that have changed. This makes best
use of the communication links and minimizes the time that is required to create the backup.

Restore
Storage Direct provides two forms of data restore:
● Object level restore from a selected backup image
● Full application rollback restore

Object level restore


For an object level restore, Data Domain puts the static image from the selected backup image on a vdisk. As with the backup
process, this vdisk on the Data Domain array appears as a FAST.X encapsulated LUN on the VMAX All Flash array. The
administrator can now mount the file system of the encapsulated LUN, and restore one or more objects to their final destination.



Full application rollback restore
In a full application rollback restore, all the static images in a selected backup set are made available as vdisks on the Data
Domain array and available as FAST.X encapsulated LUNs on the VMAX All Flash array. From there, the administrator can
restore data from the encapsulated LUNs to their original devices.

Storage Direct agents


Storage Direct has three agents, each responsible for backing up and restoring a specific type of data:
● File system agent: Provides facilities to back up, manage, and restore application LUNs.
● Database application agent: Provides facilities to back up, manage, and restore DB2 databases, Oracle databases, or SAP with Oracle database data.
● Microsoft application agent: Provides facilities to back up, manage, and restore Microsoft Exchange and Microsoft SQL Server databases.

Features used for Storage Direct backup and restore


Storage Direct uses existing features of HYPERMAX OS and Data Domain to provide backup and restore services:
● HYPERMAX OS:
○ SnapVX
○ FAST.X encapsulated devices
● Data Domain:
○ Block services for Storage Direct
○ vdisk services
○ FastCopy

Storage Direct and traditional backup


The Storage Direct workflow can provide data protection in situations where more traditional approaches cannot successfully
meet the business requirements. This is often due to small or nonexistent backup windows, demanding recovery time objective
(RTO) or recovery point objective (RPO) requirements, or a combination of both.
Unlike traditional backup and recovery, Storage Direct does not rely on a separate process to identify the data that must be
backed up and additional actions to move that data to backup storage. Instead of using dedicated hardware, host, and network
resources, Storage Direct uses existing application and storage capabilities to create point-in-time copies of large datasets. The
copies are transported across a storage area network (SAN) to Data Domain systems to protect the copies while providing
deduplication to maximize storage efficiency.
Storage Direct minimizes the time that is required to protect large datasets, and allows backups to fit into the smallest of
backup windows to meet demanding RTO or RPO requirements.

More information
More information about Storage Direct, its components, how to configure them, and how to use them is available in:
● PowerProtect Storage Direct Solutions Guide
● File System Agent Installation and Administration Guide
● Database Application Agent Installation and Administration Guide
● Microsoft Application Agent Installation and Administration Guide



VMware Virtual Volumes
VMware Virtual Volumes (vVols) are a storage object developed by VMware to simplify management and provisioning in
virtualized environments.
With vVols, the management process moves from the LUN (data store) to the virtual machine (VM). This level of detail allows
VMware and cloud administrators to assign specific storage attributes to each VM, according to its performance and storage
requirements. Storage arrays running HYPERMAX OS implement vVols.
PowerMax 5978.669.669, and later, can remotely replicate vVols for disaster recovery purposes using SRDF/Asynchronous
(SRDF/A).
For more information about vVols, see Dell EMC SRDF Introduction and Dell EMC VASA Provider and Embedded VASA Provider
for PowerMax Product Guide.

vVol components
To support management capabilities of vVols, the storage/vCenter environment requires the following:
● EMC VMAX VASA Provider – The VASA Provider (VP) is a software plug-in that uses a set of out-of-band management
APIs (VASA version 2.0). The VASA Provider exports storage array capabilities and presents them to vSphere through
the VASA APIs. vVols are managed by way of vSphere through the VASA Provider APIs (create/delete) and not with the
Unisphere for VMAX user interface or Solutions Enabler CLI. After vVols are setup on the array, Unisphere and Solutions
Enabler only support vVol monitoring and reporting.
● Storage Containers (SC)—Storage containers are chunks of physical storage used to logically group vVols. SCs are based
on the grouping of Virtual Machine Disks (VMDKs) into specific Service Levels. SC capacity is limited only by hardware
capacity. At least one SC per storage system is required, but multiple SCs per array are allowed. SCs are created and
managed on the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support management of SCs.
● Protocol Endpoints (PE)—Protocol endpoints are the access points from the hosts to the array. PEs are compliant with FC
and replace the use of LUNs and mount points. vVols are "bound" to a PE, and the bind and unbind operations are managed
through the VP APIs, not with the Solutions Enabler CLI. Existing multi-path policies and NFS topology requirements can be
applied to the PE. PEs are created and managed on the array by the Storage Administrator. Unisphere and Solutions Enabler
CLI support management of PEs.

Table 9. vVol architecture component management capability

Functionality                                           Component
vVol device management (create, delete)                 VASA Provider APIs / Solutions Enabler APIs
vVol bind management (bind, unbind)                     VASA Provider APIs / Solutions Enabler APIs
Protocol Endpoint device management (create, delete)    Unisphere/Solutions Enabler CLI
Protocol Endpoint-vVol reporting (list, show)           Unisphere/Solutions Enabler CLI
Storage Container management (create, delete, modify)   Unisphere/Solutions Enabler CLI
Storage Container reporting (list, show)                Unisphere/Solutions Enabler CLI

vVol scalability
The vVol scalability limits are:

Table 10. vVol-specific scalability

Requirement                                     Value
Number of vVols/Array                           64,000
Number of Snapshots/Virtual Machine a           12
Number of Storage Containers/Array              16
Number of Protocol Endpoints/Array              1/ESXi host
Maximum number of Protocol Endpoints/Array      1,024
Number of arrays supported/VP                   1
Number of vCenters/VP                           2
Maximum device size                             16 TB

a. vVol Snapshots are managed through vSphere only. You cannot use Unisphere or Solutions Enabler to create them.

vVol workflow

Requirements
Install and configure these applications:
● Unisphere for VMAX V8.2 or later
● Solutions Enabler CLI V8.2 or later
● VASA Provider V8.2 or later
Instructions for installing Unisphere and Solutions Enabler are in their respective installation guides. Instructions on installing the
VASA Provider are in the Dell EMC PowerMax VASA Provider Release Notes.

Procedure
The creation of a vVol-based virtual machine involves both the storage administrator and the VMware administrator:

Storage administrator: Uses Unisphere or Solutions Enabler to create the storage and present it to the VMware environment:
1. Create one or more storage containers on the storage array.
   This step defines how much storage, and from which service level, the VMware user can provision.
2. Create Protocol Endpoints and provision them to the ESXi hosts.

VMware administrator: Uses the vSphere Web Client to deploy the VM on the storage array:
1. Add the VASA Provider to the vCenter.
   This allows vCenter to communicate with the storage array.
2. Create a vVol datastore from the storage container.
3. Create the VM storage policies.
4. Create the VM in the vVol datastore, selecting one of the VM storage policies.



4
Mainframe Features
This chapter introduces the mainframe-specific features of VMAX All Flash arrays.
Topics:
• HYPERMAX OS support for mainframe
• IBM Z Systems functionality support
• IBM 2107 support
• Logical control unit capabilities
• Disk drive emulations
• Cascading configurations

HYPERMAX OS support for mainframe


VMAX 450F, 850F, and 950F arrays can be ordered with the zF and zFX software packages to support mainframe.
VMAX All Flash arrays provide the following mainframe features:
● Mixed FBA and CKD drive configurations.
● Support for 64, 128, and 256 FICON single- and multi-mode ports, respectively.
● Support for CKD 3380/3390 and FBA devices.
● Mainframe (FICON) and OS FC/iSCSI connectivity.
● High capacity flash drives.
● Up to 16 Gb/s FICON host connectivity.
● Support for Forward Error Correction, Query Host Access, and FICON Dynamic Routing.
● T10 DIF protection for CKD data along the data path (in cache and on disk) to improve performance for multi-record
operations.
● D@RE external key managers. Data at Rest Encryption on page 25 provides more information on D@RE and external key
managers.

IBM Z Systems functionality support


VMAX All Flash arrays support the latest IBM Z Systems enhancements, ensuring that the array can handle the most demanding
mainframe environments:
● zHPF, including support for single track, multi track, List Prefetch, bi-directional transfers, QSAM/BSAM access, and Format
Writes
● zHyperWrite
● Non-Disruptive State Save (NDSS)
● Compatible Native Flash (Flash Copy)
● Remote Pair Flash Copy (RPFC)
● Concurrent Copy
● Multi-subsystem Imaging
● Parallel Access Volumes (PAV)
● Dynamic Channel Management (DCM)
● Dynamic Parallel Access Volumes/Multiple Allegiance (PAV/MA)
● Peer-to-Peer Remote Copy (PPRC) SoftFence
● Extended Address Volumes (EAV)
● Persistent IU Pacing (Extended Distance FICON)
● HyperPAV
● PDS Search Assist
● Modified Indirect Data Address Word (MIDAW)

● Multiple Allegiance (MA)
● Sequential Data Striping
● Multi-Path Lock Facility
● Product Suite for z/TPF
● HyperSwap
● Global Mirror
● Transparent Cloud Tiering

Global Mirror support


Global Mirror is an IBM solution for long-distance replication that can be run through Geographically Dispersed Parallel Sysplex
(GDPS) automation.
PowerMax (and VMAX) supports a two-site Global Mirror environment. A two-site Global Mirror configuration has Peer-to-Peer
Remote Copy (PPRC) primary devices at Site A and PPRC secondary devices at Site B. There is also a relationship from the
PPRC secondary devices to a third set of devices at Site B that is not usually active.

NOTE: Use of Global Mirror requires an RPQ.

Global Mirror requirements and restrictions are as follows:


● Global Mirror is supported on PowerMax 8000 and VMAX 950F arrays.
● Both PPRC primary and secondary sites run PowerMaxOS 5978.669.669, or later.
● One Global Mirror session is associated with one SRDF group.
● Up to 31 Global Mirror sessions can be run simultaneously.
● When multiple Global Mirror sessions are running on an array, each session maps to a unique SRDF group.
● All devices in the SRDF group are defined to the Global Mirror session.
NOTE: To add or remove devices from the Global Mirror session, stop or pause Global Mirror.

NOTE: A VMAX All Flash array can participate in a z/OS Global Mirror (XRC) configuration only as a secondary.

Figure 5. Two-site Global Mirror

Transparent Cloud Tiering support


PowerMax (or VMAX) can be used in an IBM Transparent Cloud Tiering (TCT) environment to store DASD files in, or retrieve
them from, the cloud with minimal CPU requirements on the z/OS host.
PowerMax TCT support uses the Dell EMC Disk Library for mainframe (DLm) as its 'cloud'. Count Key Data (CKD) extents are
stored as files on standard 3590 tape volumes. The DLm 3590 tape volumes and the DLm tape drives for TCT are separate from
any z/OS-defined tape volumes and tape drives. TCT 3590 tapes are not accessible to, or managed by, z/OS. Tape volumes are

written to the DLm from the PowerMax over a FICON connection. DLm then stores the data on any backend storage that DLm
supports. Optionally, the DLm Long-Term Retention feature can then be used, independent of TCT, to move the data to a Dell
EMC Elastic Cloud Storage (ECS) solution.
A cloud object store is required for TCT support to operate. This cloud must support the OpenStack SWIFT protocol and is used
to store cloud metadata. If ECS is used as the cloud, the ECS can be the same or a different ECS from any ECS deployed with
the DLm.
A REST API proxy server is required in each z/OS image accessing a TCT-enabled PowerMax. This proxy server runs as a
separate address space in z/OS.
The ResourcePak Base for z/OS Product Guide provides additional information about TCT support, including TCT support
requirements and restrictions. It also discusses how to set up and run the Dell EMC REST API proxy and the Dell EMC REST API
utility.

NOTE: Use of TCT in PowerMaxOS 5978 requires an RPQ.

PowerMax TCT environment

The PowerMax TCT environment includes the following components:


● z/OS host with DFSMS and Dell EMC REST API proxy:
○ The Data Facility Storage Management Subsystem (DFSMS) provides space management and data movement tools that
are primary users of TCT services.
○ Dell EMC REST API proxy provides REST API services for DFSMS.
The REST API proxy requires a connection to a cloud service. The cloud service can be the same ECS cloud that DLm
uses or another supported service (SWIFT protocol is required).
● PowerMax with TCT support—A PowerMax mainframe host adapter runs the Cloud Services Stack. The Cloud Services
Stack writes data to and reads data from DLm virtual tape volumes.
● Disk Library for mainframe (DLm)—The TCT-enabled PowerMax stores data in a DLm, which acts as a cloud for PowerMax.
DLm can be connected to a cloud provider, such as ECS.
The following figure illustrates a TCT environment with PowerMax and DLm.

Figure 6. TCT environment with PowerMax and DLm

IBM 2107 support
When VMAX All Flash arrays emulate an IBM 2107, they externally represent the array serial number as an alphanumeric value in order to be compatible with IBM command output. Internally, the arrays retain a numeric serial number for IBM 2107 emulations. HYPERMAX OS handles the correlation between the alphanumeric and numeric serial numbers.

Logical control unit capabilities


The following table lists logical control unit (LCU) maximum values:

Table 11. Logical control unit maximum values

  Capability                                                  Maximum value
  LCUs per director slice (or port)                           255 (within the range of 00 to FE)
  LCUs per split (a)                                          255
  Splits per array                                            16 (0 to 15)
  Devices per split                                           65,280
  LCUs per array                                              512
  Devices per LCU                                             256
  Logical paths per port                                      2,048
  Logical paths per LCU per port (see Table 12)               128
  Array system host addresses per array (base and alias)      64K
  I/O host connections per array engine                       32

  a. A split is a logical partition of the storage array, identified by unique devices, SSIDs, and host serial number. The maximum storage array host address per array is inclusive of all splits.

The following table lists the maximum LPARs per port based on the number of LCUs with active paths:

Table 12. Maximum LPARs per port

  LCUs with active paths per port    Maximum volumes supported per port    Array maximum LPARs per port
  16                                 4K                                    128
  32                                 8K                                    64
  64                                 16K                                   32
  128                                32K                                   16
  255                                64K                                   8

Disk drive emulations


When VMAX All Flash arrays are configured to mainframe hosts, the data recording format is Extended CKD (ECKD). The
supported CKD emulations are 3380 and 3390.

Cascading configurations
Cascading configurations greatly enhance FICON connectivity between local and remote sites by using switch-to-switch
extensions of the CPU to the FICON network. These cascaded switches communicate over long distances using a small number

of high-speed lines called interswitch links (ISLs). A maximum of two switches may be connected together within a path
between the CPU and the storage array.
A cascaded configuration requires that both switches are from the same vendor. To support cascading, each switch vendor requires
specific models, hardware features, software features, configuration settings, and restrictions. Specific IBM CPU models,
operating system release levels, host hardware, and HYPERMAX OS levels are also required.
The Dell EMC Support Matrix, available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.emc.com has
the most up-to-date information on switch support.

Chapter 5: Provisioning
This chapter introduces storage provisioning.
Topics:
• Thin provisioning
• Multi-array provisioning

Thin provisioning
VMAX All Flash arrays are configured in the factory with thin provisioning pools ready for use. Thin provisioning improves
capacity utilization and simplifies storage management. It also enables storage to be allocated and accessed on demand from a
pool of storage that services one or many applications. LUNs can be “grown” over time as space is added to the data pool with
no impact to the host or application. Data is widely striped across physical storage (drives) to deliver better performance than
standard provisioning.
NOTE: Data devices (TDATs) are pre-configured at the factory, while the host-addressable storage devices (TDEVs) are created by either the customer or customer support, depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
● Enabling more storage to be presented to a host than is physically consumed
● Allocating storage only as needed from a shared thin provisioning pool
● Making data layout easier through automated wide striping
● Reducing the steps required to accommodate growth
Thin provisioning allows you to:
● Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler
● Add the TDEVs to a storage group
● Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default Storage Resource Pool.
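
As a minimal sketch of this workflow with Solutions Enabler, the following commands create TDEVs and place them in a storage group. The array ID (001), storage group name, and device count and size are illustrative assumptions, not values from this guide:

# Create an empty storage group (names and array ID are hypothetical)
symsg -sid 001 create StorageGroup1

# Create four 100 GB host-addressable thin devices and add them to the group
symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV, sg=StorageGroup1;" commit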

Pre-configuration for thin provisioning
VMAX All Flash arrays are custom-built and pre-configured with array-based software applications, including a factory pre-
configuration for thin provisioning that includes:
● Data devices (TDAT) — an internal device that provides physical storage used by thin devices.
● Virtual provisioning pool — a collection of data devices of identical emulation and protection type, all of which reside on
drives of the same technology type and speed. The drives in a data pool are from the same disk group.
● Disk group— a collection of physical drives within the array that share the same drive technology and capacity. RAID
protection options are configured at the disk group level. Dell Technologies strongly recommends that you use one or more
of the RAID data protection schemes for all data devices.

Table 13. RAID options

RAID 5
  Provides distributed parity and striped data across all drives in the RAID group. Options include:
  ● RAID 5 (3 + 1) — Consists of four drives with parity and data striped across each device.
  ● RAID 5 (7 + 1) — Consists of eight drives with parity and data striped across each device.
  Configuration considerations:
  ● RAID 5 (3 + 1) provides 75% data storage capacity. Only available with VMAX 250F arrays.
  ● RAID 5 (7 + 1) provides 87.5% data storage capacity.
  ● Withstands failure of a single drive within the RAID 5 group.

RAID 6
  Provides striped drives with double distributed parity (horizontal and diagonal); the highest level of availability. Options include:
  ● RAID 6 (6 + 2) — Consists of eight drives with dual parity and data striped across each device.
  ● RAID 6 (14 + 2) — Consists of 16 drives with dual parity and data striped across each device.
  Configuration considerations:
  ● RAID 6 (6 + 2) provides 75% data storage capacity. Only available with VMAX 250F arrays.
  ● RAID 6 (14 + 2) provides 87.5% data storage capacity.
  ● Withstands failure of two drives within the RAID 6 group.

● Storage Resource Pools — one (default) Storage Resource Pool is pre-configured on the array. This process is automatic
and requires no setup. You cannot modify Storage Resource Pools, but you can list and display their configuration. You can
also generate reports detailing the demand storage groups are placing on the Storage Resource Pools.

Thin devices (TDEVs)


NOTE: On VMAX All Flash arrays the thin device is the only device type for front end devices.

Thin devices (TDEVs) have no storage allocated until the first write is issued to the device. Instead, the array allocates only a
minimum allotment of physical storage from the pool, and maps that storage to a region of the thin device including the area
targeted by the write.
These initial minimum allocations are performed in units called thin device extents. Each extent for a thin device is 1 track (128
KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data device to which the thin
device extent is allocated. Reading an area of a thin device that has not been mapped does not trigger allocation operations.
Reading an unmapped block returns a block in which each byte is equal to zero.
When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage
groups.

Thin device oversubscription


A thin device can be presented for host use before mapping all of the reported capacity of the device.

The sum of the reported capacities of the thin devices using a given pool can exceed the available storage capacity of the pool.
Thin devices whose capacity exceeds that of their associated pool are "oversubscribed".
Oversubscription allows presenting larger-than-needed devices to hosts and applications without having the physical drives to
fully allocate the space represented by the thin devices.

Open Systems-specific provisioning

HYPERMAX host I/O limits for open systems


On open systems, you can define host I/O limits and associate a limit with a storage group. The I/O limit definitions contain the
operating parameters of the I/O per second or bandwidth limitations.
When an I/O limit is associated with a storage group, the limit is divided equally among all the directors in the masking view that
is associated with the storage group. All devices in that storage group share that limit.
When applications are configured, you can associate the limits with storage groups that contain a list of devices. A single
storage group can only be associated with one limit, and a device can only be in one storage group that has limits associated.
There can be up to 4096 host I/O limits.
Consider the following when using host I/O limits:
● Cascaded host I/O limits control both parent and child storage group limits in a cascaded storage group configuration.
● Quota redistribution for offline and failed directors keeps all available quota usable, rather than losing the quota allocations of offline and failed directors.
● Dynamic host I/O limits support dynamic redistribution of steady-state unused director quota.
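
As an illustrative sketch, the following Solutions Enabler commands associate a host I/O limit with a storage group. The array ID, group name, and limit values are assumptions for the example:

# Cap the storage group at 500 MB/s and 10,000 IOPS (hypothetical values)
symsg -sid 001 -sg StorageGroup1 set -bw_max 500
symsg -sid 001 -sg StorageGroup1 set -iops_max 10000

# Remove the bandwidth limit again
symsg -sid 001 -sg StorageGroup1 set -bw_max NONE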

Initiator bandwidth limits


Slow SAN drain is a congestion issue caused by slow-drain devices: host bus adapters (HBAs) that have a lower link speed (bandwidth) than the storage array SLIC speed. This issue can lead to severe fabric-wide performance degradation, but it can be mitigated by configuring per-initiator bandwidth limits.
At the host group level, you can configure initiator-based host I/O (bandwidth) limits. To set the limits, there must be Fibre Channel connectivity to the host, and at least one initiator configured on the host.

Auto-provisioning groups on open systems


You can auto-provision groups on open systems to reduce complexity, execution time, labor cost, and the risk of error.
Auto-provisioning groups enable users to group initiators, front-end ports, and devices together, and to build masking views
that associate the devices with the ports and initiators.
When a masking view is created, the necessary mapping and masking operations are performed automatically to provision
storage.
After a masking view exists, any changes to its grouping of initiators, ports, or storage devices automatically propagate
throughout the view, automatically updating the mapping and masking as required.
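
As a minimal sketch of this workflow with Solutions Enabler, the commands below build each group and the masking view. The array ID, names, WWN, and director ports are hypothetical:

# Create an initiator group and add a host HBA WWN (hypothetical)
symaccess -sid 001 -name MyHost_IG -type initiator create
symaccess -sid 001 -name MyHost_IG -type initiator -wwn 10000000c9876543 add

# Create a port group containing two front-end director ports
symaccess -sid 001 -name MyHost_PG -type port -dirport 1D:4,2D:4 create

# Create the masking view; mapping and masking happen automatically
symaccess -sid 001 create view -name MyHost_MV -sg StorageGroup1 -pg MyHost_PG -ig MyHost_IG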

Components of an auto-provisioning group


Figure 7. Auto-provisioning groups

Initiator group
A logical grouping of Fibre Channel initiators. An initiator group is either a parent group, which can
contain other initiator groups, or a child group, which contains initiators. Mixing initiators and child
group names in the same group is not supported.
Port group
A logical grouping of Fibre Channel front-end director ports. A port group can contain up to 32 ports.
Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the storage group
when the masking view is created, whether the group is cascaded or standalone. Often there is a correlation
between a storage group and a host application. One or more storage groups may be assigned to
an application to simplify management of the system. Storage groups can also be shared among
applications.
Cascaded storage group
A parent storage group comprised of multiple storage groups (parent storage group members) that
contain child storage groups comprised of devices. By assigning child storage groups to the parent
storage group members and applying the masking view to the parent storage group, the masking view
inherits all devices in the corresponding child storage groups.
Masking view
An association between one initiator group, one port group, and one storage group. When a masking
view is created and a group within the view is a parent, the contents of its children are used: the
initiators from the child initiator groups and the devices from the child storage groups. Depending on
the server and application requirements, each server or group of servers may have one or more masking
views that associate a set of thin devices to an application, server, or cluster of servers.

Multi-array provisioning
The multi-array Provisioning Storage wizard simplifies the task of identifying the optimal target array and provisioning storage on
that array.
Unisphere for PowerMax 9.2 provides a system-level provisioning launch point that takes array-independent inputs (storage
group name, device count and size, and (optionally) response time target or initiator filter), selects ports that are based on

current utilization and port group best practices, and returns component impact scores for all locally connected arrays running
HYPERMAX OS 5977 or PowerMaxOS 5978.
You can also select a provisioning template and provision new storage using the wizard. Storage group capacity information
and response time targets that are already part of the provisioning template are populated when the wizard opens. The most
suitable ports (based on specified options) are selected and a list of all locally connected arrays (V3 and higher) are returned.
The list is sorted by the impact of the new workload on the target arrays.
Host I/O limits (quotas) can be used to limit the amount of Front End (FE) Bandwidth and I/O operations per second (IOPS)
that can be consumed by a set of storage volumes over a set of director ports. Host I/O limits are defined as storage group
attributes – the maximum bandwidth (in MB per second) and the maximum IOPS. The Host I/O limit for a storage group can be
either active or inactive.

Chapter 6: Native local replication with TimeFinder
This chapter introduces the local replication features.
Topics:
• About TimeFinder
• Mainframe SnapVX and zDP
• Snapshot policy

About TimeFinder
Dell EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse
refreshes, or any other process that requires parallel access to production data.
Previous VMAX families offered multiple TimeFinder products, each with their own characteristics and use cases. These
traditional products required a target volume to retain snapshot or clone data.
HYPERMAX OS introduces TimeFinder SnapVX which provides the best aspects of the traditional TimeFinder offerings
combined with increased scalability and ease-of-use.
TimeFinder SnapVX emulates the following legacy replication products:
● FBA devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder VP Snap
● Mainframe (CKD) devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder/Snap
○ Dell EMC Dataset Snap
○ IBM FlashCopy (Full Volume and Extent Level)
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
● For snapshots, this is done by using redirect on write technology (ROW).
● For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource Pool of the source device -
sharing tracks between snapshot versions and also with the source device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256 snapshots per volume. Each
snapshot can have a name and an automatic expiration date.

Access to snapshots
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a target volume). Target volumes
are standard VMAX All Flash TDEVs. Up to 1024 target volumes can be linked to the snapshots of the source volumes. The 1024
links can all be to the same snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots
from the same source volume. However, a target volume may be linked only to one snapshot at a time.
Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked targets. There is no limit to the
number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
● NoCopy Mode (Default): SnapVX does not copy data to the linked target volume but still makes the point-in-time image
accessible through pointers to the snapshot. The target device is modifiable and retains the full image in a space-efficient
manner even after unlinking from the point-in-time.
● Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the linked target volume. This
creates a complete copy of the point-in-time image that remains available after the target is unlinked.



If an application needs to find a particular point-in-time copy among a large set of snapshots, SnapVX enables you to link and
relink until the correct snapshot is located.
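
As an illustrative sketch, the following Solutions Enabler commands link a snapshot to a target storage group and then relink the same targets to a different generation. The array ID, group names, and generation number are assumptions:

# Link the snapshot to target devices in the default NoCopy mode
symsnapvx -sid 001 -sg StorageGroup1 -lnsg TargetSG -snapshot_name sg1_snap link

# Move the same targets to an older point-in-time while searching
symsnapvx -sid 001 -sg StorageGroup1 -lnsg TargetSG -snapshot_name sg1_snap -generation 2 relink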

Interoperability with legacy TimeFinder products


TimeFinder SnapVX and HYPERMAX OS emulate legacy TimeFinder and IBM FlashCopy replication products to provide
backwards compatibility. You can run your legacy replication scripts and jobs on VMAX All Flash arrays running TimeFinder
SnapVX and HYPERMAX OS without altering them.
Arrays that run PowerMaxOS 5978.444.444 and later enable coexistence and interoperability of SnapVX with legacy TimeFinder
products. On such an array, a device can simultaneously be the source of a SnapVX operation and the source of one of these
legacy TimeFinder products:
● TimeFinder/Clone
● TimeFinder/Mirror
● TimeFinder VP Snap
The target device of a legacy TimeFinder product cannot be the source device for SnapVX. Similarly, the target device of
SnapVX cannot be the source device for a legacy TimeFinder product.
Uses for the coexistence of SnapVX with legacy TimeFinder products include:
● A site wants to keep its current, legacy configuration in place while trying out SnapVX.
● Moving to SnapVX may require the deletion of existing legacy sessions and that violates local business policies.
NOTE: Coexistence of SnapVX and legacy TimeFinder products is not available when the source of a SnapVX session is
undergoing a restore operation.

Targetless snapshots
With the TimeFinder SnapVX management interfaces you can take a snapshot of an entire VMAX All Flash storage group using
a single command. VMAX All Flash supports up to 64K storage groups, enough even in the most demanding environments to
provide one for each application. The storage group construct already exists in most cases, as storage groups are created for
masking views. TimeFinder SnapVX uses this existing structure, reducing the administration required to maintain the application
and its replication environment.
Creation of SnapVX snapshots does not require preconfiguration of extra volumes. In turn, this reduces the amount of cache
that SnapVX snapshots use and simplifies implementation. Snapshot creation and automatic termination can easily be scripted.
The following Solutions Enabler example creates a snapshot with a 2-day retention period. The command can be scheduled to
run as part of a script to create multiple versions of the snapshot. Each snapshot shares tracks where possible with the other
snapshots and the source devices. Use a cron job or scheduler to run the snapshot script on a schedule to create up to 256
snapshots of the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
If a restore operation is required, any of the snapshots created by this example can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The snapshot data is preserved
during the restore process and can be used again should the snapshot data be required for a future restore.
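
As a hedged example of that restore workflow (the generation number and names are assumptions), the restore and its subsequent termination might look like this:

# Restore the source devices from an older snapshot generation
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap -generation 2 restore

# Once restored, end the restore session; the snapshot itself is preserved
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap -generation 2 terminate -restored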

Secure snaps
Secure snaps prevent administrators or other high-level users from deleting snapshot data, intentionally or otherwise. Secure
snaps are also immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data
Pointer (RDP) space on the array.
When the administrator creates a secure snapshot, they assign it an expiration date and time. The administrator can express
the expiration either as a delta from the current date or as an absolute date. Once the expiration date passes, and if the
snapshot has no links, HYPERMAX OS automatically deletes the snapshot. Before its expiration, administrators can only extend
the expiration date; they cannot shorten the date or delete the snapshot. If a secure snapshot expires, and it has a volume
linked to it, or an active restore session, the snapshot is not deleted. However, it is no longer considered secure.
NOTE: Secure snapshots may only be terminated after they expire or by customer-authorized Dell EMC support. Refer to
Knowledgebase article 498316 for more information.
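
A minimal sketch of creating a secure snapshot with a three-day expiration follows; the array ID and names are assumptions. Remember that the expiration of a secure snapshot can later be extended, but never shortened:

# Create a secure snapshot that expires in 3 days (names are hypothetical)
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_secure establish -secure -delta 3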



Provision multiple environments from a linked target
Use SnapVX to create multiple test and development environments using linked snapshots. To access a point-in-time copy,
create a link from the snapshot data to a host mapped target device.
Each linked storage group can access the same snapshot, or each can access a different snapshot version in either no copy or
copy mode. Changes to the linked volumes do not affect the snapshot data. To roll back a test or development environment to
the original snapshot image, perform a relink operation.

Figure 8. SnapVX targetless snapshots

NOTE: Unmount target volumes before issuing the relink command to ensure that the host operating system does not
cache any filesystem data. If accessing through VPLEX, ensure that you follow the procedure outlined in the technical note
VPLEX: Leveraging Array Based and Native Copy Technologies, available on the Dell EMC support website.

Once the relink is complete, volumes can be remounted.


Snapshot data is unchanged by the linked targets, so the snapshots can also be used to restore production data.

Cascading snapshots
Presenting sensitive data to test or development environments often requires that the source of the data be disguised
beforehand. Cascaded snapshots provides this separation and disguise, as shown in the following image.

Figure 9. SnapVX cascaded snapshots



If no change to the data is required before presenting it to the test or development environments, there is no need to create a
cascaded relationship.

Accessing point-in-time copies


To access a point-in time-copy, create a link from the snapshot data to a host mapped target device. The links may be created
in Copy mode for a permanent copy on the target device, or in NoCopy mode for temporary use. Copy mode links create
full-volume, full-copy clones of the data by copying it to the target device’s Storage Resource Pool. NoCopy mode links are
space-saving snapshots that only consume space for the changed data that is stored in the source device’s Storage Resource
Pool.
HYPERMAX OS supports up to 1,024 linked targets per source device.
NOTE: When a target is first linked, all of the tracks are undefined. This means that the target does not know where in the
Storage Resource Pool the track is located, and host access to the target must be derived from the SnapVX metadata. A
background process eventually defines the tracks and updates the thin device to point directly to the track location in the
source device’s Storage Resource Pool.

Mainframe SnapVX and zDP


Data Protector for z Systems (zDP) is a mainframe software solution that is layered on SnapVX on VMAX All Flash arrays.
Using zDP you can recover from logical data corruption with minimal data loss. zDP achieves this by providing multiple,
frequent, consistent point-in-time copies of data automatically. You can then use these copies to recover an application or the
environment to a point prior to the logical corruption.
By providing easy access to multiple different point-in-time copies of data (with a granularity of minutes), precise recovery from
logical data corruption can be performed using application-based recovery procedures. zDP results in minimal data loss compared
to other methods such as restoring data from daily or weekly backups.
As shown in Figure 10 (zDP operation), you can use zDP to create and manage multiple point-in-time snapshots of volumes.
Each snapshot is a pointer-based, point-in-time image of a single volume. These images are created using the SnapVX feature
of HYPERMAX OS. SnapVX is a space-efficient method for making snapshots of thin devices and consuming additional storage
capacity only when changes are made to the source volume.
There is no need to copy each snapshot to a target volume as SnapVX separates the capturing of a point-in-time copy from its
usage. Capturing a point-in-time copy does not require a target volume. Using a point-in-time copy from a host requires linking
the snapshot to a target volume.
There can be up to 256 snapshots of each source volume.

Figure 10. zDP operation



These snapshots share allocations to the same track image whenever possible while ensuring they each continue to represent a
unique point-in-time image of the source volume. Despite the space efficiency achieved through shared allocation to unchanged
data, additional capacity is required to preserve the pre-update images of changed tracks captured by each point-in-time
snapshot.
The process of implementing zDP has two phases — the planning phase and the implementation phase.
● The planning phase is done in conjunction with your EMC representative who has access to tools that can help size the
capacity needed for zDP if you are currently a VMAX All Flash user.
● The implementation phase uses the following methods for z/OS:
○ A batch interface that allows you to submit jobs to define and manage zDP.
○ A zDP run-time environment that executes under SCF to create snapsets.
For details on zDP usage, refer to the TimeFinder SnapVX and zDP Product Guide. For details on zDP usage in z/TPF, refer to
the TimeFinder Controls for z/TPF Product Guide.

Snapshot policy
The Snapshot policy feature provides snapshot orchestration at scale (1024 snaps per storage group). The feature simplifies
snapshot management for standard and cloud snapshots.
Snapshots can be used to recover from data corruption, accidental deletion, or other damage, offering continuous data
protection. A large number of snapshots can be difficult to manage, however. The Snapshot policy feature provides an
end-to-end solution to create, schedule, and manage standard (local) and cloud snapshots.
A snapshot policy specifies the Recovery Point Objective (RPO): how often a snapshot should be taken and how many
snapshots should be retained. The snapshots may also be specified as secure; secure snapshots cannot be terminated by users
before their time to live (TTL), derived from the snapshot policy's interval and maximum count, has expired. Up to four policies
can be associated with a storage group, and a snapshot policy can be associated with many storage groups.
The following rules apply to snapshot policies:
● The maximum number of snapshot policies that can be created on a storage system is 20. Multiple storage groups can be
associated with a snapshot policy.
● A maximum of four snapshot policies can be associated with an individual storage group.
● A storage group or device can have a maximum of 256 manual snapshots.
● A storage group or device can have a maximum of 1024 snapshots.
● The oldest unused snapshots are removed or recycled in accordance with the specified policy max_count value.
● When devices are added to a snapshot policy storage group, snapshot policies that apply to the storage group are applied to
the added devices.
● When devices are removed from a snapshot policy storage group, snapshot policies that apply to the storage group are no
longer applied to the removed devices.
● If overlapping snapshot policies are applied to storage groups, they run and take snapshots independently.
Compliance information is provided for each snapshot policy that is directly associated with (not inherited by) a storage group.
Snapshot compliance for a storage group is taken as the lowest compliance value of any of the snapshot policies that are
directly associated with the storage group.
Compliance for a snapshot policy that is associated with a storage group is based on the number of good snapshots within the
retention count. The retention count is translated to a retention period for compliance calculation. The retention period is the
snapshot interval multiplied by the snapshot maximum count. For example, a 1 hr interval with a 30 snapshot count means a
30-hour retention period.
The compliance threshold for green to yellow change is the maximum count, that is, all snapshots must be good and in place for
the compliance to be green. If there is one snapshot short (missing or failed), then the compliance turns yellow.
The compliance threshold value for yellow to red is stored in the snapshot policy definition. Once the number of good snapshots
falls below this value, compliance turns red.
Snapshot compliance is calculated by polling the storage system once an hour for SnapVX related information for storage
groups which have snapshot policies that are associated with them. The returned snapshot information is summarized into the
required information for the database compliance entries.
When the maximum count of snapshots for a snapshot policy is changed, the compliance for the storage group and policy
combination changes, and compliance values are updated accordingly.
If a compliance calculation is performed while a snapshot is being created, an establish-in-progress state may be detected.
This is acceptable for the most recent snapshot but is considered a failure for any older snapshot.



When a storage group and policy have only recently been associated and the full maximum count of snapshots has not yet
been reached, the calculation is scaled to the number of snapshots that are available and represents compliance accordingly
until the full maximum count has been reached. If a snapshot was not taken (for example, because the storage group and
policy combination was suspended), or a snapshot was manually terminated before the maximum snapshot count was reached,
the storage group is reported as out of compliance.
When the policy interval is changed, the compliance window changes, and the number of snapshots required for correct
compliance may not yet exist.
If a policy, or a storage group and policy combination, is suspended, snapshots are not created; older snapshots then fall
outside the compliance window, and the maximum count of required snapshots is not met.
Manual termination of snapshots inside the compliance window results in the storage group and policy combination falling
out of compliance.



Chapter 7: Remote replication
This chapter introduces the remote replication facilities.
Topics:
• Native remote replication with SRDF
• SRDF/Metro
• RecoverPoint
• Remote replication using eNAS

Native remote replication with SRDF


The Dell EMC Symmetrix Remote Data Facility (SRDF) family of products offers a range of array-based disaster recovery,
parallel processing, and data migration solutions for Dell EMC storage systems, including:
● PowerMaxOS for PowerMax 2000 and 8000 arrays and for VMAX All Flash 450F and 950F arrays
● HYPERMAX OS for VMAX All Flash 250F, 450F, 850F, and 950F arrays
● HYPERMAX OS for VMAX 100K, 200K, and 400K arrays
● Enginuity for VMAX 10K, 20K, and 40K arrays
SRDF disaster recovery solutions use “active, remote” mirroring and dependent-write logic to create consistent copies of data.
Dependent-write consistency ensures transactional consistency when the applications are restarted at the remote location. You
can tailor your SRDF solution to meet various Recovery Point Objectives and Recovery Time Objectives.
Using SRDF, you can create complete solutions to:
● Create real-time or dependent-write-consistent copies at 1, 2, or 3 remote arrays.
● Move data quickly over extended distances.
● Provide 3-site disaster recovery with zero data loss recovery, business continuity protection and disaster-restart.
You can integrate SRDF with other Dell EMC products to create complete solutions to:
● Restart operations after a disaster with zero data loss and business continuity protection.
● Restart operations in cluster environments. For example, Microsoft Cluster Server with Microsoft Failover Clusters.
● Monitor and automate restart operations on an alternate local or remote server.
● Automate restart operations in VMware environments.

SRDF 2-site solutions
The following table describes SRDF 2-site solutions.

Table 15. SRDF 2-site solutions

SRDF/Synchronous (SRDF/S)
  Maintains a real-time copy of production data at a physically separated array.
  ● No data exposure.
  ● Ensured consistency protection with SRDF/Consistency Group.
  ● Recommended maximum distance of 200 km (125 miles) between arrays, as application latency may rise to unacceptable levels at longer distances. (a)

SRDF/Asynchronous (SRDF/A)
  Maintains a dependent-write consistent copy of the data on a remote secondary site. The sites can be an unlimited distance apart. The copy of the data at the secondary site is seconds behind the primary site.

SRDF/Data Mobility (SRDF/DM)
  Enables the fast transfer of data from R1 to R2 devices over extended distances.
  ● Uses adaptive copy mode to transfer data.
  ● Designed for migration or data replication purposes, not for disaster restart solutions.

SRDF/Automated Replication (SRDF/AR)
  ● Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a long-distance disaster restart solution.
  ● Operates in 2-site solutions that use SRDF/DM in combination with TimeFinder.

SRDF/Cluster Enabler (CE)
  ● Integrates SRDF/S or SRDF/A with Microsoft Failover Clusters (MSCS) to automate or semi-automate site failover.
  ● Complete solution for restarting operations in cluster environments (MSCS with Microsoft Failover Clusters).
  ● Expands the range of cluster storage and management capabilities while ensuring full protection of the SRDF remote replication.

SRDF and VMware Site Recovery Manager
  Completely automates storage-based disaster restart operations for VMware environments in SRDF topologies.
  ● The Dell EMC SRDF Adapter enables VMware Site Recovery Manager to automate storage-based disaster restart operations in SRDF solutions.
  ● Can address configurations in which data are spread across multiple storage arrays or SRDF groups.
  ● Requires that the adapter is installed on each array to facilitate the discovery of arrays and to initiate failover operations.
  ● Implemented with SRDF/S, SRDF/A, SRDF/Star, and TimeFinder.

a. In some circumstances, using SRDF/S over distances greater than 200 km may be feasible. Contact your Dell EMC
representative for more information.

SRDF multi-site solutions
The following table describes SRDF multi-site solutions.

Table 16. SRDF multi-site solutions

SRDF/Automated Replication (SRDF/AR)
  ● Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a long-distance disaster restart solution.
  ● Operates in a 3-site environment that uses a combination of SRDF/S, SRDF/DM, and TimeFinder.

Concurrent SRDF
  3-site disaster recovery and advanced multi-site business continuity protection.
  ● Data on the primary site is concurrently replicated to two secondary sites.
  ● Replication to the remote site can use SRDF/S, SRDF/A, or adaptive copy.

Cascaded SRDF
  3-site disaster recovery and advanced multi-site business continuity protection. Data on the primary site (Site A) is synchronously mirrored to a secondary site (Site B), and then asynchronously mirrored from the secondary site to a tertiary site (Site C).

SRDF/Star
  3-site data protection and disaster recovery configuration with zero data loss recovery, business continuity protection, and disaster restart.
  ● Available in two configurations: Cascaded SRDF/Star and Concurrent SRDF/Star.
  ● Differential synchronization allows rapid reestablishment of mirroring among surviving sites in a multi-site disaster recovery implementation.
  ● Implemented using SRDF consistency groups (CG) with SRDF/S and SRDF/A.

Interfamily compatibility
SRDF supports connectivity between different operating environments and arrays. Arrays running HYPERMAX OS can connect
to legacy arrays running older operating environments. In mixed configurations where arrays are running different versions,
SRDF features of the lowest version are supported.
VMAX All Flash arrays can connect to:
● PowerMax arrays running PowerMaxOS
● VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
● VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
● VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity ePack
NOTE: When you connect between arrays running different operating environments, limitations may apply. Information
about which SRDF features are supported, and applicable limitations for 2-site and 3-site solutions is in the SRDF
Interfamily Connectivity Information.
This interfamily connectivity allows you to add the latest hardware platform/operating environment to an existing SRDF
solution, enabling technology refreshes.

SRDF device pairs


An SRDF device pair is a logical device that is paired with another logical device that resides in a second array. The arrays are
connected by SRDF links.
Encapsulated Data Domain devices that are used for Storage Direct cannot be part of an SRDF device pair.
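
As an illustrative sketch, device pairs can be created and synchronized with Solutions Enabler. The array ID, SRDF group number, and device numbers in the pairs file are assumptions:

# devices.txt lists "local remote" device-number pairs, one per line, e.g.:
#   00123 00456
symrdf createpair -sid 001 -rdfg 10 -file devices.txt -type RDF1 -establish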

R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are generally Read/Write accessible to
the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal operations, host I/O writes to the R1
device are mirrored over the SRDF links to the R2 device. In general, data on R2 devices is not available to the application host
while the SRDF relationship is active. In SRDF synchronous mode, however, an R2 device can be in Read Only mode that allows
a host to read from the R2.
In a typical environment:
● The application production host has Read/Write access to the R1 device.
● An application host connected to the R2 device has Read Only (Write Disabled) access to the R2 device.

Figure 11. R1 and R2 devices

R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices are typically used in 3-site concurrent configurations where data on the R11 site is mirrored to two secondary (R2)
arrays:


Figure 12. R11 device in concurrent SRDF

R21 devices
R21 devices have a dual role and are used in cascaded 3-site configurations where:
● Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
● Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:


Figure 13. R21 device in cascaded SRDF

The R21 device acts as a R2 device that receives updates from the R1 device, and as a R1 device that sends updates to the R2
device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21 device.
In arrays that run Enginuity, the R21 device can be diskless. That is, it consists solely of cache memory and does not have any
associated storage device. It acts purely to relay changes in the R1 device to the R2 device. This capability requires the use of
thick devices. Systems that run PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless R21 device is
not possible on arrays running those environments.

R22 devices
R22 devices:
● Have two R1 devices, only one of which is active at a time.
● Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to decrease the complexity and time
required to complete failover and failback operations.
● Let you recover without removing old SRDF pairs and creating new ones.

Figure 14. R22 devices in cascaded and concurrent SRDF/Star

Dynamic device personalities
SRDF devices can dynamically swap “personality” between R1 and R2. After a personality swap:
● The R1 in the device pair becomes the R2 device, and
● The R2 becomes the R1 device.
Swapping R1/R2 personalities allows the application to be restarted at the remote site without interrupting replication if an
application fails at the production site. After a swap, the R2 side (now R1) can control operations while being remotely mirrored
at the primary (now R2) site.
An R1/R2 personality swap is not supported:
● If the R2 device is larger than the R1 device.
● If the device to be swapped is participating in an active SRDF/A session.
● In SRDF/EDP topologies, because diskless R11 or R22 devices are not valid end states.
● If the device to be swapped is the target device of any TimeFinder or EMC Compatible Flash operations.
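
As a hedged sketch of a personality swap with Solutions Enabler (the device group name is an assumption, and the pairs must be in a suspended state before the swap):

# Suspend the SRDF links, swap personalities, then resume mirroring
symrdf -g MyDeviceGroup suspend
# -refresh R1 marks the current R1 side's data to be refreshed from its partner
symrdf -g MyDeviceGroup swap -refresh R1
symrdf -g MyDeviceGroup establish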

SRDF modes of operation


The SRDF mode of operation determines:
● How R1 devices are remotely mirrored to R2 devices across the SRDF links
● How I/O operations are processed
● When the acknowledgment is returned to the application host that issued an I/O write command
In SRDF there are three principal modes:
● Synchronous
● Asynchronous
● Adaptive copy

Synchronous mode
Synchronous mode maintains a real-time mirror image of data between the R1 and R2 devices over distances up to 200 km (125
miles). Host data is written to both arrays in real time. The application host does not receive the acknowledgment until the data
has been stored in the cache of both arrays.

Asynchronous mode
Asynchronous mode maintains a dependent-write consistent copy between the R1 and R2 device over unlimited distances. On
receiving data from the application host, SRDF on the R1 side of the link writes that data to its cache. Also it batches the
data received into delta sets. Delta sets are transferred to the R2 device in timed cycles. The application host receives the
acknowledgment once data is successfully written to the cache on the R1 side.

Adaptive copy modes


Adaptive copy modes:
● Accumulate write requests that are destined for the R2 device on the R1 side, but not in cache memory.
● Use a background copy process to send the outstanding write requests to the R2 device.
● Allow the R1 and R2 devices to be out of synchronization by a user-defined maximum skew value. Once the skew value is
exceeded, SRDF transfers the batched data to the R2 device.
● Send the acknowledgment to the application host once the data is successfully written to cache on the R1 side.
Unlike asynchronous mode, the adaptive copy modes do not guarantee a dependent-write copy of data on the R2 devices.
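
As an illustrative sketch, the operating mode of an SRDF device group can be changed with Solutions Enabler; the group name is an assumption:

# Switch the group between the principal modes (names are hypothetical)
symrdf -g MyDeviceGroup set mode sync
symrdf -g MyDeviceGroup set mode async
symrdf -g MyDeviceGroup set mode acp_disk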

SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides of an SRDF link.

Group properties
The properties of an SRDF group are:
● Label (name)
● Set of ports on the local array used to communicate over the SRDF links
● Set of ports on the remote array used to communicate over the SRDF links
● Local group number
● Remote group number
● One or more pairs of devices
The devices in the group share the ports and associated CPU resources of the port's directors.

Types of group
There are two types of SRDF group:
● Static: which are defined in the local array's configuration file.
● Dynamic: which are defined using SRDF management tools and their properties that are stored in the array's cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
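
As a minimal sketch, a dynamic SRDF group can be created with Solutions Enabler. The array IDs, group numbers, label, and director:port values are assumptions:

# Create a dynamic SRDF group linking ports on the local and remote arrays
symrdf addgrp -label MyGroup -sid 001 -rdfg 10 -dir 1E:8 -remote_sid 002 -remote_rdfg 10 -remote_dir 1E:8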

Director boards, links, and ports


SRDF links are the logical connections between SRDF groups and their ports. The ports are physically connected by cables,
routers, extenders, switches and other network devices.

NOTE: Two or more SRDF links per SRDF group are required for redundancy and fault tolerance.

The relationship between the resources on a director (CPU cores and ports) varies depending on the operating environment.

HYPERMAX OS
On arrays running HYPERMAX OS:
● The relationship between the SRDF emulation and resources on a director is configurable:
○ One director/multiple CPU cores/multiple ports
○ Connectivity (ports in the SRDF group) is independent of compute power (number of CPU cores). You can change the
amount of connectivity without changing compute power.
● Each director has up to 16 front end ports, any or all of which can be used by SRDF. Both the SRDF Gigabit Ethernet and
SRDF Fibre Channel emulations can use any port.
● The data path for devices in an SRDF group is not fixed to a single port. Instead, the path for data is shared across all ports
in the group.

Mixed configurations: HYPERMAX OS and Enginuity 5876


For configurations where one array is running Enginuity 5876, and the other array is running HYPERMAX OS, the following rules
apply:
● On the 5876 side, an SRDF group can have the full complement of directors, but no more than 16 ports can be used on the
HYPERMAX OS side.
● You can connect to 16 directors using one port each, two directors using eight ports each, or any other combination that does
not exceed 16 ports per SRDF group.

SRDF consistency
Many applications, especially database systems, use dependent write logic to ensure data integrity. That is, each write operation
must complete successfully before the next can begin. Without write dependency, write operations could get out of sequence
resulting in irrecoverable data loss.
SRDF implements write dependency using the consistency group (also known as SRDF/CG). A consistency group consists of a
set of SRDF devices that use write dependency. For each device in the group, SRDF ensures that write operations propagate to
the corresponding R2 devices in the correct order.
However, if the propagation of any write operation to any R2 device in the group cannot complete, SRDF suspends propagation
to all of the group's R2 devices. This suspension maintains the integrity of the data on the R2 devices. While the R2 devices are
unavailable, SRDF continues to store write operations on the R1 devices. It also maintains a list of those write operations in
their time order. When all R2 devices in the group become available again, SRDF propagates the outstanding write operations, in
the correct order, for each device in the group.
SRDF/CG is available for both SRDF/S and SRDF/A.
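
As a hedged sketch, a consistency group can be created and enabled with Solutions Enabler. The group name, array ID, and device number are assumptions:

# Create a consistency group of R1 devices registered for RDF consistency
symcg create MyApp_CG -type rdf1 -rdf_consistency

# Add a device (hypothetical device number) and enable consistency protection
symcg -cg MyApp_CG add dev 00123 -sid 001
symcg -cg MyApp_CG enable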

Data migration
Data migration is the one-time movement of data from one array to another. Once the movement is complete, the data is
accessed from the secondary array. A common use of migration is to replace an older array with a new one.
Dell EMC support personnel can assist with the planning and implementation of migration projects.
SRDF multisite configurations enable migration to occur in any of these ways:
● Replace R2 devices.
● Replace R1 devices.
● Replace both R1 and R2 devices simultaneously.
For example, this diagram shows the use of concurrent SRDF to replace the secondary (R2) array in a 2-site configuration:

Figure 15. Migrating data and removing a secondary (R2) array

Here:
● The top section of the diagram shows the original, 2-site configuration.
● The lower left section of the diagram shows the interim, 3-site configuration with data being copied to two secondary arrays.
● The lower right section of the diagram shows the final, 2-site configuration where the new secondary array has replaced the
original one.
The Dell EMC SRDF Introduction contains more information about using SRDF to migrate data.

More information
Here are other Dell EMC documents that contain more information about the use of SRDF in replication and migration:
● SRDF Introduction
● SRDF and NDM Interfamily Connectivity Information
● SRDF/Cluster Enabler Plug-in Product Guide
● Using the Dell EMC Adapter for VMware Site Recovery Manager Technical Book
● Dell EMC SRDF Adapter for VMware Site Recovery Manager Release Notes

SRDF/Metro
In traditional SRDF configurations, only the R1 devices are Read/Write accessible to the application hosts. The R2 devices are
Read Only and Write Disabled.
In SRDF/Metro configurations, however:
● Both the R1 and R2 devices are Read/Write accessible to the application hosts.
● Application hosts can write to both the R1 and R2 side of the device pair.
● R2 devices assume the same external device identity as the R1 devices. The identity includes the device geometry and
device WWN.
This shared identity means that R1 and R2 devices appear to application hosts as a single, virtual device across two arrays.

Deployment options
SRDF/Metro can be deployed in either a single, multipathed host environment or in a clustered host environment:

Figure 16. SRDF/Metro

Hosts can read and write to both the R1 and R2 devices:


● In a single host configuration, a single host issues I/O operations. Multipathing software directs parallel reads and writes to
each array.
● In a clustered host configuration, multiple hosts issue I/O operations. Those hosts access both sides of the SRDF device
pair. Each cluster node has dedicated access to one of the storage arrays.
● In both configurations, writes to the R1 and R2 devices are synchronously copied to the paired device in the other array.
SRDF/Metro software resolves any write conflicts to maintain consistent images on the SRDF device pairs.

SRDF/Metro Resilience
If either of the devices in an SRDF/Metro configuration becomes Not Ready, or connectivity between the devices is lost,
SRDF/Metro must decide which side remains available to the application host. There are two mechanisms that SRDF/Metro can
use: Device Bias and Witness.

Device Bias
Device pairs for SRDF/Metro are created with a bias attribute. By default, the create pair operation sets the bias to the R1
side of the pair. That is, if a device pair becomes Not Ready (NR) on the SRDF link, the R1 (bias side) remains accessible
to the hosts, and the R2 (nonbias side) becomes inaccessible. However, if there is a failure on the R1 side, the host loses all
connectivity to the device pair. The Device Bias method cannot make the R2 device available to the host.
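
As an illustrative sketch, an SRDF/Metro pair can be created in bias mode with Solutions Enabler. The array ID, group number, and pairs file are assumptions:

# Create SRDF/Metro pairs; -use_bias selects Device Bias instead of a witness
symrdf createpair -sid 001 -rdfg 20 -file metro_pairs.txt -type RDF1 -metro -establish -use_bias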

Witness
A witness is a third party that mediates between the two sides of a SRDF/Metro pair to help:
● Decide which side remains available to the host

● Avoid a "split brain" scenario when both sides attempt to remain accessible to the host despite the failure
The witness method intelligently chooses the side on which to continue operations when the bias-only method would not
result in continued host availability to a surviving, nonbiased array.
There are two forms of the Witness mechanism:
● Array Witness: The operating environment of a third array is the mediator.
● Virtual Witness (vWitness): A daemon running on a separate, virtual machine is the mediator.
When both sides run PowerMaxOS 5978, SRDF/Metro takes these criteria into account when selecting the side to remain
available to the hosts (in priority order):
1. The side that has connectivity to the application host (requires PowerMaxOS 5978.444.444 or later)
2. The side that has a SRDF/A DR leg
3. Whether the SRDF/A DR leg is synchronized
4. The side that has more than 50% of the RA or FA directors that are available
5. The side that is currently the bias side
The first of these criteria that one side meets and the other does not stops the selection process; the side that meets the
criterion is the preferred winner.

Disaster recovery facilities
Devices in SRDF/Metro groups can simultaneously be in other groups that replicate data to a third, disaster recovery site. There
are two replication solutions. The number available in any SRDF/Metro configuration depends on the version of the operating
environment that the participating arrays run:
● Highly-available disaster recovery – in configurations that consist of arrays that run PowerMaxOS 5978.669.669 and later
● Independent disaster recovery – in configurations that run all supported versions of PowerMaxOS 5978 and HYPERMAX OS
5977

Highly available disaster recovery (SRDF/Metro Smart DR)


SRDF/Metro Smart DR maintains a single, disaster recovery (DR) copy of the data in a SRDF/Metro pair on a third, remote
array. This diagram shows the SRDF/Metro Smart DR configuration:

[Figure: Arrays A and B hold the SRDF/Metro pair (R11 and R21). Each array has an SRDF/A or Adaptive Copy Disk connection to the R22 device on Array C; one connection is active and the other is inactive.]

Figure 17. SRDF/Metro Smart DR

Notice that the device names differ from a standard SRDF/Metro configuration. This difference reflects the change in the
device functions when SRDF/Metro Smart DR is in operation. For instance, as the diagram shows, the R1 side of the SRDF/
Metro pair on Array A now has the name R11, because it is the R1 device to both the:
● R21 device on Array B in the SRDF/Metro configuration
● R22 device on Array C in the SRDF/Metro Smart DR configuration
Arrays A and B both have SRDF/Asynchronous or Adaptive Copy Disk connections to the DR array (Array C). However, only
one of those connections is active at a time (in this example the connection between Array A and Array C). The two SRDF/A
connections are known as the active and standby connections.
If a problem prevents Array A replicating data to Array C, the standby link between Array B and Array C becomes active and
replication continues. Array A and Array B keep track of the data replicated to Array C to enable replication and avoid data loss.

Independent disaster recovery
Devices in SRDF/Metro groups can simultaneously be part of device groups that replicate data to a third, disaster-recovery site.
Either or both sides of the Metro region can be replicated, and you can choose whichever configuration suits your business
needs. The following diagram shows the possible configurations:
NOTE: When the SRDF/Metro session is using a witness, the R1 side of the Metro pair can change based on the witness
determination of the preferred side.
[Figure: Single-sided replication: in one configuration the R11 at Site A is paired over SRDF/Metro with the R2 at Site B and replicates over SRDF/A or Adaptive Copy Disk to an R2 at Site C; in the other, the R21 at Site B replicates to the R2 at Site C. Double-sided replication: both sides of the Metro pair (R11 at Site A, R21 at Site B) replicate over SRDF/A or Adaptive Copy Disk to R2 devices, either at two sites (Site C and Site D) or both at Site C.]

Figure 18. Disaster recovery for SRDF/Metro

The device types differ from a stand-alone SRDF/Metro configuration. This difference reflects the change in the devices'
function when disaster recovery facilities are in place. For instance, when the R2 side is replicated to a disaster recovery site, its
type changes to R21 because it is both the:
● R2 device in the SRDF/Metro configuration
● R1 device in the disaster-recovery configuration
When an SRDF/Metro configuration uses a witness for resilience protection, the two sides periodically renegotiate the winning
and losing sides. If the winning and losing sides switch as a result of renegotiation:

● An R11 device becomes an R21 device. That device was the R1 device for both the SRDF/Metro and disaster recovery
configurations. Now the device is the R2 device of the SRDF/Metro configuration but it remains the R1 device of the
disaster recovery configuration.
● An R21 device becomes an R11 device. That device was the R2 device in the SRDF/Metro configuration and the R1 device
of the disaster recovery configuration. Now the device is the R1 device of both the SRDF/Metro and disaster recovery
configurations.

More information
Here are other Dell EMC documents that contain more information on SRDF/Metro:
● SRDF Introduction
● SRDF/Metro vWitness Configuration Guide
● SRDF Interfamily Connectivity Information

RecoverPoint
HYPERMAX OS 5977.1125.1125 introduced support for RecoverPoint on VMAX storage arrays. RecoverPoint is a comprehensive
data protection solution designed to provide production data integrity at local and remote sites. RecoverPoint also provides the
ability to recover data from a point in time using journaling technology.
The primary reasons for using RecoverPoint are:
● Remote replication to heterogeneous arrays
● Protection against local and remote data corruption
● Disaster recovery
● Secondary device repurposing
● Data migrations
RecoverPoint systems support local and remote replication of data that applications are writing to SAN-attached storage.
The systems use existing Fibre Channel infrastructure to integrate seamlessly with existing host applications and data storage
subsystems. For remote replication, the systems use existing Fibre Channel connections to send the replicated data over a
WAN, or use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of operations to a
secondary site in the event of a disaster at the primary site.
Previous implementations of RecoverPoint relied on a splitter to track changes made to protected volumes. The current
implementation relies on a cluster of RecoverPoint nodes, provisioned with one or more RecoverPoint storage groups, leveraging
SnapVX technology, on the storage array. Volumes in the RecoverPoint storage groups are visible to all the nodes in the cluster,
and available for replication to other storage arrays.
RecoverPoint allows data replication of up to 8,000 LUNs for each RecoverPoint cluster and up to eight different RecoverPoint
clusters attached to one array. Supported array types include PowerMax, VMAX All Flash, VMAX3, VMAX, VNX, VPLEX, and
XtremIO.
RecoverPoint is licensed and sold separately. For more information about RecoverPoint and its capabilities see the Dell EMC
RecoverPoint Product Guide.

Remote replication using eNAS


File Auto Recovery (FAR) allows you to manually fail over or move a virtual Data Mover (VDM) from a source eNAS system to
a destination eNAS system. The failover or move leverages block-level SRDF synchronous replication, so it incurs zero data loss
in the event of an unplanned operation. This feature consolidates VDMs, file systems, file system checkpoint schedules, CIFS
servers, networking, and VDM configurations into their own separate pools. This feature works for a recovery where the source
is unavailable. For recovery support in the event of an unplanned failover, there is an option to recover and clean up the source
system and make it ready as a future destination.
The manually initiated failover and reverse operations can be performed using EMC File Auto Recovery Manager (FARM). FARM
can automatically fail over a selected sync-replicated VDM on a source eNAS system to a destination eNAS system. FARM can
also monitor sync-replicated VDMs and trigger automatic failover based on Data Mover, File System, Control Station, or IP
network unavailability that would cause the NAS client to lose access to data.

8
Blended local and remote replication
This chapter introduces TimeFinder integration with SRDF.
Topics:
• Integration of SRDF and TimeFinder
• R1 and R2 devices in TimeFinder operations
• SRDF/AR
• TimeFinder and SRDF/A
• TimeFinder and SRDF/S

Integration of SRDF and TimeFinder


You can use TimeFinder and SRDF products to complement each other when you require both local and remote replication. For
example, you can use TimeFinder to create local gold copies of SRDF devices for recovery operations and for testing disaster
recovery solutions.
The key benefits of TimeFinder integration with SRDF include:
● Remote controls simplify automation—Use Dell EMC host-based control software to transfer commands across the SRDF
links. A single command from the host to the primary array can initiate TimeFinder operations on both the primary and
secondary arrays.
● Consistent data images across multiple devices and arrays—SRDF/CG guarantees that a dependent-write consistent image
of production data on the R1 devices is replicated across the SRDF links.
You can use TimeFinder/CG in an SRDF configuration to create dependent-write consistent local and remote images of
production data across multiple devices and arrays.
NOTE: Using a SRDF/A single session guarantees dependent-write consistency across the SRDF links and does not require
SRDF/CG. SRDF/A MSC mode requires host software to manage consistency among multiple sessions.

NOTE: Some TimeFinder operations are not supported on devices that SRDF protects. The Dell EMC Solutions Enabler
TimeFinder SnapVX CLI User Guide has further information.
The rest of this chapter summarizes the ways of integrating SRDF and TimeFinder.

R1 and R2 devices in TimeFinder operations


You can use TimeFinder to create local replicas of R1 and R2 devices. The following rules apply:
● You can use R1 devices and R2 devices as TimeFinder source devices.
● R1 devices can be the target of TimeFinder operations as long as there is no host accessing the R1 during the operation.
● R2 devices can be used as TimeFinder target devices if SRDF replication is not active (writing to the R2 device). To use R2
devices as TimeFinder target devices, first suspend the SRDF replication session.
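For example, a minimal SYMCLI sketch of these rules (the group and storage-group names are hypothetical): R2 devices are valid SnapVX sources at any time, but the SRDF session must be suspended before they can act as TimeFinder targets.

    # Snapshot the R2-side storage group; R2 devices are valid TimeFinder sources
    symsnapvx -sid 000197800456 -sg ProdSG_R2 establish -name gold_copy

    # Before using the R2 devices as TimeFinder targets, suspend SRDF replication
    symrdf -g ProdGroup suspend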

SRDF/AR
SRDF/AR combines SRDF and TimeFinder to provide a long-distance disaster restart solution. SRDF/AR can be deployed over 2
or 3 sites:
● In 2-site configurations, SRDF/DM is deployed with TimeFinder.
● In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to replicate the deltas.



SRDF/AR 2-site configurations
The following image shows a 2-site configuration where the production device (R1) on the primary array (Site A) is also a
TimeFinder target device:

[Figure: a host at each site. At Site A, the device is both the SRDF R1 and a TimeFinder target; SRDF carries the TimeFinder background copy to the R2 at Site B, where TimeFinder creates a local copy for the Site B host.]

Figure 19. SRDF/AR 2-site solution

In this configuration, data on the SRDF R1/TimeFinder target device is replicated across the SRDF links to the SRDF R2 device.
The SRDF R2 device is also a TimeFinder source device. TimeFinder replicates this device to a TimeFinder target device. You
can map the TimeFinder target device to the host connected to the secondary array at Site B.
In a 2-site configuration, SRDF operations are independent of production processing on both the primary and secondary arrays.
You can utilize resources at the secondary site without interrupting SRDF operations.
Use SRDF/AR 2-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the SRDF target sites.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
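SRDF/AR cycles are typically automated with the Solutions Enabler symreplicate utility. The sketch below is illustrative only; the device group name is hypothetical, and the session parameters must be configured beforehand (for example, with symreplicate setup) as described in the Solutions Enabler SRDF CLI documentation.

    # Start the automated SRDF/AR replication cycle for a device group
    symreplicate start -g ProdDG

    # Check the progress of the running session
    symreplicate query -g ProdDG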



SRDF/AR 3-site configurations
SRDF/AR 3-site configurations provide a zero data loss solution at long distances in the event that the primary site is lost.
The following image shows a 3-site configuration where:
● Site A and Site B are connected using SRDF in synchronous mode.
● Site B and Site C are connected using SRDF in adaptive copy mode.

[Figure: hosts at Site A and Site B. SRDF/S pairs the R1 at Site A with the R2 at Site B; TimeFinder and SRDF adaptive copy move data from Site B to the R2 at Site C, where TimeFinder creates the final copy.]

Figure 20. SRDF/AR 3-site solution

If Site A (primary site) fails, the R2 device at Site B provides a restartable copy with zero data loss. Site C provides an
asynchronous restartable copy.
If both Site A and Site B fail, the device at Site C provides a restartable copy with controlled data loss. The amount of data loss
is a function of the replication cycle time between Site B and Site C.
SRDF and TimeFinder control commands to R1 and R2 devices for all sites can be issued from Site A. No controlling host is
required at Site B.
Use SRDF/AR 3-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the secondary SRDF target site and the
tertiary SRDF target site.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
● Provide disaster recovery testing, point-in-time backups, decision support operations, third-party software testing, and
application upgrade testing or the testing of new applications.

Requirements/restrictions
In a 3-site SRDF/AR multi-hop configuration, SRDF/S host I/O to Site A is not acknowledged until Site B has acknowledged it.
This can cause a delay in host response time.

TimeFinder and SRDF/A


In SRDF/A solutions, device write pacing:
● Prevents cache utilization bottlenecks when the SRDF/A R2 devices are also TimeFinder source devices.
● Allows R2 or R22 devices at the middle hop to be used as TimeFinder source devices.
NOTE: Device write pacing is not required in configurations that include HYPERMAX OS and Enginuity 5876.



TimeFinder and SRDF/S
SRDF/S solutions support any type of TimeFinder copy sessions running on R1 and R2 devices as long as the conditions
described in R1 and R2 devices in TimeFinder operations on page 76 are met.



9
Data Migration
This chapter introduces data migration solutions.
Topics:
• Overview
• Data migration for open systems
• Data migration for mainframe

Overview
Data migration is a one-time movement of data from one array (the source) to another array (the target). Typical examples are
data center refreshes, where data is moved from an old array and the array is then retired or repurposed. Data migration is
not data movement due to replication (where the source data is accessible after the target is created) or data mobility (where
the target is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
To plan a data migration, consider the potential impact on your business, including the:
● Type of data to be migrated
● Site location(s)
● Number of systems and applications
● Amount of data to be moved
● Business needs and schedules
HYPERMAX OS provides migration facilities for:
● Open systems
● IBM System i
● Mainframe

Data migration for open systems
The data migration features available for open system environments are:
● Non-disruptive migration
● Open Replicator
● PowerPath Migration Enabler
● Data migration using SRDF/Data Mobility
● Space and zero-space reclamation

Non-Disruptive Migration overview


Non-Disruptive Migration (NDM) provides a method for migrating data from a source array to a target array without application
host downtime across a metro distance, typically within a data center. For NDM array operating system version support, please
consult the NDM support matrix, or the SRDF Interfamily Connectivity Guide.
If regulatory or business requirements for DR (disaster recovery) dictate the use of SRDF/S during migration, contact Dell EMC
for required ePacks for SRDF/S configuration.
The NDM operations involved in a typical migration are:
● Environment setup – Configures source and target array infrastructure for the migration process.
● Create – Duplicates the application storage environment from source array to target array.
● Cutover – Switches the application data access from the source array to the target array and duplicates the application data
on the source array to the target array.
● Commit – Removes application resources from the source array and releases the resources used for migration. Application
permanently runs on the target array.
● Environment remove – Removes the migration infrastructure created by the environment setup.
Some key features of NDM are:
● Simple process for migration (see the command sketch after this feature list):
1. Select storage group to migrate.
2. Create the migration session.
3. Discover paths to the host.
4. Cutover (or readytgt) the storage group to the VMAX3 or VMAX All Flash array.
5. Monitor for synchronization to complete.
6. Commit the migration.
● Allows for inline compression on VMAX All Flash array during migration.
● Maintains snapshot and disaster recovery relationships on the source array; these relationships are not migrated.
● Allows for non-disruptive revert to source array.
● Allows up to 50 concurrent migration sessions.
● Requires no license since it is part of HYPERMAX OS.
● Requires no additional hardware in the data path.
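A hedged sketch of this flow using the Solutions Enabler symdm command follows. The array IDs and storage group name are hypothetical, and the exact option names should be checked against the Solutions Enabler migration CLI documentation for your release.

    # One-time environment setup between the source and target arrays
    symdm environment -src_sid 000195700123 -tgt_sid 000197800456 -setup

    # Duplicate the application storage environment on the target array
    symdm create -src_sid 000195700123 -tgt_sid 000197800456 -sg ProdSG

    # Switch application access to the target array, then finalize the migration
    symdm cutover -src_sid 000195700123 -tgt_sid 000197800456 -sg ProdSG
    symdm commit -src_sid 000195700123 -tgt_sid 000197800456 -sg ProdSG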
The following graphic shows the connections required between the host (single or cluster) and the source and target array, and
the SRDF connection between the two arrays.

Figure 21. Non-Disruptive Migration zoning

The application host connection to both arrays uses FC, and the SRDF connection between the arrays uses FC or GigE.
The migration controls should be run from a control host and not from the application host. The control host should have
visibility to both the source array and target array.
The following devices and components are not supported with NDM:
● CKD devices
● eNAS data
● Storage Direct, FAST.X relationships and associated data

Environmental requirements for Non-Disruptive Migration


The following configurations are required for a successful data migration:

Array configuration
● The target array must be running HYPERMAX OS 5977.811.784 or higher. This includes VMAX3 Family arrays and VMAX All
Flash arrays.
● The source array must be a VMAX array running Enginuity 5876 with required ePack (contact Dell EMC for required ePack).
● SRDF is used for data migration, so zoning of SRDF ports between the source and target arrays is required. Note that an
SRDF license is not required, as there is no charge for NDM.
● The NDM RDF group is configured with a minimum of two paths on different directors for redundancy and fault tolerance. If
more paths are found, up to eight paths are configured.
● If SRDF is not normally used in the migration environment, it may be necessary to install and configure RDF directors and
ports on both the source and target arrays and physically configure SAN connectivity.

Host configuration
● The migration controls should be run from a control host and not from the application host.
● Both the source and the target array must be visible to the controlling host that runs the migration commands.

Pre-migration rules and restrictions for NDM


In addition to general configuration requirements of the migration environment, the following rules and restrictions apply before
starting a migration:
● A storage group is the data container that is migrated, and the requirements that apply to the group and its devices are:
○ Storage groups must have masking views. All devices in the group on the source array must be visible only through a
masking view. Each device must be mapped only to a port that is part of the masking view.

○ Multiple masking views on a storage group using the same initiator group are valid only when:
■ Port groups on the target array exist for each masking view, and
■ Ports in the port group are selected
○ A storage group must be a parent or stand-alone group. A child storage group with a masking view on the child group is
not supported.
○ If the selected storage group is a parent, its child groups are also migrated.
○ The names of storage groups and their children (if any) must not exist on the target array.
○ Gatekeeper devices in a storage group are not migrated.
● Devices cannot:
○ Have a mobility ID
○ Have a nonbirth identity, when the source array runs Enginuity 5876
○ Have the BCV attribute
○ Be encapsulated
○ Be RP devices
○ Be Data Domain devices
○ Be vVOL devices
○ Be R2 or Concurrent SRDF devices
○ Be masked to FCoE (in the case of source arrays), iSCSI, non-ACLX, or NVMe over FC ports
○ Be part of another data migration operation
○ Be part of an ORS relationship
○ Be in other masked storage groups
○ Have a device status of Not Ready
● Devices can be part of TimeFinder sessions.
● Devices can act as R1 devices but cannot be part of a SRDF/Star or SRDF/SQAR configuration.
● The names of masking groups to migrate must not exist on the target array.
● The names of initiator groups to migrate may exist on the target array. However, the aggregate set of host initiators in the
initiator groups that the masking groups use must be the same. Also, the effective port flags on the host initiators must
have the same setting on both arrays.
● The names of port groups to migrate may exist on the target array, as long as the groups on the target array are in the
logging history table for at least one port.
● The status of the target array must be as follows:
○ If a target-side Storage Resource Pool (SRP) is specified for the migration, that SRP must exist on the target array.
○ The SRP to be used for target-side storage must have enough free capacity to support the migration.
○ The target side must be able to support the additional devices required to receive the source-side data.
○ All initiators provisioned to an application on the source array must also be logged into ports on the target array.

Migration infrastructure - RDF device pairing


RDF device pairing is done during the create operation, with the following actions occurring on the device pairs.
● NDM creates RDF device pairs, in a DM RDF group, between devices on the source array and the devices on the target
array.
● Once device pairing is complete, NDM controls the data flow between both sides of the migration process.
● Once the migration is complete, the RDF pairs are deleted when the migration is committed.
● Other RDF pairs may exist in the DM RDF group if another migration is still in progress.
Due to differences in device attributes between the source and target array, the following rules apply during migration:
● Any source array device that has an odd number of cylinders is migrated to a device on the target array that has Geometry
Compatibility Mode (GCM).
● Any source array meta device is migrated to a non-meta device on the target array.
Once the copying of data to the target array has begun, the target devices can have SRDF mirrors (R2 devices) added to them
for remote replication. However, the mirror devices cannot be:
● Enabled for MSC or Synchronous SRDF Consistency
● Part of a SRDF/Star, SRDF/SQAR, or SRDF/Metro configuration

Open Replicator
Open Replicator enables copying data (full or incremental copies) from qualified arrays within a storage area network (SAN)
infrastructure to or from arrays running HYPERMAX OS. Open Replicator operations are controlled with the Solutions Enabler
SYMCLI symrcopy command.
Use Open Replicator to migrate and back up or archive existing data between arrays running HYPERMAX OS and third-party
storage arrays within the SAN infrastructure without interfering with host applications and ongoing business operations.
Use Open Replicator to:
● Pull from source volumes on qualified remote arrays to a volume on an array running HYPERMAX OS.
● Perform online data migrations from qualified storage to an array running HYPERMAX OS with minimal disruption to host
applications.
NOTE: Open Replicator cannot copy a volume that is in use by TimeFinder.

Open Replicator operations


Open Replicator uses the following terminology:
Control
The recipient array and its devices are referred to as the control side of the copy operation.
Remote
The donor Dell EMC arrays or third-party arrays on the SAN are referred to as the remote array/devices.
Hot
The Control device is Read/Write online to the host while the copy operation is in progress.
NOTE: Hot push operations are not supported on arrays running HYPERMAX OS.
Cold
The Control device is Not Ready (offline) to the host while the copy operation is in progress.
Pull
A pull operation copies data to the control device from the remote device(s).
Push
A push operation copies data from the control device to the remote device(s).

Pull operations
Arrays running HYPERMAX OS support up to 4096 Open Replicator pull sessions, which are controlled with the Solutions
Enabler SYMCLI symrcopy command.
For pull operations, the volume can be in a live state during the copy process. The local hosts and applications can begin to
access the data as soon as the session begins, even before the data copy process has completed.
These features enable rapid and efficient restoration of remotely vaulted volumes and migration from other storage platforms.
Copy on First Access ensures the appropriate data is available to a host operation when it is needed. The following image shows
an Open Replicator hot pull.

[Figure: Open Replicator hot pull. The control array pulls data from a remote point-in-time copy while host I/O continues to the standard device.]

Figure 22. Open Replicator hot (or live) pull

The pull can also be performed in cold mode to a static volume. The following image shows an Open Replicator cold pull.

[Figure: Open Replicator cold pull. The control array pulls a point-in-time copy from remote target devices to a standard device that is offline to the host.]

Figure 23. Open Replicator cold (or point-in-time) pull
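As a hedged illustration, a hot pull session is created and controlled with symrcopy roughly as follows. The device-pair file and session name here are hypothetical; the Solutions Enabler migration CLI documentation has the authoritative syntax.

    # or_pairs.txt pairs each control device with its remote donor device (hypothetical format)
    symrcopy create -file or_pairs.txt -pull -hot -copy -name migrate1

    # Activate the session; hosts can access data on the control devices immediately
    symrcopy activate -file or_pairs.txt

    # Monitor progress, then remove the session when the copy completes
    symrcopy query -file or_pairs.txt
    symrcopy terminate -file or_pairs.txt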

PowerPath Migration Enabler


Dell EMC PowerPath is host-based software that provides automated data path management and load-balancing capabilities
for heterogeneous servers, networks, and storage deployed in physical and virtual environments. PowerPath includes a migration
tool called PowerPath Migration Enabler (PPME). PPME enables non-disruptive or minimally disruptive data migration between
storage systems or within a single storage system.
PPME allows applications continued data access throughout the migration process. PPME integrates with other technologies to
minimize or eliminate application downtime during data migration.
PPME works in conjunction with underlying technologies, such as Open Replicator, SnapVX, and Host Copy.

NOTE: PowerPath Multipathing must be installed on the host machine.

The following documentation provides additional information:
● Dell EMC Support Matrix PowerPath Family Protocol Support
● Dell EMC PowerPath Migration Enabler User Guide

Data migration using SRDF/Data Mobility


SRDF/Data Mobility (DM) uses SRDF's adaptive copy mode to transfer large amounts of data without impact to the host.
SRDF/DM supports data replication or migration between two or more arrays running HYPERMAX OS. Adaptive copy mode
enables applications using the primary volume to avoid propagation delays while data is transferred to the remote site.
SRDF/DM can be used for local or remote transfers.
Data migration on page 69 has more information about using SRDF to migrate data.
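For example, a minimal sketch (the device group name is hypothetical) of switching an SRDF group to adaptive copy disk mode for a bulk transfer:

    # Set adaptive copy disk mode so host writes are not delayed by remote acknowledgment
    symrdf -g MigrationDG set mode acp_disk

    # Begin copying data to the remote side
    symrdf -g MigrationDG establish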

Space and zero-space reclamation
Space reclamation reclaims unused space following a replication or migration activity from a regular device to a thin device,
where software tools such as Open Replicator and Open Migrator copied all-zero, unused space to a target thin volume.
Space reclamation deallocates data chunks that contain all zeros. Space reclamation is most effective for migrations from
standard, fully provisioned devices to thin devices. Space reclamation is non-disruptive and can be executed while the targeted
thin device is fully available to operating systems and applications.
Zero-space reclamation provides instant zero detection during Open Replicator and SRDF migration operations by reclaiming
all-zero space, including both host-unwritten extents (or chunks) and chunks that contain all zeros due to file system or
database formatting.
Solutions Enabler and Unisphere can be used to initiate and monitor the space reclamation process.

Data migration for mainframe


For mainframe environments, z/OS Migrator provides non-disruptive migration from any vendor storage to VMAX All Flash or
VMAX arrays. z/OS Migrator can also migrate data from one VMAX or VMAX All Flash array to another VMAX or VMAX All
Flash array. With z/OS Migrator, you can:
● Introduce new storage subsystem technologies with minimal disruption of service.
● Reclaim z/OS UCBs by simplifying the migration of datasets to larger volumes (combining volumes).
● Facilitate data migration while applications continue to run and fully access data being migrated, eliminating application
downtime usually required when migrating data.
● Eliminate the need to coordinate application downtime across the business, and eliminate the costly impact of such
downtime on the business.
● Improve application performance by facilitating the relocation of poor performing datasets to lesser used volumes/storage
arrays.
● Ensure all metadata always accurately reflects the location and status of datasets being migrated.
Refer to the Dell EMC z/OS Migrator Product Guide for detailed product information.

Volume migration using z/OS Migrator
EMC z/OS Migrator is a host-based data migration facility that performs traditional volume migrations as well as host-based
volume mirroring. Together, these capabilities are referred to as the volume mirror and migrator functions of z/OS Migrator.

Figure 24. z/OS volume migration

Volume-level data migration facilities move logical volumes in their entirety. z/OS Migrator volume migration is performed on a
track-for-track basis without regard to the logical contents of the volumes involved. Volume migrations end in a volume swap,
which is entirely non-disruptive to any applications using the data on the volumes.

Volume migrator
Volume migration provides host-based services for data migration at the volume level on mainframe systems. It provides
migration from third-party devices to devices on Dell EMC arrays as well as migration between devices on Dell EMC arrays.

Volume mirror
Volume mirroring provides mainframe installations with volume-level mirroring from one device on an EMC array to another. It
uses host resources (UCBs, CPU, and channels) to monitor channel programs scheduled to write to a specified primary volume
and clones them to also write to a specified target volume (called a mirror volume).
After achieving a state of synchronization between the primary and mirror volumes, Volume Mirror maintains the volumes in a
fully synchronized state indefinitely, unless interrupted by an operator command or by an I/O failure to a Volume Mirror device.
Mirroring is controlled by the volume group. Mirroring may be suspended consistently for all volumes in the group.

Dataset migration using z/OS Migrator


In addition to volume migration, z/OS Migrator provides for logical migration, that is, the migration of individual datasets. In
contrast to volume migration functions, z/OS Migrator performs dataset migrations with full awareness of the contents of the
volume, and the metadata in the z/OS system that describe the datasets on the logical volume.

Figure 25. z/OS Migrator dataset migration

Thousands of datasets can either be selected individually or wild-carded. z/OS Migrator automatically manages all metadata
during the migration process while applications continue to run.

A
Mainframe Error Reporting
This appendix lists the mainframe environmental errors.
Topics:
• Error reporting to the mainframe host
• SIM severity reporting

Error reporting to the mainframe host


HYPERMAX OS can detect the following error types on the storage systems and report them to the mainframe host:
● Data Check — HYPERMAX OS detected an error in the bit pattern read from the disk. Data checks are due to hardware
problems when writing or reading data, media defects, or random events.
● System or Program Check — HYPERMAX OS rejected the command. This type of error is indicated to the processor and is
always returned to the requesting program.
● Overrun — HYPERMAX OS cannot receive data at the rate it is transmitted from the host. This error indicates a timing
problem. Resubmitting the I/O operation usually corrects this error.
● Equipment Check —HYPERMAX OS detected an error in hardware operation.
● Environmental — HYPERMAX OS internal test detected an environmental error. Internal environmental tests monitor, check,
and report failures of the critical hardware components. They run at the initial system power-up, upon every software reset
event, and at least once every 24 hours during regular operations.
If an environmental test detects an error condition, it sets a flag to indicate a pending error and presents a unit check status
to the host on the next I/O operation. The test that detected the error condition is then scheduled to run more frequently. If a
device-level problem is detected, it is reported across all logical paths to the device experiencing the error. Subsequent failures
of that device are not reported until the failure is fixed.
If a second failure is detected for a device while there is a pending error-reporting condition in effect, HYPERMAX OS reports
the pending error on the next I/O and then the second error.
HYPERMAX OS reports error conditions to the host and to the Dell EMC Customer Support Center. When reporting to the host,
HYPERMAX OS presents a unit check status in the status byte to the channel whenever it detects an error condition such as a
data check, a command reject, an overrun, an equipment check, or an environmental error.
When presented with a unit check status, the host retrieves the sense data from the storage array and, if logging action has
been requested, places it in the Error Recording Data Set (ERDS). The EREP (Environmental Record Editing and Printing)
program prints the error information. The sense data identifies the condition that caused the interruption and indicates the type
of error and its origin. The sense data format depends on the mainframe operating system. For 2105, 2107, or 3990 controller
emulations, the sense data is returned in the SIM format.

SIM severity reporting


HYPERMAX OS supports SIM severity reporting that enables filtering of SIM severity alerts reported to the multiple virtual
storage (MVS) console.
● All SIM severity alerts are reported by default to the EREP (Environmental Record Editing and Printing program).
● ACUTE, SERIOUS, and MODERATE alerts are reported by default to the MVS console.
The following table lists the default settings for SIM severity reporting.

Table 17. SIM severity alerts

Severity         Description
SERVICE          No system or application performance degradation is expected. No system or application outage has occurred.
MODERATE         Performance degradation is possible in a heavily loaded environment. No system or application outage has occurred.
SERIOUS          A primary I/O subsystem resource is disabled. Significant performance degradation is possible. A system or application outage may have occurred.
ACUTE            A major I/O subsystem resource is disabled, or damage to the product is possible. Performance may be severely degraded. A system or application outage may have occurred.
REMOTE SERVICE   EMC Customer Support Center is performing service or maintenance operations on the system.
REMOTE FAILED    The Service Processor cannot communicate with the EMC Customer Support Center.

Environmental errors
The following table lists the environmental errors in SIM format for HYPERMAX OS 5977 or later.

Table 18. Environmental errors reported as SIM messages

Hex code                 Severity level   Description                                                                           SIM reference code
04DD                     MODERATE         MMCS health check error.                                                              24DD
043E                     MODERATE         An SRDF Consistency Group was suspended.                                              E43E
044D                     MODERATE         An SRDF path was lost.                                                                E44D
044E                     SERVICE          An SRDF path is operational after a previous failure.                                 E44E
0461                     NONE             The M2 is resynchronized with the M1 device. This event occurs once the M2 device is brought back to a Ready state. (a)   E461
0462                     NONE             The M1 is resynchronized with the M2 device. This event occurs once the M1 device is brought back to a Ready state. (a)   E462
0463                     SERIOUS          One of the back-end directors failed into the IMPL Monitor state.                     2463
0465                     NONE             Device resynchronization process has started. (a)                                     E465
0467                     MODERATE         The remote storage system reported an SRDF error across the SRDF links.              E467
046D                     MODERATE         An SRDF group is lost. This event happens, for example, when all SRDF links fail.     E46D
046E                     SERVICE          An SRDF group is up and operational.                                                  E46E
0470                     ACUTE            OverTemp condition based on memory module temperature.                                2470
0471                     ACUTE            The Storage Resource Pool has exceeded its upper threshold value.                     2471
0473                     SERIOUS          A periodic environmental test (env_test9) detected the mirrored device in a Not Ready state.   E473
0474                     SERIOUS          A periodic environmental test (env_test9) detected the mirrored device in a Write Disabled (WD) state.   E474
0475                     SERIOUS          An SRDF R1 remote mirror is in a Not Ready state.                                     E475
0476                     SERVICE          Service Processor has been reset.                                                     2476
0477                     REMOTE FAILED    The Service Processor could not call the EMC Customer Support Center (failed to call home) due to communication problems.   1477
047A                     MODERATE         AC power lost to Power Zone A or B.                                                   247A
047B                     MODERATE         Drop devices after RDF Adapter dropped.                                               E47B
01BA, 02BA, 03BA, 04BA   ACUTE            Power supply or enclosure SPS problem.                                                24BA
047C                     ACUTE            The Storage Resource Pool has Not Ready or Inactive TDATs.                            247C
047D                     MODERATE         Either the SRDF group lost an SRDF link or the SRDF group is lost locally.            E47D
047E                     SERVICE          An SRDF link recovered from failure. The SRDF link is operational.                    E47E
047F                     REMOTE SERVICE   The Service Processor successfully called the EMC Customer Support Center (called home) to report an error.   147F
0488                     SERIOUS          Replication Data Pointer Meta Data Usage reached 90-99%.                              E488
0489                     ACUTE            Replication Data Pointer Meta Data Usage reached 100%.                                E489
0492                     MODERATE         Flash monitor or MMCS drive error.                                                    2492
04BE                     MODERATE         Meta Data Paging file system mirror not ready.                                        24BE
04CA                     MODERATE         An SRDF/A session dropped due to a non-user request. Possible reasons include fatal errors, SRDF link loss, or reaching the maximum SRDF/A host-response delay time.   E4CA
04D1                     REMOTE SERVICE   Remote connection established. Remote control connected.                              14D1
04D2                     REMOTE SERVICE   Remote connection closed. Remote control rejected.                                    14D2
04D3                     MODERATE         Flex filter problems.                                                                 24D3
04D4                     REMOTE SERVICE   Remote connection closed. Remote control disconnected.                                14D4
04DA                     MODERATE         Problems with task/threads.                                                           24DA
04DB                     SERIOUS          SYMPL script generated error.                                                         24DB
04DC                     MODERATE         PC related problems.                                                                  24DC
04E0                     REMOTE FAILED    Communications problems.                                                              14E0
04E1                     SERIOUS          Problems in error polling.                                                            24E1
052F                     NONE             A sync SRDF write failure occurred.                                                   E42F
3D10                     SERIOUS          A SnapVX snapshot failed.                                                             E410

a. Dell EMC recommendation: NONE.

Operator messages

Error messages
On z/OS, SIM messages are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:

*IEA480E 1900,SCU,ACUTE ALERT,MT=2107,SER=0509-ANTPC, 266


REFCODE=1477-0000-0000,SENSE=00101000 003C8F00 40C00000 00000014

PC failed to call home due to communication problems.


Figure 26. z/OS IEA480E acute alert error message format (call home failure)



*IEA480E 1900,SCU,SERIOUS ALERT,MT=2107,SER=0509-ANTPC, 531
REFCODE=2463-0000-0021,SENSE=00101000 003C8F00 11800000

Disk Adapter = Director 21 = 0x2C


One of the Disk Adapters failed into IMPL Monitor state.

Figure 27. z/OS IEA480E service alert error message format (Disk Adapter failure)

*IEA480E 1900,DASD,MODERATE ALERT,MT=2107,SER=0509-ANTPC, 100


REFCODE=E46D-0000-0001,VOLSER=/UNKN/,ID=00,SENSE=00001F10

SRDF Group 1 SIM presented against unrelated resource


An SRDF Group is lost (no links)

Figure 28. z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated
resource)

Event messages
The storage array also reports events to the host and to the service processor. These events are:
● The mirror-2 volume has synchronized with the source volume.
● The mirror-1 volume has synchronized with the target volume.
● Device resynchronization process has begun.
On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:

*IEA480E 0D03,SCU,SERVICE ALERT,MT=3990-3,SER=,


REFCODE=E461-0000-6200

Channel address of the synchronized device

E461 = Mirror-2 volume resynchronized with Mirror-1 volume

Figure 29. z/OS IEA480E service alert error message format (mirror-2 resynchronization)

*IEA480E 0D03,SCU,SERVICE ALERT,MT=3990-3,SER=,


REFCODE=E462-0000-6200

Channel address of the synchronized device

E462 = Mirror-1 volume resynchronized with Mirror-2 volume

Figure 30. z/OS IEA480E service alert error message format (mirror-1 resynchronization)



B
Licensing
This appendix is an overview of licensing on arrays running HYPERMAX OS.
Topics:
• eLicensing
• Open systems licenses

eLicensing
Arrays running HYPERMAX OS use Electronic Licenses (eLicenses).
NOTE: For more information on eLicensing, refer to Dell EMC Knowledgebase article 335235 on the Dell EMC Online
Support website.
You obtain license files from Dell EMC Online Support, copy them to a Solutions Enabler or a Unisphere host, and push them out
to your arrays. The following figure illustrates the process of requesting and obtaining your eLicense.

1. New software purchase, either as part of a new array or as an additional purchase to an existing system.
2. EMC generates a single license file for the array and posts it on support.emc.com for download.
3. A License Authorization Code (LAC), with instructions on how to obtain the license activation file, is emailed to the entitled
   users (one per array).
4. The entitled user retrieves the LAC letter on the Get and Manage Licenses page on support.emc.com, and then downloads
   the license file.
5. The entitled user loads the license file to the array and verifies that the licenses were successfully activated.

Figure 31. eLicensing process

NOTE: To install array licenses, follow the procedure described in the Solutions Enabler Installation Guide and Unisphere
Online Help.
Each license file fully defines all of the entitlements for a specific system, including the license type and the licensed capacity.
To add a feature or increase the licensed capacity, obtain and install a new license file.
Most array licenses are array-based, meaning that they are stored internally in the system feature registration database on the
array. However, there are a number of licenses that are host-based.
Array-based eLicenses are available in the following forms:

● An individual license enables a single feature.
● A license suite is a single license that enables multiple features. License suites are available only if all features are enabled.
● A license pack is a collection of license suites that fit a particular purpose.
To view effective licenses and detailed usage reports, use Solutions Enabler, Unisphere, Mainframe Enablers, Transaction
Processing Facility (TPF), or IBM i platform console.
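For example, on a Solutions Enabler host with access to the array, activated eLicenses can be listed from the command line. The array ID is hypothetical, and option names may vary by Solutions Enabler version:

    # List the eLicenses recorded in the array's feature registration database
    symlmf list -type emclm -sid 000197800123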

Capacity measurements
Array-based licenses include a capacity licensed value that defines the scope of the license. The method for measuring this
value depends on the license's capacity type (Usable or Registered).
Not all product titles are available in all capacity types, as shown below.

Table 19. VMAX All Flash product title capacity types

Capacity type   Product titles
Usable          All F software package titles; All FX software package titles; All zF software package titles; All zFX software package titles
Registered      Storage Direct; RecoverPoint
Other           PowerPath (if purchased separately); Events and Retention Suite

Usable capacity
Usable Capacity is defined as the amount of storage available for use on an array. The usable capacity is calculated as the sum
of all Storage Resource Pool (SRP) capacities available for use. This capacity does not include any external storage capacity.

Registered capacity
Registered capacity is the amount of user data managed or protected by each particular product title. It is independent of the
type or size of the disks in the array.
The method for measuring registered capacity depends on whether the licenses are part of a bundle or individual.

Registered capacity licenses


Registered capacity is measured according to the following:
● Storage Direct
○ The registered capacity of this license is the sum of all Data Domain encapsulated devices that are link targets. When
there are TimeFinder sessions present on an array with only a Storage Direct license and no TimeFinder license, the
capacity is calculated as the sum of all Data Domain encapsulated devices with link targets and the sum of all TimeFinder
allocated source devices and delta RDPs.

Open systems licenses
This section details the licenses available in an open system environment.

License suites
This table lists the license suites available in an open systems environment.

Table 20. VMAX All Flash license suites

All Flash F
Includes: HYPERMAX OS, Priority Controls, OR-DM, Unisphere for VMAX, FAST, SL Provisioning, Workload Planner, Database Storage Analyzer, AppSync, TimeFinder/Snap, TimeFinder/SnapVX, SnapSure
Allows you to:
● Create time windows (symoptmz, symtw)
● Add disk group tiers to FAST policies, enable FAST, and set the following FAST parameters: Swap Non-Visible Devices, Allow Only Swap, User Approval Mode, Maximum Devices to Move, Maximum Simultaneous Devices, Workload Period, Minimum Performance Period (symfast)
● Add virtual pool (VP) tiers to FAST policies and set the following FAST VP-specific parameters: Thin Data Move Mode, Thin Relocation Rate, Pool Reservation Capacity (symfast)
● Perform SL-based provisioning (symconfigure, symsg, symcfg)
● Manage protection and replication for critical applications and databases for Microsoft, Oracle, and VMware environments (AppSync)
● Create new native clone sessions (symclone)
● Create new TimeFinder/Clone emulations (symmir)
● Create new snap sessions and duplicate existing sessions (symsnap)
● Create snap pools and SAVE devices (symconfigure)
● Perform SnapVX Establish and snapshot Link operations (symsnapvx)

All Flash FX
Includes: All Flash F Suite, SRDF, SRDF/Asynchronous, SRDF/Synchronous, SRDF/Star, SRDF/Metro, Replication for File, D@RE, ViPR Suite (Controller and SRM)
Allows you to:
● Perform all tasks available in the All Flash F suite
● Create new SRDF groups and create dynamic SRDF pairs in Adaptive Copy mode (symrdf)
● Create SRDF devices, convert non-SRDF devices to SRDF, add SRDF mirrors to devices in Adaptive Copy mode, set the dynamic-SRDF capable attribute on devices, and create SAVE devices (symconfigure)
● Create dynamic SRDF pairs in Asynchronous mode and set SRDF pairs into Asynchronous mode (symrdf)
● Add SRDF mirrors to devices in Asynchronous mode, create RDFA_DSE pools, and set the following SRDF/A attributes on an SRDF group: Minimum Cycle Time, Transmit Idle, DSE attributes (associating an RDFA-DSE pool with an SRDF group, DSE Threshold, DSE Autostart), and Write Pacing attributes (Write Pacing Threshold, Write Pacing Autostart, Device Write Pacing exemption, TimeFinder Write Pacing Autostart) (symconfigure)
● Create dynamic SRDF pairs in Synchronous mode and set SRDF pairs into Synchronous mode (symrdf)
● Add an SRDF mirror to a device in Synchronous mode (symconfigure)
● Encrypt data and protect it against unauthorized access unless valid keys are provided; this prevents data from being accessed and provides a mechanism to quickly shred data (D@RE)
● Place new SRDF device pairs into an SRDF/Metro configuration and synchronize device pairs (SRDF/Metro)
● Automate storage provisioning and reclamation tasks to improve operational efficiency (ViPR Suite, Controller and SRM)

Individual licenses
These items are available for arrays running HYPERMAX OS and are not in any of the license suites:

Table 21. Individual licenses for open systems environment

License          Allows you to
Storage Direct   Store and retrieve backup data within an integrated environment containing arrays running HYPERMAX OS and Data Domain arrays.
RecoverPoint     Protect data integrity at local and remote sites, and recover data from a point in time using journaling technology.
Ecosystem licenses
These licenses do not apply to arrays:

Table 22. Ecosystem licenses

License                      Allows you to
PowerPath                    Automate data path failover and recovery to ensure applications are always available and remain operational.
Events and Retention Suite   ● Protect data from unwanted changes, deletions, and malicious activity.
                             ● Encrypt data where it is created for protection anywhere outside the server.
                             ● Maintain data confidentiality for selected data at rest and enforce retention at the file level to meet compliance requirements.
                             ● Integrate with third-party anti-virus checking, quota management, and auditing applications.
