VMAX All Flash Array Product Guide
February 2021
Rev. 16
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2018 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Figures..........................................................................................................................................6
Tables........................................................................................................................................... 7
Preface.........................................................................................................................................................................................8
Revision history.................................................................................................................................................................. 14
Contents 3
VMware Virtual Volumes.................................................................................................................................................. 41
vVol components..........................................................................................................................................................41
vVol scalability...............................................................................................................................................................41
vVol workflow...............................................................................................................................................................42
Chapter 5: Provisioning............................................................................................................... 48
Thin provisioning................................................................................................................................................................48
Pre-configuration for thin provisioning.................................................................................................................. 49
Thin devices (TDEVs).................................................................................................................................................49
Thin device oversubscription....................................................................................................................................49
Open Systems-specific provisioning.......................................................................................................................50
Multi-array provisioning....................................................................................................................................................51
SRDF/Metro Resilience.............................................................................................................................................. 71
Disaster recovery facilities........................................................................................................................................ 73
More information......................................................................................................................................................... 75
RecoverPoint......................................................................................................................................................................75
Remote replication using eNAS..................................................................................................................................... 75
Appendix B: Licensing................................................................................................................. 94
eLicensing........................................................................................................................................................................... 94
Capacity measurements............................................................................................................................................ 95
Open systems licenses.....................................................................................................................................................96
License suites............................................................................................................................................................... 96
Individual licenses........................................................................................................................................................ 98
Ecosystem licenses..................................................................................................................................................... 99
Figures
Tables
Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its software and hardware. Functions
that are described in this document may not be supported by all versions of the software or hardware. The product release
notes provide the most up-to-date information about product features.
Contact your Dell EMC representative if a product does not function properly or does not function as described in this
document.
NOTE: This document was accurate at publication time. New versions of this document might be released on Dell EMC
Online Support (https://www.dell.com/support/home). Check to ensure that you are using the latest version of this
document.
Purpose
This document introduces the features of the VMAX All Flash 250F, 450F, 850F, 950F arrays running HYPERMAX OS 5977.
Audience
This document is intended for use by customers and Dell EMC representatives.
Related documentation
The following documentation portfolios contain documents related to the hardware platform and manuals needed to manage
your software and storage system configuration. Also listed are documents for external components that interact with the
VMAX All Flash array.
Hardware platform documents:
Dell EMC VMAX All Flash Site Planning Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS: Provides planning information regarding the purchase and installation of a VMAX 250F, 450F, 850F, 950F with HYPERMAX OS.
Dell EMC VMAX Best Practices Guide for AC Power Connections: Describes the best practices to assure fault-tolerant power to a VMAX3 Family array or VMAX All Flash array.
Dell EMC VMAX Power-down/Power-up Procedure: Describes how to power-down and power-up a VMAX3 Family array or VMAX All Flash array.
Dell EMC VMAX Securing Kit Installation Guide: Describes how to install the securing kit on a VMAX3 Family array or VMAX All Flash array.
E-Lab™ Interoperability Navigator (ELN): Provides a web-based interoperability and solution search portal. You can find the ELN at https://elabnavigator.EMC.com.
Unisphere documents:
EMC Unisphere for VMAX Release Notes: Describes new features and any known limitations for Unisphere for VMAX.
EMC Unisphere for VMAX Installation Guide: Provides installation instructions for Unisphere for VMAX.
EMC Unisphere for VMAX Online Help: Describes the Unisphere for VMAX concepts and functions.
EMC Unisphere for VMAX Performance Viewer Installation Guide: Provides installation instructions for Unisphere for VMAX Performance Viewer.
EMC Unisphere for VMAX Database Storage Analyzer Online Help: Describes the Unisphere for VMAX Database Storage Analyzer concepts and functions.
EMC Unisphere 360 for VMAX Release Notes: Describes new features and any known limitations for Unisphere 360 for VMAX.
EMC Unisphere 360 for VMAX Installation Guide: Provides installation instructions for Unisphere 360 for VMAX.
EMC Unisphere 360 for VMAX Online Help: Describes the Unisphere 360 for VMAX concepts and functions.
SRDF Family CLI User Guide
Dell EMC Solutions Enabler SRDF Family State Tables Guide: Describes the applicable pair states for various SRDF operations.
SRDF Interfamily Connectivity Information: Defines the versions of PowerMaxOS, HYPERMAX OS, and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).
Dell EMC SRDF Introduction: Provides an overview of SRDF, its uses, configurations, and terminology.
Dell EMC Solutions Enabler TimeFinder SnapVX CLI User Guide: Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI commands.
Dell EMC Solutions Enabler TimeFinder Family (Mirror, Clone, Snap, VP Snap) Version 8.2 and higher CLI User Guide: Describes how to configure and manage TimeFinder Mirror, Clone, Snap, and VP Snap environments for Enginuity and HYPERMAX OS using SYMCLI commands.
Dell EMC Solutions Enabler SRM CLI User Guide: Provides Storage Resource Management (SRM) information that is related to various data objects and data handling facilities.
Dell EMC SRDF/Metro vWitness Configuration Guide: Describes how to install, configure, and manage SRDF/Metro using vWitness.
Dell EMC Events and Alerts for PowerMax and VMAX User Guide: Documents the SYMAPI daemon messages, asynchronous errors and message events, SYMCLI return codes, and how to configure event logging.
EMC VMAX Embedded NAS Release Notes: Describes the new features and identifies any known functionality restrictions and performance issues that may exist in the current version.
EMC VMAX Embedded NAS Quick Start Guide: Describes how to configure eNAS on a VMAX3 or VMAX All Flash storage system.
EMC VMAX Embedded NAS File Auto Recovery with SRDF/S: Describes how to install and use EMC File Auto Recovery with SRDF/S.
Dell EMC PowerMax eNAS CLI Reference Guide: A reference for command-line users and script programmers that provides the syntax, error codes, and parameters of all eNAS commands.
Dell EMC PowerProtect Storage Direct Solutions Guide: Provides Storage Direct information that is related to various data objects and data handling facilities.
Dell EMC File System Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct File System Agent.
Dell EMC Database Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Database Application Agent.
Dell EMC Microsoft Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Microsoft Application Agent.
NOTE: ProtectPoint has been renamed to Storage Direct and is included in PowerProtect, Data Protection Suite for Apps, or Data Protection Suite Enterprise Software Edition.
Mainframe Enablers documents:
Dell EMC Mainframe Enablers Installation and Customization Guide: Describes how to install and configure Mainframe Enablers software.
Dell EMC Mainframe Enablers Release Notes: Describes new features and any known limitations.
Dell EMC Mainframe Enablers Message Guide: Describes the status, warning, and error messages generated by Mainframe Enablers software.
Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide: Describes how to configure VMAX system control and management using the EMC Symmetrix Control Facility (EMCSCF).
Dell EMC Mainframe Enablers AutoSwap for z/OS Product Guide: Describes how to use AutoSwap to perform automatic workload swaps between VMAX systems when the software detects a planned or unplanned outage.
Dell EMC Mainframe Enablers Consistency Groups for z/OS Product Guide: Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of data remotely copied by SRDF in the event of a rolling disaster.
Dell EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide: Describes how to use SRDF Host Component to control and monitor remote data replication processes.
Dell EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide: Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient targetless snaps.
Dell EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide: Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control and monitor local data replication processes.
Dell EMC Mainframe Enablers TimeFinder/Mirror for z/OS Product Guide: Describes how to use TimeFinder/Mirror to create Business Continuance Volumes (BCVs) which can then be established, split, reestablished, and restored from the source logical volumes for backup, restore, decision support, or application testing.
Dell EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide: Describes how to use the TimeFinder Utility to condition volumes and devices.
Dell EMC GDDR for SRDF/S with ConGroup Product Guide: Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/S with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/Star Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/Star with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/SQAR with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/A Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR Message Guide: Describes the status, warning, and error messages generated by GDDR.
Dell EMC GDDR Release Notes: Describes new features and any known limitations.
Dell EMC GDDR for Star-A Product Guide: Describes the basic concepts of Dell EMC Geographically Dispersed Disaster Restart (GDDR), how to install it, and how to implement its major features and facilities.
Dell EMC z/OS Migrator Product Guide: Describes how to use z/OS Migrator to perform volume mirror and migrator functions as well as logical migration functions.
Dell EMC z/OS Migrator Message Guide: Describes the status, warning, and error messages generated by z/OS Migrator.
Dell EMC z/OS Migrator Release Notes: Describes new features and any known limitations.
z/TPF documents:
Dell EMC ResourcePak for z/TPF Product Guide: Describes how to configure VMAX system control and management in the z/TPF operating environment.
Dell EMC SRDF Controls for z/TPF Product Guide: Describes how to perform remote replication operations in the z/TPF operating environment.
Dell EMC TimeFinder Controls for z/TPF Product Guide: Describes how to perform local replication operations in the z/TPF operating environment.
Dell EMC z/TPF Suite Release Notes: Describes new features and any known limitations.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Table 1. Typographical conventions used in this content (continued)
[] Square brackets enclose optional values.
| A vertical bar indicates alternate selections. The bar means "or".
{} Braces enclose content that the user must specify, such as x or y or z.
... Ellipses indicate nonessential information that is omitted from the example.
Product information: Dell EMC technical support, documentation, release notes, software updates, or information about Dell EMC products can be obtained at https://www.dell.com/support/home (registration required) or https://www.dellemc.com/en-us/documentation/vmax-all-flash-family.htm.
Technical support: To open a service request through the Dell EMC Online Support (https://www.dell.com/support/home) site, you must have a valid support agreement. Contact your Dell EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Additional support options:
● Support by Product: Dell EMC offers consolidated, product-specific information on the Web at https://support.EMC.com/products. The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussions, relevant Customer Support Forum entries, and a link to Dell EMC Live Chat.
● Dell EMC Live Chat: Open a Chat or instant message session with a Dell EMC Support Engineer.
e-Licensing support: To activate your entitlements and obtain your license files, go to the Service Center on Dell EMC Online Support (https://www.dell.com/support/home). Follow the directions on your License Authorization Code (LAC) letter that is emailed to you.
● Expected functionality may be unavailable because it is not licensed. For help with missing or incorrect entitlements after activation, contact your Dell EMC Account Representative or Authorized Reseller.
● For help with any errors applying license files through Solutions Enabler, contact the Dell EMC Customer Support Center.
● Contact the Dell EMC worldwide Licensing team if you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site:
○ [email protected]
○ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
○ EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Your comments
Your suggestions help improve the accuracy, organization, and overall quality of the documentation. Send your comments and
feedback to: [email protected]
Revision history
The following table lists the revision history of this document.
Table 2. Revision history (continued)
Revision | Description and/or change | Operating system
15 | Updated with new and changed features related to the latest release of the PowerMax OS | PowerMax OS 5978.669.669
14 | Updated with new and changed features related to the latest release of the PowerMax OS | PowerMax OS 5978.444.444
13 | Revised content: updated links | PowerMax OS 5978.444.444
12 | Updated with new and changed features related to the latest release of the PowerMax OS | PowerMax OS 5978.444.444
11 | Revised content: clarify the recommended maximum distance between arrays using SRDF/S | HYPERMAX OS 5977.1125.1125
Chapter 1: VMAX All Flash with HYPERMAX OS
This chapter introduces VMAX All Flash systems and the HYPERMAX OS operating environment.
Topics:
• Introduction to VMAX All Flash with HYPERMAX OS
• Software packages
• HYPERMAX OS
Scale out
Table 3. Symbol legend for VMAX All Flash software features/software package
Standard feature with that model/software package. Optional feature with that model/software package.
a. The 8 Gb/s module auto-negotiates to 2/4/8 Gb/s and the 16 Gb/s module auto-negotiates to 16/8/4 Gb/s using optical
SFP and OM2/OM3/OM4 cabling.
b. Only on VMAX 450F, 850F, and 950F arrays.
Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S, Unisphere for VMAX) on the
storage array, enabling you to manage the array without requiring a dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To manage multiple storage arrays
with a single control pane, use the traditional host-based management interfaces: Unisphere and Solutions Enabler. To this end,
eManagement allows you to link-and-launch a host-based instance of Unisphere.
eManagement is typically preconfigured and enabled at the factory. However, starting with HYPERMAX OS 5977.945.890,
eManagement can be added to arrays in the field. Contact your support representative for more information.
Embedded applications require system memory. The following table lists the amount of memory unavailable to other data
services.
a. Data Movers are added in pairs and must have the same configuration.
b. The 850F and 950F can be configured through Sizer with a maximum of four Data Movers. However, six and eight Data Movers can be ordered by RPQ. As the number of Data Movers increases, the maximum number of I/O cards, logical cores, memory, and maximum capacity also increases.
c. For 2, 4, 6, and 8 Data Movers, respectively.
d. A single 2-port 10GbE Optical I/O module is required by each Data Mover for initial All-Flash configurations. However, that I/O module can be replaced with a different I/O module (such as a 4-port 1GbE or 2-port 10GbE copper) using the normal replacement capability that exists with any eNAS Data Mover I/O module. In addition, more I/O modules can be configured through an I/O module upgrade/add as long as standard rules are followed (no more than three I/O modules per Data Mover, and all I/O modules must occupy the same slot on each director on which a Data Mover resides).
RAID levels
VMAX All Flash arrays provide the following RAID levels:
● VMAX 250F: RAID5 (7+1) (Default), RAID5 (3+1), and RAID6 (6+2)
● VMAX 450F, 850F, 950F: RAID5 (7+1), and RAID6 (14+2)
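The usable share of raw capacity for each layout follows directly from the data-to-parity ratio. The small helper below is purely illustrative arithmetic, not part of any Dell EMC tooling:

```python
def usable_fraction(data_members: int, parity_members: int) -> float:
    """Fraction of raw capacity available for data in a RAID group."""
    return data_members / (data_members + parity_members)

# RAID5 (7+1) and RAID6 (14+2) both expose 87.5% of raw capacity,
# while RAID5 (3+1) and RAID6 (6+2) both expose 75%.
```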
Enabling D@RE
D@RE is a licensed feature that is installed and configured at the factory. Upgrading an existing array to use D@RE is possible, but disruptive: the upgrade requires reinstalling the array and may involve a full data backup and restore. Before upgrading, plan how to manage any data already on the array. Dell EMC Professional Services offers services to help you implement D@RE.
Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated password file. This password file
is secured in the Lockbox. The Lockbox is protected using MMCS-specific stable system values (SSVs) of the primary MMCS.
These are the same SSVs that protect Secure Service Credentials (SSC).
Compromising the MMCS drive or copying Lockbox and keystore files off the array causes the SSV tests to fail. Compromising
the entire MMCS only gives an attacker access if they also successfully compromise SSC.
There are no backdoor keys or passwords to bypass D@RE security.
Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each physical drive in the array, including spare drives. To
ensure that D@RE uses the correct key for a given drive:
● DEKs stored in the array include a unique key tag and key metadata; this information is included with the key material when the DEK is wrapped (encrypted) for use in the array.
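The key-handling concepts above can be pictured with a short sketch. Everything here is an illustrative assumption: the KDF, its parameters, and the metadata layout stand in for the array's actual (unpublished) implementation, and the AES key-wrap step itself is omitted:

```python
import hashlib

def derive_kek(password_file: bytes, salt: bytes) -> bytes:
    # Derive a 256-bit key-encrypting key from a randomly generated
    # password file, analogous to the keystore protection described
    # above. PBKDF2 and its parameters are assumptions for illustration.
    return hashlib.pbkdf2_hmac("sha256", password_file, salt, 100_000, dklen=32)

def dek_metadata(dek: bytes, drive_serial: str) -> dict:
    # Model the unique key tag and metadata carried with each wrapped
    # DEK so the array can match a key to its drive. A real system
    # would also AES-wrap `dek` under the KEK; that step is omitted here.
    return {
        "key_tag": hashlib.sha256(drive_serial.encode() + dek).hexdigest()[:16],
        "drive_serial": drive_serial,
        "dek_bits": len(dek) * 8,
    }
```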
Audit logs
The audit log records major activities on an array, including:
● Host-initiated actions
● Physical component changes
● Actions on the MMCS
● D@RE key management events
● Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof, so event contents cannot be altered. Users with Auditor access can view, but not modify, the log.
Data erasure
Dell EMC Data Erasure uses specialized software to erase information on arrays. It mitigates the risk of information
dissemination, and helps secure information at the end of the information lifecycle. Data erasure:
● Protects data from unauthorized access
● Ensures secure data migration by making data on the source array unreadable
● Supports compliance with internal policies and regulatory requirements
Data Erasure overwrites data at the lowest application-addressable level to drives. The number of overwrites is configurable
from three (the default) to seven with a combination of random patterns on the selected arrays.
An optional certification service is available to provide a certificate of erasure. Drives that fail erasure are delivered to customers
for final disposal.
For individual flash drives, Secure Erase operations erase all physical flash areas on the drive which may contain user data.
The available data erasure services are:
● Dell EMC Data Erasure for Full Arrays—Overwrites data on all drives in the system when replacing, retiring or re-purposing
an array.
● Dell EMC Data Erasure/Single Drives—Overwrites data on individual drives.
● Dell EMC Disk Retention—Enables organizations that must retain all media to retain failed drives.
● Dell EMC Assessment Service for Storage Security—Assesses your information protection policies and suggests a
comprehensive security strategy.
All erasure services are performed on-site in the security of the customer’s data center and include a Data Erasure Certificate
and report of erasure results.
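The configurable multi-pass overwrite described above can be sketched as follows. This is a file-level illustration only: real erasure tools write at the lowest application-addressable level of the drive, not through a file system:

```python
import os

def overwrite_with_random(path: str, passes: int = 3) -> int:
    # Overwrite a target with random patterns. The pass count is
    # configurable from three (the default) to seven, mirroring the
    # range stated above.
    if not 3 <= passes <= 7:
        raise ValueError("pass count must be between 3 and 7")
    size = os.path.getsize(path)
    for _ in range(passes):
        with open(path, "r+b") as f:
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    return passes
```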
Vault to flash
VMAX All Flash arrays initiate a vault operation when the system is powered down or goes offline, or when environmental conditions occur, such as the loss of a data center due to an air conditioning failure.
Each array comes with Standby Power Supply (SPS) modules. On a power loss, the array uses the SPS power to write the
system mirrored cache to flash storage. Vaulted images are fully redundant; the contents of the system mirrored cache are
saved twice to independent flash storage.
Compression is pre-configured on new VMAX All Flash arrays at the factory. Existing VMAX All Flash arrays in the field can have
compression added to them. Contact your Support Representative for more information.
Further characteristics of compression are:
● All supported data services, such as SnapVX, SRDF, and encryption, are supported with compression.
● Compression is available on open systems (FBA) only (including eNAS). It is not available for CKD arrays, including those with a mix of FBA and CKD devices. Any open systems array with compression enabled cannot have CKD devices added to it.
● ProtectPoint operations are still supported to Data Domain arrays.
● Compression is switched on and off through Solutions Enabler and Unisphere.
● Compression efficiency can be monitored for SRPs, storage groups, and volumes.
● Activity Based Compression: the most active tracks are held in cache and not compressed until they move from cache to
disk. This feature helps improve the overall performance of the array while reducing wear on the flash drives.
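Activity Based Compression can be pictured as a destage-time decision. The threshold and mechanism below are illustrative assumptions, not the array's actual heuristics:

```python
import zlib

def destage_track(track: bytes, access_count: int, hot_threshold: int = 100) -> bytes:
    # The most active tracks are held in cache uncompressed; data is
    # compressed only when it moves from cache to disk. The threshold
    # value here is a made-up illustration.
    if access_count >= hot_threshold:
        return track                 # hot track: defer compression
    return zlib.compress(track)      # cold track: compress on destage
```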
Table 8. Unisphere tasks (continued)
Section: Allows you to:
Data Protection: View and manage local replication, monitor and manage replication pools, create and view device groups, and monitor and manage migration sessions.
Performance: Monitor and manage array dashboards, perform trend analysis for future capacity planning, and analyze data. Set preferences, such as general, dashboards, charts, reports, data imports, and alerts, for performance management tasks.
Databases: Troubleshoot database and storage issues, and launch Database Storage Analyzer.
System: View and display dashboards, active jobs, alerts, array attributes, and licenses.
Support: View online help for Unisphere tasks.
Unisphere also has a Representational State Transfer (REST) API. With this API you can access performance and configuration
information, and provision storage arrays. You can use the API in any programming environment that supports standard REST
clients, such as web browsers and programming platforms that can issue HTTP requests.
Workload Planner
Workload Planner displays performance metrics for applications. Use Workload Planner to:
● Model the impact of migrating a workload from one storage system to another.
● Model proposed new workloads.
● Assess the impact of moving one or more workloads off of a given array running HYPERMAX OS.
● Determine current and future resource shortfalls that require action to maintain the requested workloads.
Unisphere 360
Unisphere 360 is an on-premises management solution that provides a single window across arrays running HYPERMAX OS at a
single site. Use Unisphere 360 to:
● Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of Unisphere management storage
system data.
● View the system health, capacity, alerts and capacity trends for your Data Center.
● View all storage systems from all enrolled Unisphere instances in one place.
● View details on performance and capacity.
● Link and launch to Unisphere instances running V8.2 or higher.
● Manage Unisphere 360 users and configure authentication and authorization rules.
● View details of visible storage arrays, including current and target storage.
Solutions Enabler
Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your storage environment.
SYMCLI commands are invoked from a management host, either interactively on the command line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands. Configuration and status
information is maintained in a host database file, reducing the number of enquiries from the host to the arrays.
Use SYMCLI to:
● Configure array software (For example, TimeFinder, SRDF, Open Replicator)
● Monitor device configuration and status
● Perform control operations on devices and data objects
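Because SYMCLI commands are ordinary executables, they script easily. The helper below composes command lines such as `symcfg list` and `symdev list` (standard SYMCLI verbs); check exact options against the Solutions Enabler CLI documentation:

```python
import subprocess
from typing import List, Optional

def symcli_argv(command: str, *args: str, sid: Optional[str] = None) -> List[str]:
    # Build an argument vector for a SYMCLI command, optionally scoped
    # to one array with -sid, e.g. symcli_argv("symdev", "list", sid="1234").
    argv = [command]
    if sid is not None:
        argv += ["-sid", sid]
    argv += list(args)
    return argv

def run_symcli(argv: List[str]) -> str:
    # Execute on a management host where Solutions Enabler is installed.
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout
```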
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access performance and configuration information, and to provision storage arrays. It can be used in any programming environment that supports standard REST clients, such as web browsers and programming platforms that can issue HTTP requests.
Mainframe Enablers
The Dell EMC Mainframe Enablers are software components that allow you to monitor and manage arrays running HYPERMAX
OS in a mainframe environment:
● ResourcePak Base for z/OS
Enables communication between mainframe-based applications (provided by Dell EMC or independent software vendors)
and PowerMax/VMAX arrays.
● SRDF Host Component for z/OS
Monitors and controls SRDF processes through commands executed from a host. SRDF maintains a real-time copy of data at
the logical volume level in multiple arrays located in physically separate sites.
● Dell EMC Consistency Groups for z/OS
Ensures the consistency of data remotely copied by SRDF feature in the event of a rolling disaster.
● AutoSwap for z/OS
Handles automatic workload swaps between arrays when an unplanned outage or problem is detected.
● TimeFinder SnapVX
With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the Storage Resource Pool (SRP)
of the source device, eliminating the concepts of target devices and source/target pairing. SnapVX point-in-time copies
are accessible to the host through a link mechanism that presents the copy on another device. TimeFinder SnapVX and
HYPERMAX OS support backward compatibility to traditional TimeFinder products, including TimeFinder/Clone, TimeFinder
VP Snap, and TimeFinder/Mirror.
● Data Protector for z Systems (zDP™)
With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a granular level of application
recovery from unintended changes to data. zDP achieves this by providing automated, consistent point-in-time copies of
data from which an application-level recovery can be conducted.
● TimeFinder/Clone Mainframe Snap Facility
Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone operations involve full volumes or
datasets where the amount of data at the source is the same as the amount of data at the target. TimeFinder VP Snap
leverages clone technology to create space-efficient snaps for thin devices.
● TimeFinder/Mirror for z/OS
Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to ESTABLISH, SPLIT, RE-ESTABLISH
and RESTORE from the source logical volumes.
● TimeFinder Utility
Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging datasets. This allows BCVs to be
mounted and used.
GDDR does not provide replication and recovery services itself. Rather, GDDR monitors and automates the services that other
Dell EMC and third-party products provide and that are required for continuous operations or business restart. GDDR
facilitates business continuity by generating scripts that can be run on demand, such as scripts that restart business
applications following a major data center incident or resume replication following unplanned link outages.
Scripts are customized when invoked by an expert system that tailors the steps based on the configuration and the event that
GDDR is managing. Through automatic event detection and end-to-end automation of managed technologies, GDDR removes
human error from the recovery process and allows it to complete in the shortest time possible.
The GDDR expert system is also invoked to automatically generate planned procedures, such as moving compute operations
from one data center to another. This ability to move from scheduled DR test weekend activities to regularly scheduled data
center swaps without disrupting application workloads is the gold standard for highly available compute operations.
SMI-S Provider
Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management.
This initiative has developed a standard management interface that resulted in a comprehensive specification (SMI-Specification
or SMI-S).
SMI-S defines an open storage management interface that enables the interoperability of storage management technologies from
multiple vendors. These technologies are used to monitor and control storage resources in multivendor SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the SMI-S Provider installation.
VASA Provider
The VASA Provider enables VMAX All Flash management software to inform vCenter of how VMDK storage, including vVols,
is configured and protected. These capabilities are defined by Dell EMC and include characteristics such as disk type, type
of provisioning, storage tiering and remote replication status. This allows vSphere administrators to make quick and informed
decisions about virtual machine placement. VASA offers the ability for vSphere administrators to complement their use of
plugins and other tools to track how devices hosting vVols are configured to meet performance and availability needs. Details
about VASA Provider replication groups can be viewed on the Unisphere vVols dashboard.
locations. SRM moves a step beyond storage management and provides a platform for cross-domain correlation of device
information and resource topology, and enables a broader view of your storage environment and enterprise data center.
SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net. The Watch4net dashboard
view displays information to support decisions regarding storage capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across multiple locations. It gives a quick
overview of the overall capacity status in the environment, raw capacity usage, usable capacity, used capacity by purpose,
usable capacity by pools, and service levels.
SRDF/Cluster Enabler
Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover cluster functionality. Cluster Enabler
enables Windows Server 2012 (including R2) Standard and Datacenter editions running Microsoft Failover Clusters to operate
across multiple connected storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to Dell EMC Cluster Enabler for Microsoft Failover Clusters
software. The Cluster Enabler plug-in architecture consists of a CE base module component and separately available plug-in
modules, which provide your chosen storage replication technology.
SRDF/CE supports:
● Synchronous and asynchronous mode (SRDF modes of operation on page 67 summarizes these modes)
● Concurrent and cascaded SRDF configurations (SRDF multi-site solutions on page 62 summarizes these configurations)
communications content. The following software components are distributed separately and can be installed individually or in
any combination:
● SRDF Controls for z/TPF
Monitors and controls SRDF processes with functional entries entered at the z/TPF Prime CRAS (computer room agent
set).
● TimeFinder Controls for z/TPF
Provides a business continuance solution consisting of TimeFinder SnapVX, TimeFinder/Clone, and TimeFinder/Mirror.
● ResourcePak for z/TPF
Provides PowerMax and VMAX configuration and statistical reporting and extended features for SRDF Controls for z/TPF
and TimeFinder Controls for z/TPF.
Extended features
SRDF/TimeFinder Manager for IBM i extended features provide support for the IBM independent ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/offline on an IBM i host
without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or TimeFinder operations on arrays
attached to IBM i hosts, including:
● Display and assign TimeFinder SnapVX devices.
● Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
● Present one or more target devices containing an IASP image to another host for business continuance (BC) processes.
Extended feature control operations can be accessed:
● From the SRDF/TimeFinder Manager menu-driven interface.
● From the command line using SRDF/TimeFinder Manager commands and associated IBM i commands.
AppSync
Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft
and Oracle applications and VMware environments. After defining service plans, application owners can protect, restore, and
clone production data quickly with item-level granularity by using the underlying Dell EMC replication technologies. AppSync also
provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
● Applications—Oracle, Microsoft SQL Server, Microsoft Exchange, VMware vStorage VMFS and NFS datastores, and file
systems.
● Replication Technologies—SRDF, SnapVX, RecoverPoint, XtremIO Snapshot, VNX Advanced Snapshots, VNXe Unified
Snapshot, and ViPR Snapshot.
NOTE: For VMAX All Flash arrays, AppSync is available in a starter bundle. The AppSync Starter Bundle provides the license
for a scale-limited, yet fully functional version of AppSync. For more information, see the AppSync Starter Bundle with
VMAX All Flash Product Brief available on the Dell EMC Online Support Website.
3
Open Systems Features
This chapter introduces the open systems features of VMAX All Flash arrays.
Topics:
• HYPERMAX OS support for open systems
• Backup and restore using PowerProtect Storage Direct and Data Domain
• VMware Virtual Volumes
Open Systems-specific provisioning on page 50 has more information on provisioning storage in an open systems environment.
The Dell EMC Support Matrix in the E-Lab Interoperability Navigator at http://elabnavigator.emc.com has the most recent
information on HYPERMAX open systems capabilities.
Backup
A LUN is the basic unit of backup in Storage Direct. For each LUN, Storage Direct creates a backup image on the Data Domain
array. You can group backup images to create a backup set. One use of the backup set is to capture all the data for an
application as a point-in-time image.
Backup process
To create a backup of a LUN, Storage Direct:
1. Uses SnapVX to create a local snapshot of the LUN on the VMAX All Flash array (the primary storage array).
2. Copies the snapshot data to a vdisk on the Data Domain array.
When the vdisk contains all the data for the LUN, Data Domain converts the data into a static image. Metadata is then
added to the image, and Data Domain catalogs the resultant backup image.
Restore
Storage Direct provides two forms of data restore:
● Object level restore from a selected backup image
● Full application rollback restore
File system agent: Provides facilities to back up, manage, and restore application LUNs.
Database application agent: Provides facilities to back up, manage, and restore DB2 databases, Oracle databases, or SAP with
Oracle database data.
Microsoft application agent: Provides facilities to back up, manage, and restore Microsoft Exchange and Microsoft SQL Server
databases.
More information
More information about Storage Direct, its components, how to configure them, and how to use them is available in:
● PowerProtect Storage Direct Solutions Guide
● File System Agent Installation and Administration Guide
● Database Application Agent Installation and Administration Guide
● Microsoft Application Agent Installation and Administration Guide
vVol components
To support management capabilities of vVols, the storage/vCenter environment requires the following:
● EMC VMAX VASA Provider – The VASA Provider (VP) is a software plug-in that uses a set of out-of-band management
APIs (VASA version 2.0). The VASA Provider exports storage array capabilities and presents them to vSphere through
the VASA APIs. vVols are managed by way of vSphere through the VASA Provider APIs (create/delete) and not with the
Unisphere for VMAX user interface or Solutions Enabler CLI. After vVols are set up on the array, Unisphere and Solutions
Enabler only support vVol monitoring and reporting.
● Storage Containers (SC)—Storage containers are chunks of physical storage used to logically group vVols. SCs are based
on the grouping of Virtual Machine Disks (VMDKs) into specific Service Levels. SC capacity is limited only by hardware
capacity. At least one SC per storage system is required, but multiple SCs per array are allowed. SCs are created and
managed on the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support management of SCs.
● Protocol Endpoints (PE)—Protocol endpoints are the access points from the hosts to the array. PEs are compliant with
FC and replace the use of LUNs and mount points. vVols are "bound" to a PE, and
the bind and unbind operations are managed through the VP APIs, not with the Solutions Enabler CLI. Existing multi-path
policies and NFS topology requirements can be applied to the PE. PEs are created and managed on the array by the Storage
Administrator. Unisphere and Solutions Enabler CLI support management of PEs.
vVol scalability
The vVol scalability limits are:
a. vVol Snapshots are managed through vSphere only. You cannot use Unisphere or Solutions Enabler to create them.
vVol workflow
Requirements
Install and configure these applications:
● Unisphere for VMAX V8.2 or later
● Solutions Enabler CLI V8.2 or later
● VASA Provider V8.2 or later
Instructions for installing Unisphere and Solutions Enabler are in their respective installation guides. Instructions for installing
the VASA Provider are in the Dell EMC PowerMax VASA Provider Release Notes.
Procedure
The creation of a vVol-based virtual machine involves both the storage administrator and the VMware administrator:
Storage administrator: The storage administrator uses Unisphere or Solutions Enabler to create the storage and present it to
the VMware environment:
1. Create one or more storage containers on the storage array.
This step defines how much storage, and from which service level, the VMware user can provision.
2. Create Protocol Endpoints and provision them to the ESXi hosts.
VMware administrator: The VMware administrator uses the vSphere Web Client to deploy the VM on the storage array:
1. Add the VASA Provider to the vCenter.
This allows vCenter to communicate with the storage array.
2. Create a vVol datastore from the storage container.
3. Create the VM storage policies.
4. Create the VM in the vVol datastore, selecting one of the VM storage policies.
● Multiple Allegiance (MA)
● Sequential Data Striping
● Multi-Path Lock Facility
● Product Suite for z/TPF
● HyperSwap
● Global Mirror
● Transparent Cloud Tiering
NOTE: A VMAX All Flash array can participate in a z/OS Global Mirror (XRC) configuration only as a secondary.
written to the DLm from the PowerMax over a FICON connection. DLm then stores the data on any backend storage that DLm
supports. Optionally, the DLm Long-Term Retention feature can then be used, independent of TCT, to move the data to a Dell
EMC Elastic Cloud Storage (ECS) solution.
A cloud object store is required for TCT support to operate. This cloud must support the OpenStack SWIFT protocol and is used
to store cloud metadata. If ECS is used as the cloud, the ECS can be the same or a different ECS from any ECS deployed with
the DLm.
A REST API proxy server is required in each z/OS image accessing a TCT-enabled PowerMax. This proxy server runs as a
separate address space in z/OS.
The ResourcePak Base for z/OS Product Guide provides additional information about TCT support, including TCT support
requirements and restrictions. It also discusses how to set up and run the Dell EMC REST API proxy and the Dell EMC REST API
utility.
IBM 2107 support
When VMAX All Flash arrays emulate an IBM 2107, they externally represent the array serial number as an alphanumeric
value to be compatible with IBM command output. Internally, the arrays retain a numeric serial number for IBM 2107
emulation. HYPERMAX OS handles the correlation between the alphanumeric and numeric serial numbers.
a. A split is a logical partition of the storage array, identified by unique devices, SSIDs, and host serial number. The maximum
storage array host address per array is inclusive of all splits.
The following table lists the maximum LPARs per port based on the number of LCUs with active paths:
Cascading configurations
Cascading configurations greatly enhance FICON connectivity between local and remote sites by using switch-to-switch
extensions of the CPU to the FICON network. These cascaded switches communicate over long distances using a small number
of high-speed lines called interswitch links (ISLs). A maximum of two switches may be connected together within a path
between the CPU and the storage array.
The same switch vendor must be used throughout a cascaded configuration. To support cascading, each switch vendor requires
specific models, hardware features, software features, configuration settings, and restrictions. Specific IBM CPU models,
operating system release levels, host hardware, and HYPERMAX levels are also required.
The Dell EMC Support Matrix, available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.emc.com, has
the most up-to-date information on switch support.
5
Provisioning
This chapter introduces storage provisioning.
Topics:
• Thin provisioning
• Multi-array provisioning
Thin provisioning
VMAX All Flash arrays are configured in the factory with thin provisioning pools ready for use. Thin provisioning improves
capacity utilization and simplifies storage management. It also enables storage to be allocated and accessed on demand from a
pool of storage that services one or many applications. LUNs can be “grown” over time as space is added to the data pool with
no impact to the host or application. Data is widely striped across physical storage (drives) to deliver better performance than
standard provisioning.
NOTE: Data devices (TDATs) are pre-configured at the factory, while the host-addressable storage devices (TDEVs) are
created by either the customer or customer support, depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
● Enabling more storage to be presented to a host than is physically consumed
● Allocating storage only as needed from a shared thin provisioning pool
● Making data layout easier through automated wide striping
● Reducing the steps required to accommodate growth
Thin provisioning allows you to:
● Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler
● Add the TDEVs to a storage group
● Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default Storage Resource Pool.
Pre-configuration for thin provisioning
VMAX All Flash arrays are custom-built and pre-configured with array-based software applications, including a factory pre-
configuration for thin provisioning that includes:
● Data devices (TDAT) — an internal device that provides physical storage used by thin devices.
● Virtual provisioning pool — a collection of data devices of identical emulation and protection type, all of which reside on
drives of the same technology type and speed. The drives in a data pool are from the same disk group.
● Disk group — a collection of physical drives within the array that share the same drive technology and capacity. RAID
protection options are configured at the disk group level. Dell Technologies strongly recommends that you use one or more
of the RAID data protection schemes for all data devices.
● Storage Resource Pools — one (default) Storage Resource Pool is pre-configured on the array. This process is automatic
and requires no setup. You cannot modify Storage Resource Pools, but you can list and display their configuration. You can
also generate reports detailing the demand storage groups are placing on the Storage Resource Pools.
Thin devices (TDEVs) have no storage allocated until the first write is issued to the device. Instead, the array allocates only a
minimum allotment of physical storage from the pool, and maps that storage to a region of the thin device including the area
targeted by the write.
These initial minimum allocations are performed in units called thin device extents. Each extent for a thin device is 1 track (128
KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data device to which the thin
device extent is allocated. Reading an area of a thin device that has not been mapped does not trigger allocation operations.
Reading an unmapped block returns a block in which each byte is equal to zero.
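The allocate-on-write behavior described above can be sketched with a small model (illustrative only, not array code). It assumes a simple dictionary of extents and uses the 1-track (128 KB) extent size from the text:

```python
# Illustrative model (not array code) of thin-device allocation: extents of
# 1 track (128 KB) are allocated on first write, and reads of unmapped
# areas return zeros without triggering any allocation.
EXTENT_BYTES = 128 * 1024  # one thin-device extent = 1 track (128 KB)

class ThinDevice:
    def __init__(self, capacity_bytes):
        self.capacity_bytes = capacity_bytes
        self.extents = {}  # extent index -> bytearray, allocated on demand

    def write(self, offset, data):
        # First write to a region allocates its backing extent from the pool.
        for i, byte in enumerate(data):
            index, pos = divmod(offset + i, EXTENT_BYTES)
            extent = self.extents.setdefault(index, bytearray(EXTENT_BYTES))
            extent[pos] = byte

    def read(self, offset, length):
        # Reads of unmapped regions return zeros and allocate nothing.
        out = bytearray()
        for i in range(length):
            index, pos = divmod(offset + i, EXTENT_BYTES)
            extent = self.extents.get(index)
            out.append(extent[pos] if extent else 0)
        return bytes(out)

    @property
    def allocated_bytes(self):
        return len(self.extents) * EXTENT_BYTES

dev = ThinDevice(capacity_bytes=10 * EXTENT_BYTES)
dev.write(0, b"hello")                  # first write allocates one extent
print(dev.allocated_bytes)              # 131072 bytes (one 128 KB extent)
print(dev.read(5 * EXTENT_BYTES, 4))    # unmapped region reads as zeros
```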
When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage
groups.
The sum of the reported capacities of the thin devices using a given pool can exceed the available storage capacity of the pool.
Thin devices whose capacity exceeds that of their associated pool are "oversubscribed".
Over-subscription allows presenting larger than needed devices to hosts and applications without having the physical drives to
fully allocate the space represented by the thin devices.
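The over-subscription arithmetic can be expressed directly; the capacities below are made-up example figures:

```python
# Sketch of the over-subscription arithmetic described above: the reported
# capacities of thin devices may sum to more than the pool's usable capacity.
def oversubscription_ratio(thin_device_gb, pool_capacity_gb):
    """Ratio of presented (reported) capacity to physical pool capacity."""
    return sum(thin_device_gb) / pool_capacity_gb

presented = [500, 500, 250]   # reported thin-device capacities (GB), example
pool = 1000                   # usable pool capacity (GB), example
ratio = oversubscription_ratio(presented, pool)
print(ratio)                  # 1.25 -> the pool is oversubscribed
```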
[Figure: Components of an auto-provisioning group (SYM-002353). A masking view associates an initiator group (the host
initiators, that is, the HBAs of an ESX host and its VMs), a port group (front-end ports), and a storage group (devices).]
Initiator group
A logical grouping of Fibre Channel initiators. An initiator group can be either a parent group, which can
contain other initiator groups, or a child group, which contains initiators. Mixing initiators and child initiator
groups in the same group is not supported.
Port group
A logical grouping of Fibre Channel front-end director ports. A port group can contain up to 32 ports.
Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the storage group
when the masking view is created, whether the group is cascaded or standalone. Often there is a correlation
between a storage group and a host application. One or more storage groups may be assigned to
an application to simplify management of the system. Storage groups can also be shared among
applications.
Cascaded storage group
A parent storage group comprised of multiple storage groups (parent storage group members) that
contain child storage groups comprised of devices. By assigning child storage groups to the parent
storage group members and applying the masking view to the parent storage group, the masking view
inherits all devices in the corresponding child storage groups.
Masking view
An association between one initiator group, one port group, and one storage group. When a masking
view is created, if a group within the view is a parent, the contents of its children are used: the
initiators from the child initiator groups and the devices from the child storage groups. Depending on the
server and application requirements, each server or group of servers may have one or more masking views
that associate a set of thin devices with an application, server, or cluster of servers.
Multi-array provisioning
The multi-array Provisioning Storage wizard simplifies the task of identifying the optimal target array and provisioning storage on
that array.
Unisphere for PowerMax 9.2 provides a system-level provisioning launch point that takes array-independent inputs (storage
group name, device count and size, and (optionally) response time target or initiator filter), selects ports that are based on
current utilization and port group best practices, and returns component impact scores for all locally connected arrays running
HYPERMAX OS 5977 or PowerMaxOS 5978.
You can also select a provisioning template and provision new storage using the wizard. Storage group capacity information
and response time targets that are already part of the provisioning template are populated when the wizard opens. The most
suitable ports (based on specified options) are selected, and a list of all locally connected arrays (V3 and higher) is returned.
The list is sorted by the impact of the new workload on the target arrays.
Host I/O limits (quotas) can be used to limit the amount of Front End (FE) Bandwidth and I/O operations per second (IOPS)
that can be consumed by a set of storage volumes over a set of director ports. Host I/O limits are defined as storage group
attributes – the maximum bandwidth (in MB per second) and the maximum IOPS. The Host I/O limit for a storage group can be
either active or inactive.
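A minimal sketch of how such a quota check might look; the function and field names are ours, not a documented API:

```python
# Illustrative check (names are ours, not a documented API) of host I/O
# limits: a storage group's quota caps front-end bandwidth (MB/s) and IOPS.
def within_host_io_limit(observed_mb_per_s, observed_iops,
                         max_mb_per_s=None, max_iops=None):
    """Return True if the observed workload stays within the active limits.

    A limit of None means that attribute is not capped (inactive).
    """
    if max_mb_per_s is not None and observed_mb_per_s > max_mb_per_s:
        return False
    if max_iops is not None and observed_iops > max_iops:
        return False
    return True

print(within_host_io_limit(800, 40_000, max_mb_per_s=1000, max_iops=50_000))   # True
print(within_host_io_limit(1200, 40_000, max_mb_per_s=1000, max_iops=50_000))  # False
```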
6
Native local replication with TimeFinder
This chapter introduces the local replication features.
Topics:
• About TimeFinder
• Mainframe SnapVX and zDP
• Snapshot policy
About TimeFinder
Dell EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse
refreshes, or any other process that requires parallel access to production data.
Previous VMAX families offered multiple TimeFinder products, each with its own characteristics and use cases. These
traditional products required a target volume to retain snapshot or clone data.
HYPERMAX OS introduces TimeFinder SnapVX, which combines the best aspects of the traditional TimeFinder offerings
with increased scalability and ease of use.
TimeFinder SnapVX emulates the following legacy replication products:
● FBA devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder VP Snap
● Mainframe (CKD) devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder/Snap
○ Dell EMC Dataset Snap
○ IBM FlashCopy (Full Volume and Extent Level)
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
● For snapshots, this is done by using redirect on write technology (ROW).
● For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource Pool of the source device -
sharing tracks between snapshot versions and also with the source device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256 snapshots per volume. Each
snapshot can have a name and an automatic expiration date.
Access to snapshots
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a target volume). Target volumes
are standard VMAX All Flash TDEVs. Up to 1024 target volumes can be linked to the snapshots of the source volumes. The 1024
links can all be to the same snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots
from the same source volume. However, a target volume may be linked only to one snapshot at a time.
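These linking rules can be sketched as a simple validity check (names and data structures are illustrative, not a documented API):

```python
# Sketch of the linking rules stated above: up to 1024 targets may be linked
# to the snapshots of one source volume, and a target may be linked to only
# one snapshot at a time. Names and structures here are illustrative.
MAX_LINKS_PER_SOURCE = 1024

def can_link(existing_links, target, snapshot):
    """existing_links: list of (target, snapshot) pairs for one source volume."""
    if len(existing_links) >= MAX_LINKS_PER_SOURCE:
        return False                      # source already at its link limit
    if any(t == target for t, _ in existing_links):
        return False                      # target is already linked to a snapshot
    return True

links = [("tgt1", "snap_0800")]
print(can_link(links, "tgt2", "snap_0800"))  # True: new target, same snapshot
print(can_link(links, "tgt1", "snap_0815"))  # False: tgt1 is already linked
```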
Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked targets. There is no limit to the
number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
● Nocopy Mode (Default): SnapVX does not copy data to the linked target volume but still makes the point-in-time image
accessible through pointers to the snapshot. The target device is modifiable and retains the full image in a space-efficient
manner even after unlinking from the point-in-time.
● Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the linked target volume. This
creates a complete copy of the point-in-time image that remains available after the target is unlinked.
Targetless snapshots
With the TimeFinder SnapVX management interfaces you can take a snapshot of an entire VMAX All Flash storage group using
a single command. VMAX All Flash supports up to 64K storage groups, enough to provide one for each application even in
the most demanding environments. Because storage groups are already created for masking views in most cases, TimeFinder
SnapVX reuses this existing structure, reducing the administration required to maintain the application and its replication
environment.
Creation of SnapVX snapshots does not require preconfiguration of extra volumes. In turn, this reduces the amount of cache
that SnapVX snapshots use and simplifies implementation. Snapshot creation and automatic termination can easily be scripted.
The following Solutions Enabler example creates a snapshot with a 2-day retention period. The command can be scheduled to
run as part of a script to create multiple versions of the snapshot, each sharing tracks where possible with the other
snapshots and the source devices. Use a cron job or scheduler to run the script periodically, creating up to 256
snapshots of the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
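As a quick sanity check (our own arithmetic, not from the product documentation), the schedule described above stays within the 256-snapshot limit; the sketch also builds the command string such a script would run:

```python
# Verifying that a 15-minute interval with 2 days of retention fits within
# the 256-snapshot-per-volume limit, and building the command string the
# scheduled script would run for the example array and storage group above.
SNAPSHOT_LIMIT = 256

interval_minutes = 15
retention_days = 2
snapshots_retained = retention_days * 24 * 60 // interval_minutes
print(snapshots_retained)                 # 192, within the 256-snapshot limit

command = (f"symsnapvx -sid 001 -sg StorageGroup1 "
           f"-name sg1_snap establish -ttl -delta {retention_days}")
print(command)
```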
If a restore operation is required, any of the snapshots created by this example can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The snapshot data is preserved
during the restore process and can be used again should the snapshot data be required for a future restore.
Secure snaps
Secure snaps prevent administrators or other high-level users from deleting snapshot data, intentionally or not. Secure
snaps are also immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data
Pointer (RDP) space on the array.
When the administrator creates a secure snapshot, they assign it an expiration date and time. The administrator can express
the expiration either as a delta from the current date or as an absolute date. Once the expiration date passes, and if the
snapshot has no links, HYPERMAX OS automatically deletes the snapshot. Before its expiration, administrators can only extend
the expiration date; they cannot shorten the date or delete the snapshot. If a secure snapshot expires, and it has a volume
linked to it, or an active restore session, the snapshot is not deleted. However, it is no longer considered secure.
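The expiry behavior described above can be summarized as a small decision function (illustrative only, not array code):

```python
# Illustrative decision logic (not array code) for the secure-snapshot rules
# above: an expired snapshot with no links is deleted; an expired snapshot
# with links or an active restore session is kept but loses its secure status.
def on_expiry(has_links, has_active_restore):
    """Return the disposition of a secure snapshot once its expiry passes."""
    if has_links or has_active_restore:
        return "retained, no longer secure"
    return "deleted"

print(on_expiry(has_links=False, has_active_restore=False))  # deleted
print(on_expiry(has_links=True, has_active_restore=False))   # retained, no longer secure
```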
NOTE: Secure snapshots may only be terminated after they expire or by customer-authorized Dell EMC support. Refer to
Knowledgebase article 498316 for more information.
NOTE: Unmount target volumes before issuing the relink command to ensure that the host operating system does not
cache any filesystem data. If accessing through VPLEX, ensure that you follow the procedure outlined in the technical note
VPLEX: Leveraging Array Based and Native Copy Technologies, available on the Dell EMC support website.
Cascading snapshots
Presenting sensitive data to test or development environments often requires that the source of the data be disguised
beforehand. Cascaded snapshots provide this separation and disguise, as shown in the following image.
Snapshot policy
The Snapshot policy feature provides snapshot orchestration at scale (1024 snaps per storage group). The feature simplifies
snapshot management for standard and cloud snapshots.
Snapshots can be used to recover from data corruption, accidental deletion, or other damage, offering continuous data
protection. However, a large number of snapshots can be difficult to manage. The Snapshot policy feature provides an
end-to-end solution to create, schedule, and manage standard (local) and cloud snapshots.
A snapshot policy specifies the Recovery Point Objective (RPO): how often a snapshot should be taken and how many
snapshots should be retained. A snapshot may also be specified as secure; secure snapshots cannot be terminated by users
before their time to live (TTL), derived from the snapshot policy's interval and maximum count, has expired. Up to four
policies can be associated with a storage group, and a snapshot policy can be associated with many storage groups.
The following rules apply to snapshot policies:
● The maximum number of snapshot policies that can be created on a storage system is 20. Multiple storage groups can be
associated with a snapshot policy.
● A maximum of four snapshot policies can be associated with an individual storage group.
● A storage group or device can have a maximum of 256 manual snapshots.
● A storage group or device can have a maximum of 1024 snapshots.
● The oldest unused snapshots are removed or recycled in accordance with the specified policy max_count value.
● When devices are added to a snapshot policy storage group, snapshot policies that apply to the storage group are applied to
the added devices.
● When devices are removed from a snapshot policy storage group, snapshot policies that apply to the storage group are no
longer applied to the removed devices.
● If overlapping snapshot policies are applied to storage groups, they run and take snapshots independently.
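The association limits above can be sketched as a simple validity check; the helper and its names are ours, not a documented API:

```python
# Sketch (our own helper, not a documented API) validating the association
# rules listed above: at most 20 snapshot policies per storage system and
# at most four policies associated with any one storage group.
MAX_POLICIES_PER_SYSTEM = 20
MAX_POLICIES_PER_GROUP = 4

def can_associate(group_policies, new_policy, total_system_policies):
    """Check whether new_policy may be associated with a storage group."""
    if total_system_policies > MAX_POLICIES_PER_SYSTEM:
        return False          # too many policies defined on the system
    if new_policy in group_policies:
        return False          # already associated with this group
    return len(group_policies) < MAX_POLICIES_PER_GROUP

print(can_associate(["daily", "hourly"], "weekly", total_system_policies=5))   # True
print(can_associate(["p1", "p2", "p3", "p4"], "p5", total_system_policies=5))  # False
```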
Compliance information is provided for each snapshot policy that is directly associated with (not inherited by) a storage group.
Snapshot compliance for a storage group is taken as the lowest compliance value for any of the snapshot policies that are
directly associated with the storage group.
Compliance for a snapshot policy that is associated with a storage group is based on the number of good snapshots within the
retention count. The retention count is translated to a retention period for compliance calculation. The retention period is the
snapshot interval multiplied by the maximum snapshot count. For example, a 1-hour interval with a snapshot count of 30 means
a 30-hour retention period.
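The retention-period arithmetic from the text, written out:

```python
# The compliance retention-period arithmetic described above: the retention
# period is the snapshot interval multiplied by the maximum snapshot count.
def retention_period_hours(interval_hours, max_count):
    return interval_hours * max_count

print(retention_period_hours(1, 30))      # 30-hour retention period
print(retention_period_hours(0.25, 192))  # 48 hours (15-minute interval)
```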
The compliance threshold for the green-to-yellow change is the maximum count; that is, all snapshots must be good and in
place for compliance to be green. If the count is even one snapshot short (missing or failed), compliance turns yellow.
The compliance threshold value for yellow to red is stored in the snapshot policy definition. Once the number of good snapshots
falls below this value, compliance turns red.
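The retention-period and compliance-color rules above can be sketched as a short, illustrative Python function. The function and parameter names are hypothetical, not part of any product API:

```python
def retention_period_hours(interval_hours: float, max_count: int) -> float:
    """Retention period = snapshot interval x maximum snapshot count."""
    return interval_hours * max_count

def compliance_color(good_snapshots: int, max_count: int, red_threshold: int) -> str:
    """Map the number of good snapshots within the retention count to a
    compliance color, following the thresholds described above."""
    if good_snapshots >= max_count:
        return "green"       # all snapshots good and in place
    if good_snapshots >= red_threshold:
        return "yellow"      # at least one snapshot missing or failed
    return "red"             # good snapshots fell below the policy threshold

# Example: a 1-hour interval with a 30-snapshot maximum count
# gives a 30-hour retention period.
print(retention_period_hours(1, 30))
```

The yellow-to-red threshold (`red_threshold` here) corresponds to the value stored in the snapshot policy definition.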
Snapshot compliance is calculated by polling the storage system once an hour for SnapVX-related information for storage groups that have associated snapshot policies. The returned snapshot information is summarized into the compliance entries in the database.
When the maximum count of snapshots for a snapshot policy is changed, the compliance for the associated storage group or service level combination changes with it. Compliance values are updated immediately to reflect the new maximum count.
If compliance calculation is performed during the creation of a snapshot, then an establish-in-progress state may be detected.
This is acceptable for the most recent snapshot but is considered failed for any older snapshot.
SRDF 2-site solutions
The following table describes SRDF 2-site solutions.
[Figure: R1 at Site A replicating over SRDF links to R2 at Site B, with a TimeFinder background copy]
Table 15. SRDF 2-site solutions (continued)

SRDF/Cluster Enabler (CE)
● Integrates SRDF/S or SRDF/A with Microsoft Failover Clusters (MSCS) to automate or semi-automate site failover.
● Complete solution for restarting operations in cluster environments (MSCS with Microsoft Failover Clusters).
● Expands the range of cluster storage and management capabilities while ensuring full protection of the SRDF remote replication.
[Site topology: cluster hosts at Site A and Site B attached through Fibre Channel hubs/switches, with VLAN switches on an extended IP subnet and SRDF/S or SRDF/A links between the arrays]

SRDF and VMware Site Recovery Manager
Completely automates storage-based disaster restart operations for VMware environments in SRDF topologies.
[Site topology: a protection side and a recovery side, each with a vCenter and SRM server running Solutions Enabler software, connected over an IP network]

a. In some circumstances, using SRDF/S over distances greater than 200 km may be feasible. Contact your Dell EMC representative for more information.
SRDF multi-site solutions
The following table describes SRDF multi-site solutions.
Table 16. SRDF multi-site solutions

Concurrent SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
● Data on the primary site is concurrently replicated to two secondary sites.
● Replication to each remote site can use SRDF/S, SRDF/A, or adaptive copy.
[Site topology: R11 at Site A replicates over SRDF/S to R2 at Site B and over adaptive copy to R2 at Site C]

Cascaded SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
Data on the primary site (Site A) is synchronously mirrored to a secondary site (Site B), and then asynchronously mirrored from the secondary site to a tertiary site (Site C).
[Site topology: R1 at Site A connects over SRDF/S to R21 at Site B, which connects over SRDF/A to R2 at Site C]
Interfamily compatibility
SRDF supports connectivity between different operating environments and arrays. Arrays running HYPERMAX OS can connect to legacy arrays running older operating environments. In mixed configurations where arrays run different versions, only the SRDF features of the lowest version are supported.
VMAX All Flash arrays can connect to:
● PowerMax arrays running PowerMaxOS
● VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
● VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
● VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity ePack
NOTE: When you connect between arrays running different operating environments, limitations may apply. Information about which SRDF features are supported, and the limitations that apply to 2-site and 3-site solutions, is in the SRDF Interfamily Connectivity Information.
This interfamily connectivity allows you to add the latest hardware platform/operating environment to an existing SRDF
solution, enabling technology refreshes.
R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are generally Read/Write accessible to
the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal operations, host I/O writes to the R1
device are mirrored over the SRDF links to the R2 device. In general, data on R2 devices is not available to the application host
while the SRDF relationship is active. In SRDF synchronous mode, however, an R2 device can be in Read Only mode that allows
a host to read from the R2.
In a typical environment:
● The application production host has Read/Write access to the R1 device.
● An application host connected to the R2 device has Read Only (Write Disabled) access to the R2 device.
[Figure: open systems hosts — the production host has Read/Write access to the R1 device; an optional remote host has Read Only access to the R2 device; R1 data copies to R2 over the SRDF links]
R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices are typically used in 3-site concurrent configurations where data on the R11 site is mirrored to two secondary (R2)
arrays:
[Figure: R11 source device at Site A replicating concurrently to R2 target devices at Site B and Site C]
R21 devices
R21 devices have a dual role and are used in cascaded 3-site configurations where:
● Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
● Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:
[Figure: production host writing to R1 at the primary site, replicating over SRDF links to R21 and then to R2]
The R21 device acts as an R2 device that receives updates from the R1 device, and as an R1 device that sends updates to the R2 device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21 device.
In arrays that run Enginuity, the R21 device can be diskless. That is, it consists solely of cache memory and does not have any
associated storage device. It acts purely to relay changes in the R1 device to the R2 device. This capability requires the use of
thick devices. Systems that run PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless R21 device is
not possible on arrays running those environments.
R22 devices
R22 devices:
● Have two R1 devices, only one of which is active at a time.
● Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to decrease the complexity and time
required to complete failover and failback operations.
● Let you recover without removing old SRDF pairs and creating new ones.
Dynamic device personalities
SRDF devices can dynamically swap “personality” between R1 and R2. After a personality swap:
● The R1 in the device pair becomes the R2 device, and
● The R2 becomes the R1 device.
Swapping R1/R2 personalities allows the application to be restarted at the remote site without interrupting replication if an
application fails at the production site. After a swap, the R2 side (now R1) can control operations while being remotely mirrored
at the primary (now R2) site.
An R1/R2 personality swap is not supported:
● If the R2 device is larger than the R1 device.
● If the device to be swapped is participating in an active SRDF/A session.
● In SRDF/EDP topologies, because diskless R11 or R22 devices are not valid end states.
● If the device to be swapped is the target device of any TimeFinder or EMC Compatible flash operations.
Synchronous mode
Synchronous mode maintains a real-time mirror image of data between the R1 and R2 devices over distances up to 200 km (125
miles). Host data is written to both arrays in real time. The application host does not receive the acknowledgment until the data
has been stored in the cache of both arrays.
Asynchronous mode
Asynchronous mode maintains a dependent-write consistent copy between the R1 and R2 devices over unlimited distances. On receiving data from the application host, SRDF on the R1 side of the link writes that data to its cache. It also batches the received data into delta sets, which are transferred to the R2 device in timed cycles. The application host receives the acknowledgment once data is successfully written to the cache on the R1 side.
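The delta-set behavior can be modeled with a small, illustrative sketch. All class and method names here are hypothetical; this is a conceptual model of the cycle mechanism, not product code:

```python
from collections import deque

class AsyncReplicator:
    """Conceptual model of SRDF/A: writes are acknowledged once they are
    cached on the R1 side, batched into delta sets, and shipped to the
    R2 side in timed cycles. Names are hypothetical, not a real API."""

    def __init__(self):
        self.capture = []        # delta set currently collecting writes
        self.pending = deque()   # closed delta sets awaiting transfer
        self.r2_image = {}       # simulated remote copy

    def host_write(self, lba, data):
        self.capture.append((lba, data))  # write lands in R1 cache
        return "ack"                      # host is acknowledged immediately

    def cycle_switch(self):
        """At each timed cycle boundary, close the capture set."""
        if self.capture:
            self.pending.append(self.capture)
            self.capture = []

    def transfer(self):
        """Apply one whole delta set to R2; applying complete sets in
        order keeps the remote image dependent-write consistent."""
        if self.pending:
            for lba, data in self.pending.popleft():
                self.r2_image[lba] = data
```

The key property the sketch illustrates is that the host acknowledgment never waits on the R2 side; only the cycle transfer moves data across the link.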
SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides of an SRDF link.
Group properties
The properties of an SRDF group are:
● Label (name)
● Set of ports on the local array used to communicate over the SRDF links
● Set of ports on the remote array used to communicate over the SRDF links
● Local group number
● Remote group number
● One or more pairs of devices
The devices in the group share the ports, and the associated CPU resources, of the directors that host those ports.
Types of group
There are two types of SRDF group:
● Static: defined in the local array's configuration file.
● Dynamic: defined using SRDF management tools, with their properties stored in the array's cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
NOTE: Two or more SRDF links per SRDF group are required for redundancy and fault tolerance.
The relationship between the resources on a director (CPU cores and ports) varies depending on the operating environment.
HYPERMAX OS
On arrays running HYPERMAX OS:
● The relationship between the SRDF emulation and resources on a director is configurable:
○ One director/multiple CPU cores/multiple ports
○ Connectivity (ports in the SRDF group) is independent of compute power (number of CPU cores). You can change the
amount of connectivity without changing compute power.
● Each director has up to 16 front end ports, any or all of which can be used by SRDF. Both the SRDF Gigabit Ethernet and
SRDF Fibre Channel emulations can use any port.
● The data path for devices in an SRDF group is not fixed to a single port. Instead, the path for data is shared across all ports
in the group.
SRDF consistency
Many applications, especially database systems, use dependent write logic to ensure data integrity. That is, each write operation
must complete successfully before the next can begin. Without write dependency, write operations could get out of sequence
resulting in irrecoverable data loss.
SRDF implements write dependency using the consistency group (also known as SRDF/CG). A consistency group consists of a
set of SRDF devices that use write dependency. For each device in the group, SRDF ensures that write operations propagate to
the corresponding R2 devices in the correct order.
However, if the propagation of any write operation to any R2 device in the group cannot complete, SRDF suspends propagation
to all the group's R2 devices. This suspension maintains the integrity of the data on the R2 devices. While the R2 devices are
unavailable, SRDF continues to store write operations on the R1 devices. It also maintains a list of those write operations in
their time order. When all R2 devices in the group become available, SRDF propagates the outstanding write operations, in the
correct order, for each device in the group.
SRDF/CG is available for both SRDF/S and SRDF/A.
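The suspend-and-replay behavior of a consistency group can be sketched as follows. This is an illustrative model with hypothetical names, not the product implementation:

```python
class ConsistencyGroup:
    """Sketch of SRDF/CG behavior: if propagation to any R2 device in the
    group cannot complete, propagation to every R2 in the group is
    suspended and writes are journaled on the R1 side in time order."""

    def __init__(self, r2_devices):
        self.r2 = {dev: {} for dev in r2_devices}         # remote images
        self.reachable = {dev: True for dev in r2_devices}
        self.journal = []   # time-ordered writes held on the R1 side

    def write(self, dev, lba, data):
        if all(self.reachable.values()) and not self.journal:
            self.r2[dev][lba] = data               # normal propagation
        else:
            self.journal.append((dev, lba, data))  # suspended: journal it

    def restore(self, dev):
        """Mark a device reachable again; once the whole group is
        available, replay the journal in its original order."""
        self.reachable[dev] = True
        if all(self.reachable.values()):
            for d, lba, data in self.journal:
                self.r2[d][lba] = data
            self.journal.clear()
```

Note that a failure on any one R2 device suspends propagation to all of them; this is what preserves dependent-write consistency across the group.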
Data migration
Data migration is the one-time movement of data from one array to another. Once the movement is complete, the data is
accessed from the secondary array. A common use of migration is to replace an older array with a new one.
Dell EMC support personnel can assist with the planning and implementation of migration projects.
SRDF multisite configurations enable migration to occur in any of these ways:
● Replace R2 devices.
● Replace R1 devices.
● Replace both R1 and R2 devices simultaneously.
For example, this diagram shows the use of concurrent SRDF to replace the secondary (R2) array in a 2-site configuration:
[Figure: top — the original 2-site configuration, R1 on Array A replicating to R2 on Array B; lower left — the interim 3-site configuration, R11 on Array A replicating concurrently to Array B and to the new Array C; lower right — the final 2-site configuration, R1 on Array A replicating to R2 on Array C]
Here:
● The top section of the diagram shows the original, 2-site configuration.
● The lower left section of the diagram shows the interim, 3-site configuration with data being copied to two secondary arrays.
● The lower right section of the diagram shows the final, 2-site configuration where the new secondary array has replaced the
original one.
The Dell EMC SRDF Introduction contains more information about using SRDF to migrate data.
More information
Here are other Dell EMC documents that contain more information about the use of SRDF in replication and migration:
SRDF Introduction
SRDF and NDM Interfamily Connectivity Information
SRDF/Cluster Enabler Plug-in Product Guide
Using the Dell EMC Adapter for VMware Site Recovery Manager Technical Book
Dell EMC SRDF Adapter for VMware Site Recovery Manager Release Notes
SRDF/Metro
In traditional SRDF configurations, only the R1 devices are Read/Write accessible to the application hosts. The R2 devices are
Read Only and Write Disabled.
In SRDF/Metro configurations, however:
● Both the R1 and R2 devices are Read/Write accessible to the application hosts.
● Application hosts can write to both the R1 and R2 side of the device pair.
● R2 devices assume the same external device identity as the R1 devices. The identity includes the device geometry and
device WWN.
This shared identity means that R1 and R2 devices appear to application hosts as a single, virtual device across two arrays.
Deployment options
SRDF/Metro can be deployed in either a single, multipathed host environment or in a clustered host environment:
[Figure: a multi-pathed host with Read/Write access to both sides of the device pair, and a host cluster in which each host has Read/Write access]
SRDF/Metro Resilience
If either of the devices in an SRDF/Metro configuration becomes Not Ready, or connectivity between the devices is lost, SRDF/Metro must decide which side remains available to the application host. There are two mechanisms that SRDF/Metro can use: Device Bias and Witness.
Device Bias
Device pairs for SRDF/Metro are created with a bias attribute. By default, the create pair operation sets the bias to the R1
side of the pair. That is, if a device pair becomes Not Ready (NR) on the SRDF link, the R1 (bias side) remains accessible
to the hosts, and the R2 (nonbias side) becomes inaccessible. However, if there is a failure on the R1 side, the host loses all
connectivity to the device pair. The Device Bias method cannot make the R2 device available to the host.
Witness
A witness is a third party that mediates between the two sides of an SRDF/Metro pair to help:
● Decide which side remains available to the host
● Avoid a "split brain" scenario when both sides attempt to remain accessible to the host despite the failure
The witness method intelligently chooses the side on which to continue operations when the bias method alone would not keep a surviving, nonbiased array available to the host.
There are two forms of the Witness mechanism:
● Array Witness: The operating environment of a third array is the mediator.
● Virtual Witness (vWitness): A daemon running on a separate, virtual machine is the mediator.
When both sides run PowerMaxOS 5978, SRDF/Metro takes these criteria into account when selecting the side to remain available to the hosts (in priority order):
1. The side that has connectivity to the application host (requires PowerMaxOS 5978.444.444 or later)
2. The side that has an SRDF/A DR leg
3. Whether the SRDF/A DR leg is synchronized
4. The side that has more than 50% of its RA and FA directors available
5. The side that is currently the bias side
The first of these criteria that one array meets and the other does not stops the selection process; the side that meets that criterion is the preferred winner.
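The priority-ordered selection can be sketched as a short function. The criterion keys below are illustrative labels for the criteria listed above, not product terminology:

```python
def preferred_winner(side_a: dict, side_b: dict) -> str:
    """Sketch of the priority-ordered winner selection: the first
    criterion that one side meets and the other does not decides which
    side remains available. Keys are illustrative, not a real API."""
    criteria = [
        "host_connectivity",    # connectivity to the application host
        "has_srdf_a_dr_leg",    # has an SRDF/A DR leg
        "dr_leg_synchronized",  # that DR leg is synchronized
        "majority_directors",   # >50% of RA/FA directors available
        "is_bias_side",         # currently the bias side
    ]
    for c in criteria:
        if side_a.get(c) and not side_b.get(c):
            return "A"
        if side_b.get(c) and not side_a.get(c):
            return "B"
    return "A"  # illustrative tie-break when no criterion differs
```

For example, a side with application-host connectivity is preferred over the bias side, because host connectivity is evaluated first.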
Disaster recovery facilities
Devices in SRDF/Metro groups can simultaneously be in other groups that replicate data to a third, disaster recovery site. There
are two replication solutions. The number available in any SRDF/Metro configuration depends on the version of the operating
environment that the participating arrays run:
● Highly-available disaster recovery – in configurations that consist of arrays that run PowerMaxOS 5978.669.669 and later
● Independent disaster recovery – in configurations that run all supported versions of PowerMaxOS 5978 and HYPERMAX OS
5977
[Figure: SRDF/Metro Smart DR — R11 on Array A and R21 on Array B form the SRDF/Metro pair; each array has an SRDF/A or Adaptive Copy Disk link to R22 on Array C, with one link active and the other inactive]
Notice that the device names differ from a standard SRDF/Metro configuration. This difference reflects the change in the device functions when SRDF/Metro Smart DR is in operation. For instance, as the diagram shows, the R1 side of the SRDF/Metro configuration on Array A now has the name R11, because it is the R1 device to both the:
● R21 device on Array B in the SRDF/Metro configuration
● R22 device on Array C in the SRDF/Metro Smart DR configuration
Arrays A and B both have SRDF/Asynchronous or Adaptive Copy Disk connections to the DR array (Array C). However, only
one of those connections is active at a time (in this example the connection between Array A and Array C). The two SRDF/A
connections are known as the active and standby connections.
If a problem prevents Array A replicating data to Array C, the standby link between Array B and Array C becomes active and
replication continues. Array A and Array B keep track of the data replicated to Array C to enable replication and avoid data loss.
Independent disaster recovery
Devices in SRDF/Metro groups can simultaneously be part of device groups that replicate data to a third, disaster-recovery site. Either or both sides of the Metro region can be replicated; you can choose whichever configuration suits your business needs. The following diagram shows the possible configurations:
NOTE: When the SRDF/Metro session is using a witness, the R1 side of the Metro pair can change based on the witness
determination of the preferred side.
Single-sided replication
[Figure: either the R11 (Metro R1) side or the R21 (Metro R2) side of the SRDF/Metro pair replicates over SRDF/A or Adaptive Copy Disk to an R2 device at Site C]
Double-sided replication
[Figure: both sides of the SRDF/Metro pair replicate over SRDF/A or Adaptive Copy Disk to their own R2 devices at the disaster-recovery sites]
The device types differ from a stand-alone SRDF/Metro configuration. This difference reflects the change in the devices'
function when disaster recovery facilities are in place. For instance, when the R2 side is replicated to a disaster recovery site, its
type changes to R21 because it is both the:
● R2 device in the SRDF/Metro configuration
● R1 device in the disaster-recovery configuration
When an SRDF/Metro configuration uses a witness for resilience protection, the two sides periodically renegotiate the winning and losing sides. If the winning and losing sides switch as a result of renegotiation:
● An R11 device becomes an R21 device. That device was the R1 device for both the SRDF/Metro and disaster recovery
configurations. Now the device is the R2 device of the SRDF/Metro configuration but it remains the R1 device of the
disaster recovery configuration.
● An R21 device becomes an R11 device. That device was the R2 device in the SRDF/Metro configuration and the R1 device
of the disaster recovery configuration. Now the device is the R1 device of both the SRDF/Metro and disaster recovery
configurations.
More information
Here are other Dell EMC documents that contain more information on SRDF/Metro:
SRDF Introduction
SRDF/Metro vWitness Configuration Guide
SRDF Interfamily Connectivity Information
RecoverPoint
HYPERMAX OS 5977.1125.1125 introduced support for RecoverPoint on VMAX storage arrays. RecoverPoint is a comprehensive
data protection solution designed to provide production data integrity at local and remote sites. RecoverPoint also provides the
ability to recover data from a point in time using journaling technology.
The primary reasons for using RecoverPoint are:
● Remote replication to heterogeneous arrays
● Protection against local and remote data corruption
● Disaster recovery
● Secondary device repurposing
● Data migrations
RecoverPoint systems support local and remote replication of data that applications are writing to SAN-attached storage.
The systems use existing Fibre Channel infrastructure to integrate seamlessly with existing host applications and data storage
subsystems. For remote replication, the systems use existing Fibre Channel connections to send the replicated data over a
WAN, or use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of operations to a
secondary site in the event of a disaster at the primary site.
Previous implementations of RecoverPoint relied on a splitter to track changes made to protected volumes. The current
implementation relies on a cluster of RecoverPoint nodes, provisioned with one or more RecoverPoint storage groups, leveraging
SnapVX technology, on the storage array. Volumes in the RecoverPoint storage groups are visible to all the nodes in the cluster,
and available for replication to other storage arrays.
RecoverPoint allows data replication of up to 8,000 LUNs for each RecoverPoint cluster and up to eight different RecoverPoint
clusters attached to one array. Supported array types include PowerMax, VMAX All Flash, VMAX3, VMAX, VNX, VPLEX, and
XtremIO.
RecoverPoint is licensed and sold separately. For more information about RecoverPoint and its capabilities see the Dell EMC
RecoverPoint Product Guide.
Blended local and remote replication
This chapter introduces TimeFinder integration with SRDF.
Topics:
• Integration of SRDF and TimeFinder
• R1 and R2 devices in TimeFinder operations
• SRDF/AR
• TimeFinder and SRDF/A
• TimeFinder and SRDF/S
NOTE: Some TimeFinder operations are not supported on devices that SRDF protects. The Dell EMC Solutions Enabler
TimeFinder SnapVX CLI User Guide has further information.
The rest of this chapter summarizes the ways of integrating SRDF and TimeFinder.
SRDF/AR
SRDF/AR combines SRDF and TimeFinder to provide a long-distance disaster restart solution. SRDF/AR can be deployed over 2
or 3 sites:
● In 2-site configurations, SRDF/DM is deployed with TimeFinder.
● In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to replicate the deltas.
[Figure: hosts at Site A and Site B; R1 at Site A replicates over SRDF to R2 at Site B, with TimeFinder background copies at each site]
Figure 19. SRDF/AR 2-site solution
In this configuration, data on the SRDF R1/TimeFinder target device is replicated across the SRDF links to the SRDF R2 device.
The SRDF R2 device is also a TimeFinder source device. TimeFinder replicates this device to a TimeFinder target device. You
can map the TimeFinder target device to the host connected to the secondary array at Site B.
In a 2-site configuration, SRDF operations are independent of production processing on both the primary and secondary arrays.
You can utilize resources at the secondary site without interrupting SRDF operations.
Use SRDF/AR 2-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the SRDF target sites.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
[Figure: SRDF/AR 3-site solution — R1 at Site A is mirrored over SRDF/S to R2 at Site B; a TimeFinder copy at Site B is replicated over SRDF adaptive copy to the R2 device at Site C]
If Site A (primary site) fails, the R2 device at Site B provides a restartable copy with zero data loss. Site C provides an
asynchronous restartable copy.
If both Site A and Site B fail, the device at Site C provides a restartable copy with controlled data loss. The amount of data loss
is a function of the replication cycle time between Site B and Site C.
SRDF and TimeFinder control commands to R1 and R2 devices for all sites can be issued from Site A. No controlling host is
required at Site B.
Use SRDF/AR 3-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the secondary SRDF target site and the
tertiary SRDF target site.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
● Provide disaster recovery testing, point-in-time backups, decision support operations, third-party software testing, and
application upgrade testing or the testing of new applications.
Requirements/restrictions
In a 3-site SRDF/AR multi-hop configuration, SRDF/S host I/O to Site A is not acknowledged until Site B has acknowledged it.
This can cause a delay in host response time.
Overview
Data migration is a one-time movement of data from one array (the source) to another array (the target). Typical examples are
data center refreshes where data is moved from an old array after which the array is retired or re-purposed. Data migration is
not data movement due to replication (where the source data is accessible after the target is created) or data mobility (where
the target is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
To plan a data migration, consider the potential impact on your business, including the:
● Type of data to be migrated
● Site location(s)
● Number of systems and applications
● Amount of data to be moved
● Business needs and schedules
PowerMaxOS provides migration facilities for:
● Open systems
● IBM System i
● Mainframe
Data migration for open systems
The data migration features available for open system environments are:
● Non-disruptive migration
● Open Replicator
● PowerPath Migration Enabler
● Data migration using SRDF/Data Mobility
● Space and zero-space reclamation
The application host connection to both arrays uses FC, and the SRDF connection between arrays uses FC or GigE.
The migration controls should be run from a control host and not from the application host. The control host should have
visibility to both the source array and target array.
The following devices and components are not supported with NDM:
● CKD devices
● eNAS data
● Storage Direct, FAST.X relationships and associated data
Array configuration
● The target array must be running HYPERMAX OS 5977.811.784 or higher. This includes VMAX3 Family arrays and VMAX All
Flash arrays.
● The source array must be a VMAX array running Enginuity 5876 with required ePack (contact Dell EMC for required ePack).
● SRDF is used for data migration, so zoning of SRDF ports between the source and target arrays is required. Note that an
SRDF license is not required, as there is no charge for NDM.
● The NDM RDF group is configured with a minimum of two paths on different directors for redundancy and fault tolerance. If more paths are found, up to eight paths are configured.
● If SRDF is not normally used in the migration environment, it may be necessary to install and configure RDF directors and
ports on both the source and target arrays and physically configure SAN connectivity.
Host configuration
● The migration controls should be run from a control host and not from the application host.
● Both the source and the target array must be visible to the controlling host that runs the migration commands.
○ Multiple masking views on a storage group using the same initiator group are valid only when:
■ Port groups on the target array exist for each masking view, and
■ Ports in the port group are selected
○ A storage group must be a parent or stand-alone group. A child storage group with a masking view on the child group is
not supported.
○ If the selected storage group is a parent, its child groups are also migrated.
○ The names of storage groups and their children (if any) must not exist on the target array.
○ Gatekeeper devices in a storage group are not migrated.
● Devices cannot:
○ Have a mobility ID
○ Have a nonbirth identity, when the source array runs Enginuity 5876
○ Have the BCV attribute
○ Be encapsulated
○ Be RP devices
○ Be Data Domain devices
○ Be vVOL devices
○ Be R2 or Concurrent SRDF devices
○ Be masked to FCoE (in the case of source arrays), iSCSI, non-ACLX, or NVMe over FC ports
○ Be part of another data migration operation
○ Be part of an ORS relationship
○ Be in other masked storage groups
○ Have a device status of Not Ready
● Devices can be part of TimeFinder sessions.
● Devices can act as R1 devices but cannot be part of an SRDF/Star or SRDF/SQAR configuration.
● The names of masking groups to migrate must not exist on the target array.
● The names of initiator groups to migrate may exist on the target array. However, the aggregate set of host initiators in the
initiator groups that the masking groups use must be the same. Also, the effective ports flags on the host initiators must
have the same setting on both arrays.
● The names of port groups to migrate may exist on the target array, as long as the groups on the target array are in the
logging history table for at least one port.
● The status of the target array must be as follows:
○ If a target-side Storage Resource Pool (SRP) is specified for the migration, that SRP must exist on the target array.
○ The SRP to be used for target-side storage must have enough free capacity to support the migration.
○ The target side must be able to support the additional devices required to receive the source-side data.
○ All initiators provisioned to an application on the source array must also be logged into ports on the target array.
Open Replicator
Open Replicator enables copying data (full or incremental copies) from qualified arrays within a storage area network (SAN)
infrastructure to or from arrays running HYPERMAX OS. Open Replicator uses the Solutions Enabler SYMCLI symrcopy
command.
Use Open Replicator to migrate and back up/archive existing data between arrays running HYPERMAX OS and third-party storage arrays within the SAN infrastructure, without interfering with host applications and ongoing business operations.
Use Open Replicator to:
● Pull from source volumes on qualified remote arrays to a volume on an array running HYPERMAX OS.
● Perform online data migrations from qualified storage to an array running HYPERMAX OS with minimal disruption to host applications.
NOTE: Open Replicator cannot copy a volume that is in use by TimeFinder.
Hot
The Control device is Read/Write online to the host while the copy operation is in progress.
Cold
The Control device is Not Ready (offline) to the host while the copy operation is in progress.
Pull
A pull operation copies data to the control device from the remote device(s).
Push
A push operation copies data from the control device to the remote device(s).
Pull operations
On arrays running HYPERMAX OS, Open Replicator supports up to 4096 pull sessions.
For pull operations, the volume can be in a live state during the copy process. The local hosts and applications can begin to
access the data as soon as the session begins, even before the data copy process has completed.
These features enable rapid and efficient restoration of remotely vaulted volumes and migration from other storage platforms.
Copy on First Access ensures the appropriate data is available to a host operation when it is needed. The following image shows
an Open Replicator hot pull.
[Figure: Open Replicator hot pull — a point-in-time (PiT) copy on the remote array is pulled to a standard (STD) device on the array running HYPERMAX OS while the host accesses the device]
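The Copy on First Access behavior during a hot pull can be illustrated with a small sketch. The class and method names are hypothetical; this models the mechanism, not the product implementation:

```python
class HotPullVolume:
    """Sketch of copy-on-first-access during a hot pull: hosts may read
    the control volume while the background copy is still running, and a
    read of a not-yet-copied block pulls that block from the remote
    device first. Names are hypothetical, not a real API."""

    def __init__(self, remote_blocks: dict):
        self.remote = remote_blocks  # source data on the remote array
        self.local = {}              # control volume being populated
        self.copied = set()

    def read(self, block: int):
        if block not in self.copied:  # first access: pull on demand
            self.pull(block)
        return self.local[block]

    def pull(self, block: int):
        self.local[block] = self.remote[block]
        self.copied.add(block)

    def background_copy_step(self):
        """The background process copies one remaining block per step."""
        for block in self.remote:
            if block not in self.copied:
                self.pull(block)
                return
```

The on-demand pull in `read` is what lets applications access the data as soon as the session begins, before the background copy completes.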
The pull can also be performed in cold mode to a static volume. The following image shows an Open Replicator cold pull.
[Figure: Open Replicator cold pull — data is pulled from remote target devices to an offline standard (STD) device on the array running HYPERMAX OS]
Space and zero-space reclamation
Space reclamation reclaims unused space following a replication or migration activity from a regular device to a thin device, in which software tools such as Open Replicator and Open Migrator copied all-zero, unused space to a target thin volume.
Space reclamation deallocates data chunks that contain all zeros. Space reclamation is most effective for migrations from standard, fully provisioned devices to thin devices. Space reclamation is non-disruptive and can be executed while the targeted thin device is fully available to operating systems and applications.
Zero-space reclamation provides instant zero detection during Open Replicator and SRDF migration operations, reclaiming
all-zero space, including both host-unwritten extents (or chunks) and chunks that contain all zeros due to file system or
database formatting.
Solutions Enabler and Unisphere can be used to initiate and monitor the space reclamation process.
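The zero-detection step at the heart of space reclamation can be illustrated with a small sketch. The chunk size and the in-memory device layout below are made up for illustration; a real thin device uses far larger chunks and releases physical backing storage:

```python
def reclaim_zero_chunks(device):
    """Deallocate chunks that contain only zeros, returning the indices
    of the reclaimed chunks.

    'device' maps chunk index -> bytes. Deallocated chunks are removed
    from the map, mimicking a thin device releasing backing storage.
    """
    reclaimed = [idx for idx, data in device.items() if not any(data)]
    for idx in reclaimed:
        del device[idx]
    return reclaimed

thin_device = {
    0: b"\x00\x00\x00\x00",  # host-unwritten chunk, all zeros -> reclaimable
    1: b"\x10\x00\x00\x2a",  # real data -> kept
    2: b"\x00\x00\x00\x00",  # file-system formatting zeros -> reclaimable
}
assert reclaim_zero_chunks(thin_device) == [0, 2]
assert list(thin_device) == [1]
```

Note how the scan cannot distinguish host-unwritten zeros from formatting zeros; both are simply all-zero chunks, which is why zero-space reclamation covers both cases.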
Volume migration using z/OS Migrator
EMC z/OS Migrator is a host-based data migration facility that performs traditional volume migrations as well as host-based
volume mirroring. Together, these capabilities are referred to as the volume mirror and migrator functions of z/OS Migrator.
Volume-level data migration facilities move logical volumes in their entirety. z/OS Migrator performs volume migration on a
track-for-track basis, without regard to the logical contents of the volumes involved. Volume migrations end in a volume swap
that is entirely non-disruptive to any applications using the data on the volumes.
Volume migrator
Volume migration provides host-based services for data migration at the volume level on mainframe systems. It provides
migration from third-party devices to devices on Dell EMC arrays as well as migration between devices on Dell EMC arrays.
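The track-for-track copy ending in a non-disruptive volume swap can be sketched roughly as follows. This is a toy model under stated assumptions (volumes as dictionaries, the swap reduced to exchanging volume serial numbers), not z/OS Migrator's actual mechanism:

```python
def migrate_volume(source, target):
    """Toy track-for-track volume migration ending in a volume swap.

    Tracks are copied without regard to their logical contents. The
    migration then ends in a swap of volume identities, so applications
    addressing the volume by its serial transparently continue on the
    new device.
    """
    for i, track in enumerate(source["tracks"]):
        target["tracks"][i] = track  # track-for-track copy
    # Non-disruptive swap: exchange the volume serial numbers so host
    # I/O addressed to the original serial lands on the target device.
    source["volser"], target["volser"] = target["volser"], source["volser"]

old = {"volser": "VOL001", "tracks": ["t0", "t1"]}
new = {"volser": "NEW001", "tracks": [None, None]}
migrate_volume(old, new)
assert new["tracks"] == ["t0", "t1"]   # entire volume moved
assert new["volser"] == "VOL001"       # applications still see VOL001
```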
Volume mirror
Volume mirroring provides mainframe installations with volume-level mirroring from one device on an EMC array to another. It
uses host resources (UCBs, CPU, and channels) to monitor channel programs scheduled to write to a specified primary volume
and clones them so that they also write to a specified target volume (called a mirror volume).
After achieving a state of synchronization between the primary and mirror volumes, Volume Mirror maintains the volumes in a
fully synchronized state indefinitely, unless interrupted by an operator command or by an I/O failure to a Volume Mirror device.
Mirroring is controlled by the volume group. Mirroring may be suspended consistently for all volumes in the group.
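The write-cloning behavior described above can be sketched as a toy model. The class below is an illustrative assumption (it does not model channel-program interception or UCBs), meant only to show writes being duplicated and mirroring being suspended:

```python
class VolumeMirror:
    """Toy model of host-based volume mirroring: every write scheduled
    to the primary volume is cloned to the mirror volume, keeping the
    two in a fully synchronized state until mirroring is suspended."""

    def __init__(self, size):
        self.primary = [0] * size
        self.mirror = [0] * size
        self.suspended = False

    def write(self, track, value):
        self.primary[track] = value
        if not self.suspended:
            self.mirror[track] = value  # clone the write to the mirror

    def in_sync(self):
        return self.primary == self.mirror

vm = VolumeMirror(size=4)
vm.write(1, 42)
assert vm.in_sync()        # mirror tracks the primary indefinitely
vm.suspended = True        # e.g., suspended by an operator command
vm.write(2, 7)
assert not vm.in_sync()    # writes no longer cloned while suspended
```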
Thousands of datasets can be selected individually or by wildcard. z/OS Migrator automatically manages all metadata
during the migration process while applications continue to run.
Appendix A: Mainframe Error Reporting
This appendix lists the mainframe environmental errors.
Topics:
• Error reporting to the mainframe host
• SIM severity reporting
Environmental errors
The following table lists the environmental errors in SIM format for HYPERMAX OS 5977 or later.
Operator messages
Error messages
On z/OS, SIM messages are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:
Figure 27. z/OS IEA480E service alert error message format (Disk Adapter failure)
Figure 28. z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated
resource)
Event messages
The storage array also reports events to the host and to the service processor. These events are:
● The mirror-2 volume has synchronized with the source volume.
● The mirror-1 volume has synchronized with the target volume.
● The device resynchronization process has begun.
On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:
Figure 29. z/OS IEA480E service alert error message format (mirror-2 resynchronization)
Figure 30. z/OS IEA480E service alert error message format (mirror-1 resynchronization)
eLicensing
Arrays running HYPERMAX OS use Electronic Licenses (eLicenses).
NOTE: For more information on eLicensing, refer to Dell EMC Knowledgebase article 335235 on the Dell EMC Online
Support website.
You obtain license files from Dell EMC Online Support, copy them to a Solutions Enabler or a Unisphere host, and push them out
to your arrays. The following figure illustrates the process of requesting and obtaining your eLicense.
NOTE: To install array licenses, follow the procedure described in the Solutions Enabler Installation Guide and Unisphere
Online Help.
Each license file fully defines all of the entitlements for a specific system, including the license type and the licensed capacity.
To add a feature or increase the licensed capacity, obtain and install a new license file.
Most array licenses are array-based, meaning that they are stored internally in the system feature registration database on the
array. However, there are a number of licenses that are host-based.
Array-based eLicenses are available in the following forms:
● An individual license enables a single feature.
● A license suite is a single license that enables multiple features. License suites are available only if all features in the suite are enabled.
● A license pack is a collection of license suites that fit a particular purpose.
To view effective licenses and detailed usage reports, use Solutions Enabler, Unisphere, Mainframe Enablers, Transaction
Processing Facility (TPF), or IBM i platform console.
Capacity measurements
Array-based licenses include a capacity licensed value that defines the scope of the license. The method for measuring this
value depends on the license's capacity type (Usable or Registered).
Not all product titles are available in all capacity types, as shown below.
Usable capacity
Usable Capacity is defined as the amount of storage available for use on an array. The usable capacity is calculated as the sum
of all Storage Resource Pool (SRP) capacities available for use. This capacity does not include any external storage capacity.
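Under this definition, usable capacity is simply a sum over SRPs that excludes external capacity. A minimal sketch (the SRP names, sizes, and field names are made up for illustration):

```python
def usable_capacity_tb(srps):
    """Sum the usable capacity of all Storage Resource Pools (SRPs),
    skipping any capacity backed by external storage, which does not
    count toward the array's usable capacity."""
    return sum(srp["usable_tb"] for srp in srps if not srp["external"])

srps = [
    {"name": "SRP_1",   "usable_tb": 100.0, "external": False},
    {"name": "SRP_2",   "usable_tb": 50.0,  "external": False},
    {"name": "SRP_ext", "usable_tb": 40.0,  "external": True},  # not counted
]
assert usable_capacity_tb(srps) == 150.0
```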
Registered capacity
Registered capacity is the amount of user data managed or protected by each particular product title. It is independent of the
type or size of the disks in the array.
The method for measuring registered capacity depends on whether the license is part of a bundle or an individual license.
Open systems licenses
This section details the licenses available in an open system environment.
License suites
This table lists the license suites available in an open systems environment.
Table 20. VMAX All Flash license suites (continued)

License suite: (continued from the previous page)
Includes:
● TimeFinder/SnapVX
● SnapSure
Allows you to (with the command):
● Create new TimeFinder/Clone emulations (symmir)
● Create new sessions; duplicate existing sessions (symsnap)
● Create snap pools; create SAVE devices (symconfigure)
● Perform SnapVX Establish operations; perform SnapVX snapshot Link operations (symsnapvx)

License suite: All Flash FX
Includes:
● All Flash F Suite
● SRDF
● SRDF/Asynchronous
● SRDF/Synchronous
● SRDF/Star
● Replication for File
Allows you to (with the command):
● Perform tasks available in the All Flash F suite
● Create new SRDF groups; create dynamic SRDF pairs in Adaptive Copy mode (symrdf)
● Create SRDF devices; convert non-SRDF devices to SRDF; add SRDF mirrors to devices in Adaptive Copy mode; set the dynamic-SRDF capable attribute on devices; create SAVE devices (symconfigure)
Table 20. VMAX All Flash license suites (continued)

License suite: (continued from the previous page)
Allows you to (with the command):
● DSE Autostart
● Write Pacing attributes, including:
  ○ Write Pacing Threshold
  ○ Write Pacing Autostart
  ○ Device Write Pacing exemption
  ○ TimeFinder Write Pacing Autostart
● Create dynamic SRDF pairs in Synchronous mode; set SRDF pairs into Synchronous mode (symrdf)
● Add an SRDF mirror to a device in Synchronous mode (symconfigure)

License suite: D@RE
Allows you to: Encrypt data and protect it against unauthorized access unless valid keys are provided. This prevents data from being accessed and provides a mechanism to quickly shred data.
Individual licenses
These items are available for arrays running HYPERMAX OS and are not in any of the license suites:
Ecosystem licenses
These licenses do not apply to arrays:
Events and Retention Suite
● Protect data from unwanted changes, deletions, and malicious activity.
● Encrypt data where it is created, for protection anywhere outside the server.
● Maintain data confidentiality for selected data at rest, and enforce retention at the file level to meet compliance requirements.
● Integrate with third-party anti-virus checking, quota management, and auditing applications.