SRDF Family
CLI User Guide
9.2
July 2023
Rev. 04
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2020-2023 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Figures......................................................................................................................................... 11
Tables..........................................................................................................................................15
PREFACE................................................................................................................................................................................... 17
Revision history........................................................................................................................................................................ 19
SRDF modes of operation ........................................................................................................................................55
Establish an SRDF pair (full) ................................................................................................................................... 59
Establish an SRDF pair (incremental) ................................................................................................................... 60
Failback to source....................................................................................................................................................... 62
Failover to target.........................................................................................................................................................63
Invalidate R1 tracks ....................................................................................................................................................65
Invalidate R2 tracks ................................................................................................................................................... 65
Make R1 ready .............................................................................................................................................................66
Make R1 not ready ..................................................................................................................................................... 66
Make R2 ready ............................................................................................................................................................ 67
Make R2 not ready .....................................................................................................................................................67
Merge track tables ..................................................................................................................................................... 67
Move one-half of an SRDF pair .............................................................................................................................. 68
Move both sides of SRDF device pairs .................................................................................................................68
Read/write disable target device ...........................................................................................................................69
Refresh R1 ....................................................................................................................................................................69
Refresh R2 ................................................................................................................................................................... 70
Restore SRDF pairs (full) ......................................................................................................................................... 70
Restore SRDF pairs (incremental) ......................................................................................................................... 72
Resume I/O on links ...................................................................................................................................................74
Split ................................................................................................................................................................................ 74
Suspend I/O on links ................................................................................................................................................. 76
Swap one-half of an SRDF pair ...............................................................................................................................77
Swap SRDF pairs ........................................................................................................................................................ 77
Update R1 mirror .........................................................................................................................................................78
Write disable R1 .......................................................................................................................................................... 80
Write disable R2 ......................................................................................................................................................... 80
Write enable R1 ...........................................................................................................................................................80
Write enable R2 ........................................................................................................................................................... 81
Dynamic failover operations.....................................................................................................................................110
Setting bias when suspending the group.............................................................................................................159
Deactivate SRDF/Metro (deletepair).........................................................................................................................159
Example: Setting up SRDF/Metro (Array Witness method)................................................................................ 159
Restrictions: Device types allowed for remove operations from a cascaded RDF1 consistency group.......................229
Recovering from a failed dynamic modify operation .......................................................................................230
Consistency groups with a parallel database........................................................................................................... 230
Consistency groups with BCV access at the target site....................................................................................... 231
Target site states .....................................................................................................................................................266
SRDF/Star site configuration transitions .......................................................................................................... 266
SRDF/Star operation categories...........................................................................................................................268
Required states for operations: Concurrent SRDF/Star.................................................................................269
Required states for operations: Cascaded SRDF/Star.................................................................................... 272
SRDF/Star operations summary ................................................................................................................................ 276
symstar command options ..................................................................................................................................... 277
Command failure while in Connected state .......................................................................................................280
Restrictions for cascaded mode............................................................................................................................280
Configure and bring up SRDF/Star ...........................................................................................................................280
Step 1: Verify SRDF/Star control host connectivity ........................................................................................281
Step 2: Verify array settings .................................................................................................................................. 281
Step 3: Create an SRDF/Star composite group .............................................................................................. 282
Step 4: Create the SRDF/Star options file ....................................................................................................... 286
Step 5: Perform the symstar setup operation .................................................................................................. 288
Step 6: Create composite groups on target sites ............................................................................................289
Step 7: (Optional) Add BCV devices to the SRDF/Star configuration........................................................290
Step 8: Bring up the SRDF/Star configuration................................................................................................. 290
Displaying the symstar configuration ...................................................................................................................291
Removal of a CG from SRDF/STAR control ..................................................................................................... 294
Basic SRDF/Star operations .......................................................................................................................................295
Isolate SRDF/Star sites ..........................................................................................................................................296
Unprotect target sites............................................................................................................................................. 297
Halt target sites.........................................................................................................................................................297
Clean up metadata ...................................................................................................................................................298
SRDF/Star consistency group operations ...............................................................................................................298
Before you begin: SRDF daemon interaction .................................................................................................... 298
SRDF/Star consistency group restrictions.........................................................................................................299
Prepare staging for SRDF/Star consistency group modification..................................................................299
Add devices to a concurrent SRDF/Star consistency group ........................................................................300
Add devices to a cascaded SRDF/Star consistency group ...........................................................................303
Remove devices from consistency groups......................................................................................................... 305
Recovering from a failed consistency group modification .............................................................................306
Recovery operations: Concurrent SRDF/Star ........................................................................................................307
Recover from transient faults: concurrent SRDF/Star................................................................................... 308
Recover from a transient fault without reconfiguration: concurrent SRDF/Star ....................................308
Recover from transient fault with reconfiguration: concurrent SRDF/Star.............................................. 309
Recover using reconfigure operations..................................................................................................................310
Workload switching: Concurrent SRDF/Star ........................................................................................................... 311
Planned workload switching: Concurrent SRDF/Star ..................................................................................... 312
Unplanned workload switching: concurrent SRDF/Star.................................................................................. 315
Unplanned workload switch to synchronous target site: concurrent SRDF/Star .................................... 316
Unplanned workload switch to asynchronous target site: concurrent SRDF/Star ................................. 320
Switch back to the original workload site: concurrent SRDF/Star ............................................................. 324
Recovery operations: Cascaded SRDF/Star ...........................................................................................................325
Recovering from transient faults: Cascaded SRDF/Star ...............................................................................325
Recovering from transient faults without reconfiguration: Cascaded SRDF/Star ..................................325
Recovering from transient faults with reconfiguration: Cascaded SRDF/Star ........................................ 327
Workload switching: Cascaded SRDF/Star .............................................................................................................328
Planned workload switching: Cascaded SRDF/Star ....................................................................................... 328
Unplanned workload switching: cascaded SRDF/Star ................................................................................... 330
Reconfiguration operations ......................................................................................................................................... 339
Before you begin reconfiguration operations..................................................................................................... 339
Reconfiguring mode: cascaded to concurrent ..................................................................................................339
Reconfiguring cascaded paths...............................................................................................................................343
Reconfiguring mode: concurrent to cascaded ..................................................................................................345
Reconfigure mode without halting the workload site ..................................................................................... 348
SRDF/Star configuration with R22 devices ............................................................................................................349
Before you begin SRDF/Star configuration with R22 devices...................................................................... 349
Transition SRDF/Star to use R22 devices ........................................................................................................ 350
Format of the symreplicate options file ............................................................................................................. 390
Set replication retry and sleep times .................................................................................................................. 390
Setting the symreplicate control parameters .................................................................................................... 391
Manage locked devices ................................................................................................................................................ 394
Recover locks ............................................................................................................................................................394
Release locks..............................................................................................................................................................394
Acquire persistent locks ......................................................................................................................................... 395
Figures
41 Cascaded SRDF configuration........................................................................................................................... 240
42 Configuring the first hop..................................................................................................................................... 244
43 Configuring the second hop................................................................................................................................244
44 Determining SRDF pair state in cascaded configurations........................................................................... 245
45 Location of hop-2 devices...................................................................................................................................246
46 Cascaded SRDF with EDP...................................................................................................................................247
47 Set up first hop in cascaded SRDF with EDP................................................................................................ 249
48 Set up second hop in cascaded SRDF with EDP...........................................................................................249
49 Adding a diskless SRDF mirror............................................................................................................................ 251
50 Cascaded configuration before planned failover........................................................................................... 252
51 Planned failover - after first swap.................................................................................................................... 253
52 Planned failover - after second swap...............................................................................................................253
53 Cascaded SRDF/Star configuration.................................................................................................................. 261
54 Concurrent SRDF/Star configuration.............................................................................................................. 262
55 Typical concurrent SRDF/Star with R22 devices......................................................................................... 263
56 Typical cascaded SRDF/Star with R22 devices............................................................................................ 263
57 Site configuration transitions without concurrent devices......................................................................... 267
58 Site configuration transitions with concurrent devices............................................................................... 268
59 Concurrent SRDF/Star: normal operations.................................................................................................... 269
60 Concurrent SRDF/Star: transient fault operations.......................................................................................270
61 Concurrent SRDF/Star: unplanned switch operations................................................................................. 271
62 Concurrent SRDF/Star: planned switch operations..................................................................................... 272
63 Cascaded SRDF/Star: normal operations........................................................................................................273
64 Cascaded SRDF/Star: transient fault operations (asynchronous loss)................................................... 273
65 Cascaded SRDF/Star: transient fault operations (synchronous loss)..................................................... 274
66 Cascaded SRDF/Star: unplanned switch operations....................................................................................275
67 Concurrent SRDF/Star setup using the StarGrp composite group..........................................................283
68 Cascaded SRDF/Star setup using the StarGrp composite group.............................................................285
69 Adding a device to a concurrent SRDF/Star CG........................................................................................... 301
70 ConStarCG after a dynamic add operation.....................................................................................................302
71 Adding devices to a cascaded SRDF/Star CG...............................................................................................303
72 CasStarCG after a dynamic add operation..................................................................................................... 304
73 Transient failure: concurrent SRDF/Star........................................................................................................ 308
74 Transient fault recovery: before reconfiguration........................................................................................... 310
75 Transient fault recovery: after reconfiguration............................................................................................... 311
76 Concurrent SRDF/Star: halted........................................................................................................................... 313
77 Concurrent SRDF/Star: switched......................................................................................................................313
78 Concurrent SRDF/Star: connected...................................................................................................................314
79 Concurrent SRDF/Star: protected.................................................................................................................... 315
80 Loss of workload site: concurrent SRDF/Star................................................................................................316
81 Concurrent SRDF/Star: workload switched to synchronous site.............................................................. 317
82 Concurrent SRDF/Star: new workload site connected to asynchronous site........................................ 318
83 Concurrent SRDF/Star: protected to asynchronous site............................................................................ 319
84 Concurrent SRDF/Star: protect to all sites....................................................................................................320
85 Concurrent SRDF/Star: workload switched to asynchronous site........................................................... 322
86 Concurrent SRDF/Star: protected to asynchronous site............................................................................322
87 Concurrent SRDF/Star: one asynchronous site not protected................................................................. 323
88 Transient fault: cascaded SRDF/Star.............................................................................................................. 325
89 Cascaded SRDF/Star with transient fault...................................................................................................... 326
90 Cascaded SRDF/Star: asynchronous site not protected............................................................................ 327
91 SRDF/Star: after reconfiguration to concurrent...........................................................................................328
92 Cascaded SRDF/Star: halted............................................................................................................................. 329
93 Cascaded SRDF/Star: switched workload site.............................................................................................. 330
94 Loss of workload site: cascaded SRDF/Star...................................................................................................331
95 Workload switched to synchronous target site: cascaded SRDF/Star....................................................332
96 After workload switch to synchronous site: cascaded SRDF/Star...........................................................333
97 Cascaded SRDF/Star after workload switch: protected.............................................................................334
98 After reconfiguration to concurrent mode......................................................................................................335
99 Protected after reconfiguration from cascaded to concurrent mode......................................................336
100 Loss of workload site: Cascaded SRDF/Star................................................................................................. 337
101 Cascaded SRDF: after switch to asynchronous site, connect, and protect........................................... 338
102 Cascaded SRDF: after switch to asynchronous site.................................................................................... 339
103 Halted cascaded SRDF/Star.............................................................................................................................. 340
104 After reconfiguration to concurrent.................................................................................................................. 341
105 Halted cascaded SRDF/Star.............................................................................................................................. 342
106 After reconfiguration to concurrent................................................................................................................. 343
107 Halted cascaded SRDF/Star.............................................................................................................................. 344
108 After cascaded path reconfiguration................................................................................................................345
109 Halted concurrent SRDF/Star........................................................................................................................... 346
110 After reconfiguration to cascaded.................................................................................................................... 346
111 Halted concurrent SRDF/Star............................................................................................................................347
112 After reconfiguration to cascaded.................................................................................................................... 348
113 R1 migration: configuration setup......................................................................................................................354
114 R1 migration: establishing a concurrent relationship.................................................................................... 355
115 R1 migration: replacing the source device.......................................................................................................356
116 Migrating R2 devices............................................................................................................................................ 357
117 R2 migration: configuration setup.....................................................................................................................358
118 R2 migration: establishing a concurrent relationship....................................................................................359
119 R2 migration: replacing the target device....................................................................................................... 360
120 R1 migration example: Initial configuration...................................................................................................... 362
121 Concurrent SRDF relationship........................................................................................................................... 365
122 Migrated R1 devices..............................................................................................................................................367
123 R2 migration example: Initial configuration..................................................................................................... 368
124 Concurrent SRDF relationship............................................................................................................................369
125 Migrated R2 devices.............................................................................................................................................370
126 R1 migration: applicable R1/R2 pair states for migrate -setup....................................................................371
127 R2 migration: applicable R1/R2 pair states for migrate -setup.................................................................. 372
128 R1 migration: R11/R2 applicable pair states for migrate -replace (first leg)........................................... 373
129 R2 migration:R11/R2 applicable pair states for migrate -replace (first leg)............................................374
130 R1 migration: applicable R11/R2 pair states for migrate -replace (second leg)..................................... 375
131 R2 migration: applicable R11/R2 pair states for migrate -replace (second leg).....................................376
132 Automated data copy path in single-hop SRDF systems.............................................................................378
133 Automated data copy path in multi-hop SRDF.............................................................................................. 382
134 Concurrent BCV in a multi-hop configuration................................................................................................ 384
135 Commands used to perform splits in a complex configuration.................................................................. 397
136 Basic operations in multi-hop SRDF configurations......................................................................................399
137 SnapVX and Cascaded SRDF..............................................................................................................................401
138 SnapVX and Concurrent SRDF........................................................................................................................... 401
139 SRDF recovery environment.............................................................................................................................. 404
Tables
41 SRDF/Star operation categories....................................................................................................................... 268
42 SRDF/Star control operations........................................................................................................................... 276
43 symstar command options...................................................................................................................................277
44 Allowable SRDF/Star states for adding device pairs to a concurrent CG.............................................. 302
45 Allowable states for adding device pairs to a cascaded CG....................................................................... 303
46 Pair states of the SRDF devices after symstar modifycg -add completion............................................304
47 Allowable states for removing device pairs from a concurrent SRDF/Star CG.................................... 305
48 Allowable states for removing device pairs from a cascaded SRDF/Star CG....................................... 306
49 Possible pair states of the SRDF devices after a recovery.........................................................................307
50 SRDF migrate -setup control operation and applicable pair states...........................................................370
51 SRDF migrate -replace control operation and applicable pair states........................................................372
52 SRDF migrate -replace control operation and applicable pair states........................................................374
53 Initial setups for cycle timing parameters....................................................................................................... 385
54 Basic operations in a multi-hop configuration................................................................................................ 397
55 symrecover options file parameters................................................................................................................. 408
PREFACE
As part of an effort to improve its product lines, Dell periodically releases revisions of its software and hardware. Therefore,
some functions described in this document might not be supported by all versions of the software or hardware currently in use.
The product release notes provide the most up-to-date information on product features.
Contact your Dell technical support professional if a product does not function properly or does not function as described in this
document.
NOTE: This document was accurate at publication time. Go to Dell Online Support (https://www.dell.com/support/home/en-us/) to ensure that you are using the latest version of this document.
Purpose
This document describes how to use Solutions Enabler SYMCLI to manage SRDF®.
Audience
This document is intended for advanced command-line users and script programmers who use Solutions Enabler's SYMCLI commands to manage various types of control operations on arrays and devices.
Typographical conventions
Dell Technologies uses the following type style conventions in this document:
Table 1. Typographical conventions used in this content
{ }  Braces enclose content that the user must specify, such as x or y or z.
...  Ellipses indicate nonessential information that is omitted from the example.
Product information
Dell Technologies technical support, documentation, release notes, software updates, or information about Dell Technologies products can be obtained at https://www.dell.com/support/home (registration required) or https://www.dell.com/en-us/dt/documentation/vmax-all-flash-family.htm.
Technical support
To open a service request through the Dell Technologies Online Support (https://www.dell.com/support/home) site, you must have a valid support agreement. Contact your Dell Technologies sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Dell Technologies offers various support options:
● Support by Product: Dell Technologies offers consolidated, product-specific information through the Dell Technologies Online Support site. The Support by Product web pages (https://www.dell.com/support/home, select Product Support) offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads. They also offer dynamic content such as presentations, discussion, relevant Customer Support Forum entries, and a link to Dell Technologies Live Chat.
● Dell Technologies Live Chat: Open a Chat or instant message session with a Dell Technologies Support Engineer.
e-Licensing support
To activate your entitlements and obtain your license files, go to the Service Center on Dell Technologies Online Support (https://www.dell.com/support/home). Follow the directions on your License Authorization Code (LAC) letter that is emailed to you.
● Expected functionality may be unavailable because it is not licensed. For help with missing or incorrect entitlements after activation, contact your Dell Technologies Account Representative or Authorized Reseller.
● For help with any errors applying license files through Solutions Enabler, contact the Dell Technologies Customer Support Center.
● Contact the Dell Technologies worldwide Licensing team if you are missing the LAC letter or require further instructions on activating your licenses through the Online Support site:
○ [email protected]
○ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
○ EMEA: +353 (0) 21 4879862 and follow the voice prompts.
SolVe Online and SolVe Desktop
SolVe provides links to customer service documentation and procedures for common tasks. Go to https://solve.dell.com/solve/home, or download the SolVe Desktop tool from https://www.dell.com/support/home and search for SolVe Desktop. From SolVe Online or SolVe Desktop, load the PowerMax and VMAX procedure generator.
NOTE: Authenticate (authorize) the SolVe Desktop tool. After it is installed, familiarize yourself with the information under Help.
Your comments
Your suggestions help improve the accuracy, organization, and overall quality of the documentation. Send your comments and
feedback to: [email protected]
Revision history
The following table presents the revision history of this document:
1
SRDF CLI overview
This chapter describes the following topics:
Topics:
• Introduction to SRDF
• SYMCLI for SRDF
• SRDF pair states and links
• Before you begin
Introduction to SRDF
The Dell EMC Symmetrix® Remote Data Facility (SRDF®) family of products offers a range of array-based disaster recovery, parallel processing, high availability, and data migration solutions for VMAX® Family and VMAX All Flash systems, including:
● HYPERMAX OS for VMAX3 Family 100K, 200K, 400K arrays and VMAX All Flash 250F, 450F, 850F, 950F arrays
● Enginuity 5876 for VMAX 10K, 20K, and 40K arrays
SRDF replicates data between 2, 3 or 4 arrays located in the same room, on the same campus, or thousands of kilometers apart.
Replicated volumes may include a single device, all devices on a system, or thousands of volumes across multiple systems.
HYPERMAX OS 5977.691.684 introduces an additional SRDF configuration: SRDF/Metro.
The following image shows two-site SRDF configurations, one traditional and one SRDF/Metro.
[Figure: Two-site SRDF configurations. Traditional SRDF (open hosts): a production (source) host, an optional remote (target) host, and R1 replicating to R2 over SRDF links. SRDF/Metro (multipath): a host with multipath access to R1 and R2, which are connected by SRDF links.]
HYPERMAX OS
VMAX 100K/200K/400K arrays (referred to as VMAX3™ arrays), or VMAX All Flash arrays, running HYPERMAX OS can use
SRDF to replicate to:
● VMAX3 arrays running HYPERMAX OS.
● VMAX 10K/20K/40K arrays running Enginuity™ version 5876 with applicable ePack.
Enginuity 5876
Refer to the SRDF Two-site Interfamily Connectivity tool for information about SRDF features supported between arrays
running Enginuity 5876.
SRDF documentation
Table 3. SRDF documentation
For information on the technical concepts and operations of the SRDF product family, including:
● SRDF Solutions
● SRDF interfamily connectivity
● SRDF concepts and terminology
● SRDF/DM, SRDF/AR, SRDF/Concurrent
● SRDF integration with other products
see the EMC VMAX3 Family Product Guide for VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS and the Dell EMC VMAX All Flash Product Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS.
To configure and manage arrays using the SYMCLI, see the Dell Solutions Enabler Array Controls and Management CLI User Guide.
To install, configure, and manage Virtual Witness instances for SRDF/Metro, see the Dell SRDF/Metro vWitness Configuration Guide.
To determine which SRDF replication features are supported between two or three arrays running Enginuity 5876, HYPERMAX OS, or PowerMaxOS, see the SRDF Interfamily Connectivity Information.
For securing your configuration, see the EMC VMAX All Flash and VMAX3 Family Security Configuration Guide.
For host connectivity, see the Dell EMC Host Connectivity Guides for your operating system.
For managing legacy versions of SRDF using SYMCLI, download the SolVe Desktop and load the VMAX Family and DMX procedure generator. Select VMAX 10K, 20K, 40K, DMX -> Customer procedures -> Managing SRDF using SYMCLI.
SRDF/Metro
5876 arrays with the applicable ePack can participate only as Witness arrays in SRDF/Metro configurations.
Witness SRDF groups can be created between two VMAX3 arrays running HYPERMAX OS 5977.691.684 or later and a 5876
array.
An SRDF/Metro configuration between the two VMAX3 arrays can then use Witness protection, provided by the 5876 array.
Mobility ID
Devices in VMAX arrays running HYPERMAX OS 5977 or PowerMaxOS 5978 can have either a Compatibility ID or a Mobility ID.
The symdev show and symdev list commands can be used to report the device ID type for arrays running PowerMaxOS
5978.
The example output of the symdev show command below shows a device carrying Mobility ID on array 084.
. . .
Vendor ID : EMC
Product ID : SYMMETRIX
Product Revision : 5977
Device WWN : 600009700BBF82341FA1006E00000017
Device ID Type : Mobility
Device Emulation Type : FBA
. . .
Geometry : Native
{
Sectors/Track : 256
Tracks/Cylinder : 15
Cylinders : 10925
512-byte Blocks : 41952000
MegaBytes : 20484
KiloBytes : 20976000
}
}
To filter devices based on ID type, use the symdev list command with the following syntax:
Converting Device ID
To convert device ID types between Compatibility ID and Mobility ID on FBA devices, use the following syntax:
Description
Type command -h to display command line help for the specified command.
On UNIX hosts, type man command to display the man page for the specified command.
Examples
To display help for the symrdf command, enter:
symrdf -h
man symrdf
● On UNIX hosts: specify the SYMCLI man page directory (/usr/symcli/man/) in the SYMCLI_MANPATH environment
variable.
● On Windows hosts: the default directory for man pages is C:\Program Files\EMC\symcli\man
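For example, on a UNIX host running the C shell, the man page path might be set as follows (a sketch; adjust the directory to your installation):
setenv SYMCLI_MANPATH /usr/symcli/man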
Description
SYMCLI includes variables to streamline command line sessions.
Examples
To display a list of variables that can be set for your SYMCLI session, enter:
symcli -env
symcli -def
For example, to turn on verbose output for the session (C shell syntax):
setenv SYMCLI_VERBOSE 1
To turn it off again:
unsetenv SYMCLI_VERBOSE
Description
Use the SYMCLI environment variables to preset the identity of objects, such as the SID. Once an object's identity is defined, you do not need to type it on the command line.
To view a list of environment variables that can be set for a given SYMCLI session, enter:
symcli -env
symcli -def
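For example, to preset the array ID for the session so that -sid can be omitted from later commands (C shell syntax; 1234 is a placeholder SID):
setenv SYMCLI_SID 1234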
symdev list -sid SID -metroDR
Displays the array devices that are identified as SRDF/Metro Smart DR devices.
symdev list -sid SID -r1 -bcv
Displays the RDF1 BCV devices for the specified array.
symdev list -sid SID -devs Device:Device -lock external
Displays a specified range of devices that have a device external lock.
symdev show
symdev show Device_number -sid SID
Displays information about the specified SRDF device, including:
● SRDF device type and its group number
● Whether the device is in an SRDF/Metro configuration
● Whether the device is paired with a diskless or concurrent device
● Whether the device has a standard/thin relationship
● If the R2 device is larger than its R1
● Whether SRDF/A group-level and/or device-level write pacing is currently activated
symrdf query
symrdf -g DgName query
Displays the state of the SRDF devices and their SRDF links in the specified device group.
During normal operations, the SRDF pair is Synchronized:
● The R1 devices and SRDF links are read-writable.
● The R2 devices are write disabled.
● The link is in synchronous replication.
During failover operations:
● The R1 devices are write disabled.
● The R2 devices are read/write.
● The SRDF links are suspended.
symrdf -g DgName query -all
Displays the SRDF pair state of all devices in the specified device group, regardless of the device type.
symrdf -g DgName query -bcv
Displays the SRDF pair state of the SRDF BCV devices in the specified device group.
symrdf -g DgName query -summary
Displays summarized information about the state of the SRDF devices and their SRDF links in the specified device group, including:
● Pair state
● Number of invalid tracks on the source and target
● Synchronization rate
● Estimated time remaining for SRDF pair synchronization.
symrdf -cg CgName query
Displays the state of the SRDF devices and their SRDF links in the specified composite group.
-refresh Marks the source (R1) devices or the target (R2) devices to
refresh from the remote mirror.
-remote Requests a remote data copy with the failback, restore, resume, createpair, and update actions. When the concurrent links are ready, data is also copied to the concurrent SRDF mirror. For these actions to execute, use this option or suspend the concurrent links.
-remote_rdfg Specifies the SRDF group number for the remote array.
-remote_sg Specifies the remote storage group name.
Used with createpair to specify the storage group.
Used with createpair -hop2 to specify the storage group at
the second hop.
-until Checks the number of invalid tracks that are allowed to build
up from the active R2 local I/O before another update (R2 to
R1) copy is retriggered. The update sequence loops until the
invalid track count is less than the number specified for the
-until value. Refer to Write disable R1 for more information.
-use_bias When used with the createpair -establish, createpair -restore, establish, or restore actions, indicates that the SRDF/Metro configuration uses bias instead of Witness protection.
-v Provides more detailed, verbose command output.
-witness When used with addgrp, identifies the RDF group as a Witness SRDF group. When used with removegrp or modifygrp, specifies that the action targets an RDF group that is a Witness SRDF group.
When used with -R1, lists RDF11 devices and RDF1 devices
that are paired with a concurrent SRDF device.
When used with -R2, lists RDF22 devices and RDF2 devices
that are paired with a concurrent device.
-cons_exempt Lists devices that are consistency exempt or are paired with
devices that are consistency exempt.
-dir Lists the local directors (separated by commas), such as 1a, 1b, and so on.
-diskless_rdf Lists diskless SRDF devices and the devices paired with
diskless SRDF devices.
When used with -R1, lists RDF1 devices that are either
diskless or that are paired with a diskless device.
When used with -R2, lists RDF2 devices that are either
diskless or are paired with a diskless device.
When used with -R21, lists RDF21 devices that are either
diskless or that are paired with a diskless device.
-dup_pair Lists SRDF devices that are paired with the same SRDF type.
To list all of the duplicate pair devices in array 333, enter:
symrdf -sid 333 -dup_pair list
NOTE:
Duplicate pair devices can result from an SRDF/Star
failover scenario or a configuration change.
-noprompt Requests that prompts are not displayed after the command is
entered. The default is to prompt the user for confirmation.
Description
Use the symrdf -rdf ping command to determine if an array using SRDF links is up and running.
Example
To ping SID 123, enter:
symrdf -rdf -sid 123 ping
The return codes tell you whether some or all of the arrays were successfully pinged.
For more information on return codes, refer to the Dell Solutions Enabler CLI Reference Guide.
verify command
Description
Use the symrdf verify command to verify the SRDF mode and pair states of device groups, composite groups, and device
files.
Use the symrdf verify -enabled command to verify that device pairs are enabled for consistency protection.
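For example, to verify that all device pairs in a device group are enabled for consistency protection (ProdDG is a placeholder group name):
symrdf -g ProdDG verify -enabled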
If the verify command specifies asynchronous, synchronous, OR adaptive copy disk mode:
symrdf -g STAGING -rdfg 129 verify -async -sync -acp_disk
All device pairs in STAGING are using synchronous OR adaptive copy disk mode. The following message is displayed, even
though NO devices are in asynchronous mode:
All devices in the group 'STAGING' are in 'Asynchronous, Synchronous, Adaptive Copy
Disk' modes.
Not All devices in the group 'STAGING' are in 'Consistent, Split' states.
Verify both SRDF mode and pair state in one command line
When verifying both SRDF states and modes in the same command line, Solutions Enabler logically ORs the states, logically ORs
the modes, and then logically ANDs the two results.
In the following example, a device group named STAGING has devices in:
● Synchronous and adaptive copy disk modes
● Synchronized, Suspended, and Split states, but NOT the Consistent state
If the verify command specifies synchronous, OR adaptive copy disk mode, AND Synchronized, Suspended, OR Split states:
symrdf -g STAGING -rdfg 129 verify -sync -acp_disk -synchronized -suspended -split
All device pairs in STAGING are using synchronous OR adaptive copy disk mode AND are in the Synchronized, Suspended, OR
Split state, and the following message is displayed:
All devices in the group 'STAGING' are in 'Synchronized, Suspended, Split' states and
'Synchronous, Adaptive Copy Disk' modes.
If the verify command specifies adaptive copy disk mode AND the Synchronized, Suspended, OR Split state:
symrdf -g STAGING -rdfg 129 verify -acp_disk -synchronized -suspended -split
Some device pairs in the STAGING group are using synchronous mode, and the following message is displayed:
Not All devices in the group 'STAGING' are in 'Synchronized, Suspended, Split' states
and 'Adaptive Copy Disk' modes.
If the verify command specifies synchronous, adaptive copy disk mode AND the Consistent state:
symrdf -g STAGING -rdfg 129 verify -sync -acp_disk -consistent
None of the device pairs in the STAGING group are in the Consistent state, and the following message is displayed:
None of the devices in the group 'STAGING' are in 'Consistent' state and 'Synchronous,
Adaptive Copy Disk' modes
[Table: SRDF pair states, listing the R1 state, SRDF link state, and R2 state (RW, WD, or NR) for each pair state]
Invalid
This is the default state when no other SRDF state applies.
● The combination of the R1 device, the R2 device, and the SRDF link states do not match any other pair state.
● This state may occur if there is a problem at the disk director level.
Consistent
The R2 SRDF/A capable devices are in a consistent state. The consistent state signifies the normal state of operation for device pairs operating in asynchronous mode.
Transmit Idle
The SRDF/A session cannot send data in the transmit cycle over the link because the link is unavailable.
ActiveBias
The R1 and the R2 are in the default SRDF/Metro configuration, which uses a witness; however, the witness is in a failed state and not available.
● There are no invalid tracks between the two pairs.
● The R1 and the R2 are Ready (RW) to the hosts.
Suspended
The SRDF links have been suspended and are not ready or write disabled. If the R1 is ready while the links are suspended, any I/O accumulates as invalid tracks owed to the R2.
Partitioned
The SRDF group between the two SRDF/Metro arrays is offline. If the R1 is ready while the group is offline, any I/O accumulates as invalid tracks owed to the R2.
Unknown
If the environment is not valid, the SRDF/Metro session state is marked as Unknown. If the SRDF/Metro session is queried from the DR array and the DR Link State is Offline, the SRDF/Metro session state is reported as Unknown.
Invalid
This is the default state when no other SRDF state applies. The combination of the R1 device, the R2 device, and the SRDF link states do not match any other pair state, or there is a problem at the disk director level.
DR pair states
Table 13. DR pair states
Pair state Description
Synchronized
NOTE: This state is only applicable when the DR pair is in Acp_disk mode.
The background copy between the SRDF/Metro and DR is complete and they are
synchronized.
The MetroR2 device states are dependent on the SRDF/Metro session state.
The DR side is not host accessible with the devices in a Write Disabled SRDF state.
The MetroR2 device states are dependent on the SRDF/Metro session state.
Consistent
NOTE: This state is only applicable when the DR pair is in Async mode.
This is the normal state of operation for device pairs operating in asynchronous mode, indicating that there is a dependent-write consistent copy of data on the DR site.
The MetroR2 device states are dependent on the SRDF/Metro session state.
The SRDF/A session is active but it cannot send data in the transmit cycle over the
SRDF link because the SRDF link is offline.
● There may be a dependent-write consistent copy of data on the DR devices.
● The background copy may not be complete.
● The MetroR2 device states are dependent on the SRDF/Metro session state.
Split
MetroR1 and the DR side are currently ready to their hosts, but synchronization is currently suspended between the SRDF/Metro and the DR devices as the SRDF link is Not Ready. The MetroR2 device states are dependent on the SRDF/Metro session state.
Failed Over
Synchronization is currently suspended between the SRDF/Metro and the DR devices, and the SRDF link is Not Ready. Host writes accumulate and can be seen as invalid tracks.
If a failover command is issued when the DR Link state is not Offline:
● the SRDF/Metro session is suspended
● MetroR1 and R2 are not host accessible
If a failover command is issued when the DR state is Partitioned or TransIdle, and the DR Link state is Offline:
● the SRDF/Metro state does not change
● the MetroR1 and MetroR2 device states with regard to their accessibility to the host do not change
R1 Updated The MetroR1 was updated from the DR side and both MetroR1 and MetroR2 are not
host accessible.
The SRDF/Metro session is suspended.
There are no local invalid tracks on the R1 side, and the links are ready or write
disabled.
R1 UpdInProg The MetroR1 is being updated from the DR side and both MetroR1 and MetroR2 are
not host accessible.
The SRDF/Metro session is suspended.
There are invalid local tracks on the source side, so data is being copied from the DR
to the R1 device, and the links are ready.
MetroR1, R2, and the DR side are either Ready or Write Disabled depending on
whether or not they are accessible to the host.
Acp_disk
Adaptive copy mode can transfer large amounts of data without impacting performance. Adaptive copy mode allows the SRDF/Metro and DR devices to be more than one I/O out of synchronization.
NOTE: Adaptive copy mode does not guarantee a dependent-write consistent copy of data on DR
devices.
Adaptive copy mode applies when:
● querying from the DR array and:
○ the DR state is not TransIdle, and
○ the DR Link State is offline.
● querying from the MetroR2 array and:
○ the DR state is not TransIdle, and
○ the DR Link State is offline, and
○ the SRDF/Metro Link State is offline.
For some types of file arrays and attached hosts, host-dependent operations may be required to access data migrated
to a larger R2 device.
Restrict synchronization
Restricting synchronization direction is not supported on arrays running HYPERMAX OS.
Syntax
To set hardware and software compression for an SRDF group, use the following form:
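The general form, inferred from the examples that follow (SID and GrpNum are placeholders; the option names are taken from those examples):
symrdf -sid SID -rdfg GrpNum set rdfg [-hwcomp on|off] [-swcomp on|off] [-both_sides]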
Set SRDF group attributes provides more information about SRDF group attributes.
Options
on
Set the specified compression on.
off
Set the specified compression off.
Examples
To turn on software compression on both sides of SRDF group 12:
symrdf -sid 134 -rdfg 12 set rdfg -swcomp on -both_sides
To turn off hardware compression on both sides of SRDF group 12:
symrdf -sid 134 -rdfg 12 set rdfg -hwcomp off -both_sides
Syntax
Syntax for the symqos command:
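Representative forms, reconstructed from the examples below (SID, Dir, and the percentage values are placeholders):
symqos -RA -sid SID enable -io
symqos -RA -sid SID [-dir Dir] set IO [-default] -sync Percent -async Percent -copy Percent
symqos -RA -sid SID -dir Dir reset IO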
Examples
To enable the workload percentage settings for synchronous, asynchronous, and copy I/Os on SID 1234:
symqos -RA -sid 1234 enable -io
To set the default workload percentages for all directors on SID 1234 to 60% for synchronous I/Os, 30% for asynchronous I/Os, and 10% for copy I/Os:
symqos -RA -sid 1234 set IO -default -sync 60 -async 30 -copy 10
To set the workload percentages on director 8G of SID 1234 to 50% for synchronous I/Os, 30% for asynchronous I/Os, and 20% for copy I/Os:
symqos -RA -sid 1234 -dir 8G set IO -sync 50 -async 30 -copy 20
To reset the customized settings of the workload percentages to the default settings on director 8G of SID 1234:
symqos -RA -sid 1234 -dir 8G reset IO
Summary
Table 16. SRDF control operations summary
Control operation | symrdf argument | Description
SRDF modes of operation | set mode [sync|asynch|acp_disk|acp_wp|acp_off] | Set the replication mode for a device, device group, composite group, storage group, or list of devices in a device file.
Enable and disable SRDF consistency protection | enable, disable | Enable or disable consistency protection for SRDF/A capable devices.
Establish an SRDF pair (full) | establish -full | Establish remote mirroring and initiate a full data copy from the source (R1) device to the target (R2) device. Use this for initial synchronization of SRDF mirrors, or replacement of a failed drive on the R2 side.
Establish an SRDF pair (incremental) | establish | Establish remote mirroring and initiate an incremental data copy from the source (R1) device to the target (R2) device. Use this to resynchronize after a split if you can discard the target data.
Move SRDF device pairs | movepair | Move the SRDF device pair to a different SRDF group.
Move both sides of SRDF device pairs | movepair | NOTE: If the RA ends up supporting more than 64K devices in the new SRDF group, this operation fails.
Read/write disable target device | rw_disable r2 | Read/write disables the target (R2) device to its local host.
Refresh R1 | refresh r1 | Mark any changed tracks on the source (R1) side to be refreshed from the R2 side.
Refresh R2 | refresh r2 | Mark any changed tracks on the target (R2) side to be refreshed from the R1 side.
Restore SRDF pairs (full) | restore -full | Resume remote mirroring and initiate a full data copy from the target (R2) device to the source (R1) device. Use this for initial (reverse) synchronization of SRDF mirrors, or replacement of a failed drive on the R1 side.
Restore SRDF pairs (incremental) | restore | Resume remote mirroring and initiate an incremental data copy from the target (R2) device to the source (R1) device. Use this for resynchronizing SRDF mirrors after a split if you can discard the source data.
Resume I/O on links | resume | Resume I/O traffic on the SRDF links for the remotely mirrored SRDF pairs in the group.
Split | split | Stop remote mirroring between the source (R1) device and the target (R2) device. The target device is made available for local host operations. Use this when both sides require independent access, such as for testing purposes.
Suspend I/O on links | suspend | Suspend I/O traffic on the SRDF links for the remotely mirrored SRDF pairs in the group.
Swap SRDF pairs | swap | Swap the SRDF personality of the designated dynamic SRDF pair. Source R1 devices become target R2 devices, and target R2 devices become source R1 devices.
Swap one-half of an SRDF pair | half_swap | Swap the SRDF personality of one half of the designated dynamic SRDF pair. Source R1 devices become target R2 devices, or target R2 devices become source R1 devices.
Update R1 mirror | update | Update the source (R1) side with the changes from the target (R2) side while the target (R2) side is still operational to its local hosts. Use this to synchronize the R1 side with the R2 side as much as possible before performing a failback, while the R2 side is still online to the host.
Syntax
You can use createpair to set the SRDF replication mode when you create SRDF device pairs.
symrdf createpair (-file option) syntax shows the syntax of createpair.
Alternatively, use symrdf set to set or modify the SRDF replication mode for a device group, a composite group, or for
devices listed in a device file.
To set the mode on a device group, composite group, storage group, or device file:
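A representative form, based on the examples in this section (DgName is a placeholder; the -cg, -sg, and -f selectors follow the same pattern):
symrdf -g DgName set mode sync|async|acp_disk|acp_wp|acp_off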
Example
To set the replication mode in group prod to synchronous:
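The command is reconstructed here following the pattern of the other mode examples in this section:
symrdf -g prod set mode sync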
HYPERMAX OS
Adaptive copy write pending mode is not available when the R1 side of the pair is on an array running HYPERMAX OS.
Examples
To set the replication mode in group prod to adaptive copy disk:
symrdf -g prod set mode acp_disk
To disable adaptive copy disk mode and set the replication mode in group prod to synchronous:
symrdf -g prod set mode acp_off
Example
To set the replication mode in group prod to asynchronous:
symrdf -g prod set mode async
[Figure: Full establish of an SRDF pair. The R2 device is Write Disabled to its host while data is copied from R1 to R2 over the SRDF links.]
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
NOTE:
The R2 may be set to read/write disabled (not ready) by setting the value of SYMAPI_RDF_RW_DISABLE_R2 to ENABLE
in the options file. For more information, refer to the Dell Solutions Enabler CLI Reference Guide.
Syntax
Use establish -full for a device group, composite group, storage group, or device file:
Use the -use_bias option in SRDF/Metro configurations to indicate that neither the Witness nor the vWitness methods of
determining bias is used:
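Representative forms, based on the examples below (DgName, FileName, SID, and GrpNum are placeholders):
symrdf -g DgName establish -full
symrdf -f FileName -sid SID -rdfg GrpNum establish -full -use_bias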
Examples
To establish all the SRDF pairs in the device group prod:
symrdf -g prod establish -full
To establish all the pairs in an SRDF/Metro group using bias:
symrdf -f /tmp/device_file -sid 085 -rdfg 86 establish -full -use_bias
[Figure: Incremental establish of an SRDF pair. The R2 device is Write Disabled to its host while changed tracks are copied from R1 to R2 over the SRDF links.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use incremental establish for a device group, composite group, storage group, or device file:
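A representative form, based on the examples below (DgName is a placeholder; the -cg, -sg, and -f selectors follow the same pattern):
symrdf -g DgName establish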
These commands do not include an option to define the type of establish operation, because incremental is the default for this operation.
NOTE:
R2 may be set to read/write disabled (not ready) by setting the value of SYMAPI_RDF_RW_DISABLE_R2 to ENABLE in the options file. For more information, refer to the Dell Solutions Enabler CLI Reference Guide.
Examples
To initiate an incremental establish on all SRDF pairs in the prod device group:
symrdf -g prod establish
To initiate an incremental establish for a list of SRDF pairs in SRDF/Metro group 86 where bias determines which side of the
device pair remains accessible to the host:
symrdf -f /tmp/device_file -sid 085 -rdfg 86 establish -use_bias
Failback to source
After a failover (planned or unplanned), use the failback command to resume normal SRDF operations by initiating read/write operations on the source (R1) devices and stopping read/write operations on the target (R2) devices.
Failback initiates the following activities for each specified SRDF pair in a device group:
1. The target (R2) device is write disabled to its local hosts.
2. Traffic is suspended on the SRDF links.
3. If the target side is operational, and there are invalid remote (R2) tracks on the source side (and the force option is
specified), the invalid R1 source tracks are marked to refresh from the target side.
4. The invalid tracks on the source (R1) side are refreshed from the target R2 side. The track tables are merged between the
R1 and R2 sides.
5. Traffic is resumed on the SRDF links.
6. The source (R1) device is read/write enabled to its local hosts.
The target (R2) devices become read-only to their local hosts.
[Figure: Failback of an SRDF pair. The R2 device is Write Disabled to its host and R2 changes are copied to R1 over the SRDF links.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use failback for a device group, composite group, storage group, or device file:
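A representative form, based on the example below (DgName is a placeholder):
symrdf -g DgName failback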
NOTE:
The R2 may be set to read/write disabled (not ready) by setting the value of SYMAPI_RDF_RW_DISABLE_R2 to ENABLE in the options file. For more information, refer to the Dell Solutions Enabler CLI Reference Guide.
Examples
To initiate a failback on all the SRDF pairs in the prod device group:
symrdf -g prod failback
Failover to target
Failovers are used to move processing to the R2 devices during scheduled maintenance (planned failover) or when an outage
makes the R1 devices unreachable (unplanned failover).
A failover transfers processing to the target (R2) devices and makes them read/write enabled to their local hosts.
Failover initiates the following activities for each specified SRDF pair in a device group:
● If the source (R1) device is operational, the SRDF links are suspended.
● If the source side is operational, the source (R1) device is write disabled to its local hosts.
● The target (R2) device is read/write enabled to its local hosts.
[Figure: Failover of an SRDF pair. While R1 is unreachable, R2 is write enabled to its host; the R1 side is Write Disabled.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use failover for a device group, composite group, storage group, or device file:
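A representative form, based on the example below (DgName is a placeholder):
symrdf -g DgName failover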
Examples
To perform a failover on all the pairs in the prod device group:
symrdf -g prod failover
Invalidate R1 tracks
The invalidate r1 operation invalidates all tracks on the source (R1) side, so they can be copied over from the target (R2)
side.
NOTE:
The SRDF pairs at the source must already be Suspended and write disabled (not ready).
Syntax
Use invalidate r1 for a device group, composite group, storage group, or device file:
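A representative form, based on the example below (DgName is a placeholder):
symrdf -g DgName invalidate r1 [-nowd]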
Options
-nowd
Bypasses the validation check to ensure that the target of operation is write disabled to the host.
Examples
To invalidate the source (R1) devices in all the SRDF pairs in device group prod:
symrdf -g prod invalidate r1
Invalidate R2 tracks
The invalidate r2 operation invalidates all tracks on the target (R2) side so that they can be copied over from the source
(R1) side.
NOTE:
The SRDF pairs at the source must already be Suspended and write disabled (not ready).
Syntax
Use invalidate r2 for a device group, composite group, storage group, or device file:
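A representative form, based on the example below (DgName is a placeholder):
symrdf -g DgName invalidate r2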
Examples
To invalidate the target (R2) devices in all the SRDF pairs in device group prod:
symrdf -g prod invalidate r2
Make R1 ready
The Ready state means the specified mirror is ready to the host. The mirror is enabled for both reads and writes.
ready r1 sets the source (R1) devices to ready for their local hosts.
This operation is particularly helpful when all SRDF links are lost and the devices are operating in domino mode.
Syntax
Use ready r1 for a device group, composite group, storage group, or device file:
Examples
To make the source (R1) device ready in all the SRDF pairs in device group prod:
symrdf -g prod ready r1
Syntax
Use not_ready r1 on a device group, composite group, storage group, or device file:
Examples
To make the source (R1) devices not ready in all the SRDF pairs in device group prod:
symrdf -g prod not_ready r1
Syntax
Use ready r2 for a device group, composite group, storage group, or device file:
Examples
To make the target (R2) devices ready in all the SRDF pairs in device group prod:
symrdf -g prod ready r2
Syntax
Use not_ready r2 for a device group, composite group, storage group, or device file:
Examples
To make the target (R2) devices not ready in all SRDF pairs in device group prod:
symrdf -g prod not_ready r2
Syntax
Use merge for a device group, composite group, storage group, or device file:
Examples
To merge the track tables of all the SRDF pairs in device group prod:
symrdf -g prod merge
Example
To move one-half of the SRDF pairing of SRDF group 10 to a new SRDF group 15:
symrdf half_movepair -sid 123 -file devicefile -rdfg 10 -new_rdfg 15
All devices that are moved together must have the same SRDF personality: from R1 to R1 or from R2 to R2.
Syntax
Move SRDF pairs using a device group, storage group, or device file:
Move SRDF pairs provides details on the symrdf movepair command for device files.
Options
-exempt
Allows devices to be moved into an active SRDF/A session without affecting the state of the session or
requiring that other devices in the session be suspended.
Restrictions
The movepair operation has the following restrictions:
Examples
To move pairs in a file from SRDF group 10 to SRDF group 15:
symrdf movepair -sid 123 -file devicefile -rdfg 10 -new_rdfg 15
The first device in each line of the device file moves to the new SRDF group. The second device in each line of the file moves to
the remote SRDF group that is paired with the new SRDF group.
Syntax
Use rw_disable r2 for a device group, composite group, storage group, or device file:
Examples
To read/write disable all the target (R2) mirrors in the SRDF pairs in a device group prod:
symrdf -g prod rw_disable r2
Refresh R1
The refresh R1 mirror operation marks any changed tracks on the source (R1) side to refresh from the R2 side.
Use the refresh R1 mirror action when the R2 device holds the valid copy and the R1 device's invalid tracks require
refreshing using the R2 data.
Syntax
Use refresh r1 for a device group, composite group, storage group, or device file:
Examples
To refresh all the source (R1) devices in all the SRDF pairs in the device group prod:
symrdf -g prod refresh r1
Refresh R2
The refresh R2 mirror operation marks any changed tracks on the target (R2) side to refresh from the R1 side.
Use the refresh R2 mirror operation when the R1 device holds the valid copy and the R2 device's invalid tracks require
refreshing using the R1 data.
Syntax
Use refresh r2 for a device group, composite group, storage group, or device file:
Examples
To refresh the target (R2) devices in all the SRDF pairs in device group prod:
symrdf -g prod refresh r2
NOTE: Restore operations (incremental or full) are not allowed when the R2 device is larger than the R1 device.
When a restore is initiated for each specified SRDF pair in a device group, the following occurs:
1. The source (R1) device is write disabled to its local hosts.
2. The target (R2) device is write disabled to its local hosts.
3. Traffic is suspended on the SRDF links.
4. All tracks on the source (R1) device are marked as invalid.
5. All R1 tracks are refreshed from the R2 side. The track tables are merged between the R1 and R2 side.
6. Traffic is resumed on the SRDF links.
7. The source (R1) device is read/write enabled to its local hosts.
In SRDF/S configurations, when the restore control operation has successfully completed and the device pair is in the
Synchronized state, the source (R1) device and the target (R2) device contain identical data.
In SRDF/A configurations, when the restore control operation has successfully completed and the device pair is in the
Consistent state, the target (R2) device contains dependent write consistent data.
In SRDF/Metro configurations, once the source (R1) device and the target (R2) device contain identical data, the pair state is
changed to either ActiveActive or ActiveBias and the R2 side is made RW-accessible to the host(s).
[Figure: Full restore of an SRDF pair across Site A and Site B. Both R1 and R2 are Write Disabled to their hosts while R2 data is copied to R1 over the SRDF links.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use restore -full for a device group, composite group, storage group, or device file:
Include the -use_bias option in SRDF/Metro configurations to indicate that neither the Witness nor vWitness methods of
determining bias are used:
For SRDF/A configurations, the restore operation must include all devices in the group unless the devices are exempt.
For SRDF/Metro configurations:
● The restore operation must include all devices in the group.
● If the Witness method is used to determine which side of the device pair remains accessible to the host, the Witness groups
must be online.
Use the verify command to confirm that the SRDF pairs are in the correct state:
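For example, to confirm that the pairs in device group prod have reached the Synchronized state (an illustrative check; for SRDF/A pairs the Consistent state applies instead):
symrdf -g prod verify -synchronized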
Examples
To initiate a full restore on all SRDF pairs in the prod device group:
symrdf -g prod restore -full
To initiate a restore on a list of devices in an SRDF/Metro group where bias determines which side of the device pair remains accessible to the host:
symrdf -f /tmp/device_file -sid 085 -rdfg 86 restore -full -use_bias
NOTE: Restore operations (incremental or full) are not allowed when the R2 device is larger than the R1 device.
During an incremental restore, SRDF carries out the following activities for each specified SRDF pair in a device group:
1. Set the source (R1) device to write disabled to its local hosts.
2. Set the target (R2) device to write disabled to its local hosts.
3. Suspend traffic on the SRDF links.
4. Refresh the invalid tracks on the source (R1) device from the changed tracks on the target (R2) side. The track tables are
merged between the R1 and R2 side.
5. Resume traffic on the SRDF links.
6. Set the source (R1) device to read/write enabled to its local hosts.
In SRDF/S configurations, when the restore control operation has successfully completed and the device pair is in the
Synchronized state, the source (R1) device and the target (R2) device contain identical data.
In SRDF/A configurations, when the restore control operation has successfully completed and the device pair is in the
Consistent state, the target (R2) device contains dependent write consistent data.
In SRDF/Metro configurations, once the source (R1) device and the target (R2) device contain identical data, the pair state is
changed to either ActiveActive or ActiveBias and the R2 side is made RW-accessible to the host(s).
NOTE:
R2 may be set to read/write disabled (not ready) by setting the value of SYMAPI_RDF_RW_DISABLE_R2 to ENABLE in the options file. For more information, refer to the Dell Solutions Enabler CLI Reference Guide.
The following image shows the incremental restore of an SRDF pair.
[Figure: Incremental restore of an SRDF pair. Both R1 and R2 are Write Disabled to their hosts while R1 data is refreshed from R2 data over the SRDF links.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
NOTE: Incremental is the default for the restore operation. No option is required.
Use incremental restore for a device group, composite group, storage group, or device file:
Include the -use_bias option in SRDF/Metro configurations to indicate that neither the Witness nor vWitness methods of
determining bias are used:
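Representative forms, based on the full restore examples earlier in this section (incremental is the default, so no option is required):
symrdf -g prod restore
symrdf -f /tmp/device_file -sid 085 -rdfg 86 restore -use_bias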
For SRDF/A configurations, the restore operation must include all devices in the group unless the devices are exempt.
For SRDF/Metro configurations:
● The restore operation must include all devices in the group.
● If the Witness method is used to determine which side of the device pair remains accessible to the host, the Witness groups
must be online.
Use the verify command to confirm that the SRDF pairs are in the correct state:
Syntax
Use resume for a device group, composite group, storage group, or device file:
NOTE:
The resume operation fails if you omit the -force option when a merge of the track tables is required.
Examples
To resume the SRDF links between all the SRDF pairs in storage group prod_sg:
symrdf -sg prod_sg resume
Split
Split SRDF pairs when you require read and write access to the target (R2) side of one or more devices in a device group,
composite group, storage group, or device file.
For a split operation, SRDF carries out the following activities for each specified SRDF pair:
1. Suspend traffic on the SRDF links.
2. Set the target (R2) device to read/write enabled to its local hosts.
After the target (R2) device is split from the source (R1) device, the SRDF pair is in the Split state.
[Figure: Split of an SRDF pair. The SRDF links are suspended and R1 is split from R2; each side is accessible to its host.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to establish. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use split for a device group, composite group, storage group, or device file:
NOTE:
Include the -force option when the device pairs are in domino mode or adaptive copy mode.
Examples
To perform a split on all the SRDF pairs in the prod device group:
symrdf -g prod split
If a split operation impacts the access integrity of a database, additional operations such as freezing may be necessary. The
freeze operation suspends writing database updates to disk.
Use the freeze operation in conjunction with the split operation.
Use the symioctl command to invoke I/O control operations to freeze access to a specified relational database or database
objects.
NOTE:
For access to the specified database, set the value of SYMCLI_RDB_CONNECT to your username and password.
Suspend/resume timestamp
Suspend and resume operations cause the SRDF link status to change from read/write to not ready, or from not ready to read/write. This status information is displayed in the output of the symdev, sympd, and symdg show commands.
NOTE:
The timestamp in the displays is relative to the clock on the host where the command was issued and is reported for each
SRDF mirror on both the R1 and R2 mirrors. This timestamp is not associated with the R2 data for SRDF/A.
Options
-immediate
For SRDF/A configurations, causes the suspend command to drop the SRDF/A session immediately.
-exempt
Suspends devices without affecting the state of the SRDF/A session or requiring that other devices in
the session be suspended.
-bias R1|R2
For SRDF/Metro configurations, specifies which side is the bias side.
Examples
To suspend the SRDF links between all the pairs in device group prod:
symrdf -g prod suspend
Restrictions
The half_swap operation has the following restrictions:
● The R2 device cannot be larger than the R1 device.
● A swap cannot occur during an active SRDF/A session or when cleanup or restore is running.
● Adaptive copy write pending is not supported when the R1 side of the RDF pair is on an array running HYPERMAX OS. If the
R2 side is on an array running HYPERMAX OS and the mode of the R1 is adaptive copy write pending, SRDF sets the mode
to adaptive copy disk.
Example
To swap the R1 designation of the associated BCV RDF1 pairs in device group prod, and refresh the data on the current R1
side:
symrdf -g Prod -bcv half_swap -refresh R1
NOTE:
Restrictions
● A swap cannot occur if the R1 device (which becomes the R2) is currently a target for a TimeFinder/Snap or TimeFinder/
Clone emulation. A device may not have two sources for data (in this case, the R1 and the emulation source). The swap
cannot occur even if the emulation session has already completed copying the data.
● Adaptive copy write pending is not available when the R1 side of the RDF pair is on an array running HYPERMAX OS. If the
R2 side is on an array running HYPERMAX OS, and the mode of the R1 is adaptive copy write pending, SRDF sets the mode
to adaptive copy disk.
Example
To swap the R1 designation of the associated BCV RDF1 pairs in device group prod, and refresh the data on the current R1
side:
symrdf -g Prod -bcv swap -refresh R1
Update R1 mirror
The update operation starts an update of the source (R1) side after a failover while the target (R2) side may still be
operational to its local hosts.
Use update to perform an incremental data copy of only the changed tracks from the target (R2) device to the source (R1)
device while the target (R2) device is still Write Enabled to its local host.
SRDF updates each specified SRDF pair in a device group as follows:
1. Suspend the SRDF (R1 to R2) links when the SRDF links are up.
2. If there are invalid remote (R2) tracks on the source side and the force option was specified, mark tracks that were changed
on the source devices for refresh from the target side.
3. Refresh the invalid tracks on the source (R1) side from the target R2 side. The track tables are merged between the R1 and
R2 sides.
4. Resume traffic on the SRDF links.
NOTE:
If you update R1 while the SRDF pair is Suspended and not ready at the source, the SRDF pairs are in an Invalid state when the update completes. To resolve this condition, use the rw_enable r1 operation to make the SRDF pairs become Synchronized.
When the update is complete, the pairs are in the R1 Updated state.
The following image shows an update of an SRDF pair.
[Figure: Update of the R1 mirror. Changed tracks are copied from R2 to R1 over the SRDF links while the R1 side is Write Disabled to its host.]
NOTE:
When you issue the symrdf command, device external locks are set on all SRDF devices you are about to control. See
Device external locks and Commands to display and verify SRDF, devices, and groups.
Syntax
Use update for a device group, composite group, storage group, or device file:
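A representative form, based on the examples below (DgName is a placeholder):
symrdf -g DgName update [-until #]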
Use the update -until # command for scenarios where you want I/O to continue from the remote host and periodically
update an inactive R1 device over an extended period of time.
Options
-until
Checks the number of invalid tracks that are allowed to build up from the active R2 local I/O before another update (R2 to R1 copy) is triggered. The update sequence loops until the invalid track count is less than the number specified by the # value.
If the invalid track count is less than the number of tracks specified by the -until # value, the command exits. Otherwise, the following sequence of operations for the update R1 mirror is retriggered until the threshold is reached.
1. Update the R1 mirror.
2. Build changed tracks on R2.
3. Check the invalid track count.
Examples
To update all the source (R1) devices in the SRDF pairs, for device group prod:
symrdf -g prod update
To update the R1 mirror of device group prod continuously until the number of tracks to be copied is below 1000:
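A command consistent with the -until option described above:
symrdf -g prod update -until 1000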
Write disable R1
The write_disable R1 operation sets the source (R1) devices as write disabled to their local hosts.
Syntax
Use write_disable r1 for a device group, composite group, storage group, or device file:
Examples
To write disable all the source (R1) mirrors in the SRDF pairs in device group prod:
symrdf -g prod write_disable r1
Write disable R2
The write_disable R2 operation sets the target (R2) devices as write disabled to their local hosts.
Syntax
Use write_disable r2 for a device group, composite group, storage group, or device file:
Examples
To write disable all the target (R2) mirrors in the SRDF pairs in device group prod:
symrdf -g prod write_disable r2
Write enable R1
The read/write enable R1 operation makes the source (R1) devices accessible to their local hosts.
Syntax
Use rw_enable r1 for a device group, composite group, or device file:
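Examples
To read/write enable all the source (R1) mirrors in the SRDF pairs in device group prod (a form mirroring the rw_enable r2 example that follows):
symrdf -g prod rw_enable r1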
Write enable R2
The read/write enable R2 operation makes the target (R2) devices accessible to their local hosts.
Syntax
Use rw_enable r2 for a device group, composite group, or device file:
Examples
To enable all the target (R2) mirrors in the SRDF pairs in device group prod:
symrdf -g prod rw_enable r2
Thus, the maximum number of SRDF groups supported on the HYPERMAX OS director is effectively 186 (250-64).
SRDF/A device groups have additional configurable attributes. See Set SRDF/A group cycle time, priority, and transmit idle.
Link limbo
Link limbo is a feature for advanced users. It allows you to set a specific length of time for Enginuity to wait when a link goes
down before updating the link status.
You can specify a link limbo value on the local side or both the local and remote sides of a dynamic SRDF group. If the link status
is still not ready after the link limbo time expires, devices are marked not ready to the link.
The value of the link limbo timer can be 0 through 120 seconds. The default is 10 seconds.
To protect from session drops after the maximum link limbo time, enable the Transmit Idle feature (see Manage transmit idle).
NOTE:
Setting the link limbo timer affects the application timeout period, so it is not recommended to set the timer while running in synchronous mode. Switching to SRDF/S mode with the link limbo parameter configured for more than 10 seconds may cause an application, database, or host to fail if SRDF is restarted in synchronous or semi-synchronous mode.
Domino mode
Under certain conditions, the SRDF devices can be forced into the Not Ready state to the host if, for example, the host I/Os
cannot be delivered across the SRDF link.
Use the domino attribute to stop all subsequent write operations to both R1 and R2 devices to avoid data corruption.
While such a shutdown temporarily halts production processing, domino mode can protect data integrity in case of a rolling
disaster.
Autolink recovery
If all SRDF links fail, the array stores the SRDF states of the affected SRDF devices. This enables the array to restore the
devices to these states automatically when the SRDF links become operational.
Enable the Autolink recovery attribute (-autolink_recovery) to allow SRDF to automatically restore the SRDF links.
Valid values for -autolink_recovery are on (enabled) and off (disabled).
The default is off.
Hardware compression
SRDF hardware compression is available over Fibre Channel and GigE links. Compression minimizes the amount of data
transmitted over an SRDF link.
Use the -hwcomp option to control hardware compression. Valid values for the option are on (compression is enabled) or off
(compression is disabled). The default value is off.
Software compression
Software compression is available to SRDF traffic over Fibre Channel and GigE SRDF links. If software compression is enabled,
Enginuity compresses data before sending it across the SRDF links.
The arrays at both sides of the SRDF links must support software compression and must have the software compression
feature enabled in the configuration file.
Use the -swcomp option to control software compression. Valid values for the option are on (compression is enabled) or off
(compression is disabled). The default is off.
SRDF/Metro
HYPERMAX OS 5977.691.684 and Solutions Enabler 8.1 introduced SRDF/Metro, which is a significant departure from traditional SRDF.
In SRDF/Metro configurations, R2 devices on arrays can be Read/Write accessible to hosts. SRDF/Metro R2 devices acquire
the federated personality of the primary R1 device (such as geometry and device WWN). This federated personality of the R2
device causes the R1 and R2 devices to appear to host(s) as a single virtual device across both SRDF paired arrays.
By default, an SRDF/Metro configuration uses a Witness to determine which side of the SRDF device pair remains R/W
accessible to the host or hosts in the event of link or other failures. The witness can be another array (an array Witness) or
virtual Witness (vWitness).
SRDF/Metro Operations provides more information on SRDF/Metro and how to manage it.
NOTE: If hardware compression is enabled, the maximum number of ports per director is 12.
When you create an SRDF group on VMAX3 arrays and VMAX All Flash arrays, select both the director AND the ports for the
SRDF emulation to use on each side.
Syntax
Use the symrdf addgrp command to create an SRDF group.
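A representative form, assembled from the required options listed below:
symrdf addgrp -sid SID -rdfg GrpNum -label GrpLabel -dir Dir:Port,... -remote_sid SID -remote_rdfg GrpNum -remote_dir Dir:Port,...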
Required options
-sid SID
The ID of the array where the group is added.
-label GrpLabel
A label for a dynamic SRDF group.
-rdfg GrpNum
An SRDF group number. Valid values are 1 - 250.
-dir Dir:Port, Dir:Port
A comma-separated list of one or more ports on a local director to be added to the group.
-remote_dir Dir:Port, Dir:Port
A comma-separated list of one or more ports on a remote director to be added to the group.
-remote_rdfg GrpNum
The SRDF group number on the remote array.
-remote_sid SID
The ID of the remote array.
Optional options
-vasa
Specifies that the SRDF group being created is a VASA SRDF group that can be used by VASA remote
replication. The following options cannot be used when specifying -vasa -async:
● -link_domino
● -remote_link_domino
● -autolink_recovery
● -remote_autolink_recovery
● -witness
-async
Identifies that the VASA SRDF group being created should be created in Asynchronous mode.
NOTE: This option is only allowed when the option -vasa is specified.
-sc_name
Specifies the storage container name associated with the SID.
NOTE: This option is only allowed when the option -vasa is specified.
-remote_sc_name
Specifies the storage container name associated with the remote SID.
NOTE: This option is only allowed when the option -vasa is specified.
Requirements
The following are requirements for adding a dynamic SRDF group:
● The dynamic_rdf parameter must be enabled.
● The local or remote array must not be in the symavoid file.
● You can perform multiple operations (addgrp, modifygrp, removegrp), but each operation must complete before
starting the next.
● Always specify a group label when adding a dynamic group.
Example - HYPERMAX OS
Arrays running HYPERMAX OS support multiple ports per director. You specify both the director ID and the port number when
specifying the local and remote ports to add to the new SRDF group.
To specify 3 ports on each array:
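An illustrative command (the director and port numbers shown here are hypothetical):
symrdf addgrp -sid 6180 -rdfg 4 -label dyngrp4 -dir 1e:8,1e:9,1e:10 -remote_sid 6240 -remote_rdfg 4 -remote_dir 1f:8,1f:9,1f:10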
Example - Mixed configurations
When one array in an SRDF configuration is running HYPERMAX OS, and one array is running Enginuity 5876, specify only the
director ID on the array running 5876, and specify both the director ID and port number on the array running HYPERMAX OS.
For example:
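An illustrative command (the director IDs are hypothetical); the local array runs HYPERMAX OS, so its director is specified as director:port, while the remote array runs Enginuity 5876, so only its director ID is given:
symrdf addgrp -sid 6180 -rdfg 4 -label dyngrp4 -dir 1e:8 -remote_sid 6240 -remote_rdfg 4 -remote_dir 13a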
3. Use the symcfg list -ra all -switched command to display all SRDF groups on the local array and its remotely
connected arrays.
4. Use the symrdf addgrp command to create an empty dynamic SRDF group.
In the following example, the symrdf addgrp command:
● Creates a new dynamic SRDF group, specifying the local array (-sid 6180) and remote array (-remote_sid 6240).
● Assigns an SRDF group number for the local array (-rdfg 4), and for the remote array (-remote_rdfg 4) to the
new group.
NOTE: The two SRDF group numbers can be the same or different.
● Assigns a group label (-label dyngrp4) to the new group.
This label can be up to 10 characters long, and provides a user-friendly ID to modify or delete the new group.
The group label is required to add/remove directors from the SRDF group.
● Adds directors on the local array (-dir 12a) and the remote array (-remote_dir 13a) to the new group:
symrdf addgrp -sid 6180 -rdfg 4 -label dyngrp4 -dir 12a -remote_rdfg 4 -remote_sid
6240 -remote_dir 13a
NOTE: Network topology is important when choosing director endpoints. If using Fibre Channel protocol, the
director endpoints chosen must be able to see each other through the Fibre Channel fabric in order to create
the dynamic SRDF links. Ensure that the physical connections between the local RA and remote RA are valid and
operational.
5. Use the symcfg -sid SID list -rdfg GrpNum command to confirm that the group was added to both arrays.
6. Use the symrdf createpair command to add SRDF pairs to the new group.
NOTE:
When creating an RDF pair between HYPERMAX OS and Enginuity 5876, the maximum symdev number that can be
used on the array running HYPERMAX OS is FFBF (65471).
In the following example, the symrdf createpair command:
● Adds the dynamic SRDF pairs listed in the device file (-file dynpairsfile) to the new dynamic SRDF group 4 (-rdfg 4).
● Specifies the local array (-sid 6180) as the R1 side for the group (-type R1).
● The -invalidate R2 option indicates that the R2 devices are the targets that will be refreshed from the R1 source devices.
● Since no mode is specified in the symrdf createpair command, the default RDF mode (adaptive copy disk) is used for the device pairs.
The remote side must be reachable in order to set the SRDF group attributes.
Syntax
Use the symrdf set rdfg command to set the attributes for an SRDF group.
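A representative form, assembled from the options listed below:
symrdf -sid SID -rdfg GrpNum set rdfg [-limbo 0-120] [-domino on|off] [-autolink_recovery on|off] [-hwcomp on|off] [-swcomp on|off] [-both_sides]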
Options
-both_sides
Applies the group attribute to both the source and target sides of an SRDF session. If this option is not
specified, attributes are only applied to the source side.
-limbo {0 - 120}
Sets the duration of the link limbo timer (see Link limbo).
-domino {on|off}
Switches domino mode on or off (see Domino mode). This option is not allowed for VASA SRDF groups.
-autolink_recovery {on|off}
Switches autolink recovery on or off (see Autolink recovery). This option is not allowed for VASA SRDF groups.
-hwcomp {on|off}
Switches hardware compression on or off (see Hardware compression).
-swcomp {on|off}
Switches software compression on or off (see Software compression).
NOTE:
For arrays running Enginuity 5876, you can also use the symconfigure command to set SRDF group attributes. For more
information, see the Dell Solutions Enabler Array Controls and Management CLI User Guide.
Examples
To set the link limbo value to one minute (60 seconds) for both sides of SRDF group 4 on array 6180:
symrdf -sid 6180 -rdfg 4 set rdfg -limbo 60 -both_sides
To set the Link Domino mode on both sides of group 4 on array 6180:
symrdf -sid 6180 -rdfg 4 set rdfg -domino on -both_sides
To set the Autolink Recovery mode on both sides of group 4 on array 6180:
symrdf -sid 6180 -rdfg 4 set rdfg -autolink_recovery on -both_sides
To set limbo to thirty seconds and turn off Link Domino and Autolink Recovery modes for SRDF group 12:
symrdf -sid 134 -rdfg 12 set rdfg -limbo 30 -domino off -autolink_recovery off
To turn on software compression and turn off hardware compression on both sides of the SRDF group 12:
symrdf -sid 134 -rdfg 12 set rdfg -swcomp on -hwc off -both_sides
Syntax
The symrdf modifygrp command modifies a dynamic SRDF group.
.........
-dir Dir:Port,Dir:Port,...
-remote_dir Dir:Port,Dir:Port,...
-witness
Options
-dir Dir:Port, Dir:Port
A comma-separated list of one or more local director:port combinations to be added to the group.
-remote_dir Dir:Port, Dir:Port
A comma-separated list of one or more ports on a remote director to be added to the group.
-witness
Identifies the group as an SRDF/Metro Witness group. This option is not allowed for VASA SRDF groups.
NOTE: This option does NOT set the witness attribute on the group as a part of the modifygrp
(that can only be done with the addgrp command). It just acknowledges that a witness group is
being modified.
Making physical cable changes within the SRDF environment may disable the ability to modify and delete dynamic group
configurations.
NOTE:
Reassigning directors for SRDF dynamic groups requires that you understand the network fabric topology when choosing
director endpoints.
The group label or group number is required for modify operations.
Removing dynamic SRDF groups
To be able to remove an SRDF group:
● Both sides of the SRDF configuration must be defined and reachable
● The group must be empty.
● At least one physical connection between the local and remote array must exist.
● In SRDF/Metro configurations:
○ You cannot remove a Witness group if an SRDF/Metro group is currently using that Witness group for protection.
○ You can remove a Witness group if it is protecting an SRDF/Metro configuration(s) and there is another Witness (either
physical (another array with witness groups to both sides of the SRDF/Metro configuration) or virtual (a vWitness that
is enabled and visible to both sides of the SRDF/Metro configuration)) available to provide the protection. The Witness
group can be removed and the new Witness array starts protecting the SRDF/Metro group(s).
NOTE:
Deleting the group removes all local and remote director support.
Syntax
Use the symrdf deletepair command to remove all devices from the group.
Use the symrdf removegrp command to remove an SRDF group.
Options
-remote_rdfg GrpNum
The SRDF group number on the remote array.
-label GrpLabel
The label of the dynamic SRDF group.
-noprompt
Prompts are not displayed after the command is entered.
-i Interval
The interval, in seconds, between attempts to acquire an exclusive lock on the array host database or on
the local and/or remote arrays.
-c Count
The number (count) of times to attempt to acquire an exclusive lock on the array host database, or on
the local and/or remote arrays.
-star
The action is targeted at an RDF group in STAR mode.
-symforce
Requests that the array force the operation to be executed when it would normally be rejected.
NOTE: When used with removegrp, this option removes one side of a dynamic SRDF group if the
other side is not defined or is not accessible. Do not use this option except in emergencies.
-witness
The SRDF group is a Witness group.
● The symrdf deletepair command deletes the SRDF dynamic pairs defined in the device file dynpairsfile. Because all device pairs in the SRDF group are listed in the device file, the group is emptied.
● The symrdf removegrp command removes the local and remote dynamic SRDF groups:
symrdf deletepair -sid 80 -rdfg 4 -file dynpairsfile
symrdf removegrp -sid 80 -label dyngrp4
Restrictions
To be able to remove one side of an SRDF group:
● The other side is not defined or reachable.
If the other side of the SRDF configuration is reachable, you cannot issue this command.
● The group is empty.
Syntax
Use the symrdf removegrp command with the -symforce option to remove a dynamic SRDF group from one side of an
SRDF configuration.
Example
The following example removes dyngrp4 from array 180 on the local side:
symrdf removegrp -sid 180 -label dyngrp4 -symforce
All devices for an SRDF side must be in the same column. That is, all R1 devices must be in either the left or right
column, and all R2 devices must be in the other column.
HYPERMAX OS
Solutions Enabler with HYPERMAX OS 5977 does not support meta-devices.
SRDF device pairs consisting of meta-devices on one side and non-meta-devices on the other side are valid if the meta-devices
are on an array running Enginuity 5876.
NOTE:
The maximum symdev number that can be used on the HYPERMAX OS array is FFBF (65471).
Example
In the following example, the vi text editor creates the RDFG148 device file consisting of 7 SRDF pairs for the local and remote
arrays.
When the symrdf createpair -file FileName command processes the device file, the -type option determines
whether the devices in the left column are R1 or R2.
vi RDFG148
0060 0092
0061 0093
0062 0094
0063 0095
0064 0096
0065 0097
0066 0098
R2 devices larger than their corresponding R1 devices cannot restore or failover to the R1.
SYMAPI_RDF_CREATEPAIR_LARGER_R2 in the options file enables or disables creating SRDF pairs where R2 is larger than its corresponding R1. Valid values for the option are:
● ENABLE (default): createpair for devices where R2 is larger than its R1 is allowed.
● DISABLE: createpair for devices where R2 is larger than its R1 is blocked.
symrdf createpair (-file option) syntax
Use the createpair command to create SRDF device pairs.
createpair
-type <R1|R2>
-remote_sg SgName
-invalidate R1|R2 | -establish | -restore [-rp] | -format [-establish]
-hop2_rdfg GrpNum
-rdf_mode sync | semi | acp_wp | acp_disk | async
-remote
-nowd
NOTE: Create device pairs describes creating SRDF device pairs in SRDF/Metro configurations.
Options
-file Filename
The name of a device file for SRDF operations.
-rdfg GrpNum
The identity of a specific SRDF group.
When used with -sg createpair -hop2, the option identifies the SRDF group associated with the
SG.
-type [R1|R2]
Defines whether the devices listed in the left column of the device file are configured as the R1 side or
the R2 side.
-remote_sg
When used with -hop2_rdfg GrpNum, the identity of the remote storage group for the second-hop.
-invalidate [R1|R2]
Marks the R1 devices or R2 devices in the list to be the invalidated target for a full device copy once the
SRDF pairs are created.
-establish
Begins copying data to invalidated targets, synchronizing the dynamic SRDF pairs once the SRDF pairs
are created.
-restore
Begins copying data to the source devices, synchronizing the dynamic SRDF pairs once the SRDF pairs
are created.
-rp
Allows the operation even when one or more devices are tagged for RecoverPoint.
A non-concurrent R1 device can be tagged for RecoverPoint. A RecoverPoint tagged device can be
used as an R1 device. A device tagged for RecoverPoint cannot be used as an R2 device (createpair) or
swapped to become an R2 device (swap, half-swap).
-format
Clears all tracks on the R1 and R2 sides to ensure no data exists on either side, and makes the R1 read
write to the host.
You can specify this option with -establish, -type, -rdf_mode, -exempt, and -g.
When used with -establish, the devices become read write on the SRDF link and are synchronized.
-rdf_mode
Sets the SRDF mode of the pairs to be one of the following:
● synchronous (sync),
● asynchronous (async),
● adaptive copy disk mode (acp_disk),
● adaptive copy write pending mode (acp_wp).
NOTE:
Adaptive copy write pending mode is not supported when the R1 mirror of the RDF pair is on an
array running HYPERMAX OS.
Adaptive Copy Disk is the default mode unless overridden by the setting of
SYMAPI_DEFAULT_RDF_MODE in the options file. See Block createpair when R2 is larger than
R1 .
-g GrpName
The name to give the device group created with the devices in the device file.
-remote
Requests a remote data copy. When the link is ready, data is copied to the SRDF mirror.
-hop2_rdfg
Specifies the SRDF group number for the second-hop. Applicable only for createpair -hop2 for an
SG.
-nowd
Bypasses the check explained in Verify host cannot write to target devices with -nowd option .
Example
In the following example:
● -file devices indicates that the device pairs are created using the device file devices.
● -g ProdDB names the device group ProdDB.
● -sid 810 indicates that the local source array is SID 810.
● -invalidate r2 indicates that the R2 devices are refreshed from the R1 source devices.
● -type RDF1 indicates that the devices listed in the left column of the device file are configured as the R1 side.
symrdf createpair -g ProdDB -file devices -sid 810 -rdfg 2 -invalidate r2 -nop -type RDF1
Example
In the following example, the createpair command:
● Creates device pairs using device pairs listed in a device file devicefile,
● Ignores the check to see if the host can write to its targets (-nowd),
● Sets the mode to the default (adaptive copy disk) by not specifying another mode:
symrdf createpair -sid 123 -file devicefile -type r1 -rdfg 10 -nowd
Storage groups (SGs) are a collection of devices on the array that are used by an application, a server, or a collection of servers.
Dell Solutions Enabler Array Controls and Management CLI User Guide provides more information about storage groups.
The following command options have been added or modified:
● -sg SgName - Name of storage group on the local array. Required for all -sg operations.
● -hop2_rdfg GrpNum - SRDF group for the second hop. Used with -sg createpair -hop2.
● -rdfg GroupNum - SRDF group associated with the SG. Required for all -sg operations.
● -remote_sg SgName - Name of the storage group on the remote array. Used only for createpair operations.
This section contains:
● Pair devices using storage groups
● Pair mixed devices using storage groups
● Pair devices in cascaded storage groups
● Pair devices in storage groups (second hop)
createpair
-type <R1|R2>
-remote_sg SgName
-invalidate R1|R2 | -establish | -restore [-rp] | -format [-establish]
-hop2_rdfg GrpNum
-rdf_mode sync | semi | acp_wp | acp_disk
-remote
-exempt
-nowd
Options
-sg SgName
A storage group for SRDF operations.
-rdfg GrpNum
The number of the SRDF group that the command works on.
When used with -sg createpair -hop2, identifies the SRDF group associated with the storage
group.
-type [R1|R2]
Whether the devices are configured as the R1 side or the R2 side.
-remote_sg SgName
When used with -hop2_rdfg GrpNum, the remote storage group for the second-hop.
-invalidate [R1|R2]
Marks the source (R1) devices or the target (R2) devices to invalidate for a full copy when an SRDF pair
is created.
-establish
Begins copying data to invalidated targets, synchronizing the dynamic SRDF pairs once the SRDF pairs
are created.
-restore
Begins copying data to the source devices, synchronizing the dynamic SRDF pairs once the SRDF pairs
are created.
-rp
Allows the operation even when one or more devices are tagged for RecoverPoint.
A non-concurrent R1 device can be tagged for RecoverPoint. A RecoverPoint tagged device can be
used as an R1 device. A device tagged for RecoverPoint cannot be used as an R2 device (createpair) or
swapped to become an R2 device (swap, half-swap).
-format
Clears all tracks on the R1 and R2 sides to ensure no data exists on either side, and makes the R1 read
write to the host.
You can specify this option with -establish, -type, -rdf_mode, -exempt, and -g.
When used with -establish, the devices become read write on the SRDF link and are synchronized.
-hop2_rdfg GrpNum
The SRDF group number for the second-hop. Applicable only for createpair -hop2 for an SG.
-rdf_mode Mode
The SRDF mode of the pairs as one of the following:
● synchronous (sync),
● adaptive copy disk mode (acp_disk),
● adaptive copy write pending mode (acp_wp).
NOTE:
Adaptive copy write pending mode is not supported when the R1 mirror of the SRDF pair is on an
array running HYPERMAX OS.
Adaptive Copy Disk is the default mode unless overridden by the SYMAPI_DEFAULT_RDF_MODE
options file setting. See Block createpair when R2 is larger than R1 .
-remote
Requests a remote data copy. When the link is ready, data is copied to the SRDF mirror.
-nowd
Bypasses the check explained in Verify host cannot write to target devices with -nowd option .
Example
In the following example, storage group localSG includes 4 devices:
---------------------------------------------------------
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
000A0 N/A TDEV RW 3278
000A1 N/A TDEV RW 1875
000B1 N/A TDEV RW 4125
000C1 N/A TDEV RW 3278
---------------------------------------------------------
The devices in the remote storage group remoteSG are:
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
00030 N/A TDEV RW 1877
00031 N/A TDEV RW 4125
00050 N/A TDEV RW 3278
00061 N/A TDEV RW 4125
The createpair -type r1 operation pairs the devices in the localSG group with devices in the remoteSG group:
symrdf createpair -sid 123 -rdfg 250 -sg localSG -type r1 -remote_sg remoteSG
After the operation, pairings are:
Example
In the following example, local storage group localSG contains 4 devices of mixed types. Before the createpair operation,
device A0 is an R1 device and B1 is an R2 device:
---------------------------------------------------------
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
000A0 N/A RDF1+TDEV RW 3278
000A1 N/A TDEV RW 1875
000B1 N/A RDF2+TDEV RW 4125
000C1 N/A TDEV RW 3278
The createpair operation pairs the devices in the localSG group with devices in the remoteSG group:
● -sid 123 -sg localSG -type r1 - Create device pairs so that devices in the localSG group on array 123 are R1
devices.
● -remote_sg remoteSG - Pair the devices in the localSG group with devices in the remoteSG group:
symrdf createpair -sid 123 -rdfg 250 -sg localSG -type r1 -remote_sg remoteSG
After the operation, device A0 is an R11 device and device B1 is an R21 device:
---------------------------------------------------------
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
000A0 N/A RDF11+TDEV RW 3278
000A1 N/A TDEV RW 1875
000B1 N/A RDF21+TDEV RW 4125
000C1 N/A TDEV RW 3278
Examples
To pair devices in the local parent storage group SG-P1 (including devices in SG-P1’s child storage groups) with devices in the
remote parent storage group SG-P2 (including devices in SG-P2’s child storage groups):
symrdf createpair -sg SG-P1 -remote_sg SG-P2
To pair devices in the local child storage group local-SG-Child-1 with devices in the remote child storage group remote-SG-
Child-2:
symrdf createpair -sg local-SG-Child-1 -remote_sg remote-SG-Child-2
Example
The following example creates an R1 -> R21 -> R2 configuration starting with an R1 -> R2 pair.
Before the operation, the storage group SG_ABC in RDF group 16 on local SID 085 contains 2 R1 devices:
---------------------------------------------------------
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
01AA0 N/A RDF1+TDEV RW 3278
01AB1 N/A RDF1+TDEV RW 4125
These are paired with 2 R2 devices in storage group SG_ABC on remote SID 086 (hop 1):
N/A 01AA0 RW 0 0 NR 0007A WD...
N/A 01AB1 RW 0 0 NR 0007B WD...
On the remote SID 087 (hop 2), storage group SG_ABC_HOP2 in RDF group 6 contains two unpaired devices:
The following command creates an R1 -> R21 -> R2 configuration. The devices at hop 2 (SID 087) become R2 devices:
symrdf -sg SG_ABC -sid 085 -rdfg 16 -remote_sg remote_SG_ABC_HOP2 createpair -type R1 -est
-hop2 -hop2_rdfg 6
---------------------------------------------------------
Sym Device Cap
Dev Pdev Name Config Sts (MB)
---------------------------------------------------------
0009A N/A RDF2+TDEV RW 3278
0009B N/A RDF2+TDEV RW 4125
The devices at hop 1 that were R2 before the operation are now R21 devices.
In traditional SRDF configurations, the R2 may be set to read/write disabled (not ready) if SYMAPI_RDF_RW_DISABLE_R2=ENABLE is set in the options file. For more information, refer to the Dell Solutions Enabler CLI Reference Guide.
Example
In the following example, the createpair -establish command:
● Creates device pairs using device pairs listed in a device file devicefile.
● Begins copying data to its targets, synchronizing the device pairs listed in the device file.
symrdf createpair -file devicefile -sid 55 -rdfg 1 -type R1 -establish
Restrictions
The symrdf createpair -format option has the following restrictions:
● Not supported in concurrent SRDF configurations
● SRDF device pairs cannot be created in an SRDF Witness group
● The R1 and R2 cannot be mapped to a host
Example
In this example, the createpair -format command:
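A form consistent with the -format option and the earlier device file examples (the array ID and group number are illustrative):
symrdf createpair -sid 55 -file devicefile -rdfg 1 -type R1 -format -establish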
Syntax
Use the symrdf createpair command with the -invalidate r1 or -invalidate r2 option to create devices (R1 or R2) in a new or existing configuration.
When the command completes, the pairing information is added to the SYMAPI database file on the host.
When the command completes, you can:
● Use the establish command to start copying data to the invalidated target devices.
● Use the restore command to start copying to the invalidated source device.
● Use the query command to check the progress of the establish operation:
For example:
symrdf -sid 55 -file devicefile establish -rdfg 1
symrdf -sid 55 -file devicefile query -rdfg 1
Once synchronized, you can perform various SRDF operations on SRDF pairs listed in the device file.
Example
In the following example, the symrdf createpair command:
● Creates new SRDF pairs from the list of device pairs in the file devicefile.
● The -type R1 option identifies the first-column devices in the device file in array 55 as R1 type devices.
● The -invalidate r2 option indicates that the R2 devices are the targets to be refreshed from the R1 source devices.
● The -nowd option bypasses the validation check to ensure that the target of operation is write disabled to its host.
● The SRDF pairs become members of SRDF group 1.
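A command matching the options described above:
symrdf createpair -sid 55 -file devicefile -rdfg 1 -type R1 -invalidate r2 -nowd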
Restrictions
● The device cannot be the source or target of a TimeFinder/Snap operation.
● Devices cannot be in the backend not ready state.
● The emulation type must be the same (for example, AS/400 devices have specific pairing rules).
● SRDF device pairs cannot be created in an SRDF/Metro Witness group
● You cannot create pairs using the -restore option in any of these circumstances:
○ an optimizer swap is in progress on a device.
○ there are local invalid tracks on either the local or remote device.
○ an SRDF/A session is active and -exempt is not specified.
Example
symrdf createpair -sid 55 -file devicefile -rdfg 1 -type R1 -restore
Restrictions
The following restrictions apply to creating dynamic concurrent SRDF pairs:
● SRDF BCVs designated as dynamic SRDF devices are not supported.
● The two SRDF mirrors of the concurrent device must be assigned to different SRDF groups.
● The concurrent dynamic SRDF, dynamic SRDF, and concurrent SRDF states must be enabled on the array.
● With the -restore option, the -remote option is also required if the link status for the first created remote mirror is
read/write.
● The following operations are blocked:
○ Adding an SRDF/Metro mirror when the device is already part of an SRDF/Metro configuration.
○ Adding an SRDF/Metro mirror when the device is already an R2 device.
○ Adding an SRDF R2 mirror to a device that has an SRDF/Metro RDF mirror.
○ Adding an SRDF/Metro mirror when the non-Metro RDF mirror is in Synchronous mode.
○ Adding an SRDF mirror in Synchronous mode when the device is already part of an SRDF/Metro configuration
Examples
In a previous example, the createpair command created dynamic device pairs in RDF group 1 using a device file named
devicefile. As a result, devices in the first column of the device file were configured as R1 devices on array 55.
Use the createpair command with the -restore -remote options to copy the data on the R2 devices to the R1 devices.
In this example:
● -restore begins a full copy from the target to the source, synchronizing the dynamic SRDF pairs in the device file.
● -remote copies data to the concurrent SRDF mirror when the concurrent link is ready.
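A sketch of such a command, assuming the same array and device file as the earlier example, with a hypothetical separate SRDF group (2) for the concurrent mirror:
symrdf createpair -sid 55 -rdfg 2 -file devicefile -type R1 -restore -remote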
NOTE:
These operations require the remote data copy option, or the concurrent link to be suspended.
NOTE:
The concurrent mirror device pairs must belong to a separate RA group from the one defined in the first device file pairing.
NOTE:
To prevent a device group or a composite group from becoming invalid, first remove the devices from the group before
performing the deletepair action on a device file.
After execution of the symrdf deletepair command, the dynamic SRDF pairs are canceled.
NOTE:
Suspend the SRDF links using the symrdf suspend command before using the symrdf deletepair command.
Restrictions
The deletepair operation fails when any of the following conditions exist:
Examples
To delete pairs for a device group:
● symrdf suspend suspends the SRDF links for group NewGrp
● symrdf deletepair changes NewGrp to a non-SRDF group
symrdf suspend -sid 55 -g NewGrp
symrdf deletepair -sid 55 -g NewGrp
To delete pairs using a device file:
● symrdf suspend suspends the SRDF links for the devices listed in devicefile.
● symrdf deletepair deletes the specified SRDF pairs. The devices become non-SRDF devices.
● -rdfg 2 specifies the SRDF group number:
symrdf suspend -sid 55 -file devicefile -rdfg 2
symrdf deletepair -sid 55 -file devicefile -rdfg 2
This functionality is not available for diskless devices and does not delete any device pairs containing R11, R21, or R22
devices.
Examples
● To suspend the SRDF relationship for device pairs listed in device file devicefile:
symrdf suspend -sid 55 -rdfg 112 -file devicefile
The half_deletepair command can be specified using a device file or device group.
When specified using a device file, all devices listed in the first column of the file are converted to regular devices (non-SRDF).
Devices in Concurrent SRDF configurations are converted to non-concurrent SRDF devices.
For applicable SRDF pair states for half_deletepair operations, see section Concurrent SRDF operations and applicable
pair states in the Solutions Enabler SRDF Family State Tables Guide.
NOTE: Suspend the SRDF links using the symrdf suspend command before using the half_deletepair command.
Restrictions
The symrdf half_deletepair command fails when any of the following situations exist:
● The device is in one of the following BCV pair states: Synchronized, SyncInProg, Restored, RestoreInProg, and SplitInProg.
● There is a background BCV split operation in progress.
● Devices in the backend are not in the ready state.
● There is an optimizer swap in progress on a device.
● SRDF consistency protection is enabled and the devices were not suspended with the -exempt option.
● The SRDF links are not suspended.
Examples
To remove the SRDF pairing from device group Prod and convert the devices assigned to Prod to regular (non-SRDF) devices,
leaving their remote partners as SRDF devices:
symrdf suspend -g Prod
symrdf -g Prod half_deletepair
To remove the SRDF pairing of SRDF group 4 on array 123 and convert one-half of those device pairs to regular (non-SRDF)
devices:
symrdf suspend -sid 123 -rdfg 4 -file devicefile
symrdf half_deletepair -sid 123 -rdfg 4 -file devicefile
Steps
1. Create a list of device pairings in a device file.
2. Use the createpair command to create the dynamic SRDF pairs.
3. Use the -g GroupName option to add the devices in the device file to a device group with the specified name.
For example, to create dynamic devices as specified in file devicefile and add them to a group named Newgrp:
symrdf createpair -sid 55 -rdfg 2 -file devicefile -type rdf1 -invalidate r2 -g NewGrp
All SRDF commands for these dynamic pairs can now be executed within the context of the NewGrp device group.
4. Use the -g GroupName option to perform operations on all the dynamic SRDF pairs in the group.
For example, establish the group:
symrdf -g NewGrp establish
There is no need to fully resynchronize the devices when performing the move. The current invalid track counters on both
R1 and R2 stay intact.
Syntax
SRDF pairs can be moved for a device file, storage group, or device group:
symrdf -file Filename -sid SID -rdfg GrpNum movepair -new_rdfg GrpNum
symrdf -sg SgName -sid SymmID -rdfg GrpNum movepair -new_rdfg GrpNum
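The device group form is not shown above; presumably it parallels the other two (a sketch; the -g form is an assumption):
symrdf -g DgName -rdfg GrpNum movepair -new_rdfg GrpNum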
Restrictions
The movepair operation has the following restrictions:
● A device cannot move when it is enabled for SRDF consistency.
● A device cannot move if it is in asynchronous mode when an SRDF/A cleanup or restore process is running.
● When moving one mirror of a concurrent R1 or an R21 device to a new SRDF group, the destination SRDF group must not be
the same as the one supporting the other SRDF mirror.
● When issuing a full movepair operation, the destination SRDF group must connect the same two arrays as the original
SRDF group.
● If the destination SRDF group is in asynchronous mode, the SRDF group type of the source and destination groups must
match. In other words, in asynchronous mode, devices can only be moved from R1 to R1, or from R2 to R2.
● If the destination SRDF group is supporting an active SRDF/A session, the -exempt option is required.
● If the original SRDF group is supporting an active SRDF/A session, the device pairs being moved must have been suspended
using the -exempt option.
Example
To move one-half of the SRDF pairing of SRDF group 10 to a new SRDF group 15:
symrdf half_movepair -sid 123 -file devicefile -rdfg 10 -new_rdfg 15
Dynamic R1/R2 swaps switch the SRDF personality of the SRDF device group or composite group. Swaps can also be performed
on devices in SRDF/A mode. Dynamic SRDF must be enabled to perform this operation.
Dynamic SRDF devices are configured as one of three types: RDF1 capable, RDF2 capable, or both. Devices must be configured
as both in order to participate in a dynamic swap.
Syntax
Use the symrdf list command with the -dynamic option to display SRDF devices configured as dynamic SRDF-capable:
symrdf list -dynamic [-R1] [-R2] [-both]
Syntax
You can issue the swap command for device groups, composite groups, storage groups, and device files:
symrdf [-g DgName |-cg CgName |-sg SgName |-f FileName] swap
-refresh {r1 | r2}
[-v | -noecho]
[-force]
[-symforce]
[-bypass]
[-noprompt]
[-i Interval]
[-c Count]
[-hop2 | -bcv [-hop2] | -all | -rbcv | -brbcv]
[-rdfg GrpNum]
[-sid SID]
Options
-bcv
Targets just the BCV devices associated with the SRDF device group for the swap action.
-all
Example
The following example:
● Swaps the R1 designation of the associated BCV RDF1 devices within device group ProdGrpB.
● Marks to refresh any modified data on the current R1 side of these BCVs from their R2 mirrors:
symrdf -g ProdGrpB -bcv swap -refresh R1
HYPERMAX OS
● Adaptive copy write pending is not supported when the R1 side of the SRDF pair is on an array running HYPERMAX OS. If
the R2 side is on an array running HYPERMAX OS and the mode of the R1 is adaptive copy write pending, SRDF sets the
mode to adaptive copy disk as a part of the swap.
This functionality requires that dynamic devices be both RDF1 and RDF2 capable.
Restrictions
The failover establish operation has the following restrictions:
● Both the R1 and the R2 devices in the failover must be dynamic SRDF devices.
● The R2 device cannot be larger than its R1 device.
● The swap cannot result in a cascaded R21<-->R21 device pair.
● This command cannot be executed on both mirrors of a concurrent R1 device (composite group operation). This swap would
convert the concurrent R1 into a concurrent R2, with a restore on both mirrors of that concurrent R2.
NOTE:
The symrdf failover -establish operation does not support devices operating in asynchronous mode with a read/
write link. This is because the R2 data is two or more HYPERMAX OS cycle switches behind the R1 data, and swapping
these devices would result in data loss.
Syntax
Options
-immediate
Deactivates the SRDF/A session immediately, without waiting for the two cycle switches to complete
before starting the failover -restore operation.
-establish
Begins copying data to invalidated targets, synchronizing the dynamic SRDF pairs once the SRDF pairs
are created.
-restore
Causes the dynamic SRDF device pairs to swap personality and start an incremental restore.
-remote
Requests a remote data copy flag with failback, failover, restore, update, and resume. When the
concurrent link is ready, data is copied to the concurrent SRDF mirror. These operations require the
remote data copy option, or the concurrent link to be suspended.
Restrictions
● If an SRDF group being failed over is operating in asynchronous mode, then all devices in the group must be failed over in the
same operation.
● The R1 and the R2 devices in the failover must be dynamic SRDF devices.
● The R2 device cannot be larger than its R1 device.
● The SRDF swap cannot result in a cascaded R21<-->R21 device pair.
● Not supported for device group operations involving more than one SRDF group.
● Cannot execute this command on both mirrors of a concurrent R2 device (composite group operation). This swap would
convert the concurrent R2 into a concurrent R1, with a restore on both mirrors of that concurrent R1.
SRDF/A restrictions
● All SRDF/A-capable devices running in asynchronous mode must be managed together in an SRDF/A session.
● For SRDF/A-capable devices enabled for consistency group protection, consistency must be disabled before attempting to
change the mode from asynchronous.
● SRDF Automated Replication (SRDF/AR) control operations are currently not supported for SRDF/A-capable devices
running in asynchronous mode.
● All SRDF/A sessions enabled within a consistency group operate in the same mode, multi-cycle or legacy (see SRDF/A cycle
modes for information on cycle modes). For example, if:
○ SRDF group 1 connects Site A and Site B, both running HYPERMAX OS, and
○ SRDF group 2 connects Site A, running HYPERMAX OS, and Site C, running Enginuity 5876, then:
■ Group 1 can run in multi-cycle mode.
■ Group 2 must run in legacy mode.
If both groups are in the same consistency group and are enabled together, then group 1 will transition from multi-
cycle to legacy mode as a part of the enable.
● If there are tracks owed from the R2 to the R1, do not set mode to asynchronous.
NOTE:
If tracks are owed to the R1 device, the -force option is required to make SRDF/A-capable devices in asynchronous
mode Ready on the link.
Enginuity 5876
If either array in the solution is running Enginuity 5876, there are 2 cycles on the R1 side, and 2 cycles on the R2 side.
Each cycle switch moves the delta set to the next cycle in the process. This mode is referred to as "legacy mode".
A new capture cycle cannot start until the transmit cycle completes its commit of data from the R1 side to the R2 side, and the
R2 apply cycle is empty.
The basic steps in the life of a delta set in legacy mode include:
1. On the R1 side, host writes collect in the Capture cycle's delta set for a specified number of seconds.
The length of the cycle is specified using the -cycle_time option.
If a given track is overwritten multiple times, only the last write is preserved.
2. Once the cycle timer expires, and both the R1's Transmit cycle and the R2's Apply cycle are empty:
● The delta set in the R2's Receive cycle is moved to the R2's Apply cycle, from which it is transferred to disk.
● The delta set in the R1's Capture cycle is moved to the R1's Transmit cycle, from which it begins transferring to the R2's
Receive cycle.
● A new delta set is created as the R1 Capture cycle, into which subsequent host writes are collected. The transmitted delta
set is received on the R2 side.
Figure: SRDF/A legacy mode delta set cycles. Primary site (R1): Capture cycle (N) and Transmit cycle (N-1). Secondary site (R2): Receive cycle (N-1) and Apply cycle (N-2).
Mixed configurations
When one array in an SRDF configuration is running HYPERMAX OS, and one or more other arrays are running Enginuity 5876:
● SRDF/A single sessions (SSC) have only two cycles on the R1 side (legacy mode)
● SRDF/A multi-session consistency sessions (MSC) operate in legacy mode.
When a delta set is applied to the R2 target device, the R1 and R2 are in the consistent pair state. The R2 side is consistently 2
cycles behind the R1 side.
HYPERMAX OS
If both arrays in the solution are running HYPERMAX OS, both SSC and MSC operate in multi-cycle mode. There can be 2 or
more cycles on the R1, but only 2 cycles on the R2 side. Cycle switches are decoupled from committing delta sets from the R1
to the R2.
When the preset Minimum Cycle Time is reached, the R1 data collected during the capture cycle is added to the transmit queue
and a new R1 capture cycle is started. There is no wait for the commit on the R2 side before starting a new capture cycle.
The transmit queue holds cycles waiting to be transmitted to the R2 side. Data in the transmit queue is committed to the R2
receive cycle when the current transmit cycle and apply cycle are empty.
Figure: SRDF/A multi-cycle mode. R1 side: Capture cycle (N) and a transmit queue of depth M holding Transmit cycles N-1 through N-M. R2 side: Receive cycle (N-M) and Apply cycle (N-M-1).
Queuing allows smaller cycles of data to be buffered on the R1 side and smaller delta sets to be transferred to the R2 side.
The SRDF/A session can adjust to accommodate changes in the solution. If the SRDF link speed decreases or the apply rate on
the R2 side decreases, more SRDF/A capture cycles can be added to the R1 side.
Data on the R2 side can be more than 2 cycles behind the R1.
In the event of R1 failure or link failure, a partial delta set of data can be discarded, preserving consistency on the R2. The maximum
data loss for such failures can be more than two SRDF/A cycles.
The EMC VMAX3 Family Product Guide for VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS and the Dell EMC
VMAX All Flash Product Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS contain a detailed description of
SRDF/A multi-cycle mode.
SRDF/Asynchronous operations
All SRDF/A operations (with the exception of consistency exempt, discussed later) must be performed on all devices in an SRDF
group.
Thus, all devices in an SRDF group must be in the same SRDF device group. This is in contrast with SRDF/S, where operations
can be performed on a subset of devices in an SRDF group.
The following table summarizes the operations described in this chapter.
Display checkpoint complete status
Command: symrdf checkpoint
Displays a checkpoint complete status when the data in the current cycle is committed to the R2 side.
Delta Set Extension management
Commands: symrdf set rdfa_dse, symconfigure commit, and (Enginuity 5876 only) symcfg show
Sets the SRDF/A DSE attributes for an SRDF group.
Manage transmit idle
Command: symrdf set rdfa -transmit_idle
Allows SRDF/A sessions to manage transient link outages without dropping.
Manage SRDF/A write pacing
Commands: symrdf set rdfa_pace, symrdf -rdfa_pace activate, symrdf -rdfa_pace deactivate, and symrdf -rdfa_wpace_exempt
Enables SRDF/A write pacing for groups or devices.
List SRDF/A-capable devices
Command: symrdf list -rdfa
Lists SRDF/A-capable devices.
Syntax
Use the set mode async operation to set the mode to asynchronous for a device group, composite group, or devices in a
device file:
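Based on the examples below, the general form is presumably (a sketch following this guide's conventions):
symrdf [-g DgName | -cg CgName | -f FileName] set mode async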
Examples
To set device group prod to asynchronous mode:
symrdf -g prod set mode async
To set composite group Comp to asynchronous mode:
symrdf -cg Comp set mode async
To set the devices listed in device.txt to asynchronous mode:
symrdf -file device.txt set mode async
NOTE:
This operation may not be allowed on TimeFinder/Snap and TimeFinder/Clone device pairs. The Dell Solutions Enabler SRDF
Family State Tables Guide provides more information.
Syntax
Use the -consistent set mode sync operation to set the mode to synchronous for a device group, storage group, or
devices in a device file:
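Based on the examples below, the general form is presumably (a sketch following this guide's conventions):
symrdf [-g DgName | -sg SgName | -f FileName] -consistent set mode sync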
Examples
To switch modes from asynchronous to synchronous and maintain R2 data consistency in group prod:
symrdf -g prod -consistent set mode sync
To switch modes from asynchronous to synchronous and maintain R2 data consistency for devices listed in device file
devfile1:
symrdf -f devfile1 -consistent set mode sync
Syntax
To set the SRDF/A group-level attributes on an SRDF group:
.............
set rdfa
[-cycle_time 1 - 60]
[-priority 1 - 64]
[-transmit_idle {on|off}]
[-both_sides]
Options
-cycle_time (-cyc)
Sets the minimum time to wait before attempting an SRDF/A cycle switch. This option is not allowed for
VASA RDF groups.
Valid values are 1 through 60 seconds.
The default value for Enginuity 5876 and later is 15 seconds.
-priority (-pri)
Sets which SRDF/A sessions are dropped if the cache becomes full.
Valid values are 1 (highest priority, last to be dropped) through 64 (lowest priority).
The default value is 33.
-transmit_idle (-tra)
Allows the SRDF/A session to wait (not drop) when the link cannot transmit data. This option is not
allowed for VASA RDF groups.
Valid state values are on and off.
The default value is on.
-both_sides
Applies the SRDF/A attributes to both the source and target sides of an SRDF/A session.
If -both_sides is not specified, attributes are applied only to the source side.
Examples
To set the minimum cycle time for both sides of SRDF/A group 160:
symrdf -sid 134 -rdfg 160 set rdfa -cycle_time 32 -both_sides
To set the session priority for both sides of SRDF/A group 160:
symrdf -sid 134 -rdfg 160 set rdfa -priority 55 -both_sides
To set the cycle time and session priority for only the source side of SRDF/A group 12:
symrdf -sid 134 -rdfg 12 set rdfa -cycle_time 32 -priority 20
An RDF Set 'Attributes' operation execution is in progress for RDF group 12.
Please wait...
SRDF/A Set Min Cycle Time(1134,012)..........................Started.
Syntax
Use the symrdf verify command with the -noinvalids and -consistent options to verify that devices in device groups,
composite groups, storage groups, and device files are in the 'Consistent with no invalid tracks' state.
Example
To monitor the clearing of invalid tracks every 60 seconds for the device group dg1:
symrdf verify -g dg1 -consistent -noinv -i 60
Depending on the state of the devices, one of the following messages is displayed:
None of the devices in the group 'dg1' are in 'Consistent with no invalid tracks' state.
Not all devices in the group 'dg1' are in 'Consistent with no invalid tracks' state.
All devices in the group 'dg1' are in 'Consistent with no invalid tracks' state.
For concurrent SRDF configurations, you must enable consistency for each R2 mirror separately.
Syntax
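The syntax is not reproduced here; based on the examples below, the general form is presumably:
symrdf [-g DgName | -f FileName -sid SID] [-rdfg GrpNum] enable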
Restrictions
Because you must enable consistency for each R2 mirror separately in a concurrent relationship, you cannot use the -rdfg
all option.
Examples
To enable consistency protection for SRDF/A pairs in device group prod:
symrdf -g prod enable
To enable consistency protection for SRDF/A pairs listed in device file devfile1:
symrdf -file devfile1 -sid 123 -rdfg 10 enable
To enable consistency for devices in device file FileOne:
symrdf -f FileOne -sid 123 -rdfg 55 enable
To enable consistency for R2 devices in a concurrent configuration (SRDF group 56 and SRDF group 57) of devgroup2 :
symrdf -g devgroup2 -rdfg 56 enable
symrdf -g devgroup2 -rdfg 57 enable
Examples
To disable consistency protection for SRDF/A pairs in device group prod:
symrdf -g prod disable
To disable consistency protection for SRDF/A pairs listed in device file devfile1:
symrdf -file devfile1 -sid 123 -rdfg 10 disable
The consistency exempt option (-exempt) is available with Enginuity 5876 and higher.
Use the consistency exempt option to dynamically add and remove device pairs from an active SRDF/A session without
affecting:
● The state of the session, or
● Reporting of SRDF pair states for devices that are not the target of the operation
When enabled, the consistency exempt option places devices into a consistency exempt state. Exempt devices are excluded
from the group's consistency check.
After the operation is complete, the consistency exempt state is automatically terminated. Specifically, the exempt state ends
when:
● The target devices are resumed and fully synchronized, and two full cycle switches have occurred, or
● The devices are removed from the group.
When devices are suspended and consistency exempt (within an active SRDF/A session) they can be controlled apart from
other devices in the session. This is useful for resume, establish, deletepair, half_deletepair, movepair, and
half_movepair operations.
Restrictions
● The consistency exempt option cannot be used for:
○ Devices that are part of an SRDF/Star configuration.
○ An SRDF/A session that is in the Transmit Idle state.
● If the device is an R2 device and the SRDF/A session is active, the half_movepair and half_deletepair commands
are not available.
Steps
1. Use the createpair -establish operation to create the new device pairs, add them to a temporary SRDF group (10),
and synchronize:
symrdf createpair -file Myfile -sid 1234 -rdfg 10 -type RDF1 -establish
3. Use the suspend operation to suspend the device pairs in the temporary group so they can be moved to the active
SRDF/A group:
symrdf suspend -file MyFile -sid 1234 -rdfg 10
NOTE:
Since the temporary group is synchronous, you cannot use the consistency exempt option.
4. Use the movepair operation with the -exempt option to move the device pairs from the temporary SRDF group to the
active SRDF/A group:
symrdf movepair -file MyFile -sid 1234 -rdfg 10 -new_rdfg 20 -exempt
6. Use the verify -consistent -noinvalids operation to display when the device pairs become consistent and there
are no invalid tracks on the R1 and R2 sides:
symrdf verify -file MyFile -sid 1234 -rdfg 20 -consistent -noinvalids
NOTE: Do not enable host access to the R1 side until the pair state for the devices reaches Consistent.
Steps
1. Use the suspend operation with the -exempt option to suspend the device pairs to be removed:
symrdf suspend -file MyFile -sid 1234 -rdfg 20 -exempt
2. Use the movepair operation to move the device pairs from the current SRDF group to another SRDF group:
symrdf movepair -file MyFile -sid 1234 -rdfg 20 -new_rdfg 30
NOTE: Do not enable host access to the R1 side until the pair state for the devices reaches Consistent.
Syntax
You can issue the checkpoint operation on a device group, composite group, storage group, or device file:
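Based on the options and examples below, the general form is presumably (a sketch):
symrdf [-g DgName | -cg CgName | -sg SgName | -f FileName] checkpoint [-i Interval] [-c Count]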
Options
-c Count
Number of times (Count) to repeat the operation before exiting.
-i Interval
Number of seconds to wait between successive iterations of the operation.
Default: 10 seconds.
Minimum interval: 5 seconds.
If -c Count is not specified and -i Interval is specified, the operation repeats continuously at the specified interval.
If -c Count is specified and -i Interval is not specified, the operation repeats the specified number of iterations using the default
interval.
Restrictions
All specified devices must be in the same SRDF/A session.
Examples
To confirm R2 data copy for device group prod:
symrdf -g prod checkpoint
To confirm the R2 data copy for devices in device group Test in RA group 7 on the second hop of a cascaded SRDF
configuration:
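The command is not shown here; a sketch, assuming the -hop2 option used elsewhere in this guide applies to checkpoint:
symrdf -g Test -rdfg 7 checkpoint -hop2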
DSE is not designed to solve permanent or persistent problems such as unbalanced or insufficient cache, host writes that
consistently overrun cache, and long link outages.
When the SRDF/A session is activated, DSE is activated (on the R1 and R2 sides) if the autostart for DSE is set to enabled on
both the R1 and the R2 sides. Autostart for DSE can be enabled/disabled, but the change does not take effect until the SRDF/A
session is dropped and re-activated. By default, autostart for DSE is enabled regardless of whether the side is the R1 or R2 side.
DSE starts paging SRDF/A tracks to the DSE pool when the array write pending count crosses the DSE threshold
(-threshold option). The default threshold is 50 percent of the System Write Pending Limit. After a cycle switch, Enginuity
reads tracks from the DSE pool back into the array cache so that they can be transferred to the R2.
Enginuity 5876
Arrays running Enginuity 5876 can share SRDF/A DSE pools among multiple SRDF/A groups. A single SRDF/A group can have
up to 4 DSE pools associated with it (one for each device emulation type).
HYPERMAX OS
Arrays running HYPERMAX OS come preconfigured with one or more Storage Resource Pools (SRPs) containing all the storage
available to the array. SRDF/A DSE allocations are made against one SRP per array designated as the SRP for DSE.
The SRP designated for DSE supports the DSE allocations for all SRDF/A sessions on the array.
The default SRP for DSE is the default SRP for FBA devices.
You can change which SRP is associated with DSE, and you can change the capacity of the SRP associated with DSE.
Dell Solutions Enabler Array Controls and Management CLI User Guide describes the steps to modify which SRP is associated
with DSE.
Restrictions
● CFGSYM access rights and Storage Admin authorization rights are required to run the symconfigure set command.
Syntax
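The syntax is not reproduced here; a sketch, assuming a symconfigure set symmetrix dse_max_cap form (verify against the Dell Solutions Enabler Array Controls and Management CLI User Guide):
symconfigure -sid SID -cmd "set symmetrix dse_max_cap = MaxCap;" commit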
Options
MaxCap
Specifies the maximum capacity in the array's DSE SRP. Valid values are:
● 1 - 100000 - Specifies the maximum number of GB in the specified SRP that can be used by DSE.
● NoLimit - Specifies that DSE can use the entire capacity of the specified SRP.
Examples
To set the maximum DSE capacity on SID 230 to a value of 100 GB:
. . .
Terminating the configuration change session..............Done.
Restrictions
● A DSE pool cannot have the same name as a Snap pool on the same array.
● Each DSE pool can only contain one type of device emulation: FBA, CKD3390, CKD3380, or AS400.
● Each SRDF group can have at most one pool of each emulation.
Syntax
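Based on the set rdfa_dse example later in this section, the form is presumably (a sketch):
symrdf -sid SID -rdfg GrpNum set rdfa_dse -autostart {on | off}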
Options
-autostart (-aut)
Whether SRDF/A DSE is automatically enabled or disabled when an SRDF/A session is activated for an
SRDF group.
Valid values are on or off.
Default is off.
Syntax
Use the pool options (-fba_pool, -ckd3390_pool, -ckd3380_pool, and -as400_pool) with no PoolName argument to remove the
association between the specified SRDF group and DSE pools.
Example
To clear the DSE pool names for all 4 emulation types:
symrdf -sid 432 -rdfg 75 set rdfa_dse -fba_pool -ckd3390_pool -ckd3380_pool -as400_pool
The RDF "Attributes'' operation successfully executed for RDF group 75.
Syntax
To add and enable SAVE devices to a DSE pool:
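A sketch of the symconfigure command file entry, mirroring the remove example later in this section; the member_state clause is an assumption:
add dev 018B to pool finance, type = rdfa_dse, member_state = ENABLE;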
Example
Syntax
Restrictions
The last device cannot be removed from an SRDF/A DSE pool if the pool is associated with an SRDF group.
Example
remove dev 018B from pool finance, type = rdfa_dse;
Example
Steps
1. Use the symcfg list -sid SID -pools -rdfa_dse command to list the configured DSE pools.
2. Create a text file containing the commands to set attributes for an SRDF group.
The first command in the file must be to set the threshold.
The following commands do the following for SRDF group 7:
● Set the threshold,
● Associate with DSE pool r1pool,
● Specify FBA emulation, and
● Enable autostart
Syntax
Example
To display the utilization for DSE pool BC_DSE:
symcfg show -sid 03 -pool BC_DSE -rdfa_dse
The SRDF links must be in asynchronous mode and SRDF/A must be active for activate or deactivate actions to
succeed.
Use the following commands to modify the device group, composite group, or file:
symrdf [-g DgName | -cg CgName | -f FileName]
activate | deactivate -rdfa_dse
● Modify the SRDF/A DSE status using RA group operations when the SRDF link status is Read Write.
Use the following commands to modify the group:
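A sketch of the RA group form, assuming it parallels the group operations shown above:
symrdf -sid SID -rdfg GrpNum activate | deactivate -rdfa_dse [-both_sides]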
The -both_sides option activates/deactivates SRDF/A DSE for groups on both the source and target sides. Otherwise,
the activate/deactivate is only performed on the source side.
● Set the group mode to sync or acp when SRDF/A DSE is active for an SRDF group.
Restrictions
Restrictions on activating SRDF/A DSE with dynamic cache partitioning include:
● All devices in the SRDF/A session must be in the same DCP.
● The rdfa_dse_threshold must be set, and must be lower than the rdfa_cache_percentage setting.
● The SRDF group must have at least one associated DSE pool with SAVE devices enabled.
Use the following syntax to activate SRDF/A DSE when dynamic cache partitioning is enabled:
After activation, R1 and R2 cache usage is reported as a percent of DCP Write Pending Limit.
Restrictions
When the SRDF pair is in the Transmit Idle state, only the following operations are allowed from the R1 side:
● rw_enable -r1
● write_disable -r1
● ready -r1
● not_ready -r1
● suspend -immediate
When the SRDF pair is in the Transmit Idle state, only the following operations are allowed from the R2 side:
● suspend -immediate
● failover -immediate
If the SRDF/A groups are not in the Transmit Idle state at the beginning of a control action, the action fails if one of the groups
enters the Transmit Idle state during processing.
Syntax
set rdfa
[-transmit_idle {on | off}]
[-both_sides]
Group-level pacing
Group-level pacing is dynamically enabled for the entire SRDF/A group when slowdowns in host I/O rates, transmit cycle rates,
or apply cycle rates occur. SRDF/A group-level write pacing monitors and responds to:
● Spikes in the host write I/O rates
● Slowdowns in data transmittal between R1 and R2
● R2 restore rates.
Group-level pacing controls the amount of cache used by SRDF/A. This prevents cache overflow on both the R1 and R2 sides,
and helps the SRDF/A session to continue running.
Group-level pacing requires Enginuity 5876 or greater.
SRDF/A write pacing is not allowed on VASA SRDF groups.
HYPERMAX OS introduced enhanced group-level pacing. Enhanced group-level pacing paces host I/Os to the DSE transfer rate
for an SRDF/A session. When DSE is activated for an SRDF/A session, host-issued write I/Os are throttled so their rate does
not exceed the rate at which DSE can offload the SRDF/A session's cycle data.
Enhanced group-level pacing requires HYPERMAX OS on the R1 side. The R2 side can be running either HYPERMAX OS or
Enginuity 5876.
Enhanced group-level pacing responds only to the spillover rate on the R1 side. It is not affected by spillover on the R2 side.
Device-level pacing
Device-level pacing is for SRDF/A solutions in which the SRDF/A R2 devices participate in TimeFinder copy sessions.
Operations
SRDF/A write pacing bases some of its actions on the following:
● R1 side cache usage
● Transfer rate of data from transmit delta set to receive delta set
● Restore rate on the R2 side
SRDF/A group-level write pacing can respond to the following conditions:
● The write-pending level on an R2 device in an active SRDF/A session reaches the device's write-pending limit.
● The restore (apply) cycle time on the R2 side is longer than the capture cycle time.
The enhanced group-level write pacing feature can effectively pace host write I/Os in the following operational scenarios:
● Slower restore (apply) cycle times on specific R2 devices that are managed by slower-speed physical drives.
● FAST operations that lead to an imbalance in SRDF/A operations between the R1 and R2 sites.
● Sparing operations that lead to R2-side DAs becoming slower in overall restore operations.
● Production I/Os to the R2 side that lead to DAs and/or RAs becoming slower in restore operations.
● Restore delays during the pre-copy phase of TimeFinder/Clone sessions before activation.
The configuration and management of group-level write pacing are unaffected by this enhancement.
Symmetrix ID : 000195701134
S Y M M E T R I X R D F A G R O U P S
-------- ---------- -------- ----- --- --- --------- ------------------------
Write Pacing
RA-Grp
Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLGS
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ----
153 (98) lc153142 .IS- XI. 15 33 50 000:00:00 50000 60 I.- I.- X
.
.
(FLGS) Flags for Group-Level and Device-Level Pacing:
Devs (P)aceable : X = All devices, . = Not all devices, - = N/A
An X in the FLGS P column indicates that all of the devices in the SRDF group can be paced. A period in the FLGS P column
indicates that some of the devices in the SRDF group cannot be paced either because they have been set exempt from
group-level write pacing or because they are not pace-capable.
2. Use the symrdf list command to determine which devices cannot be paced.
a. Use the symrdf list command with the -rdfa_wpace_exempt option to identify devices that are exempt from
group-level write pacing.
b. Use the symrdf list command with the -rdfa_not_pace_capable option to identify devices participating in the
SRDF/A session that are not pace-capable.
3. Use the symdev show command to obtain additional information about the devices identified in the previous step. This
command provides the following information related to write pacing:
● Whether the device is exempt from group-level write pacing
● Whether write pacing is currently activated and supported
● Whether the device is pace-capable
Syntax
Use the symrdf set rdfa_pace command to set the SRDF/A write pacing attributes for an SRDF group.
.............
set rdfa_pace
[-dp_autostart {on | off}]
[-wp_autostart {on | off}]
[-delay 1 - 1000000]
[-threshold 1 - 99]
[-both_sides]
Options
-dp_autostart (-dp_aut)
Whether SRDF/A device-level pacing is automatically enabled or disabled when an SRDF/A session is
activated or deactivated for an SRDF group.
Valid state values are on or off.
Default is off.
-wp_autostart (-wp_aut)
Whether the SRDF/A group-level pacing feature is automatically enabled or disabled when an SRDF/A
session is activated for an SRDF group.
Valid state values are on or off.
Default is off.
-delay (-del)
Sets the maximum host I/O delay, in microseconds, that the SRDF/A write pacing can cause.
Valid values are 1 through 1000000 microseconds.
Default is 50000 microseconds.
-threshold (-thr)
Sets the minimum percentage of the array write-pending cache at which the array begins pacing host
write I/Os for an SRDF group.
Valid values are between 1 and 99.
Default is 60.
If you plan on swapping the personalities of the R1 and R2 devices, configure the same SRDF/A
write pacing values on both sides.
Examples
In the following example, SRDF/A group-level write pacing is enabled for SRDF group 12 with:
● A maximum of a 1000 microsecond delay
● A write pending cache threshold of 55 percent
If the calculated delay is less than the specified delay (1000), the calculated delay is used.
symrdf -sid 134 -rdfg 12 set rdfa_pace -delay 1000 -threshold 55 -wp_autostart on
To display two entries for each attribute being applied (one for the source side and one for the target side), use the
-both_sides option:
symrdf -sid 432 -rdfg 75 set rdfa_pace -delay 500 -threshold 10 -wp_autostart on
-dp_autostart on -both_sides
Syntax
To activate and deactivate SRDF/A write pacing at the device-group level:
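Based on the examples below, the general form is presumably (a sketch):
symrdf [-g DgName | -cg CgName | -f FileName -sid SID -rdfg GrpNum] activate | deactivate -rdfa_pace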
Examples
To activate group-level write pacing for SRDF group 76:
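A sketch of the command; the -rdfa_wpace option name is an assumption (only -rdfa_wpace_exempt is confirmed elsewhere in this guide), and the SID is carried over from earlier examples:
symrdf -sid 134 -rdfg 76 activate -rdfa_wpace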
Restrictions
● The symrdf activate/deactivate -rdfa_pace commands act on all devices in the SRDF group.
● The R1 array is accessible.
● The SRDF/A session under control is active and contains at least one participating device.
● The symrdf deactivate -rdfa_pace command requires the following:
Examples
To activate group-level and device-level write pacing simultaneously for the ConsisGrp Consistency Group:
symrdf -cg ConsisGrp activate -rdfa_pace
To deactivate both group-level and device-level write pacing on the devices in DeviceFile2:
symrdf -file DeviceFile2 -sid 55 -rdfg 2 deactivate -rdfa_pace
Display SRDF/A
This section shows how to display information about:
1. SRDF/A groups using the query operation
2. Devices capable of participating in an SRDF/A session using the list operation
Note that the output of list and query operations varies depending on whether SRDF/A is in multi-cycle mode (HYPERMAX
OS) or legacy mode (Enginuity 5876).
Syntax
Use the show operation to display SRDF/A session status information:
symrdf show DgName
Use the query operation to display SRDF/A group information:
symrdf -g DgName query -rdfa
Description
SRDF/A-capable devices in an SRDF group are considered part of the SRDF/A session. The session status is active or inactive,
as follows:
● Active indicates the SRDF/A mode is active and that SRDF/A session data is being transmitted in operational cycles to the
R2.
● Inactive indicates the SRDF/A devices are either Ready or Not Ready on the link and working in their basic mode
(synchronous, semi-synchronous, or adaptive copy).
NOTE:
If the links are suspended or a split operation is in process, SRDF/A is disabled and the session status shows as Inactive.
Syntax
Use the list operation to list SRDF/A-capable devices (R1, R2 and R21 devices) that are configured in SRDF groups:
symrdf list -rdfa
SRDF/A-capable does not mean the device is actually operating in asynchronous mode, only that it is capable of doing so.
There is no command that lists devices that are actually operating in asynchronous mode.
The device type shows as R1 for SRDF/A-capable devices on the R1 and as R2 for SRDF/A-capable devices on the R2.
The R21 device type represents a cascaded SRDF device configuration.
SRDF/Metro Overview
The following sections contain an overview of SRDF/Metro. For detailed information on SRDF/Metro concepts, see the Dell
SRDF Introduction.
For SRDF/Metro connectivity requirements on HYPERMAX OS 5977 and Enginuity 5876, refer to the SRDF Interfamily
Connectivity Information.
For more information on disaster recovery for SRDF/Metro, see Disaster recovery facilities.
What is SRDF/Metro?
SRDF/Metro is a high availability facility, rather than a disaster recovery facility as provided by other SRDF implementations.
In its simplest form SRDF/Metro has no disaster recovery protection. However, the HYPERMAX OS 5977 Q3 2016 SR release
adds disaster recovery capabilities.
In its basic form, SRDF/Metro consists of pairs of R1 and R2 devices, which are connected by an SRDF link, just like any other
SRDF configuration. However, in SRDF/Metro both sets of devices are write accessible to host systems simultaneously. Indeed
a pair of devices appears as a single, virtual device to the host systems. SRDF/Metro synchronously copies data written to
either device in a pair to its partner. This ensures that both devices have identical content.
Disaster recovery
In its simplest form SRDF/Metro has no disaster recovery protection. However, from HYPERMAX OS 5977 Q3 2016 SR, disaster
recovery capabilities have been introduced. Either of the participating arrays can be connected to an array at a remote location.
Alternatively, for added robustness, each array can be connected to a remote array. The connections between the Metro region
and the DR arrays use SRDF/A or Adaptive Copy Disk (ADP) to replicate data. There is more information on disaster recovery
for SRDF/Metro in Disaster recovery facilities on page 168.
Array witness
When using the Array witness option, SRDF/Metro uses a third "witness" array to determine the winner side. The witness array
runs one of these operating environments:
● PowerMaxOS 5978.144.144 or later
● HYPERMAX OS 5977.945.890 or later
● HYPERMAX OS 5977.810.784 with ePack containing fixes to support SRDF N-x connectivity
● Enginuity 5876 with ePack containing fixes to support SRDF N-x connectivity
The Array witness option requires two SRDF groups; one between the R1 array and the witness array, and a second between the
R2 array and the witness array.
A witness group is an SRDF group with the sole purpose of letting an array act as a witness for any or all SRDF/Metro sessions
connected to the array at the other side of the witness group.
NOTE: The term witness array is relative to a single SRDF/Metro configuration. While the array acts as a witness for that
configuration, it may also contain other SRDF groups, including SRDF/Metro groups.
[Figure shows the R1 and R2 arrays connected by SRDF links, with an SRDF Witness group running from each array to the witness array.]
Figure 14. SRDF/Metro Array witness and groups
When the Array witness option is in operation, the state of the device pairs is ActiveActive.
If the witness array becomes inaccessible from both the R1 and R2 arrays, the state of the device pairs becomes ActiveBias.
[Figure shows the R1 and R2 arrays connected by SRDF links, each with IP connectivity to the vWitness.]
Figure 15. SRDF/Metro vWitness vApp and connections
Unisphere for PowerMax, Unisphere for VMAX and SYMCLI provide facilities to manage a vWitness configuration. The user can
add, modify, remove, enable, disable, and view vWitness definitions on the arrays. Also, the user can add and remove vWitness
instances. To remove an instance, however, it must not be actively protecting SRDF/Metro sessions.
Device bias
Device bias is the only bias method. When making device pairs R/W on the SRDF link, use the -use_bias option to indicate
that the Device bias method should be used for the device pairs. The bias side is the R1 side. However, if there is a failure on the
array that contains the bias side, the host loses device access.
NOTE: The Device bias method provides no way to make the R2 device available to the host.
To change the bias side of a device group, composite group, storage group, or devices from one side to the other, use the set
bias R1 | R2 option.
NOTE: On arrays running PowerMaxOS 5978, the set bias operation is only allowed if the devices in the SRDF/Metro
session are operating with Device bias and are in the ActiveBias RDF pair state.
The ActiveBias pair state indicates that devices operating with Device bias are ready to provide high availability.
Replication modes
As the diagram shows, the links to the disaster-recovery site use either SRDF/Asynchronous (SRDF/A) or Adaptive Copy Disk.
In a double-sided configuration, each of the SRDF/Metro arrays can use either replication mode.
There are several criteria a witness takes into account when selecting the winner side. For example, a witness might take DR
configuration into account.
Operating environment
In a HYPERMAX OS environment, both SRDF/Metro arrays must run HYPERMAX OS 5977.945.890 or later. The disaster-
recovery arrays can run Enginuity 5876 and later or HYPERMAX OS 5977.691.684 and later.
In a PowerMaxOS environment, both SRDF/Metro arrays must run PowerMaxOS 5978.144.144 or later. The disaster recovery
arrays can run PowerMaxOS 5978.144.144 and later, HYPERMAX OS 5977.952.892 and later, or Enginuity 5876.288.195 and
later.
createpair command
-metro enables the creation of device pairs in an SRDF/Metro configuration. The createpair -metro command provides
the following operations:
● -establish [-use_bias]
● -restore [-use_bias]
● -invalidate r1
● -invalidate r2
● -exempt
● -format
Create device pairs shows how to create device pairs in an SRDF/Metro configuration.
Display SRDF/Metro
The output of show and list commands displays devices in SRDF/Metro configurations. In the example listings, text specific
to SRDF/Metro configurations appears in bold.
symdev show
Output of the symdev show command displays the ActiveActive or ActiveBias pair state. Specific results relating to SRDF/
Metro include:
● An RDF Pair State ( R1 <===> R2 ) of ActiveActive or ActiveBias
● SRDF mode of Active for an SRDF device
The following output is for an R1 device when it is in an SRDF/Metro configuration and the pair state is ActiveActive. The R1
designation indicates that this is the winner side:
The following output is for an R2 device when it is in an SRDF/Metro configuration and the pair state is ActiveActive. The R2
designation indicates that this is the loser side:
S Y M M E T R I X R D F G R O U P S
RDFA Flags :
(C)onsistency : X = Enabled, . = Disabled, - = N/A
(S)tatus : A = Active, I = Inactive, - = N/A
(R)DFA Mode : S = Single-session, M = MSC, - = N/A
(M)sc Cleanup : C = MSC Cleanup required, - = N/A
Symmetrix ID : 000197100084
S Y M M E T R I X R D F G R O U P S
Legend:
Group (S)tatus : O = Online, F = Offline
Group (T)ype : S = Static, D = Dynamic, W = Witness
Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub
G = GIGE, E = ESCON, T = T3, - = N/A
Group Flags :
Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled
Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled
Link (D)omino : X = Enabled, . = Disabled
(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF,
S = SQAR Normal, Q = SQAR Recovery
RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A
RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A
RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A
RDF (M)etro : X = Configured, . = Not Configured
RDF Metro Flags :
(C)onfigured Type : W = Witness, B = Bias, - = N/A
(E)ffective Type : W = Witness, B = Bias, - = N/A
Witness (S)tatus : N = Normal, D = Degraded,
F = Failed, - = N/A
Options
Table 21. createpair -metro options
(The Not Empty, Empty, RW on Link, and NR on Link columns describe the state of the SRDF/Metro group.)

Option             Preserves  Not    Empty  RW on  NR on  Polarity can differ
                   Data       Empty         Link   Link   from SRDF/Metro
-invalidate R1/R2  Y          Y      Y             Y
-format                       Y             Y      Y
-establish         Y                 Y
-restore           Y                 Y
-exempt            Y          Y             Y      Y      Y
Restrictions
The following operations are not allowed when using the symrdf createpair command to create concurrent RDF devices:
● Adding an SRDF/Metro mirror when the device is already part of an SRDF/Metro configuration.
● Adding an SRDF/Metro mirror when the device is already an R2 device.
● Adding a non-SRDF/Metro R2 mirror to a device that has an SRDF/Metro RDF mirror.
● Adding an SRDF/Metro mirror when the non-SRDF/Metro mirror is in Synchronous mode.
● Adding a non-SRDF/Metro mirror in Synchronous mode when the device is already part of an SRDF/Metro configuration.
● An SRDF/Metro group cannot contain a mixture of R1 and R2 devices except for devices added as exempt that have not yet
synchronized between the two sides.
Examples
In the following example:
● -metro indicates the devices are created in a SRDF/Metro configuration.
● -sid 174 -type R1 indicates array 174 is the R1 side.
● -sg specifies the name of the storage group.
● -remote_sg specifies the remote storage group name.
● -establish starts the synchronization process from R1 to R2 devices.
NOTE: Since -use_bias is not specified, the -establish operation requires either a witness array or a vWitness,
otherwise the createpair action is blocked.
symrdf createpair -metro -sid 174 -type R1 -rdfg 2 -sg RDF1_SG -remote_sg RDF2_SG -establish
Execute an RDF 'Create Pair' operation for storage group 'RDF1_SG' (y/[n]) ? y
Restrictions
● SRDF device pairs cannot be created in an SRDF Witness group
● Both the R1-side and R2-side arrays must be running HYPERMAX OS 5977.691.684 or later.
● The createpair -establish -metro requires that the specified RDF group be empty.
Restrictions
● Both arrays in the SRDF/Metro configuration must run HYPERMAX OS 5977 Q3 2016 SR or later.
● The -format option cannot be used to add devices into an empty SRDF group.
● The new devices must be unmapped or NR.
Example
Syntax
Use the symrdf createpair command with the -invalidate r1 or -invalidate r2 option to create devices (R1 or
R2) in a new or existing configuration.
The createpair -metro -invalidate R1/R2 operation can be used to add device pairs to an empty SRDF/Metro
configuration, or to an existing one, provided that all device pairs already in the group are Not Ready (NR) on the SRDF link.
When the command completes, you can:
● Use the establish command to start copying data to the invalidated R2/target devices.
● Use the restore command to start copying to the invalidated R1/source devices.
Example
NOTE: The -exempt option can only be used if the SRDF/Metro session contains at least one non-exempt device.
NOTE: movepair operations cannot move devices from an SRDF/A or an SRDF/Metro group.
Options
Table 22. createpair, movepair (into SRDF/Metro) options
(The Not Empty, Empty, RW on Link, and NR on Link columns describe the state of the SRDF/Metro group.)

Option   Preserves  Not    Empty  RW on  NR on  Polarity can differ
         Data       Empty         Link   Link   from SRDF/Metro
-exempt  Y          Y             Y      Y      Y
Example
In the following example, (building on the createpair examples above that left the devices in the group RW on the link), the
createpair command:
● Creates device pairs using device pairs listed in a device file /tmp/device_file placing them in the SRDF/Metro session.
● -exempt option indicates that data on the R1 side of the new RDF device pairs should be preserved and host accessibility
should remain on the R1 side.
● After creating the new device pairs in RDF group 86, Solutions Enabler performs an establish on them, setting them RW
on the RDF link with the SyncInProg RDF pair state. They then transition to the ActiveActive RDF pair state once the devices
are fully synchronized.
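A sketch of such a createpair command, assuming array 085 and SRDF group 86 from the example text; the option order follows earlier examples, and the original command may also have included -establish:
symrdf createpair -sid 085 -f /tmp/device_file -rdfg 86 -type R1 -metro -exempt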
In the following example (building on the createpair examples above), the movepair command:
● Moves existing RDF pairs using device pairs listed in a device file /tmp/device_file from RDF group 10 on array 456 to
the SRDF/Metro session.
● The -exempt option is required because the device pairs already in the session are RW on the RDF link. The -exempt
option would also be required if the R1 side of RDF group 10 was on array 456, since then the device pairs being added to
the SRDF/Metro session would have reversed polarity relative to the device pairs already in the session, whose R1 side is on
array 085.
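A sketch of such a movepair command; the destination SRDF/Metro group number (86) is hypothetical, since it is not stated for array 456:
symrdf movepair -sid 456 -f /tmp/device_file -rdfg 10 -new_rdfg 86 -exempt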
Restrictions
The following restrictions apply when removing devices from SRDF/Metro:
● The RDF device pairs in the SRDF/Metro session must have an SRDF pair state of Suspended, SyncInProg, ActiveActive, or
ActiveBias, otherwise the operation is blocked.
● If devices that are being removed from the session have the SyncInProg SRDF pair state, the -symforce and -keep R1
options are required.
● The -keep R2 option is allowed only if the SRDF pair state is ActiveActive or ActiveBias.
● deletepair operations cannot remove the last device from the group with the -exempt option.
● movepair operations cannot remove the last device with or without the -exempt option.
● movepair operations cannot move into an SRDF/A or SRDF/Metro group.
NOTE: After the deletepair or movepair is issued, you must clear the device inactive indication on the
inaccessible side with the command symdev ready -symforce to make the devices accessible to the host again.
NOTE: A movepair operation leaves the devices in Synchronous mode and Suspended in the new group.
Examples
In the following example, the deletepair command:
● removes the RDF device pairs described in file /tmp/device_file and then deletes the RDF pairings.
● uses the -keep option because the devices are RW on the RDF link. The -keep R1 indicates that the current R1-side
devices should remain host-accessible after the deletepair operation.
After completing the movepair operation, the devices that were previously identified as R2 will remain host-accessible and will
be identified as R1 and the devices that were previously identified as R1 will be host-inaccessible and will be identified as R2.
Steps
1. Remove a device pair with the deletepair or half_deletepair command. For half_deletepair, repeat the
operation on both sides of the device pair.
2. Use the applicable set -no_identity command to restore the native identity of the specified device, or all the devices in
the specified group.
To restore the personality of R2 (now non-SRDF) devices in storage group RDF2_SG:
symsg -sid 248 -sg RDF2_SG set -no_identity
Manage resiliency
This section contains information on managing the available resiliency options:
● Witness SRDF groups
● vWitness definitions
● Setting SRDF/Metro preference
symrdf addgrp -sid 0085 -rdfg 10 -remote_sid 086 -remote_rdfg 110 -dir 1g:28
-remote_dir 1g:28 -nop -label Witness1 -witness
To modify an SRDF/Metro Witness group, include the -witness option in the modifygrp operation.
For example, to add director 1g:29 to SRDF/Metro Witness group 10:
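A sketch of the command; the exact modifygrp option names and ordering are assumptions:
symrdf modifygrp -sid 0085 -rdfg 10 -add -dir 1g:29 -witness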
vWitness definitions
In an SRDF/Metro configuration that uses the vWitness method, you maintain a list of vWitness definitions on each of the
participating arrays. You can use SYMCLI commands to add, enable, modify, remove, suspend, and view vWitness definitions, as
the following sections show.
The Dell SRDF/Metro vWitness Configuration Guide contains more information on how to set up and manage a vWitness
configuration. That includes information on how to manage vWitness instances.
This command also enables the definition automatically, but you can disable it using symcfg disable as described in Disable
the use of a vWitness definition.
NOTE: Create only one definition for each vWitness instance, specifying either the IP address or the fully qualified DNS
name of the instance.
Example
To add and enable a vWitness definition named metrovw1 that refers to a vWitness instance at IP address 198.51.100.24 on the
storage array 1234:
Use the -force option when the definition is in use (protecting a Metro configuration), and there is another Witness (either an
Array or a Virtual Witness) available to take over from this one.
Use the -symforce option when the definition is in use and there is no other Witness available to take over from this one.
Example
To disable (suspend) the availability of the vWitness definition named metrovw1 on storage array 1234 when there is no other
Witness available:
Example
To enable the vWitness definition named metrovw1:
Example
To change the IP address of a vWitness definition with the name metrovw1 on storage array 1234 to 198.51.100.32:
Example
To remove the vWitness definition named metrovw1 from storage array 1234:
The -v option produces detailed information, similar to that produced by the show argument, but for all vWitness definitions.
Output is available in text or XML format. Use -out xml to generate XML.
Use the -offline option to display information from the data cached in the Solutions Enabler database file.
Examples
Display information about all vWitness instances on the storage array 1234:
Display information about vWitness definition named metrovw1 on storage array 1234:
In the event of a link failure (or suspend), the witness decides which side remains host-accessible, giving preference to the
winner side, but not guaranteeing that is the side that remains accessible. Changing the winner side makes it appear that a
symrdf swap has been performed. It might be necessary to do this prior to suspending the group, in order to change the side
that will remain host-accessible.
Steps
1. Use the symrdf query command to display the devices before changing their bias.
2. Use the symrdf set bias command to change the bias of the devices.
For example, to change the bias of devices in storage group RDF1_SG to the R2 side:
symrdf -sid 174 -sg RDF1_SG -rdfg 2 set bias R2
The winner-side devices remain host-accessible. Following a symrdf suspend -keep R2, these are the devices that had
been the R2 side until the suspend was issued.
2. Use the symrdf establish command with the -use_bias option to resume the link. The bias remains set on the R1
side (the R2 side prior to the suspend operation):
Figure. Setting up SRDF/Metro with Witness array; Before (host HBAs VMHBA 4 through VMHBA 7 connect through fabrics A and B to arrays SID 475, devices 000D0 through 000DF, and SID 039, devices 000F0 through 000FF; the Witness array is SID 105)
Steps
1. On the host, use the symcli command to verify the version of Solutions Enabler is 8.1 or later.
2. Use the symrdf addgrp command to create Witness SRDF groups between SIDs 475/105 and 039/105:
symrdf addgrp -witness -label SG_120 -sid 000196700475 -rdfg 120 -dir 1F:10,1F:11
-remote_sid 000197200105 -remote_rdfg 120 -remote_dir 9F:8,9F:9
symrdf addgrp -witness -label SG_121 -sid 000197200039 -rdfg 121 -dir 3F:10,4F:10
-remote_sid 000197200105 -remote_rdfg 121 -remote_dir 10F:8,10F:9
3. Use the symrdf addgrp command to create the SRDF group for the SRDF pairs between SIDs 475 and 039:
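A sketch, following the addgrp pattern in step 2; the label and director numbers are hypothetical:
symrdf addgrp -label SG_20 -sid 000196700475 -rdfg 20 -dir 3F:30 -remote_sid 000197200039 -remote_rdfg 20 -remote_dir 3F:30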
4. Use the createpair command with the -metro option to create SRDF/Metro device pairs. The file rdfg20 defines the
device pairs.
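A sketch, following the createpair -metro pattern shown earlier in this chapter:
symrdf createpair -sid 000196700475 -rdfg 20 -f rdfg20 -type R1 -metro -establish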
6. Use symcfg list commands with the -metro option to display the SRDF groups.
To display group 20 on SID 475:
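A sketch of the command; the -metro option is named in the step text, and the short SID form follows earlier examples:
symcfg list -sid 475 -rdfg 20 -metro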
Symmetrix ID : 000196700475
                    S Y M M E T R I X   R D F   G R O U P S

       Local            Remote                 Group                    RDF Metro
------------  ---------------------  ---------------------------  -----------------
              LL                        Flags          Dir             Witness
RA-Grp   sec  RA-Grp  SymmID         ST Name     LPDS CHTM  Cfg   CE S Identifier
------------  ---------------------  ---------------------------  -- --------------
 20 (13)  10  20 (13) 000197200039   OD SG_20    XX.. ..XX  F-S   WW N 000197200105
Legend:
Group (S)tatus : O = Online, F = Offline
Group (T)ype : S = Static, D = Dynamic, W = Witness
Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub
G = GIGE, E = ESCON, T = T3, - = N/A
Group Flags :
Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled
Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled
Link (D)omino : X = Enabled, . = Disabled
(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF
S = SQAR Normal, Q = SQAR Recovery
RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A
RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A
RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A
RDF (M)etro : X = Configured, . = Not Configured
RDF Metro Flags :
(C)onfigured Type : W = Witness, B = Bias, - = N/A
(E)ffective Type : W = Witness, B = Bias, - = N/A
Witness (S)tatus : N = Normal, D = Degraded,
F = Failed, - = N/A
Symmetrix ID : 000197200039
S Y M M E T R I X R D F G R O U P S
Symmetrix ID : 000197200105
S Y M M E T R I X R D F G R O U P S
NOTE:
● For an R1 device, the symdev list -wwn_non_native command does not show anything. (If a set bias R2 or
suspend -keep R2 was performed, the new R1 retains the identity of the original R1, and the new R2, the original
R1, has no non-native WWN.)
● The symdev show command for the R2 device shows its native WWN (Device WWN field) and its external WWN
(Device External Identity/Device WWN field).
● The second WWN (Device External Identity) should match the native WWN of its R1 partner, and should also be the
value displayed by the symdev list -wwn_non_native command.
9. Map and mask the R2 devices to the host to provide additional paths to the devices.
The following image shows the final SRDF/Metro configuration.
[Figure: SRDF/Metro group RDFG 20 between SID 475 (devices 000D0-000DF) and SID 039 (devices 000F0-000FF), with Witness groups RDFG 120 and RDFG 121 to the Witness array SID 105.]
Figure 17. Setting up SRDF/Metro with Witness array; After
[Figure: SRDF/Metro Smart DR topology. R11 on Array A and R21 on Array B are joined by SRDF/Metro; each has an SRDF/A or Adaptive Copy Disk leg to R22 on Array C, one link active and the other inactive.]
Remove an SRDF/Metro Smart DR environment (environment -remove)
The successful removal of an SRDF/Metro Smart DR environment results in the following:
● the state of the SRDF/Metro session does not change
● the state of the SRDF/Metro Smart DR session does not change (unless a -force option is required)
● if the DR mode is Asynchronous at the time of issuing the symmdr env -remove command, the devices remain enabled.
Using the -force option results in the state of the DR changing. This is required for removing an SRDF/Metro Smart DR environment when:
● the SRDF/Metro state is SyncInProg or ActiveActive, and
● the DR state is Synchronized, SyncInProg, or Consistent, and
● MetroR2_DR is Suspended and the MetroR1_DR is being removed.
Recover an SRDF/Metro Smart DR environment (recover)
Using the recover command, users can attempt to recover an SRDF/Metro Smart DR environment from an Invalid or Unknown state, and transition the environment back to a known state.
Restore for the SRDF/Metro session (restore)
Resumes I/O traffic on the SRDF links and initiates an incremental re-synchronization of data from the MetroR2 to the MetroR1.
Suspend for the SRDF/Metro session (suspend [-keep <R1 | R2>])
Suspends I/O traffic on the SRDF links. By default the MetroR1 remains accessible to the host, while the MetroR2 becomes inaccessible.
Split for the DR session (split)
Use the split operation when both the SRDF/Metro and the DR side require independent access, such as for testing purposes. Split stops data synchronization between the SRDF/Metro and DR sessions, and devices are made available for local host operations.
Failover for the DR session (failover)
Use this when a failure occurs on the SRDF/Metro side. A failover stops data synchronization between the SRDF/Metro and DR sessions, switching data processing from the SRDF/Metro side to the DR side.
Failback for the DR session (failback)
After a failover (planned or unplanned), use failback to resume normal operations after resolving the cause of a failure. Failback switches data processing from the DR side to the SRDF/Metro side.
Update R1 for the DR session (update)
An update operation initiates an update of the R1 with the new data that is on DR, while the DR remains accessible to the host.
NOTE: Update R1 is not allowed if SRDF/Metro is ActiveActive, ActiveBias, or SyncInProg.
Set mode acp_disk or async for the DR session (set mode <acp_disk | async>)
Use the set mode operation to set the DR mode to Adaptive copy disk or Asynchronous mode.
Suspended The SRDF links have been suspended and are not ready or write disabled.
If the R1 is ready while the links are suspended, any I/O accumulates as invalid
tracks owed to the R2.
Partitioned The SRDF group between the two SRDF/Metro arrays is offline.
If the R1 is ready while the group is offline, any I/O accumulates as invalid tracks
owed to the R2.
Unknown If the environment is not valid, the SRDF/Metro session state is marked as
Unknown.
If the SRDF/Metro session is queried from the DR array and the DR Link State is
Offline, the SRDF/Metro session state is reported as Unknown.
Invalid This is the default state when no other SRDF state applies.
The combination of the R1 device, the R2 device, and the SRDF link states does not
match any other pair state, or there is a problem at the disk director level.
DR pair states
Table 25. DR pair states
Pair state Description
Synchronized
NOTE: This state is only applicable when the DR pair is in Acp_disk mode.
The background copy between the SRDF/Metro and DR is complete and they are
synchronized.
The DR side is not host accessible with the devices in a Write Disabled SRDF state.
The MetroR2 device states are dependent on the SRDF/Metro session state.
Consistent
NOTE: This state is only applicable when the DR pair is in Async mode.
This is the normal state of operation for device pairs operating in asynchronous
mode indicating that there is a dependent-write consistent copy of data on the DR
site.
The MetroR2 device states are dependent on the SRDF/Metro session state.
TransIdle
NOTE: This state is only applicable when the DR pair is in Async mode.
The SRDF/A session is active but it cannot send data in the transmit cycle over the
SRDF link because the SRDF link is offline.
● There may be a dependent-write consistent copy of data on the DR devices.
● The background copy may not be complete.
● The MetroR2 device states are dependent on the SRDF/Metro session state.
Split MetroR1 and the DR side are currently ready to their hosts, but synchronization is
currently suspended between the SRDF/Metro and the DR devices because the SRDF link
is Not Ready.
The MetroR2 device states are dependent on the SRDF/Metro session state.
Failed Over Synchronization is currently suspended between the SRDF/Metro and the DR
devices and the SRDF link is Not Ready.
Host writes accumulate and can be seen as invalids.
If a failover command is issued when the DR Link state is not Offline:
● the SRDF/Metro session is suspended
● MetroR1 and MetroR2 are not host accessible
If a failover command is issued when the DR state is Partitioned or TransIdle, and
the DR Link state is Offline:
● the SRDF/Metro state does not change.
● the MetroR1 and MetroR2 device states regarding their accessibility to the
host do not change.
R1 Updated The MetroR1 was updated from the DR side and both MetroR1 and MetroR2 are not
host accessible.
The SRDF/Metro session is suspended.
There are no local invalid tracks on the R1 side, and the links are ready or write
disabled.
R1 UpdInProg The MetroR1 is being updated from the DR side and both MetroR1 and MetroR2 are
not host accessible.
The SRDF/Metro session is suspended.
There are invalid local tracks on the source side, so data is being copied from the DR
to the R1 device, and the links are ready.
Acp_disk Adaptive copy mode can transfer large amounts of data without having an impact on performance.
Adaptive copy mode allows the SRDF/Metro and DR devices to be more than one I/O out of
synchronization.
NOTE: Adaptive copy mode does not guarantee a dependent-write consistent copy of data on DR
devices.
Adaptive copy mode applies when:
● querying from the DR array and:
○ the DR state is not TransIdle, and
○ the DR Link State is offline.
● querying from the MetroR2 array and:
○ the DR state is not TransIdle, and
○ the DR Link State is offline, and
○ the SRDF/Metro Link State is offline.
DSE
For a Smart DR configuration, it is recommended that DSE is set to autostart on both the MetroR1 to DR and MetroR2 to DR
RDF groups. Autostart is enabled by default when an SRDF group is created.
To set DSE on both sides, use:
If users do not want DSE, it can be disabled using the symrdf deactivate command.
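As a hedged sketch, DSE can also be activated manually on each leg with the activate action (the device file, SIDs, and group numbers below are illustrative):
symrdf -f devfile -sid 048 -rdfg 55 activate -rdfa_dse
symrdf -f devfile -sid 041 -rdfg 102 activate -rdfa_dse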
Checkpoint
To identify when the data in the current cycle on the MetroR1 is committed on the DR site, use the symrdf checkpoint
command.
Typically the MetroR1 to DR SRDF/A session is responsible for transferring the SRDF/A cycles; therefore, the symrdf -sid
<SymmID> -rdfg <GrpNum> checkpoint command should be run on the MetroR1 to DR devices.
Although it is possible to run the symrdf checkpoint command on the MetroR2 to DR devices, since this side is not
transferring the SRDF/A cycle, it is recommended to not rely on information gathered this way.
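For example, a representative invocation on the MetroR1 to DR leg (the device file and group number are hypothetical):
symrdf -f devfile -sid 048 -rdfg 55 checkpoint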
symmdr command
The new symmdr command allows you to query and show an entire SRDF/Metro Smart DR environment, and to list the Smart DR
environments on an array.
symmdr control operations can be targeted at:
● the entire SRDF/Metro Smart DR environment.
● the SRDF/Metro session of the Smart DR environment by providing the -metro option.
● the DR session of the environment by providing the -dr option.
For symmdr syntax details, see the Dell EMC Solutions Enabler CLI Reference Guide.
Environment restrictions
Environment controls that are used to set up and remove SRDF/Metro Smart DR environments target the environment as a
whole. These control operations are:
● symmdr environment -setup
● symmdr environment -remove
● symmdr recover
The following environment control restrictions apply:
● The MetroR1 array, the MetroR2 array and DR array must have been discovered through the symcfg discover
command.
● The Metro SRDF groups, MetroR1 to DR SRDF groups and MetroR2 to DR SRDF groups must be online.
environment -remove
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900048
MetroR2: 000197802041
DR : 000197801702
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900048
MetroR2: 000197802041
DR : 000197801702
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
2. Use the symmdr -sid 48 -dr_rdfg 55 -name Alaska -nop env -remove command to remove the Alaska
Smart DR environment with the DR leg remaining on MetroR1.
3. After removing the Alaska Smart DR environment, use the symrdf -sid 048 query -rdfg 99 command to see
details of the SRDF/Metro pair. The SRDF/Metro pair state remained Suspended after the removal.
4. Use the symrdf -sid 041 query -rdfg 102 command to see details of the remaining DR pair.
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900048
MetroR2: 000197802041
DR : 000197801702
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
2. Use the symmdr -sid 48 -dr_rdfg 55 -name Alaska -nop env -remove command to remove the Alaska
Smart DR environment.
Because the SRDF/Metro state is ActiveActive and the DR leg being removed is the MetroR1 to DR leg, the witness
designates the side with the remaining active DR leg as the MetroR1.
3. After removing the Alaska Smart DR environment, use the symrdf -sid 048 query -rdfg 99 command from the
MetroR1 array to see details of the SRDF/Metro pair.
4. Use the symrdf -sid 041 query -rdfg 89 command from the MetroR2 array to see details of the SRDF/Metro pair.
where:
● -name: specifies the name that uniquely identifies the Smart DR environment on all three arrays.
● -i: specifies the interval, in seconds, to wait, either between successive iterations of a list, show or query operation or
between attempts to acquire an exclusive lock on the host database or on the local and/or remote arrays for control
operations.
● -c: specifies the number (count) of times to repeat the operation, displaying results appropriate to the operation at each
iteration.
● -tb: used with list or query to display capacity and invalids in Terabytes.
symmdr list
The symmdr list command reports all SRDF/Metro Smart DR environments defined on an array and identifies the
environment name, environment flags, and information about the SRDF/Metro and DR sessions.
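For example, the following command produces a display like the one below (the SID is illustrative):
symmdr -sid 044 list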
                                     Metro                     DR
 Environment                    ---------------------  ---------------------
                  Flg Capacity               Flg Done               Flg Done
Environment Name  SE  (GB)      State        S   (%)   State        SM  (%)
----------------- --- --------- ------------ --- ----  ------------ --- ----
Alaska            ..  104.7     ActiveActive H   -      Consistent  HA  -
bermuda           ..  118.4     Suspended    I   -      SyncInProg  AA  45
cayman            ..  16.1      ActiveActive H   -      Partitioned IA  -
Georgia           ..  39.5      Suspended    I   -      Consistent  AA  40
Hawaii            ..  105.3     SyncInProg   A   85     Split       IA  -
idaho             X-  -         Unknown      -   -      Unknown     --  -
Legend:
Environment Flags:
(S)ervice State : . = Normal, X = Environment Invalid, D = Degraded
(E)xempt : . = No Exempt Devices, X = Exempt Devices
Metro Flags:
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
DR Flags:
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
(M)ode : A = Async, D = Adaptive Copy
symmdr show
The symmdr show command shows details of an SRDF/Metro Smart DR environment configuration. This information includes:
● MetroR1, MetroR2 and DR arrays
● SRDF groups between the MetroR1 and MetroR2 arrays
● SRDF groups between the MetroR1 and DR arrays
● SRDF groups between the MetroR2 and DR arrays
● whether the SRDF groups above exist
● whether the SRDF device pairs between the SRDF groups exist
● whether devices from each site are mapped to a host
● any exempt devices on each site
● optionally, the devices on each array
Example
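For example, the following command produces this display (the SID is illustrative):
symmdr -sid 044 -name Alaska show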
Name: Alaska
Legend:
(M)apped device(s) : . = Mapped, M = Mixed, X = Not Mapped
(E)xempt device(s) : . = Not Exempt, X = Exempt
symmdr query
The symmdr query command reports on an SRDF/Metro Smart DR environment defined on an array and identifies the
environment name, environment flags, and information about the SRDF/Metro and DR sessions.
Example
The following is an example of an SRDF/Metro Smart DR environment on array 044 with a Normal Service state:
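For example:
symmdr -sid 044 -name Alaska query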
Name : Alaska
Service State : Normal
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Name : Alaska
Service State : Degraded - Run Recover
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
The following example shows monitoring an SRDF/Metro Smart DR environment on array 044 while the SRDF/Metro and DR
sessions become fully synchronized. This example specifies a count of 30, waiting 600 seconds between each display:
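For example, using the -c and -i options described above:
symmdr -sid 044 -name Alaska query -c 30 -i 600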
Name : Alaska
Service State : Normal
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
Name : Alaska
Service State : Normal
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Name : Alaska
Service State : Normal
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
Name : Alaska
Service State : Normal
Capacity : 104.7 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197900055
DR : 000197900033
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Device
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Host Host
Write disabled
SRDF Links
R1 R2
SRDF/Metro is SyncInProg
DR
Host Host
Write disabled Write disabled
R2 re-synchronizes data to R1
SRDF Links
R1 R2
SRDF/Metro is SyncInProg
DR
A restore temporarily makes the MetroR1 inaccessible to the host while the symmdr restore command is running. Once the
restore command completes successfully:
● the MetroR1 is accessible to the host
Once the suspend command completes successfully, the SRDF/Metro state becomes Suspended.
If -keep r1 is specified or no -keep is specified:
● When the MetroR1 is mapped to the host, the MetroR1 remains accessible to the host
● If the DR Service state was Active HA or Active, the DR Service State is Active
● If the DR Service state was Degraded, and if the DR Link state was MetroR2 DR Offline, the DR Service State is Active
If -keep r2 is specified:
● When the MetroR2 is mapped to the host, the MetroR2 remains accessible to the host and becomes the MetroR1, while
what was the MetroR1 before becomes inaccessible to the host and then becomes the MetroR2.
● If the DR Service State was Active HA, the DR Service State is Active
● If the DR Service State was Active, a -force is required, and the DR Service State is Inactive
● If the DR Service state was Degraded and the action results in DR being dropped, a -force is required, and the DR Service
state is Inactive
Example
The following example shows the result of the symmdr -sid 702 -name metrodr1 suspend -metro -keep R1
command on array 702 when the SRDF/Metro state is ActiveActive, the DR state is Consistent, and the DR mode is Asynchronous:
[Figure: R1 and R2 active on the SRDF links; DR is SyncInProg; the DR-site host is write disabled.]
Figure 22. Establish for the DR session
[Figure: both Metro hosts write disabled; DR re-synchronizes data to SRDF/Metro; DR is Consistent if Async, SyncInProg if Acp_disk; the DR-site host is write disabled.]
Figure 23. Restoring the DR session
A restore temporarily makes the MetroR1 inaccessible to the host while the symmdr restore command is running. Once the
restore command completes successfully:
● the MetroR1 is accessible to the host,
● the DR is Write Disabled (WD) to the host,
● the MetroR2 remains inaccessible to the host,
● the Metro Service state remains Inactive.
When DR mode is Async:
Once the restore command completes:
● the DR Service state is Active
● While the re-synchronization is ongoing, the DR state is Consistent and the copy direction is SRDF/Metro <- DR
● Once the MetroR1 contains the same data as the DR, the DR state is Consistent and the copy direction is SRDF/Metro -> DR
When DR mode is Acp_disk:
Once the restore command completes:
[Figure: R1 and R2 on the SRDF links; DR is Suspended; the DR-site host is write disabled.]
Figure 24. Suspend for the DR session
[Figure: R1 and R2 on the SRDF links; DR is Split; the DR-site host has access.]
Figure 25. Split for the DR session
[Figure: Site A and Site B hosts write disabled; SRDF/Metro cannot be ActiveActive, ActiveBias, or SyncInProg; DR is R1 UpdInProg and synchronizes data to R1; the DR-site host has access.]
Figure 26. Update R1 for the DR session
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197802011
DR : 000197801722
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR
Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
2. A link issue occurs between the MetroR2 array and the DR array, and states change to Degraded.
● Smart DR Service State: Degraded - Manual Recovery
● SRDF/Metro Service State: Active HA
● SRDF/Metro pair state: ActiveActive
● DR Service State: Degraded
A manual recovery is required to bring the MetroR2 to DR RDF group back online, resulting in a DR Link State of Online.
Name : Alaska
Service State : Degraded - Manual Recovery
Capacity : 1.8 GB
MetroR1: 000197900044
MetroR2: 000197802011
DR : 000197801722
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 =
MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
3. After a successful manual recovery, the MetroR2 to DR link is back online, but the Smart DR state is still Degraded:
● Smart DR Service State: Degraded - Run Recover
● SRDF/Metro Service State: Active HA
● SRDF/Metro pair state: ActiveActive
● DR Service State: Degraded
The output of the symmdr -sid 044 -name Alaska query shows the MetroR2 to DR link is back online:
Name : Alaska
Service State : Degraded - Run Recover
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197802011
DR : 000197801722
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 = MetroR2_DR
Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
4. After a successful manual recovery, the Smart DR Service State changes to Degraded - Run Recover. Use the symmdr
-sid 044 -name Alaska recover command to bring the Alaska environment back to a Normal state and finish the
recovery process:
5. After the successful symmdr recover command, the Alaska Smart DR environment is back online with a Normal service
state:
● Smart DR Service State: Normal
● SRDF/Metro Service State: Active HA
● SRDF/Metro pair state: ActiveActive
● DR Service State: Active HA
● DR pair state: Consistent
The output of the symmdr -sid 044 -name Alaska query shows the environment operating as expected:
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197802011
DR : 000197801722
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 =
MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Example 2:
The following example presents a scenario where a site failure causes the witness to identify both sides of the SRDF/Metro
pairs as R2 devices, making them both inaccessible to the host.
The Alaska environment has the following states:
● Smart DR Service State: Degraded - Run Recover
● SRDF/Metro Service State: Degraded
● SRDF/Metro pair state: Suspended
● DR Service State: Inactive
● DR pair state: Invalid
1. Run the symmdr -sid 044 -name Alaska recover command:
2. The output of the symmdr -sid 044 -name Alaska query command shows the Alaska environment successfully
recovered:
Name : Alaska
Service State : Normal
Capacity : 1.8 GB
Exempt Devices: No
MetroR1: 000197900044
MetroR2: 000197802011
DR : 000197801722
Legend:
Metro Flags:
(H)ost Connectivity: . = Normal, X = Degraded
(A)rray Health : . = Normal, X = Degraded
MetroR1 <-> MetroR2 Flags:
(L)ink State : . = Online, X = Offline
(W)itness State : . = Available, D = Degraded, X = Failed
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Metro <-> DR Flags:
(L)ink State : . = Online, X = Offline, 1 = MetroR1_DR Offline, 2 =
MetroR2_DR Offline
(M)ode : A = Async, D = Adaptive Copy
(E)xempt Devices : . = No Exempt Devices, X = Exempt Devices
(S)ervice State : H = Active HA, A = Active, I = Inactive, D = Degraded
Syntax
Use the following SYMAPI options file setting to enable storrdfd:
SYMAPI_USE_RDFD=ENABLE
When using GNS, enabling the gns_remote_mirror option in the daemon_options file will not mirror the CG if it includes any
devices listed in "Mirroring exceptions" in the Dell Solutions Enabler Array Controls and Management CLI User Guide.
Syntax
Enable GNS on each host using the following SYMAPI options file setting:
SYMAPI_USE_GNS=ENABLE
[Figure: Host-1 and Host-2 each running SYMAPI with the base daemon, RDF daemon, and GNS daemon (SYM-001827).]
In the image above, Host-1 and Host-2 run all three daemons (base daemon, SRDF daemon, and GNS daemon) to ensure data
consistency protection.
NOTE:
Dell EMC strongly recommends running redundant SRDF daemons on at least two control hosts at each site. This ensures at
least one SRDF daemon is available to perform time-critical, consistency monitoring operations.
Dell EMC recommends that you do not run the SRDF daemon on the same control host running the database applications.
Use this control host to issue other control commands (such as SRDF, TimeFinder, and Clone operations).
If the control host is powerful enough to efficiently handle all CPU operations, and is configured with sufficient gatekeeper
devices for all your management applications, you can run ECC and Unisphere for VMAX with the Solutions Enabler
daemons.
All devices containing application and array data must be included in the consistency group for each DBMS or across the
DBMS controlling the multi-database transactions.
Steps
1. Use the symcfg list command to list all SRDF (RA) groups on the source arrays connected to the local hosts to determine
which devices to include in the CG:
symcfg list -rdfg all
2. Use the symcg create command to create a consistency group (ConsisGrp) on one of the local hosts.
Specify the SRDF type of the group and the -rdf_consistency option:
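For example (assuming -rdf_consistency is accepted on create, per the option's use elsewhere in this chapter):
symcg create ConsisGrp -type RDF1 -rdf_consistency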
In the following command, the -rdf_consistency option adds the imported ConsisGrp definition to the SRDF
consistency database on Host-1:
8. From one of the local hosts, use the symcg enable command to enable the composite group for consistency protection:
symcg -cg ConsisGrp enable
The ConsisGrp CG becomes an SRDF consistency group managed by the SRDF daemon.
The SRDF daemon watches for any problems with R1->R2 data within the ConsisGrp CG.
Example
In the following example, the symdg command:
● Translates the devices to SRDF
● Adds all devices from device group Symm64DevGrp to composite group ConsisGrp
● Adds the composite group to the SRDF consistency database on the host
● Enables the group for SRDF consistency protection:
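A minimal sketch of these steps, assuming the symdg dg2cg translation form and the enable pattern shown earlier in this chapter:
symdg dg2cg Symm64DevGrp ConsisGrp -cgtype rdf1 -rdf_consistency
symcg -cg ConsisGrp enable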
For SYMCLI to access a specified database, you must set the SYMCLI_RDB_CONNECT environment variable to the
username and password of the array administrator's account.
NOTE:
The Bourne and Korn shells use the export command to set environment variables. The C shell uses the setenv
command.
Connecting by network
When connecting by the network, add a database-specific variable to the RDB_CONNECT definition.
When connecting through the network in an Oracle environment, Oracle has a network listener process running.
An Oracle connection string such as the Transparent Network Substrate (TNS) is required.
Examples
In the following example, a local connect is used. The export command sets the variable to a username of "array" and a
password of "manager".
export SYMCLI_RDB_CONNECT=array/manager
In the following example, the export command adds the TNS alias name "api217":
export SYMCLI_RDB_CONNECT=array/manager@api217
On a Windows host, use the set command instead. For example, with the alias HR:
set SYMCLI_RDB_CONNECT=array/manager@HR
Optionally, set the SYMCLI_RDB_TYPE environment variable to a specific type of database (oracle, informix, sqlserver, or
ibmudb) so that you do not have to include the -type option on the symrdb rdb2cg command line.
To set the environment variable to oracle:
export SYMCLI_RDB_TYPE=oracle
Examples
In the following example, the symrdb rdb2cg command:
● Translates the devices of an Oracle-type database named oradb to an RDF1 type composite group named ConsisGrpDb.
● The -rdf_consistency option adds the composite group to the SRDF consistency database on the host:
symrdb -type oracle -db oradb rdb2cg ConsisGrpDb -cgtype rdf1 -rdf_consistency
In the following example, the symrdb tbs2cg command translates the devices of an Oracle-type tablespace named orats to an RDF1
type composite group named ConsisGrpTs:
symrdb -type oracle -tbs orats tbs2cg ConsisGrpTs -cgtype rdf1 -rdf_consistency
Example
In the following example, the symvg command:
● Translates the devices of a logical volume group named LVM4vg to an RDF1 type composite group named ConsisGrp.
● The -rdf_consistency option adds the composite group to the SRDF consistency database on the host:
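A sketch, assuming symvg vg2cg follows the same pattern as rdb2cg and tbs2cg above (verify the argument order for your release):
symvg vg2cg LVM4vg ConsisGrp -cgtype rdf1 -rdf_consistency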
Restrictions
● You can have either consistency protection or the domino effect mode enabled for a device, but not both.
● When a composite group is enabled for consistency protection:
○ Its name cannot be changed without first disabling the consistency protection. After the name change, re-enable the
composite group using the new name.
○ If the composite group is enabled for SRDF/A consistency protection, the SRDF daemon immediately begins cycle
switches on the SRDF groups within the composite group (or named subset).
The cycle switches for all SRDF groups will be performed at the same time. The interval between these cycle switches
is determined by the smallest minimum cycle time defined on the R1 SRDF groups in the composite group (or named
subset).
The smallest minimum cycle time supported by the SRDF daemon is 3 seconds. This value is used if the smallest minimum
cycle time across all component groups is less than 3 seconds.
● If you change the minimum cycle time for any of the R1 SRDF groups while the composite group (or named subset) is
enabled for SRDF/A consistency protection, the new minimum cycle time will not take effect until you disable consistency
protection and then re-enable it.
● You can change contents of a composite group by doing one of the following:
○ Disable consistency protection on a composite group while you add or remove devices, and then re-enable consistency
protection after editing the composite group.
Devices in the composite group are unprotected during the time required to edit and then re-enable the composite group.
○ For RDF1 composite groups, you can dynamically modify the composite group while maintaining consistency protection
during the editing process.
Modify consistency groups provides more information.
Examples
To enable consistency protection for all device pairs in the prod CG:
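For example, using the enable form shown earlier:
symcg -cg prod enable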
Restrictions
When a subset of a CG is enabled for consistency protection at the SRDF group name level:
● You must disable consistency protection on the subset before you can:
○ Change the name of the subset.
○ Add or remove SRDF groups to the subset.
NOTE:
For an RDF1 composite group, you can dynamically modify the contents of a subset while consistency protection is
enabled. Modify consistency groups provides more information.
● You cannot enable a composite group at the CG level and a member SRDF group at the same time.
○ If a composite group is enabled at the CG level, no part of it can be simultaneously enabled at the SRDF group name
level.
○ If a subset of the group is enabled at the SRDF group name level, the group cannot be enabled at the CG level.
Examples
In the following example, composite group SALES consists of a set of concurrent SRDF devices distributed across two arrays,
076 and 077.
● On array 076:
○ SRDF group 100 operates in asynchronous mode, and
○ SRDF group 120 operates in synchronous mode.
● On array 077:
○ SRDF group 101 operates in asynchronous mode, and
○ SRDF group 121 operates in synchronous mode.
To create two named subsets of the composite group:
One containing the asynchronous SRDF groups:
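A sketch, assuming subsets are named by associating a name with specific SRDF groups, where async_grps is an illustrative name (verify the exact set -name syntax for your release):
symcg -cg SALES set -name async_grps -rdfg 076:100,077:101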
NOTE:
As a result, the specified group will no longer be associated with the name.
Examples
To enable consistency protection for SRDF/S pairs in the prod CG:
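A sketch, assuming enable accepts the -rdfg name: selector described later in this guide (sync_grps is an illustrative subset name):
symcg -cg prod enable -rdfg name:sync_grps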
Examples
To enable consistency protection for SRDF/A pairs in the prod2 CG:
Syntax
Use the symcg enable and symcg disable commands to enable/disable consistency protection at the composite group
level. All device pairs in the specified group are enabled/disabled.
If the concurrent mirrors are in asynchronous mode, the enable command enables consistency with MSC consistency
protection.
If the concurrent mirrors are in synchronous mode, the enable command enables consistency with RDF-ECA consistency
protection.
Examples
In the following example, composite group prod contains a concurrent R1 with two asynchronous target mirrors.
To enable consistency protection with MSC consistency protection for the two target mirrors:
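For example, enabling at the composite group level as described above:
symcg -cg prod enable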
Restrictions
● If the two mirrors of the concurrent R1 devices in the composite group are operating in different modes (one mirror in
synchronous mode and the other mirror in asynchronous mode), SRDF consistency protection cannot be enabled at the
composite group level.
You must individually enable each group representing the device mirrors by its group name.
● The following table lists the combinations of consistency protection modes allowed for the mirrors of a concurrent
relationship.
Steps
1. Use the symcg command to define the group name to associate with the SRDF group number.
In the following example, the name cGrpA is associated with SRDF group 55 on array 123:
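A sketch, assuming the same set -name association used for subsets (MyCG is a hypothetical composite group name):
symcg -cg MyCG set -name cGrpA -rdfg 123:55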
2. Use the symcg command to enable consistency protection for the SRDF group.
In the following example, consistency protection is enabled for the SRDF group associated with the name cGrpA:
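A sketch, assuming enable accepts the -rdfg name: selector (MyCG is the same hypothetical composite group):
symcg -cg MyCG enable -rdfg name:cGrpA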
● If the mirrors in SRDF group 55 are operating in asynchronous mode, the SRDF group is enabled with MSC consistency
protection.
● If the mirrors in SRDF group 55 are operating in synchronous mode, the SRDF group is enabled with RDF-ECA
protection.
3. Repeat the steps above to enable consistency protection for the second concurrent SRDF group.
Use a unique name for the second group.
Syntax
Use the symrdf verify -enabled command to validate whether device pairs are enabled for consistency protection.
Use the symrdf verify -enabled -synchronized -consistent command to verify whether the device pairs are
enabled for consistency protection and are in the synchronized OR consistent pair state.
Examples
To verify whether the device pairs in the STAGING group are enabled for consistency protection:
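For example (assuming STAGING is a device group; use -cg for a composite group):
symrdf -g STAGING verify -enabled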
If all devices in the STAGING group were enabled for consistency protection, the following message displays:
"All devices in the group 'STAGING' are 'Enabled'."
To verify whether the device pairs in the STAGING group are enabled for consistency protection and are in the synchronized or
consistent pair state:
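For example:
symrdf -g STAGING verify -enabled -synchronized -consistent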
If all devices are enabled and in the synchronized OR consistent pair state, the following message displays:
"All devices in the group 'STAGING' are 'Enabled' and in 'Synchronized, Consistent'
states."
'Synchronized, Consistent' states."Blocking symcg enable on R2 side
After deletion, SRDF consistency protection on the R2 data cannot be guaranteed even though the devices formerly in the
CG may remain enabled.
Best practice is to disable consistency protection before deleting a group. Enable and disable SRDF consistency protection
provides more information.
Syntax
symcg delete GroupName
Options
-force
Required if the group is disabled and there are members in the group.
-symforce
Required if the group is enabled. The composite group remains enabled but is removed from the SYMAPI
database.
Syntax
Use the suspend, split or failover commands to suspend consistency protection for all devices in an SRDF consistency
group where all devices are either synchronous or asynchronous.
For asynchronous replication, use the symrdf -cg verify command with the -cg_consistent option to ensure that the
SRDF consistency group is SRDF-consistency enabled and in a consistent state.
A consistent state means that at least two cycle switches have occurred and all devices in each SRDF (RA) group have reached
a consistent state.
The state of the R2 devices at the end of the deactivation varies depending on whether the suspend or split command is
used:
NOTE:
If you execute the failover command on both mirrors of a concurrent R1 device, the concurrent R1 is converted into a
concurrent R2 with a restore on both mirrors of the concurrent R2.
Options
The state of the R2 devices at the end of the deactivation varies depending on whether the suspend or split command is
used:
symrdf -cg suspend
The R2 devices are in the write disabled state and cannot be accessed by the target-side hosts. R2
database copy is consistent with the production copy on the R1 side.
symrdf -cg split
The R2 devices are enabled for both reads and writes by the target-side hosts.
Examples
To deactivate consistency in a consistency group named ConsisGrp:
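For example, using the suspend action:
symrdf -cg ConsisGrp suspend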
To resume the SRDF links between the SRDF pairs in the SRDF consistency group and I/O traffic between the R1 devices and
their paired R2 devices:
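For example, using the resume action:
symrdf -cg ConsisGrp resume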
Examples
To verify that the SRDF consistency group ConsisGroup is SRDF-consistency enabled and in a consistent state:
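For example:
symrdf -cg ConsisGroup verify -cg_consistent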
(For synchronous operations) To verify if the device pairs in ConsisGroup are in Synchronized state:
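For example:
symrdf -cg ConsisGroup verify -synchronized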
Syntax
Use the msc_cleanup command to clean up after a session is dropped for devices operating in SRDF/A mode with MSC consistency
protection enabled. The command can be executed by composite group from the R1 or R2 site, or by SRDF group from the R2 site.
Use the symcfg list command to check whether an MSC cleanup operation is required.
Use the symcfg list command with the -rdfg all option to display whether an MSC cleanup operation is required for
the SRDF (RA) groups on the specified array.
Examples
To clean up a composite group (mycg):
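For example:
symrdf -cg mycg msc_cleanup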
To clean up from the remote host at the R2 site for array 123 and direct the command to SRDF group 4:
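For example:
symrdf -sid 123 -rdfg 4 msc_cleanup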
Dell EMC strongly recommends running GNS on your hosts to ensure consistency protection while dynamically
modifying CGs.
● The SRDF groups affected by the symcg modify command cannot contain any devices enabled for consistency protection
by another CG.
● Devices within SRDF groups of the CG to be modified must be in one of the following SRDF pair states:
○ Synchronized
○ SyncInProg with invalid tracks owed to the R2
○ Consistent with no invalid tracks
○ Within an affected SRDF group, device pairs can be a mixture of Synchronized and SyncInProg or a mixture of
Consistent and SyncInProg.
NOTE:
Adaptive copy write pending mode (acp_wp) is not supported when the R1 side of the RDF pair is on an array
running HYPERMAX OS, and diskless R21 devices are not supported on arrays running HYPERMAX OS.
[Figure: RDFG 100 (devices 40 and 41) in the R1 Consistency Group, with staging-area group RDFG 101 (devices 50 and 51).]
Figure 28. Staging area for adding devices to the R1CG consistency group
RDFG 101 is established between the same array as the RDFG 100 in the R1CG consistency group.
The following image shows the R1CG consistency group after the dynamic add operation:
[Figure: SID 306 (Workload Site) and SID 311 (Target Site); devices 40, 41, 50, and 51 are now in the R1CG Consistency Group over RDFG 100, and the staging area on RDFG 101 is empty.]
Figure 29. R1CG consistency group after a dynamic modify add operation
The dynamic modify remove operation must never leave an SRDF group empty.
The following image shows empty group RDFG 34 configured to receive devices removed from RDFG 32:
[Figure: MyR1 Consistency Group over RDFG 32 with device pairs 40, 41, AF->C5, and B1->C6, and an empty staging-area group RDFG 34.]
Figure 30. Preparing the staging area for removing devices from the MyR1 CG
The staging area consists of RDFG 34, an R1->R2 configuration established between the same array as RDFG 32 in the MyR1
consistency group.
The following image shows the MyR1 consistency group and its staging area after the dynamic modify remove operation has
completed.
[Figure: SID 306 (Workload Site) and SID 311 (Target Site); the MyR1 Consistency Group over RDFG 32 retains pairs 40 and 41, and staging-area group RDFG 34 now holds pairs AF->C5 and B1->C6.]
Examples
To move devices 50 and 51 from SRDF group 101 in the staging area to SRDF group 100 in R1CG on array 306:
symcg -cg R1CG modify -add -sid 306 -stg_rdfg 101 -devs 50:51 -cg_rdfg 100
The following table lists the allowable consistency modes for the SRDF groups of a concurrent CG.
Examples
In this example, device 20 is added to two independently-enabled SRDF groups of a CG.
The following image shows the staging area shared by array 306, 311, and 402 in a concurrent SRDF configuration:
[Figure: staging area shared by arrays 306 (Workload Site), 311, and 402 (2nd Target Site); devices 20 and 21 are staged alongside CG devices 40, 41, 50, and 51 over SRDF groups RDFG 80, RDFG 45, and RDFG 85, with the leg to SID 402 asynchronous.]
symcg -cg ConCG modify -add -sid 306 -stg_rdfg 80,81 -devs 20 -cg_rdfg 70,71
Simple R1 (R1->R2)  Allowed      Not allowed  Not allowed  Allowed      Not allowed  Not allowed
Concurrent R11      Not allowed  Not allowed  Not allowed  Not allowed  Not allowed  Not allowed
Cascaded R1         Allowed      Allowed      Not allowed  Allowed      Allowed      Not allowed
The following table lists the allowable consistency modes for the hops of a cascaded CG.
Examples
The following image shows a cascaded SRDF configuration sharing the staging area among array 306, 311, and 402:
[Figure: cascaded configuration sharing the staging area among arrays 306 (Site A), 311, and 402; devices 40 and 41 are in CG groups RDFG 38 and RDFG 39, and devices 20 and 21 are staged in RDFG 28 and RDFG 29.]
symcg -cg CasCG modify -add -sid 306 -stg_rdfg 28 -devs 20:21 -stg_r21_rdfg 29
-cg_rdfg 38 -cg_r21_rdfg 39
Table 33. Allowable device types for removing devices from an RDF1 CG
Device type in CG Enabled at CG level Enabled at SRDF group name level
Simple R1 (R1->R2) Allowed Allowed
Concurrent R11 Not applicable Not applicable
Cascaded R1 Not applicable Not applicable
Example
To remove devices 50 and 51 from RDFG 100 of R1CG on array 306 to RDFG 101 in the staging area:
symcg -cg R1CG modify -remove -sid 306 -stg_rdfg 101 -devs 50:51 -cg_rdfg 100
Table 34. Allowable device types for removing devices from a concurrent RDF1 CG
Device type in CG Enabled at CG level Enabled at SRDF group name level
Simple R1 (R1->R2) Allowed Allowed
Concurrent R11 Not allowed Only allowed if both mirrors are not
enabled by the same SRDF group name.
Cascaded R1 Not allowed Not allowed
Example
To remove devices 20 through 30 from SRDF groups 70 and 80 of ConCG on array 306 into SRDF groups 71 and 81 in the
staging area:
symcg -cg ConCG modify -remove -sid 306 -stg_rdfg 71,81 -devs 20:30 -cg_rdfg 70,80
Simple R1 (R1->R2)  Allowed      Not applicable  Not applicable  Allowed      Not applicable  Not applicable
Concurrent R11      Not allowed  Not allowed     Not allowed     Not allowed  Not allowed     Not allowed
Cascaded R1         Allowed      Allowed         Not allowed     Allowed      Allowed         Not allowed
Example
To remove device 20 of SRDF groups 38 (R1->R21) and 39 (R21->R2) of CasCG on array 306 into SRDF groups 28 and 29 in
the staging area:
symcg -cg CasCG modify -remove -sid 306 -cg_rdfg 38 -devs 20 -cg_r21_rdfg 39 -stg_rdfg
28 -stg_r21_rdfg 29
[Figure: parallel database configuration; two production hosts (Oracle instances) share an RDF consistency group, each running SYMAPI and the RDF daemon, with R1 devices mirrored to R2 devices for a DBMS-restartable copy at Sites C and D (SYM-001828).]
Figure 34. Using an SRDF consistency group with a parallel database configuration
The same consistency group definition must exist on both hosts. If enabled, Group Name Services (GNS) automatically
propagates a composite group definition to the arrays and to all locally-attached hosts running the GNS daemon.
Although each production host can provide I/O to both R1 devices in the configuration, the DBMS has a distributed lock
manager that ensures two hosts cannot write data to the same R1 device at the same time.
The SRDF links to two remote arrays (B and D) enable the R2 devices on those arrays to mirror the database activity on their
respective R1 devices.
A typical remote configuration includes a target-side host or hosts (not shown in the illustration) to restart and access the
database copy at the target site.
Using an SRDF consistency group with a parallel database configuration shows the SRDF daemons located on the production
hosts. Dell EMC recommends that you do not run the SRDF daemon on the same control host running database applications.
[Figure: RDF consistency group spanning two R1->R2 pairs, with BCVs paired to the R2 devices at the target site.]
Figure 35. Using an SRDF consistency group with BCVs at the target site
You must split the BCV pairs at the target sites to access data on the BCVs from the target-side hosts.
The recovery sequence in a configuration that includes BCVs at the target site is the same as described in Recovering from a
failed dynamic modify operation with the following exception:
At the end of the sequence, the DBMS-restartable copy of the database exists on the target R2 devices and on the BCVs if the
BCVs were synchronized with the target site's R2 devices at the time the interruption occurred.
When data propagation is interrupted, the R2 devices of the suspended SRDF pairs are in a Write Disabled state. The target-
side hosts cannot write to the R2 devices, thus protecting the consistent DBMS-restartable copy on the R2 devices.
You can perform disaster testing and business continuance tasks by splitting off the BCV version of the restartable copy, while
maintaining an unchanged R2 copy of the database. The R2 copy can remain consistent with the R1 production database until
normal SRDF mirroring between the R1 and R2 sides resumes.
This configuration allows you to split off and access the DBMS-restartable database copy on the BCVs without risking the data
protection that exists on the R2 devices when propagation of data is interrupted.
To manage the BCVs from the R2 side, associate the BCVs with a single SRDF consistency group defined on the target-site
host that is connected to arrays B and D.
Using an SRDF consistency group with BCVs at the target site shows the SRDF daemons located on the production hosts.
NOTE: Dell EMC recommends that you do not run the SRDF daemon on the same control host that runs database applications.
[Figure: concurrent SRDF; the Source R1 at Site A is mirrored to a Target R2 at Site B over RDFG 45 and to a Target R2 at Site C over RDFG 101.]
The two R2 devices operate independently but concurrently using any combination of SRDF modes.
NOTE:
For Enginuity 5876 or higher, both legs of the concurrent SRDF configuration can be in asynchronous mode.
If both R2 mirrors are synchronous:
● A write I/O from the host at the R1 device side is returned as completed when both remote arrays signal that the I/O is in
cache at the remote side.
If one R2 is synchronous and the other R2 is adaptive copy:
● I/O from the R2 operating in synchronous mode must present ending status to the sending array before a second host I/O
can be accepted.
The host does not wait for the R2 operating in adaptive copy mode.
[Figure: concurrent SRDF/S; the workload site in Boston, Massachusetts has synchronous links over RDFG 45 to a recovery site in Franklin, Massachusetts and over RDFG 101 to a recovery site in Manchester, New Hampshire.]
[Figure: concurrent SRDF/A; the workload site in Massachusetts has asynchronous links over RDFG 45 to a recovery site in Arizona and over RDFG 101 to a recovery site in Texas.]
With concurrent SRDF, you can build a device group or a composite group containing devices that only belong to the two SRDF
groups representing the concurrent remote mirrors.
Consistency protection
You can enable consistency protection for devices in a concurrent configuration.
Dell Solutions Enabler SRDF Family State Tables Guide provides more information.
Steps
1. Create the initial R1 -> R2 pair between the first array and the second array.
2. Create the R11 -> R2 pair between the first array and the third array, as sketched below.
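A minimal sketch of both steps, assuming hypothetical device files pairsAB and pairsAC and the SRDF group numbers used in the figures in this section:
symrdf createpair -file pairsAB -sid 306 -rdfg 45 -type R1 -establish
symrdf createpair -file pairsAC -sid 306 -rdfg 101 -type R1 -establish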
Steps
1. Use the symdg command to create an R1 device group.
Alternatively, use the -rdfg ALL option to simultaneously establish both mirrors of each SRDF pair in one command:
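For example (assuming an R1 device group named concGrp, as in the split example below):
symrdf -g concGrp establish -rdfg all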
NOTE:
Business Continuance Volume (BCV) devices cannot contain concurrent SRDF mirrors.
Syntax
Use the symrdf split command to split concurrent SRDF pairs, either one at a time or at the same time.
NOTE:
Concurrent R1 devices can have two mirrors participating in different consistency groups with MSC consistency protection
enabled.
To split the concurrent pairs one at a time:
Examples
To split the concurrent pairs for device group concGrp one at a time:
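A representative pair of commands, using the group numbers from the figure below:
symrdf -g concGrp split -rdfg 1
symrdf -g concGrp split -rdfg 2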
[Figure: restoring the source device in a concurrent SRDF configuration; the host at Local Site A restores the R1 from the R2 at Remote Site B (RDFG group 1) while the leg to the R2 at Remote Site C (RDF group 2) is split.]
Syntax
Use the symrdf restore command to restore from the specified RDFG group:
Examples
To restore devices in group concGrp from RDFG group 1:
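For example:
symrdf -g concGrp restore -rdfg 1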
[Figure: restoring the source device and mirror; new data on the R2 at Remote Site C (RDF group 2) restores the R1 at Local Site A, which in turn restores the R2 over RDFG group 1.]
Figure 40. Restoring the source device and mirror in a concurrent SRDF configuration
You cannot simultaneously restore from both remote mirrors to the R1 device.
Syntax
Use the symrdf restore command with the -remote option to restore both the R1 devices and the R2 devices on the second leg
from the specified RDFG group:
Examples
To restore both the R1 and the R2 devices in RDF group 1 using the data in RDF group 2:
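A sketch (the data source is RDF group 2, and -remote propagates the restored data to the other leg):
symrdf -g concGrp restore -rdfg 2 -remote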
Use the query -rdfg all command to display the state of concurrent SRDF pairs.
In the following example, concurrent SRDF pairs are in the process of synchronizing (SyncInProg):
symrdf -g conrdf query -rdfg all
During synchronization, use the symrdf verify -summary command to display a summary message every 30 seconds
until both concurrent mirrors of each SRDF pair are synchronized:
symrdf -g conrdf verify -summary -rdfg all -i 30 -synchronized
.
.
None of the devices in the group 'conrdf' are in 'Synchronized' state.
.
.
Not all devices in the group 'conrdf' are in 'Synchronized' state.
.
.
All devices in the group 'conrdf' are in 'Synchronized' state.
Cascaded SRDF uses a new type of SRDF device: the R21 device. An R21 device is both an R1 mirror and an R2 mirror, and is
used only in cascaded SRDF configurations.
An R21 device is both:
● An R2 in relation to the R1 source device at the primary site, and
● An R1 in relation to the R2 target device at the tertiary site.
There are two sets of pair states in a cascaded configuration:
● Pair states between the primary and secondary site (R1 -> R21)
● Pair states between the secondary and tertiary sites (R21 -> R2)
These two pair states are separate from each other.
When performing a control operation on one pair, the state of the other device pair must be known and considered.
The Dell Solutions Enabler SRDF Family State Tables Guide lists the applicable pair states for cascaded operations.
To perform cascaded SRDF operations with Access Control enabled, you need SRDF BASECTRL, BASE, and BCV access
types. Dell Solutions Enabler Array Controls and Management CLI User Guide provides more information.
R1 -> R21 hop: Synchronous, Asynchronous, Adaptive copy disk, Adaptive copy write pending*
R21 -> R2 hop: Synchronous, Asynchronous, Adaptive copy disk
* Adaptive Copy Write Pending mode is not supported when the R1 mirror of the RDF pair is on an array running HYPERMAX
OS.
NOTE:
Asynchronous mode can be run on either the R1 -> R21 hop or the R21 -> R2 hop, but not both.
*Adaptive copy write pending mode (acp_wp) is not supported when the R1 side of the RDF pair is on an array running
HYPERMAX OS, and diskless R21 devices are not supported on arrays running HYPERMAX OS.
Steps
1. Create the initial R1 -> R21 pair between array A and array B for the first hop. SRDF/S, SRDF/A, adaptive copy disk mode, or
adaptive copy write-pending mode is allowed over the first hop.
NOTE:
Adaptive copy write pending mode (acp_wp) is not supported when the R1 side of the RDF pair is on an array running
HYPERMAX OS.
Only one hop (R1 -> R21 or R21 -> R2) can be asynchronous at a time. If R1 -> R21 is in asynchronous mode, R21 -> R2
must be in adaptive copy disk mode.
2. Create the R21 -> R2 pair between array B and array C for the second hop. SRDF/S, SRDF/A or adaptive copy disk mode is
allowed over the second hop.
The most common implementation is SRDF/S mode for the first hop and SRDF/A mode for the second hop.
NOTE:
For cascaded SRDF without Extended Distance Protection (EDP), the R21 device paired with an R2 device must be in
either asynchronous or adaptive copy disk mode.
Example
In the following example, TestFile1 specifies two device pairs on SIDs 284 and 305:
● 0380 07A0
● 0381 07A1
1. Use the symrdf createpair command to configure the device pairs, SRDF group, and SRDF mode for the first (R1 ->
R2) hop:
symrdf createpair -file TestFile1 -sid 305 -rdfg 210 -type R2 -establish -rdf_mode
sync
[Figure: hop 1 - devices 0380 and 0381 paired with 07A0 and 07A1 over RDFG 210 in synchronous mode]
The SRDF R1 -> R2 device pairs are created and established in SRDF synchronous mode.
TestFile2 specifies two device pairs on SIDs 305 and 282:
● 07A0 03A0
● 07A1 03A1
2. Use a second symrdf createpair command to configure the device pairs, SRDF group, and SRDF mode for the second
hop (R21 -> R2):
symrdf createpair -file TestFile2 -sid 305 -rdfg 230 -type R1 -establish -rdf_mode
acp_disk
[Figure: hop 1 (synchronous, RDFG 210) pairs 0380/0381 with 07A0/07A1; hop 2 (adaptive copy disk, RDFG 230) pairs 07A0/07A1 with 03A0/03A1]
Devices 07A0 and 07A1 are R21 devices in the cascaded configuration. They are:
● R2 devices in the R1 -> R21 relationship
● R1 devices in the R21 -> R2 relationship
To create an RDF1 composite group, add devices, and set an SRDF group name (a command sketch follows this list):
1. To create an empty RDF1 composite group testcg:
2. To add all devices visible to the local host at SID 284 to composite group testcg:
3. To add all devices visible to the local host at SID 256 to composite group testcg:
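A sketch of the commands (illustrative; verify the symcg syntax for your Solutions Enabler version):
symcg create testcg -type RDF1
symcg -cg testcg -sid 284 addall dev
symcg -cg testcg -sid 256 addall dev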
If either the R1 or the R2 mirror of an R21 SRDF device is made NR or WD, the R21 device will be NR or WD to the host.
Dell Solutions Enabler SRDF Family State Tables Guide provides more information.
[Figure: an R1 -> R21 -> R2 cascaded configuration across two SRDF links, with an RBCV and a control host; hop 1 and hop 2 are labeled from each side's point of view]
Examples
Use the -hop2 option with -rdfg name:GrpName to operate on the second-hop SRDF relationship for the specified SRDF group name.
In the following example, a composite group has four devices spread across two arrays:
The following command only operates on the R21->R2 SRDF relationships associated with all the R1 devices using SRDF groups
named name1:
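For example, using an establish as the control operation (the CG name MyCG is hypothetical):
symrdf -cg MyCG establish -rdfg name:name1 -hop2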
Without EDP, the R21 disk device has its own local mirrors so there are three full copies of data, one at each of the three sites.
With EDP, the R21 diskless device has no local mirrors.
Thus, there are only two full copies of data, one on the R1 disk device and one on the R2 disk device.
When using a diskless R21 device, changed tracks received from the R1 mirror are saved in cache until these tracks are sent to
the R2 disk device. Once the data is sent to the R2 device and the receipt is acknowledged, the cache slot is freed and the data
no longer exists on the R21.
SRDF/EDP restrictions
The following rules apply when creating diskless SRDF devices:
● A diskless device cannot be mapped to the host. Therefore, no host is able to directly access a diskless device for I/O data
(read or write).
● Diskless SRDF devices are supported only on GigE and Fibre RAs.
● Other replication technologies (TimeFinder/Snap, TimeFinder/Clone, Open Replicator, and Federated Live Migration) do not
work with diskless devices as the source or the target of the operation.
● The symreplicate command returns an error if a diskless device is found in the configuration.
● Diskless devices are not supported with thin CKD devices.
● The R1 and R2 volumes must be both thin or both standard. For example:
○ Thin R1-> diskless R21->thin R2, or
○ Standard, fully provisioned R1 -> diskless R21 -> standard, fully provisioned R2.
NOTE: Adaptive copy mode on the first leg does not provide full-time consistency of the R21 or R2 devices.
Syntax
NOTE:
Adaptive copy write pending mode (acp_wp) is not supported when the R1 side of the RDF pair is on an array running
HYPERMAX OS.
In an SRDF/EDP configuration, you cannot bring devices Read Write on the link until the diskless devices are designated as
being R21s.
Use the -invalidate R2 option instead of the -establish option.
NOTE:
Since the R21 devices are diskless and cannot be mapped, you do not need to make the device Not Ready or Write Disabled
before using the -invalidate R2 option.
In the following example procedure, TestFile1 specifies two device pairs on SIDs 284 and 305:
● 0380 07A0
● 0381 07A1
1. Use the symrdf createpair command to configure the device pairs, SRDF group, and SRDF mode for the first (R1 ->
R2) hop:
symrdf createpair -file TestFile1 -sid 305 -rdfg 210 -type R2 -invalidate R2
-rdf_mode sync
[Figure: hop 1 - devices 0380 and 0381 paired with diskless devices 07A0 and 07A1 over RDFG 210 in synchronous mode]
The SRDF device pairs are created and placed in synchronous mode.
TestFile2 specifies two device pairs:
○ 07A0 03A0
○ 07A1 03A1
2. Use a second symrdf createpair command to configure the device pairs, SRDF group, and SRDF mode for the second
(R21 -> R2) hop:
symrdf createpair -file TestFile2 -sid 305 -rdfg 230 -type R1 -establish -rdf_mode
acp_disk
[Figure: hop 1 (synchronous, RDFG 210) pairs 0380/0381 with diskless R21 devices 07A0/07A1; hop 2 (adaptive copy disk, RDFG 230) pairs 07A0/07A1 with 03A0/03A1]
Diskless devices should only be used as R21 devices in a cascaded environment. Diskless R1, R2, or R22 devices should only
be used as an intermediate step to create a diskless R21 device.
You can control SRDF pairs with diskless devices and without diskless devices in a single control operation if some of the
R21 devices in the group are diskless and others are not.
● The following configurations are supported when the R21 is a diskless SRDF device:
○ R1->R21->R2
○ R11->R21->R2
○ R11->R21->R22
● You cannot set the mode for an SRDF group containing diskless and non-diskless devices to asynchronous.
SRDF modes in cascaded configurations lists the modes allowed for cascaded SRDF configurations.
SRDF modes in cascaded configurations with EDP lists the modes allowed for cascaded SRDF configurations where the R21
is diskless.
All other combinations are blocked. If synchronous mode is not allowed, specify a valid SRDF mode when creating these
device pairs.
NOTE:
The adaptive copy write pending -> asynchronous combination in SRDF modes in cascaded configurations with EDP
cannot reach the Consistent state. The R21->R2 hop hangs in the SyncInProg state with 0 invalid tracks. To have the
R2 reach the Consistent state in an R1->R21->R2 setup, configure synchronous -> asynchronous.
For more information about the symconfigure command, see the Dell Solutions Enabler Array Controls and Management
CLI User Guide.
Syntax
Use the symconfigure add rdf mirror command to add both static and dynamic SRDF mirrors to diskless devices.
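A minimal sketch (the command file contents shown are an assumption; the Dell Solutions Enabler Array Controls and Management CLI User Guide documents the exact add rdf mirror file format):
symconfigure -sid 305 -file add_mirror.txt commit
where add_mirror.txt contains an add rdf mirror to dev ... statement naming the SRDF group and mirror type.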
Restrictions
● Either the local or the remote device can be diskless, however, both the local and the remote SRDF device cannot be
diskless.
● Diskless devices can only be configured on a fibre or GigE SRDF directors.
● Cannot add a mix of diskless and non-diskless SRDF devices to an SRDF group with devices in Async mode.
● The create pair action is blocked if it results in an R1->R21->R2 relationship where the R1 and the R2 are diskless devices.
● When configuring a diskless device the modes should be set as per rules discussed in Control and set restrictions for diskless
devices in cascaded SRDF .
[Table: SRDF modes in cascaded configurations with EDP - allowed combinations of Synchronous and Asynchronous across the R1 -> R21 and R21 -> R2 hops]
Do not change the SRDF mode from SID 256 -> SID 321. The R1 -> R21 hop is now Asynchronous. Only adaptive copy
disk mode is supported for the R21 -> R2 hop.
Syntax
Output of the symrdf list command includes the SRDF Mirror Type associated with the SRDF group.
Example
In the following example, Mirror Type is in bold text.
symrdf list -sid 305 -cascaded
Diskless devices
NOTE:
symcg, symdg, or symdev commands used with the relabel option fail when the scope includes any diskless device.
Syntax
Use the symrdf list command with the -diskless_rdf option to view only SRDF diskless devices.
Use the -R1, -R2, -R21, or -dynamic options to display only the selected device types.
The specified diskless SRDF or SRDF capable devices are displayed.
Example
To display SRDF diskless devices:
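For example:
symrdf list -sid 305 -diskless_rdf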
Syntax
Use the symdev list command with the -dldev option to display all configured diskless devices.
Use the -R1, -R2, -R21, or -dynamic options to display only the selected device types.
Example
To display all diskless devices for Symm 305:
symdev list -sid 305 -dldev
Syntax
In the following example, output of the symdev show command displays the following information about the specified diskless
device:
● Device Configuration - shows the device as being an R21 diskless device.
● Device SA Status - always N/A. Diskless devices cannot be mapped to a host.
● Paired with Diskless Device - indicates if the device is in an SRDF relationship with a diskless SRDF device, and the device
type for the SRDF partner of this device.
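For example (the device and SID are taken from the earlier EDP example):
symdev show 07A0 -sid 305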
.
Device Configuration : RDF21+DLDEV (Non-Exclusive Access)
.
.
Device Status : Ready (RW)
Device SA Status : N/A (N/A)
Mirror Set Type : [R2 Remote,R1 Remote,N/A,N/A]
Mirror Set DA Status : [RW,RW,N/A,N/A]
Mirror Set Inv. Tracks : [0,0,0,0]
Back End Disk Director Information
{
Hyper Type : R2 Remote
Hyper Status : Ready (RW)
Disk [Director, Interface, TID] : [N/A,N/A,N/A]
Disk Director Volume Number : N/A
Hyper Number : N/A
Mirror Number : 1
Syntax
Use the symrdf -cg CgName -rdfg name:GrpName -hop2 query command to display information about the second
hop SRDF pair of a cascaded SRDF relationship, for the specified subset of the composite group.
With an R1 -> R21 -> R2 configuration, issuing a query -hop2 from an RDF1 composite group indicates that the query
should show the relationship of the R21 -> R2 device pairs. Thus the query displays the R21 device from the R1 mirror point
of view (and vice versa for an RDF2 CG).
To see both hops of the RDF1 or RDF2 CG that contains devices in a cascaded SRDF relationship, use the symrdf -cg
query command with the -hop2 and the -detail options.
Syntax
To display detailed information about the second hop SRDF pair of a cascaded SRDF relationship, use the -detail option with
the symrdf query command.
Detailed output displays the association of the cascaded pair with the appropriate local pair.
Example
To display detailed information about the second hop SRDF pair of a cascaded SRDF relationship for composite group testcg:
symrdf query -cg testcg -rdfg name:name1 -hop2 -detail
RDFG Names:
{
RDFG Name : name1
RDF Consistency Mode : NONE
}
Hop-2
{
Symmetrix ID : 000192600305 (Microcode Version: 5876)
Remote Symmetrix ID : 000192600282 (Microcode Version: 5876)
RDF (RA) Group Number : 230 (E5)
Hop-2
{
Symmetrix ID : 000192600321 (Microcode Version: 5876)
Remote Symmetrix ID : 000192600198 (Microcode Version: 5876)
RDF (RA) Group Number : 70 (45)
Hop-2 Track(s) 0 0 0 0
Hop-2 MBs 0.0 0.0 0.0 0.0
The CG definition can span cascaded and concurrent SRDF configurations (SRDF/A and SRDF/S) across multiple arrays.
NOTE:
SRDF/Star requires a Star control host at the workload site, SRDF/A recovery links, and a Star control host at one of the
target sites. A Star control host is a host which is locally attached to only one of the sites in the SRDF/Star triangle and is
where the symstar commands are issued.
SRDF/Star topologies include:
● Cascaded SRDF/Star
● Cascaded SRDF/Star with R22 devices
● Concurrent SRDF/Star
● Concurrent SRDF/Star with R22 devices
The following prerequisites exist for the SRDF/Star topologies:
● SRDF/Star topologies without R22 devices cannot have any RDF device pairs in the recovery SRDF group.
● SRDF/Star topologies with R22 devices must have RDF device pairs configured between all the devices in the
recovery SRDF group.
[Figure: cascaded SRDF/Star. Host I/O at the workload site (R1) replicates synchronously to the R21 and asynchronously from the R21 to the R2 at the asynchronous target site (London); SRDF/A recovery links run between the workload site and the asynchronous target site (SYM-001849-update)]
In cascaded SRDF/Star, the synchronous target site is always more current than the asynchronous target site, but it is possible
to determine which site's data to use for recovery.
NOTE: During normal operations, the recovery links between the workload and the asynchronous target site are inactive.
Concurrent SRDF/Star
NOTE: Cascaded and Concurrent SRDF/Star environments dramatically reduce the time to reestablish replication
operations in the event of a failure.
In a concurrent configuration, data at the workload site is replicated directly to two remote target sites:
● The synchronous target site is within synchronous distances and is linked to the workload site by SRDF/S replication.
● The asynchronous target site can be hundreds of miles from the workload site and is linked to the workload site by SRDF/A
replication.
[Figures: concurrent SRDF/Star (R1 replicating synchronously to one R2 and asynchronously to the R2 at the London asynchronous target site, with SRDF/A recovery links), and concurrent and cascaded SRDF/Star with R22 devices (R11, R21, R22) (SYM-001849)]
R11 and R22 devices have two SRDF mirrors, each paired with a mirror at a different site.
Only one of the R22 mirrors can be active (read/write) on the link at a time.
SRDF/Star features
● Differential synchronization greatly reduces the time to establish remote mirroring and consistency.
● Dell Technologies strongly recommends that you have BCV device management available at both the synchronous and
asynchronous target sites.
● With Enginuity 5876.159.102 and higher, a mixture of thin and (non-diskless) thick devices is supported.
SRDF/Star state
SRDF/Star state refers to the workload site and both target sites as a complete entity.
NOTE: The configuration must be in the Star Protected state in order to have SRDF/Star consistent data protection and
incremental recovery capabilities.
NOTE: In the following diagrams, one of the targets is labeled as the (Sync) target in order to differentiate between the
two target sites.
[Figures: workload, synchronous target, and asynchronous target roles (sites A, B, and C) before and after switch operations, for concurrent and cascaded SRDF/Star configurations without R22 devices (R1/R11, R21, R2) and with R22 devices (R11, R21, R22)]
Transient fault operations: Used to recover from a temporary failure caused by loss of network connectivity or of either
target site. Transient faults do not disrupt production at the workload site, so these operations can be executed at the
workload site.
Normal operations
The following image shows the normal operations that are available from each state.
[Figure: normal operations available from each state. connect moves a target site from Disconnected to Connected, protect moves it to Protected, and enable (a dual action across both targets) moves the configuration to STAR Protected; disconnect, unprotect, disable, and isolate reverse these transitions, with isolate leading to the Isolated state]
Dell EMC strongly recommends that you capture a gold copy at the failed target site after the reset action and before
the connect operation.
The rounded rectangles that represent the target sites after a switch are not color coded because the definition of the
workload site and the target sites can change after the switch.
Before initiating the halt operation, stop the application workload at the current workload site and unmount the file
systems. If you change your mind after halting SRDF/Star, issue the halt -reset command to restart the workload at
the current workload site.
The following image shows the planned switch operations that are available from each state.
[Figure: planned switch operations available from each state, including halt from the Protected states (and halt (cascaded)), with protect and enable returning the configuration to STAR Protected]
Normal operations
In Cascaded SRDF/Star, the consistency of the asynchronous site data is dependent on the consistency of the synchronous site
data.
The asynchronous target can only be protected if the synchronous target is protected as well. After the two sites have been
connected, the synchronous target must be protected first.
NOTE:
The synchronous target site can be isolated if the asynchronous target site has a target site state of Disconnected, Isolated,
or PathFail.
The following image shows the normal operations that are available from each state.
[Figure: normal operations available from each state in cascaded SRDF/Star: disconnect, protect (async), unprotect (sync), unprotect (async), enable, and disable move the configuration among the Connected, Protected, Isolated, and STAR Protected states]
This diagram assumes that the synchronous target stayed protected during the fault.
[Figure: transient fault operations after loss of the asynchronous target site: from PathFail (Star Tripped), reset returns the site to Disconnected; connect, protect, cleanup, and enable then restore the STAR Protected state]
● The reset operation transitions the state from PathFail to Disconnected after a transient fault from the loss of the
asynchronous target site.
[Figure: recovery after a fault using reconfigure -reset from the PathFail state]
The rounded rectangles that represent the target sites after a switch are not color coded because the definition of the
workload site and the target sites can change after the switch.
[Figure: unplanned switch operations: from the PathFail or Disconnected (Star Tripped) states, switch (keeping local data), disconnect -trip, reconfigure, connect, and protect transition the configuration to a new workload site]
The following symstar actions are available (the letters indicate where an action can be issued: W = workload site, T = target site):
● query (W/T): Displays the status of a given SRDF/Star site configuration. See Displaying the symstar configuration.
● show: Displays the contents of the internal definition for a given SRDF/Star site configuration. See symstar show command.
● list: Lists each SRDF/Star composite group configuration, including workload name, mode of operation, CG and Star states, and target names and states. See symstar list command.
● setup -remove: Removes the CG from SRDF/Star control.
● isolate (W): Isolates one target site from the SRDF/Star configuration and makes its R2 devices read/write enabled to their hosts.
● unprotect (W): Disables SRDF/Star consistency protection to the specified target site.
● halt (W/T): Prepares SRDF/Star for a planned switch of the workload to a target site. This action write-disables the R1 devices, drains all invalid tracks and MSC cycles so that NewYork=NewJersey=London, suspends SRDF links, disables all consistency protection, and sets adaptive copy disk mode.
● cleanup (T): Cleans up internal meta information and cache at the remote site after a failure at the workload site.
● modifycg (W): Maintains consistency protection when adding or removing device pairs from an SRDF/Star consistency group.
● configure (W): Upgrades or transitions an existing SRDF/Star environment to employ R22 devices, provided the current SRDF/Star environment is operating in normal condition.
● connect (W): Starts the SRDF data flow in adaptive copy disk mode.
● enable (W): Enables complete SRDF/Star consistency protection across the three sites.
● protect (W): Synchronizes devices between the workload and target sites and enables SRDF/Star consistency protection to the specified target site.
● reconfigure (W): Transitions the SRDF/Star setup from concurrent SRDF to cascaded SRDF, or vice versa, after a site or link failure, or as part of a planned event. See Reconfiguring mode: cascaded to concurrent, Reconfiguring cascaded paths, Reconfiguring mode: concurrent to cascaded, and Reconfigure mode without halting the workload site.
● reset (W): Cleans up internal meta information and cache at the remote site after a transient fault (such as a loss of connectivity to the synchronous or asynchronous target site). See Recovery operations: Concurrent SRDF/Star and Recovery operations: Cascaded SRDF/Star.
● switch (T): Transitions workload operations to a target site after a workload site failure or as part of a planned event. See Workload switching: Concurrent SRDF/Star, Unplanned workload switching: Cascaded SRDF/Star, and Unplanned workload switching to asynchronous target site: Cascaded SRDF/Star.
● verify (W/T): Returns success if the state specified by the user matches the state of the Star setup.
The symstar man page provides more detailed descriptions of the options used with the symstar command.
-remote Indicates the remote data copy flag. Used with the connect
action when keeping remote data and the concurrent link is
ready. Data is also copied to the concurrent SRDF mirror.
NOTE: Not required if the concurrent link is suspended.
Steps
1. Verify the SRDF/Star control host is locally connected to only one of the three sites.
Step 1: Verify SRDF/Star control host connectivity
2. Verify the settings for each array to be included in the SRDF/Star configuration.
Step 2: Verify array settings
3. Create a composite group at the workload site.
NOTE: The RDF groups between all the SRDF/Star sites must exist, and the RDF device pairs must be created
between the applicable SRDF/Star sites, before creating the SRDF/Star composite group. Refer to Dynamic Operations,
Concurrent Operations, and Cascaded Operations.
Step 3: Create an SRDF/Star composite group
4. Create an SRDF/Star options file containing specific parameters for the setup procedure.
Step 4: Create the SRDF/Star options file
5. Issue the SRDF/Star symstar setup command to read and validate the information in the host composite group
definition, and build the SRDF/Star definition file that defines the R1 composite group.
Step 5: Perform the symstar setup operation
An SRDF/Star environment contains one or more triangles, where each triangle has a unique SRDF group for the
synchronous link, the asynchronous link, and the recovery group link. No sharing of SRDF groups is allowed between any
two SRDF/Star triangles.
The examples in this section use the following names:
● StarGrp - the composite group
● NewYork - workload site
● NewJersey - synchronous target site
● London - asynchronous target site
6. Issue the symstar buildcg command on the Star control hosts at the target sites to build the matching composite groups.
Step 6: Perform the symstar buildcg operation
7. Optionally, add BCV devices to the SRDF/Star configuration.
Step 7: (Optional) Add BCV devices to the SRDF/Star configuration
8. Bring up the SRDF/Star configuration.
Step 8: Bring up the SRDF/Star configuration
9. Optionally, configure a non-R22 STAR CG to an R22 STAR CG.
Transition SRDF/Star to use R22 devices
Steps
● Issue the symcfg list command to verify the configuration.
The following output displays the required connectivity of Local, Remote, Remote under Attachment:
symcfg list
S Y M M E T R I X
Mcode Cache Num Phys Num Symm
SymmID Attachment Model Version Size (MB) Devices Devices
000194901217 Local VMAX-1SE 5876 28672 369 6689
000194901235 Remote VMAX-1SE 5876 28672 0 6890
000194901241 Remote VMAX-1SE 5876 28672 0 7007
● Verify that the SRDF directors are Fibre or GigE (RF or RE).
● Issue the symcfg list -v command to verify that the following states exist for each array within SRDF/Star:
○ Concurrent SRDF Configuration State = Enabled
○ Dynamic SRDF Configuration State = Enabled
○ Concurrent Dynamic SRDF Configuration = Enabled
○ RDF Data Mobility Configuration State = Disabled
● Issue the symcfg list -rdfg -v command to verify that each SRDF group in the composite group has the following
configuration:
○ Prevent RAs Online Upon Power On = Enabled
○ Prevent Auto Link Recovery = Enabled
NOTE:
Preventing automatic recovery preserves the remote copy that was consistent at the time of the link failure.
Steps
Create an RDF1 type composite group, with RDF consistency protection, on the Star control host for the array at the workload
site (NewYork).
Example
This step varies depending on the topology of the SRDF configuration:
● For Concurrent SRDF/Star, proceed to Step 3, option A: Create a composite group in Concurrent SRDF/Star .
● For Cascaded SRDF/Star, skip to Step 3, option B: Create a composite group in Cascaded SRDF/Star.
[Figure: the Star control host is attached to SymmID 11 at the workload site. CG StarGrp contains SRDF groups 22 (synchronous) and 23 (asynchronous to the London target site); the recovery group for 22 is 60 and the recovery group for 23 is 62. BCVs are configured at the target sites (SYM-001849)]
Figure 67. Concurrent SRDF/Star setup using the StarGrp composite group
NOTE:
Dell Solutions Enabler Array Controls and Management CLI User Guide provides additional information on composite groups
and using the symcg -cg command.
Complete the following steps to build an RDF1 type composite group on the Star control host of the SRDF/Star workload site
(NewYork, SID 11) in a concurrent configuration:
Steps
1. Determine which devices on the local array are configured as concurrent dynamic devices.
To list the concurrent dynamic devices for array 11:
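For example (illustrative; the options are those described in the following NOTE):
symdev list -sid 11 -dynamic -both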
NOTE:
Specify the -dynamic and -both options to display dynamic SRDF pairs in which the paired devices can be either R1
or R2 devices.
2. Create an RDF1-type composite group with consistency protection on the Star control host at the workload site.
To create composite group StarGrp on array NewYork:
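For example:
symcg create StarGrp -type RDF1 -rdf_consistency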
NOTE: Specify the -rdf_consistency option to specify consistency protection for the group.
3. Add devices to the composite group from the SRDF groups that represent the concurrent links for the SRDF/Star
configuration, as shown in the sketch after this step.
NOTE: With concurrent SRDF, the command that adds one of the two concurrent groups adds both concurrent groups (in this
example, the synchronous SRDF group 22 is automatically added with the asynchronous SRDF group 23).
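A sketch of the command (illustrative syntax):
symcg -cg StarGrp -sid 11 -rdfg 23 addall dev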
4. Create two SRDF group names; one for all synchronous links and one for all asynchronous links.
To create two SRDF group names NewJersey for SRDF group 22 on SID 11 and SRDF group name London for SRDF
group 23 on SID 11:
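A sketch of the commands (the set -name form is an assumption; verify against your Solutions Enabler version):
symcg -cg StarGrp set -name NewJersey -rdfg 11:22
symcg -cg StarGrp set -name London -rdfg 11:23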
NOTE:
You could include additional synchronous SRDF groups in (synchronous) NewJersey using the sid:rdfg syntax. If the
CG contains more than one triangle, you must issue the above command to set the SRDF group name for each additional
SRDF group.
You must also include the names NewJersey and London in the SRDF/Star options file as the values for the synchronous
and asynchronous target site names, respectively.
Step 4: Create the SRDF/Star options file provides more information.
5. For each source SRDF group that you added to the composite group, define a corresponding recovery SRDF group at the
remote site.
A recovery SRDF group can be static or dynamic, but it cannot be shared. A recovery SRDF group cannot contain any
devices.
In the following example for a non-R22 Star CG:
● SRDF group 60 is an empty static or dynamic group on the remote array to which source SRDF group 22 is linked.
● Recovery SRDF group 62 was configured on the other remote array as a match for the source SRDF group 23.
To set the remote recovery groups for StarGrp SRDF groups 22 and 23 to SRDF groups 60 and 62, respectively, at the remote sites:
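A sketch of the commands (the -recovery_rdfg form is an assumption):
symcg -cg StarGrp set -rdfg 11:22 -recovery_rdfg 60
symcg -cg StarGrp set -rdfg 11:23 -recovery_rdfg 62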
These two recovery group definitions represent one recovery SRDF group as viewed from each of the two target sites.
NOTE: If the CG contains more than one triangle, you must issue the above command to set the recovery group for
each additional SRDF group.
[Figure: the Star control host is attached to SymmID 11 at the workload site. CG StarGrp contains synchronous SRDF group 22; SRDF group 60 carries the asynchronous hop to the London target site, and SRDF group 23 is the empty recovery group over the SRDF/A recovery link. BCVs are configured at the target sites]
Figure 68. Cascaded SRDF/Star setup using the StarGrp composite group
Complete the following steps to build an RDF1-type composite group on the Star control host of the SRDF/Star workload site
(NewYork, SID 11) in a cascaded environment:
Steps
1. Determine which devices on the local array (-sid 11) are configured as cascaded dynamic devices.
To list the cascaded dynamic devices for array 11:
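For example (illustrative, as in the concurrent procedure):
symdev list -sid 11 -dynamic -both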
NOTE:
Specify the -dynamic and -both options to display dynamic SRDF pairs in which the paired devices can be either R1
or R2 devices.
2. Create an RDF1-type composite group with consistency protection on the Star control host at the workload site.
To create composite group StarGrp on NewYork:
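For example:
symcg create StarGrp -type RDF1 -rdf_consistency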
NOTE:
Specify the -rdf_consistency option to specify consistency protection for the group.
3. Add devices to the composite group from those SRDF groups that represent the cascaded links for the SRDF/Star
configuration.
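A sketch of the command (illustrative syntax):
symcg -cg StarGrp -sid 11 -rdfg 22 addall dev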
4. Create an SRDF group name for the synchronous link, as shown in the sketch after this step.
NOTE: The site named NewJersey includes synchronous SRDF group 22 on array 11. If the CG contains more than one
triangle, you must issue the command to set the SRDF group name for each additional SRDF group.
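A sketch of the command (the set -name form is an assumption):
symcg -cg StarGrp set -name NewJersey -rdfg 11:22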
Include the site names NewJersey and London in the SRDF/Star options file as the values for the synchronous and
asynchronous target site names, respectively. Step 4: Create the SRDF/Star options file provides more information.
5. For each source SRDF group added to the composite group, define a corresponding recovery SRDF group at the local
(workload) site.
The recovery SRDF group:
● Can be static or dynamic.
● Cannot be shared.
● Cannot contain any devices.
● Must be empty.
For the cascaded setup in Cascaded SRDF/Star setup using the StarGrp composite group, the recovery SRDF group is the
empty SRDF group 23 configured between the NewYork synchronous site and the London asynchronous site.
To add this recovery SRDF group:
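A sketch of the command (the -recovery_rdfg form is an assumption):
symcg -cg StarGrp set -rdfg 11:22 -recovery_rdfg 23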
Description
The SRDF/Star options file specifies the names of each SRDF/Star site and other required parameters.
Syntax
The SRDF/Star options file must conform to the following syntax:
SYMCLI_STAR_OPTION=Value
#Comment
SYMCLI_STAR_WORKLOAD_SITE_NAME=WorkloadSiteName
SYMCLI_STAR_SYNCTARGET_SITE_NAME=SyncSiteName
SYMCLI_STAR_ASYNCTARGET_SITE_NAME=AsyncSiteName
SYMCLI_STAR_ADAPTIVE_COPY_TRACKS=NumberTracks
SYMCLI_STAR_ACTION_TIMEOUT=NumberSeconds
SYMCLI_STAR_TERM_SDDF=Yes|No
SYMCLI_STAR_ALLOW_CASCADED_CONFIGURATION=Yes|No
SYMCLI_STAR_SYNCTARGET_RDF_MODE=ACP|SYNC
SYMCLI_STAR_ASYNCTARGET_RDF_MODE=ACP|ASYNC
NOTE: If the options file contains the SYMCLI_STAR_COMPATIBILITY_MODE parameter, it must be set to v70.
NumberTracks
Maximum number of invalid tracks allowed for SRDF/Star to transition from adaptive copy mode to
synchronous or asynchronous mode. SRDF/Star will wait until the number of invalid tracks is at or below
the NumberTracks value before changing the SRDF mode.
The default is 30,000.
NumberSeconds
Maximum time (in seconds) that the system waits for a particular condition before returning a timeout
failure.
The wait condition may be the time to achieve R2-recoverable SRDF/Star protection or SRDF
consistency protection, or the time for SRDF devices to reach the specified number of invalid tracks
while synchronizing.
The default is 1800 seconds (30 minutes). The smallest value allowed is 300 seconds (5 minutes).
SYMCLI_STAR_TERM_SDDF
Enables/disables termination of SDDF (Symmetrix Differential Data Facility) sessions on both the
synchronous and asynchronous target sites during a symstar disable.
● Yes - Terminates SDDF sessions during a symstar disable.
● No - (Default setting) Deactivates (instead of terminates) the SDDF sessions during a symstar
disable.
SYMCLI_STAR_ALLOW_CASCADED_CONFIGURATION
Enables/disables STAR mode for cascaded SRDF/Star configurations.
● Yes - Enables STAR mode for a cascaded SRDF/Star configuration.
● No - (Default setting) Disables STAR mode for a cascaded SRDF/Star configuration.
SYMCLI_STAR_SYNCTARGET_RDF_MODE
Sets the SRDF mode between the workload site and the synchronous target site at the end of the
symstar unprotect operation.
● ACP - (Default setting) The SRDF mode between the workload site and the synchronous target
site transitions to adaptive copy mode at the end of the symstar unprotect operation.
● SYNC - The SRDF mode between the workload site and the synchronous target site remains
synchronous at the end of the symstar unprotect action.
SYMCLI_STAR_ASYNCTARGET_RDF_MODE
Sets the SRDF mode between the workload site and the asynchronous target site at the end of the
symstar unprotect operation.
● ACP - (Default setting) The SRDF mode between the workload site and the asynchronous target
site transitions to adaptive copy mode at the end of the symstar unprotect operation.
● ASYNC - The SRDF mode between the workload site and the asynchronous target site remains
asynchronous at the end of the symstar unprotect action.
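For example, a minimal options file for the NewYork/NewJersey/London configuration used in this chapter (the values mirror the symstar show output later in this section):
SYMCLI_STAR_WORKLOAD_SITE_NAME=NewYork
SYMCLI_STAR_SYNCTARGET_SITE_NAME=NewJersey
SYMCLI_STAR_ASYNCTARGET_SITE_NAME=London
SYMCLI_STAR_ADAPTIVE_COPY_TRACKS=30000
SYMCLI_STAR_ACTION_TIMEOUT=1800
SYMCLI_STAR_TERM_SDDF=Yes
SYMCLI_STAR_ALLOW_CASCADED_CONFIGURATION=No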
Description
The SRDF/Star symstar setup command:
● Reads and validates the information in the host composite group definition, and
● Builds the SRDF/Star definition file that defines the R1 consistency group for the workload site.
This information is combined with the settings in the SRDF/Star options file, and then automatically written in an internal format
to the SFS on an array at each site.
Syntax
The following is the syntax for the symstar setup command:
NOTE: The -opmode <concurrent | cascaded> option is required with setup -options for SRDF/Star configurations
with R22 devices. It is not allowed without R22 devices.
Options
-reload_options
Updates the options values in the SRDF/Star definition file.
NOTE:
Specify the -distribute option from the workload site when both target sites are reachable.
Examples
To build the definition file for the StarGrp CG using the settings from the options file created in Step 4 (MyOpFile.txt):
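For example:
symstar -cg StarGrp setup -options MyOpFile.txt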
Description
Once the setup is complete and the SRDF/Star definition file is distributed to the SFS at the other sites, issue the symstar
buildcg command, on the synchronous and asynchronous site Star control hosts, to create the composite groups needed for
recovery operations at the synchronous and asynchronous target sites.
The setup and buildcg actions ignore BCV devices that you may have added to the composite group at the workload site
(NewYork). If remote BCVs are protecting data during the resynchronization of the synchronous and asynchronous target
sites, manually add the BCVs to the synchronous and asynchronous composite groups.
The next step varies depending on whether BCV devices are used:
● If BCV devices are used to retain a consistent restartable image of the data, proceed to Step 7: (Optional) Add BCV devices
to the SRDF/Star configuration.
● If not, skip to Step 8: Bring up the SRDF/Star configuration.
Syntax
Examples
To create the matching composite groups for NewJersey and London, issue the same buildcg command on each site's Star
control host (a sketch follows):
● On the Star control host(s) locally attached to the array(s) at the NewJersey site, and
● On the Star control host(s) locally attached to the array(s) at the London site.
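A sketch of the command (illustrative; buildcg builds the composite group for the locally attached site, so it is run once on each control host):
symstar -cg StarGrp buildcg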
Restrictions
● The setup and buildcg actions ignore BCV devices that you may have added to the composite group at the workload site
(NewYork).
Description
BCVs retain a consistent restartable image of the data volumes during periods of resynchronization.
BCVs are optional, but strongly recommended at both the synchronous and asynchronous target sites (NewJersey and
London).
Use the following steps to add BCV devices to the SRDF/Star configuration:
1. Add BCVs at the remote target sites by associating the BCVs with the composite group.
To associate the BCVs with the composite group StarGrp:
symbcv -cg StarGrp -sid 11 associateall dev -devs 182:19A -rdf -rdfg 22
To associate the BCVs with the composite group StarGrp in a Concurrent SRDF/Star configuration:
symbcv -cg StarGrp -sid 11 associateall dev -devs 3B6:3C9 -rdf -rdfg 23
NOTE:
You can associate BCVs to a composite group either before or after performing the setup operation. The setup operation
does not save BCV information for the composite group, so any BCVs that were associated are excluded from the internal
definitions file copied to the remote hosts.
NOTE: symstar query command provides an example of the output returned with this command.
2. The next step varies depending on whether the system state is Connected or Disconnected.
If the system state is:
● Connected - The devices are already read/write (RW) on the SRDF link.
● Disconnected - The devices are made read/write (RW) on the SRDF link by the connect operation in the next step.
3. Use the following commands to bring up SRDF/Star: first NewJersey and then London:
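A sketch of the sequence (illustrative; the -site usage follows the other symstar examples in this section):
symstar -cg StarGrp connect -site NewJersey
symstar -cg StarGrp protect -site NewJersey
symstar -cg StarGrp connect -site London
symstar -cg StarGrp protect -site London
symstar -cg StarGrp enable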
Options
connect
Sets the mode to adaptive copy disk and brings the devices to RW on the SRDF links, but does not wait
for synchronization.
protect
Transitions to the correct SRDF mode (synchronous or asynchronous), enables SRDF consistency
protection, waits for synchronization, and sets the STAR mode indicators.
enable
Provides complete SRDF/Star protection, including:
● Creates and initializes the SDDF sessions,
● Sets the STAR mode indicators on the recovery groups,
● Enables SRDF/Star to wait for R2-recoverable STAR protection across SRDF/S and SRDF/A before
producing a STAR Protected state.
NOTE:
To bring up London and then NewJersey in a concurrent SRDF/Star configuration, you can reverse the order of the
symstar protect commands.
Description
The symstar query command displays the local and remote array information and the status of the SRDF pairs in the
composite group.
NOTE:
Using the -detail option with symstar query includes extended information, such as the full Symmetrix IDs, status
flags, recovery SRDF groups, and SRDF mode in the output.
Description
The symstar show command displays the contents of the SRDF/Star definition file that was created by the symstar
setup command.
NOTE:
To display all the devices with SRDF/Star, include the -detail option.
Examples
To display the SRDF/Star definition file for the StarGrp composite group, enter:
symstar -cg StarGrp show
WorkloadSite: NewYork
SyncTargetSite: NewJersey
AsyncTargetSite: London
Adaptive_Copy_Tracks: 30000
Action_Timeout: 1800
Term_Sddf: Yes
Allow_Cascaded_Configuration: No
Description
The symstar list command displays configuration information about the SRDF/Star composite groups that have the SRDF/
Star definition file defined locally or on locally attached SFS devices.
Examples
To list the configurations for all the SRDF/Star composite groups, enter:
symstar list
S T A R G R O U P S
-----------------------------------------------------------------------------
First Target Second Target
Flags Workload Star ----------------- -----------------
Name MLC Name State Name State Name State
-----------------------------------------------------------------------------
abc_test_cg_1 CW. MyStarSit* Unprot MyStarSit* Conn MyStarSit* Disc
boston_grp CFV Hopkinton Trip Westborou* Pfl Southboro* Pfl
citi_west CFV Site_A Unprot Site_B Disc Site_C Conn
ha_apps_cg CS. Boston Unprot Cambridge Haltst SouthShor* Haltfl
ny CW. A Unprot B Halt C Halt
star_cg AS. Boston Prot NewYork Prot Philly Prot
ubs_core AFI A_Site Trip B_Site Pfl C_Site Pfl
zcg AW. SITEA - SITEB - SITEC -
zcg2 ..I - - - - - -
zcg3 ..I - - - - - -
Legend:
Flags:
M(ode of Operation) : C = Concurrent, A = Cascaded, . = Unknown
L(ocal Site) : W = Workload, F = First target,
S = Second target, . = Unknown
C(G State) : V = Valid, I = Invalid, R = RecoveryRequired, . = Not defined
States:
Star State : Prot = Protected, Prprot = PartiallyProtected,
Trip = Tripped, Tripip = TripInprogress,
Unprot = Unprotected, - = Unknown
NOTE: An entry containing a dash or a dot in the symstar list output indicates the command was unable to determine
this value.
NOTE: SRDF/Star must be disabled with both target sites in the Unprotected state.
Examples
To remove StarGrp CG from Star control from the workload site:
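For example:
symstar -cg StarGrp setup -remove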
Setup............................................................Started.
Terminate STAR target SID:000197800188...........................Started.
Terminate STAR target SID:000197800188...........................Done.
Terminate STAR target SID:000197100084...........................Started.
Terminate STAR target SID:000197100084...........................Done.
Terminate STAR target SID:000196801476...........................Started.
Terminate STAR target SID:000196801476...........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Setting Star data consistency indicators.........................Started.
Setting Star data consistency indicators.........................Done.
Deleting persistent state information............................Started.
Deleting persistent state information............................Done.
Deleting distributed setup information...........................Started.
Deleting distributed setup information...........................Done.
Deleting local setup information.................................Started.
Deleting local setup information ................................Done.
Setup............................................................Done.
NOTE:
You can run setup -remove -force from a non-workload site when the remote sites are in the PathFail state or in a
STAR Tripped state.
The setup -remove -force command removes all distributed SRDF/Star definition files associated with an SRDF/Star
consistency group even when its definition no longer exists in the SYMAPI database. It also removes the host's local
definition files for the SRDF/Star CG.
If a site is unreachable, you must run the setup -remove -force command at that site to remove the SRDF/Star
definition file from the SFS, and remove the host's local definition files of the SRDF/Star CG.
Description
There may be occasions when it is necessary to isolate one of the SRDF/Star sites, perhaps for testing purposes, and then
rejoin the isolated site with the SRDF/Star configuration.
NOTE: In rejoining an isolated site to the SRDF/Star configuration, any updates made to London's R2 devices while isolated
are discarded. That is, the data on the R1 devices overwrites the data on the R2 devices.
Issue the symstar isolate command to temporarily isolate one or all of the SRDF/Star sites. The symstar
isolate command has the following requirements:
● SRDF/Star protection must be disabled.
● The site to be isolated must be in the Protected state.
● If there are BCVs at the target site that are paired with the SRDF/Star R2 devices, split these BCV pairs before executing
the command.
NOTE:
In a cascaded SRDF/Star configuration, you can isolate the synchronous site depending on the state of the asynchronous
site, if the CG is non-diskless and the synchronous site is in a Protected state.
Description
If SRDF/Star is running normally and in the STAR Protected state, the symstar disable command disables STAR but leaves
both target sites in the Protected state, from which you can isolate either site.
Examples
To isolate site London by splitting its SRDF pairs and making the R2 devices read/write-enabled to the London host:
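A sketch of the sequence (illustrative -site usage):
symstar -cg StarGrp disable
symstar -cg StarGrp isolate -site London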
Description
If the site you want to isolate is in the Disconnected state, first get it to the Protected state with the connect and protect
commands.
Examples
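For example, to return isolated site London to the Protected state (illustrative):
symstar -cg StarGrp connect -site London
symstar -cg StarGrp protect -site London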
Description
To unprotect the target sites, first turn off SRDF/Star protection (assuming the system state is STAR Protected).
Options
disable
Disables SRDF/Star protection and terminates the SDDF sessions.
unprotect
Disables SRDF consistency protection and sets the STAR mode indicators.
Example
Execute the following command sequence from the workload site (NewYork):
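A sketch of the sequence (illustrative):
symstar -cg StarGrp disable
symstar -cg StarGrp unprotect -site NewJersey
symstar -cg StarGrp unprotect -site London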
Description
The halt operation is used to prepare for a planned switch of the workload site to a target site. It suspends the SRDF links,
disables all consistency protection, and sets the mode to adaptive copy disk. In addition, this operation write-disables the R1
devices and drains all invalid tracks to create a consistent copy of data at each site.
NOTE: All RDF links between the 3 sites, including the RDF links for the recovery leg, must be online before you initiate the
halt operation.
Examples
To halt SRDF/Star, enter:
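For example:
symstar -cg StarGrp halt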
Description
The symstar cleanup command cleans up internal metadata and array cache after a failure.
The cleanup action applies only to the asynchronous site.
Examples
To clean up any internal metadata or array cache for composite group StarGrp remaining at the asynchronous site (London)
after the loss of the workload site:
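For example (illustrative -site usage):
symstar -cg StarGrp cleanup -site London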
The add operation adds the device pairs from the SRDF groups in the staging areas to the SRDF/Star consistency group.
The remove operation moves the device pairs from the SRDF/Star consistency group into the SRDF groups in the staging
areas.
Do not enable the gns_remote_mirror option in the GNS daemon's options file when using GNS with SRDF/Star.
This option is not supported in SRDF/Star environments.
gns_remote_mirror does not remotely mirror CGs that contain concurrent or cascaded devices. If you are using
GNS, enabling the gns_remote_mirror option will not mirror the CG if it includes any devices as listed in the
"Mirroring exceptions" in the Dell Solutions Enabler Array Controls and Management CLI User Guide. Refer to the
guide for a detailed description of GNS.
To switch to a remote site, issue the symstar buildcg command to build a definition of the CG at each site in
the SRDF/Star configuration.
In the event the symstar modifycg command fails, you can rerun the command or issue symstar recover. No
control operations are allowed on a CG until after a recover completes on that CG.
Description
The symstar modifycg command moves devices between the staging area and the SRDF/Star CG, and updates the CG
definition.
Syntax
Options
-devs SymDevStart:SymDevEnd or SymDevName, SymDevStart:SymDevEnd or SymDevName... or -file FileName
Specifies the ranges of devices to add or remove.
-stg_rdfg GrpNum,GrpNum
Indicates the SRDF group(s) comprising the staging area. For a concurrent CG, two groups must be
specified, separated by a comma. These SRDF groups are associated with the SRDF groups in the
-cg_rdfg option. This association is based on their order in -stg_rdfg and -cg_rdfg.
-cg_rdfg CgGrpNum,CgGrpNum
The SRDF group(s) within the SRDF/Star CG in which to add or remove devices. For a concurrent
SRDF/Star CG, two SRDF groups must be specified, separated by a comma. These SRDF groups are
associated with the SRDF groups in the -stg_rdfg option. This association is based on their order in
-cg_rdfg and -stg_rdfg.
-stg_r21_rdfg GrpNum
Indicates the SRDF group in the staging area for the R21 -> R2 hop of a cascaded SRDF/Star CG. It is
associated with the CG group given in the -cg_r21_rdfg option.
Examples
The following example shows:
● CG ConStarCG spans a concurrent SRDF/Star configuration.
● The 3 arrays are: 306, 311, and 402.
● The staging area contains devices 20 and 21.
[Figure: ConStarCG spans the workload site (SID 306), the first target site (SID 311, synchronous), and the second target site (SID 402, asynchronous). Devices 40 and 41 are in the CG SRDF groups 40 and 80; the staging area (SRDF groups 45 and 85) contains devices 20 and 21]
To add only device 20 from the staging area into SRDF groups 40 and 80 of ConStarCG:
symstar -cg ConStarCG modifycg -add -sid 306 -stg_rdfg 45,85 -devs 20 -cg_rdfg 40,80
The following image shows ConStarCG after device 20 was added. Note that device 21 is still in the staging area:
[Figure: ConStarCG after the add. Device 20 is now in CG SRDF groups 40 and 80; device 21 remains in the staging area (SRDF groups 45 and 85)]
Restrictions
● The add operation can only add new device pairs to an existing Star triangle within the SRDF/Star CG. It cannot add a new
Star triangle to the SRDF/Star CG.
● If the target of the operation is a concurrent SRDF/Star CG (with or without R22 devices), the devices to be added must be
concurrent R1 devices.
● If the target of the operation is a cascaded SRDF/Star CG (with or without R22 devices), the devices to be added must be
cascaded R1 devices.
● If the target of the operation is a cascaded SRDF/Star CG (with or without R22 devices) and the devices to be added are
cascaded R1 devices with a diskless R21, then the R21 devices in the affected triangle of the SRDF/Star CG must also be
diskless.
● If the target of the operation is a cascaded SRDF/Star CG (with or without R22 devices) and the devices to be added are
cascaded R1 devices with a non-diskless R21, then the R21 devices in the affected triangle of the SRDF/Star CG must also
be non-diskless.
● The following table lists the valid SRDF/Star states for adding device pairs to a CG in a concurrent SRDF/Star configuration.
Table 44. Allowable SRDF/Star states for adding device pairs to a concurrent CG

State of 1st target site    State of 2nd target site    STAR state
(Synchronous)               (Asynchronous)
Connected                   Connected                   Unprotected
Protected                   Connected                   Unprotected
Connected                   Protected                   Unprotected
Protected                   Protected                   Unprotected
Protected                   Protected                   Protected
Description
Use the symstar show -cg CgName -detail command to check that the devices were moved to the concurrent CG.
Example
To check if device 20 was added to ConStarCG:
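For example:
symstar show -cg ConStarCG -detail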
Restrictions
The following table shows the valid states for adding device pairs to a CG in a cascaded SRDF/Star configuration.
Example
The following example shows:
● CG CasStarCG spans a cascaded SRDF/Star configuration.
● The 3 arrays are: 306, 311, and 402.
● The staging area contains devices 20 and 21.
[Figure: CasStarCG spans the workload site (SID 306), the first target site (SID 311, synchronous), and the second target site (SID 432, asynchronous). Devices 40 and 41 are in the CG SRDF groups 84 and 85; the staging area (SRDF groups 74 and 75) contains devices 20 and 21]
To add devices 20 and 21 from the staging area into SRDF groups 84 and 85 of CasStarCG:
symstar -cg CasStarCG modifycg -add -sid 306 -stg_rdfg 74 -devs 20:21 -stg_r21_rdfg 75
-cg_rdfg 84 -cg_r21_rdfg 85
[Figure: CasStarCG after the add. Devices 20 and 21 are now in CG SRDF groups 84 and 85; device 51 remains in the staging area (SRDF groups 74 and 75)]
Table 46. Pair states of the SRDF devices after symstar modifycg -add completion

State of SRDF/Star   Mode of device     Pair state of devices in CG    Possible delay for the symstar
sites                pairs in CG        after symstar modifycg -add    modifycg -add command
Connected            Adaptive copy disk Synchronized or SyncInProg     No delay; the command completes
                                                                       when the pair is SyncInProg.
Protected            SRDF/S             Synchronized                   Completes when devices are
                                                                       synchronized.
Protected            SRDF/A             Consistent without invalid     Completes when the consistency
                                        tracks                         exempt option (-exempt) clears on
                                                                       the devices added to the CG.
Star Protected       SRDF/S             Synchronized                   Completes when devices are
                                                                       synchronized.
Star Protected       SRDF/A             Consistent without invalid     Completes when devices are
                                        tracks                         recoverable.
Description
Use the symstar show -cg CgName -detail command to verify that the devices were moved.
Never use the dynamic modifycg -remove operation to remove an existing triangle from the SRDF/Star CG. You cannot
remove the last device from an SRDF/Star triangle.
Restrictions
The following restrictions apply to the SRDF groups and devices in the staging area for dynamic symstar modifycg
-remove operations:
● SRDF groups in the staging area are not in the STAR state.
● SRDF groups in the staging area are not in asynchronous mode.
Example
To move device 35 from RDF groups 40 and 80 of ConStarCG into SRDF groups 45 and 85 of the staging area:
symstar -cg ConStarCG modifycg -remove -sid 306 -stg_rdfg 45,85 -devs 35 -cg_rdfg 40,80
Restrictions
The following table shows the valid states for removing device pairs from a CG in a concurrent SRDF/Star configuration.
Table 47. Allowable states for removing device pairs from a concurrent SRDF/Star CG

State of 1st target site    State of 2nd target site    Star state
(Synchronous)               (Asynchronous)
Connected                   Connected                   Unprotected
Protected                   Connected                   Unprotected
Connected                   Protected                   Unprotected
Protected                   Protected                   Unprotected
Protected                   Protected                   Protected
Example
To check if the dynamic remove operation was successful for ConStarCG:
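For example:
symstar show -cg ConStarCG -detail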
Example
To move devices 21 and 22 from SRDF groups 84 and 85 of CasStarCG into SRDF groups 74 and 75 of the staging area:
symstar -cg CasStarCG modifycg -remove -sid 306 -stg_rdfg 74 -devs 21:22 -stg_r21_rdfg 75
-cg_rdfg 84 -cg_r21_rdfg 85
Restrictions
The following table shows the valid states for removing device pairs from a CG in a cascaded SRDF configuration.
Table 48. Allowable states for removing device pairs from a cascaded SRDF/Star CG

State of 1st target site    State of 2nd target site    Star state
(Synchronous)               (Asynchronous)
Connected                   Connected                   Unprotected
Protected                   Connected                   Unprotected
Protected                   Protected                   Unprotected
Protected                   Protected                   Protected
Example
To check if the dynamic remove operation was successful for CasStarCG:
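For example:
symstar show -cg CasStarCG -detail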
Steps
1. Reissue the modifycg command using exactly the same parameters as the command that failed.
2. If the command fails again, execute the following command at the workload site:
If the workload site or any of the SRDF/Star CG sites are unreachable, specify -force:
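A sketch (CgName stands for the name of the affected CG):
symstar -cg CgName recover
symstar -cg CgName recover -force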
The symstar recover command uses all existing information of a dynamic modifycg operation in SFS.
The recover operation either completes the unfinished steps of the dynamic modifycg operation or rolls back any tasks
performed on the CG by this operation, placing the CG into its original state before failure.
2. Issue the symstar -cg CgName recover -force command to retry the failed operation.
To retry the failed symstar modifycg -add for CG SampleCG:
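For example:
symstar -cg SampleCG recover -force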
RecoverAdd..................................................Done.
If the recovery determines that a rollback is necessary, SRDF rolls back the operation and removes any devices added before
the failure. Final line of output:
RecoverRollBack.............................................Done.
Table 49. Possible pair states of the SRDF devices after a recovery

State of SRDF/Star sites    Mode of device pairs in CG    Pair state of devices in CG after a recovery
Disconnected                Adaptive copy disk            Suspended (a)
PathFail                    SRDF/S                        Suspended (a)
PathFail                    SRDF/A                        Suspended (a)

a. The SRDF pair state can be Partitioned instead of Suspended if the SRDF link is offline.
NOTE:
If remote BCVs are configured, split the remote BCVs after a transient fault to maintain a consistent image of the
data at the remote site until it is safe to reestablish the BCVs with the R2 devices. Resynchronization temporarily
compromises the consistency of the R2 data until the resynchronization is fully completed. The split BCVs retain a
consistent restartable image of the data volumes during periods of SRDF/Star resynchronization.
The next step varies depending on whether SRDF/Star data at the remote site are protected with TimeFinder BCVs:
● If SRDF/Star data at the remote site are protected with TimeFinder BCVs, proceed to Step 2.
● If not, skip to Step 3.
2. If SRDF/Star data at the remote site are protected with TimeFinder BCVs, perform the appropriate TimeFinder actions.
To split off a consistent restartable image of the data volumes prior to resynchronization at the asynchronous target
(London) site:
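A sketch of the command (the -star and -rdfg usage with symmir is an assumption; verify for your environment):
symmir -cg StarGrp split -star -rdfg 23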
3. Issue the symstar -cg CgName command with the connect, protect, and enable options to return the
asynchronous site to the SRDF/Star configuration.
To connect, protect and enable the CG StarGrp at site London:
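A sketch of the sequence (illustrative):
symstar -cg StarGrp connect -site London
symstar -cg StarGrp protect -site London
symstar -cg StarGrp enable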
4. If any London BCV pairs are part of the composite group, issue the symmir -cg CgName establish command to
reestablish them.
To reestablish the BCV pairs:
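A sketch of the command (the -star and -rdfg usage with symmir is an assumption):
symmir -cg StarGrp establish -star -rdfg 23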
[Figure: concurrent SRDF/Star fault. The synchronous target remains Protected while the links from the workload site (R11) to the asynchronous target site (London) are lost, leaving that site in the PathFail state; the SRDF/A recovery links are available]
The image shows a fault where the links between the workload site and the asynchronous target site are lost.
● The asynchronous target site (London) is accessible by the recovery SRDF groups at the synchronous site (NewJersey).
● The failure causes SRDF/Star to enter a tripped state.
You can restore SRDF/Star protection to the asynchronous target site by reconfiguring from concurrent SRDF/Star to
cascaded mode.
Syntax
Options
-path SrcSiteName:TgtSiteName
Specifies the sites on which the new SRDF pairs are created when the reconfigure command is
issued.
-site TgtSiteName
Specifies the SiteName to apply the given action.
Example
To reconfigure CG StarGrp so that the path to London is NewJersey -> London:
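For example (the -path and -site options are described above):
symstar -cg StarGrp reconfigure -path NewJersey:London -site London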
[Figure: after the reconfiguration, the R1 at the workload site replicates synchronously to the Protected R21 at NewJersey, which replicates to the R2 at the asynchronous target site (London) in the Connected state]
Restrictions
● If the asynchronous target site is in the Disconnected state and STAR is unprotected, specify the -full option.
● If the asynchronous target site is in the PathFail state and STAR is unprotected, specify the -reset and -full options.
● Specify the -full option only when an SRDF incremental resynchronization is not available.
● Perform the recover operation to recover from PathFail (asynchronous target site) and a tripped state (SRDF/Star).
Steps
1. Confirm the system state using the symstar query command.
2. Stop the application workload at the current workload site, unmount the file systems, and export the volume groups.
3. Perform the SRDF/Star halt action from the Star control host.
To halt CG StarGrp:
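A likely form of the command:
symstar -cg StarGrp halt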
NOTE:
If you change your mind after halting SRDF/Star, issue the halt -reset command to restart the workload site on the
same Star control host.
The halt action at the initial workload site (NewYork):
● Disables the R1 devices,
● Waits for all invalid tracks and cycles to drain,
● Suspends the SRDF links,
● Disables SRDF consistency protection, and
● Sets the STAR mode indicators.
The target sites transition to the Halted state, with all three sites having the data.
Figure: all three sites Halted after the halt action.
4. From a Star control host at the synchronous target site (NewJersey), issue the switch command to switch the workload to
the synchronous target site (NewJersey).
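A likely form of the command:
symstar -cg StarGrp switch -site NewJersey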
Figures: SRDF/Star site states through the switch sequence — both target sites Disconnected, then Connected, then Protected; a final diagram shows the failed workload site in PathFail with BCVs at the surviving sites.
If you switch the workload to the synchronous target site but choose to keep the data from the asynchronous target site,
there is a wait for all the SRDF data to synchronize before the application workload can be started at the synchronous site.
The symstar switch command does not return control until the data is synchronized.
This procedure:
● Brings up the synchronous NewJersey site as the new workload site.
● Asynchronously replicates data from NewJersey to the asynchronous target site (London).
NOTE:
If the links from the workload to the asynchronous target are in the TransmitIdle state, issue the following command to get
the asynchronous site to the PathFail state:
symstar -cg StarGrp disconnect -trip -site London
Steps
1. From a Star control host at the synchronous target site (NewJersey), issue the symstar cleanup command to clean up
any internal metadata or cache remaining at the asynchronous site.
To clean up the London site:
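A likely form of the command:
symstar -cg StarGrp cleanup -site London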
NOTE:
After a workload site failure, splitting the remote BCVs maintains a consistent image of the data at the remote site until
it is safe to reestablish the BCVs with the R2 devices.
The next step varies depending on whether SRDF/Star data at the remote site are protected with TimeFinder BCVs:
● If SRDF/Star data at the remote site are protected with TimeFinder BCVs, proceed to Step 2.
● If not, skip to Step 3.
2. If SRDF/Star data are protected with TimeFinder BCVs at the London site, perform the appropriate TimeFinder actions.
Prior to the switch and resynchronization between NewJersey and London, there is no existing SRDF relationship between
the synchronous and asynchronous target sites.
The BCV control operation must be performed with a separate device file instead of the composite group.
In the following example, the device file (StarFileLondon) defines the BCV pairs on array 13 in London.
To split off a consistent restartable image of the data volumes during the resynchronization process using the device file:
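A likely form of the split, run against the London array with the device file:
symmir -f StarFileLondon -sid 13 split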
3. From a Star control host at the synchronous target site (NewJersey), issue the symstar switch command to start the
workload at the specified site. The following command:
● Specifies NewJersey as the new workload site (-site NewJersey)
● Retains the NewJersey data instead of the London data (-keep_data NewJersey):
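Combining these options:
symstar -cg StarGrp switch -site NewJersey -keep_data NewJersey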
Figure: NewJersey as the new workload site, Disconnected from both target sites after the switch, with BCVs at NewYork and London.
4. From a Star control host at the synchronous target site (NewJersey), issue the connect command to connect
NewJersey to London (asynchronously):
Figure 82. Concurrent SRDF/Star: new workload site connected to asynchronous site
5. From a Star control host at the synchronous target site (NewJersey), issue the protect and enable commands to:
● Protect NewJersey to London
● Enable SRDF/Star
Figure: NewJersey Protected to London; NewYork still Disconnected.
You can begin the workload at NewJersey any time after the switch action completes. However, if you start the
workload before completing the connect and protect actions, you will have no remote protection until those actions
complete.
The next step varies depending on whether SRDF/Star data at the remote site are protected with TimeFinder BCVs:
● If SRDF/Star data at the remote site are protected with TimeFinder BCVs, proceed to Step 6.
● If not, skip to Step 7.
7. When the NewYork site is repaired, you may want to bring NewYork back into the SRDF/Star while retaining the workload
site at NewJersey.
Figure: SRDF/Star protection restored with NewJersey as the workload site, Protected to both NewYork and London.
If you switch the workload to the asynchronous target site but choose to keep the data from the synchronous target site,
there is a wait for all the SRDF data to synchronize before the application workload can be started at the asynchronous site.
The symstar switch command does not return control until the data is synchronized.
This procedure:
● Brings up the asynchronous London site as the new workload site.
● Asynchronously replicates data from London to the new asynchronous target site (NewJersey).
Steps
1. From a Star control host at the asynchronous target site (London), issue the symstar cleanup command to clean up
any internal metadata or cache remaining at the asynchronous site.
NOTE:
After a workload site failure, splitting the remote BCVs maintains a consistent image of the data at the remote site until
it is safe to reestablish the BCVs with the R2 devices.
The next step varies depending on whether SRDF/Star data at the remote site are protected with TimeFinder BCVs:
● If SRDF/Star data at the remote site are protected with TimeFinder BCVs, proceed to Step 2.
● If not, skip to Step 3.
2. If SRDF/Star data are protected with TimeFinder BCVs at the NewJersey site, perform the appropriate TimeFinder actions.
Prior to the switch and resynchronization between NewJersey and London, there is no existing SRDF relationship between
the synchronous and asynchronous target sites.
The BCV control operation must be performed with a separate device file instead of the composite group.
In the following example, the device file (StarFileNewJersey) defines the BCV pairs at the NewJersey site.
To split off a consistent restartable image of the data volumes during the resynchronization process using the device file:
3. From a Star control host at the asynchronous target site (London), issue the symstar switch command to start the
workload at the specified site. The following command:
● Specifies London as the new workload site (-site London)
● Retains the NewJersey data instead of the London data (-keep_data NewJersey):
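Combining these options:
symstar -cg StarGrp switch -site London -keep_data NewJersey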
The workload site switches to London and the R2 devices at London become R1 devices.
The London site connects to the NewJersey site and retrieves the NewJersey data.
NOTE:
The connect action is not required because the switch action specified that SRDF retrieve the remote data from the
NewJersey site.
Figure: London as the new workload site, Connected to NewJersey and Disconnected from NewYork.
4. From a Star control host at the asynchronous target site (London), issue the protect command to protect London to
NewJersey:
Figure: London workload site Protected to NewJersey and Disconnected from NewYork.
NOTE:
The next step varies depending on whether SRDF/Star data at the remote site are protected with TimeFinder BCVs:
● If SRDF/Star data at the remote site are protected with TimeFinder BCVs, proceed to Step 5.
● If not, skip to Step 6.
5. Reestablish any BCV pairs at the NewJersey site.
Use either:
● The device file syntax (-f StarFileNewJersey), or
● The -cg syntax (if you have associated the NewJersey BCV pairs with the StarGrp composite group on the Star
control host).
To reestablish NewJersey BCV pairs in the composite group StarGrp using the -cg syntax:
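Using the -cg syntax:
symmir -cg StarGrp establish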
6. The London site is at asynchronous distance from both NewYork and NewJersey. SRDF/Star supports only one
asynchronous site.
When the NewYork site is repaired, you cannot connect and protect NewYork without switching the workload back to a
configuration that has only one asynchronous site (NewYork or NewJersey).
However, you can connect to NewYork. The connect action sets the mode to adaptive copy disk and brings the devices to
RW on the SRDF links.
To connect to NewYork, issue the connect command from the London site:
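A likely form of the command:
symstar -cg StarGrp connect -site NewYork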
Figure: London workload site Protected to NewJersey and Connected to NewYork.
If the workload remains at the asynchronous London site, you can perform a protect action on NewYork only if you first
unprotect NewJersey.
Using SYMCLI to Implement SRDF/Star Technical Note provides expanded operational examples for SRDF/Star.
Steps
1. Stop the workload at the site where the Star control host is connected.
2. Issue the halt command from the Star control host where the workload is running.
To halt SRDF from the NewJersey Star control host:
3. Run the following commands from the Star control host at the original site of the workload (NewYork):
Figures: cascaded SRDF/Star with the link to the asynchronous target site lost — London in PathFail, reachable only over the SRDF/A recovery links (R22).
Steps
1. Display the state of the SRDF devices and the SRDF links that connect them using the symrdf list command.
See Options for symrdf list command for a list of symrdf list command options.
The next step varies depending on the state of the links to the asynchronous target site (London).
● If the links to the asynchronous target are in the TransmitIdle state, proceed to Step 2.
● If the links to the asynchronous target are in the PathFail state, skip to Step 3.
2. Transition links to the asynchronous site to the PathFail state using the symstar -cg CgName disconnect -trip command.
3. Issue the symrdf list command to verify that the configuration now has the following states:
Synchronous target site (NewJersey): Protected
Asynchronous target site (London): PathFail
STAR state: Tripped
4. From the Star control host at the workload site, issue the symstar -cg CgName reset command to clean up any
internal metadata or cache remaining at the asynchronous site after the transient fault occurred.
To clean up cache and metadata for CG StarGrp at site London:
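A likely form of the command:
symstar -cg StarGrp reset -site London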
Figure: after the reset, NewJersey remains Protected and London is Disconnected.
Performing this operation changes the STAR mode of operation from cascaded to concurrent.
If:
● The asynchronous target site is no longer accessible over its normal (cascaded) path,
● The workload site is still operational, and
● The asynchronous target site is accessible through the recovery SRDF group,
then you can reconfigure the SRDF/Star environment and resynchronize data between the workload site and the
asynchronous target site, achieving direct SRDF/A consistency protection between those two sites.
Cascaded SRDF/Star with transient fault shows cascaded SRDF/Star with the workload site at NewYork and a fault between
the synchronous target site (NewJersey) and the asynchronous target site (London). The SRDF states are as follows:
● Synchronous target site (NewJersey): Protected
● Asynchronous target site (London): PathFail
● STAR state: Tripped
The first step varies depending on the state of the links to the asynchronous target site (London).
● If the links to the asynchronous target are in the TransmitIdle state, proceed to Step 1.
● If the links to the asynchronous target are in the PathFail state, skip to Step 2.
1. Transition links to the asynchronous site to the PathFail state using the symstar -cg CgName disconnect -trip
command.
2. Issue the symstar reconfigure command from the workload site (NewYork) Star control host.
NOTE:
If the system was not STAR Protected, specify the -full option to perform full resynchronization.
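A likely form of the command, assuming the direct NewYork-to-London path and a STAR Protected system (add -full otherwise):
symstar -cg StarGrp reconfigure -path NewYork:London -site London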
Figure: after reconfiguration to concurrent mode, NewJersey is Protected and London is Disconnected.
Steps
1. At the current workload site (NewYork), perform the SRDF/Star halt action.
To halt CG StarGrp:
Figure: all sites Halted.
2. From a Star control host at the synchronous target site (NewJersey), issue the switch command to switch the workload
to the synchronous target site (NewJersey).
Figure: both target sites Disconnected after the switch.
NOTE:
The entire SRDF/Star environment can also be halted from a non-workload site.
There is limited support when switching from NewYork to London. When configured as cascaded SRDF/Star with the
workload at London, only the long-distance link can be protected; the short-distance link can only be connected.
SRDF/Star cannot be enabled at London.
You cannot retain the data at the asynchronous target site if you move the workload to the synchronous target site.
In the following image, loss of the workload site (NewYork) has resulted in a system state of NewJersey:Pathfail:
Figure: loss of the workload site (NewYork) — the NewJersey leg is in PathFail while the NewJersey-to-London leg remains Protected.
Steps
1. The first step varies depending on the state of the asynchronous target site (London).
● If the asynchronous target site (London) is in Disconnected or PathFail state, skip to Step 2.
● If the asynchronous target site (London) is in Protected state, issue a disconnect command from a Star control host
at the synchronous target site (NewJersey) to get the asynchronous site to the PathFail state:
2. From a Star control host at the synchronous target site (NewJersey), issue the symstar cleanup command to clean up
any internal metadata or cache remaining at the asynchronous site.
To clean up the London site:
3. From a Star control host at the synchronous target site (NewJersey), issue the symstar switch command to start the
workload at the specified site. The following command:
● Specifies NewJersey as the new workload site (-site NewJersey)
● Retains the NewJersey data instead of the London data (-keep_data NewJersey):
Figure: NewJersey as the new workload site, Disconnected from both NewYork and London.
5. After the switch, you can bring up SRDF/Star in a cascaded mode or reconfigure to come up in concurrent mode. The
following examples explain the steps required for each mode:
● Proceed to Step 6 to bring up SRDF/Star in cascaded mode (the default).
● Skip to Step 8 to reconfigure SRDF/Star in concurrent mode.
6. From a Star control host at the new workload site (NewJersey), issue two connect commands to:
● Connect NewJersey to NewYork (synchronously)
● Connect NewYork to London (asynchronously):
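Likely forms of the two commands:
symstar -cg StarGrp connect -site NewYork
symstar -cg StarGrp connect -site London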
Figures: SRDF/Star site-state diagrams for the remainder of the bring-up sequence (Connected, then Protected, in both cascaded and concurrent modes), followed by the PathFail state after loss of the workload site that precedes the switch to London.
From a Star control host at the asynchronous target site (London), perform the following steps to:
● Switch the workload site to London
● Keep the data from the asynchronous target site (London):
Steps
1. If London is in a Protected state, issue the disconnect command:
2. If the disconnect leaves London in a CleanReq state, issue the cleanup command:
3. Issue the switch command to switch the workload site to the asynchronous target site (London) and keep the
asynchronous target's (London) data:
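Likely forms of the commands for steps 1 through 3:
symstar -cg StarGrp disconnect -site London
symstar -cg StarGrp cleanup -site London
symstar -cg StarGrp switch -site London -keep_data London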
4. The London site is at asynchronous distance from both NewYork and NewJersey. SRDF/Star supports only one
asynchronous site.
When the NewYork site is repaired, you cannot connect and protect NewYork without switching the workload back to a
configuration that has only one asynchronous site (NewYork or NewJersey).
However, you can connect to NewYork. The connect action sets the mode to adaptive copy disk and brings the devices to
RW on the SRDF links.
Issue two connect commands to connect the workload site (London) to both target sites (NewJersey and NewYork):
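Likely forms of the two commands:
symstar -cg StarGrp connect -site NewJersey
symstar -cg StarGrp connect -site NewYork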
Figure 101. Cascaded SRDF: after switch to asynchronous site, connect, and protect
Steps
1. If London is in a Protected state, issue the disconnect command:
2. If the disconnect leaves London in a CleanReq state, issue the cleanup command:
3. Issue the switch command to switch the workload site to the asynchronous target site (London) and keep the
synchronous target's (NewJersey) data:
The workload site switches to London and the R2 devices at London become R1 devices.
The London site connects to the NewJersey site and retrieves the NewJersey data.
The connect action is not required because the switch action specified that SRDF retrieve the remote data from the
NewJersey site.
Figure: after the switch keeping the NewJersey data — London Connected to NewJersey, NewYork Disconnected.
Reconfiguration operations
This section describes the following topics:
● Reconfiguring from Cascaded SRDF/Star to Concurrent SRDF/Star
● Reconfiguring cascaded paths
● Reconfiguring from Concurrent SRDF/Star to Cascaded SRDF/Star
● Reconfiguring without halting the workload site
Steps
1. From a Star control host at the workload site, issue the halt command to stop SRDF:
Figure: cascaded configuration Halted.
2. Issue the symstar reconfigure command to reconfigure the NewYork -> NewJersey -> London path to NewYork
-> London:
Figure: after reconfiguration, NewYork (R11) has a direct path to London; the sites remain Halted.
Steps
1. From a Star control host at the workload site, issue the halt command to stop SRDF:
Figure: cascaded configuration from London Halted.
2. Issue the symstar reconfigure command to reconfigure the London -> NewJersey -> NewYork path to London ->
NewYork:
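A likely form of the command:
symstar -cg StarGrp reconfigure -path London:NewYork -site NewYork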
Figure: after reconfiguration, London (R11) has a direct path to NewYork; the sites remain Halted.
Steps
1. From a Star control host at the workload site, issue the halt command to stop SRDF:
Figure: London workload configuration Halted.
2. Issue the symstar reconfigure command with -path and -remove options to reconfigure the path from:
London -> NewJersey -> NewYork
to:
London -> NewYork -> NewJersey:
Figure: after reconfiguration, the London -> NewYork -> NewJersey path, Halted.
Steps
1. From a Star control host at the workload site, issue the halt command to stop SRDF:
Figure: concurrent paths from NewYork Halted.
2. Issue the symstar reconfigure command to reconfigure the path from NewYork -> London to NewYork ->
NewJersey -> London:
Figure: after reconfiguration, the cascaded NewYork -> NewJersey -> London path, Halted.
Steps
1. From a Star control host at the workload site, issue the halt command to stop SRDF:
Figure: concurrent paths from London Halted.
2. Issue the symstar reconfigure command to reconfigure the concurrent path from London -> NewYork to cascaded
path London -> NewJersey -> NewYork:
Figure: after reconfiguration, the cascaded London -> NewJersey -> NewYork path, Halted.
These operations take the system out of the STAR Protected state.
Once reconfiguration is complete, re-enable STAR protection.
NOTE:
SYMCLI_STAR_COMPATIBILITY_MODE=v70
Example
This command is allowed from the workload site only while in the following states:
● Disconnected/Connected/Halted (to synchronous target site) and
● Disconnected/Connected/Halted (to asynchronous target site)
After the configure command completes, target sites are in the same states as they were in when the configure
command was issued.
Example
To immediately upgrade SRDF/Star to use R22 devices:
symstar -cg StarGrp configure -add recovery_rdf_pairs -opmode cascaded
Issue the symstar show command to verify R22 devices are configured as the recovery SRDF pairs. For example (truncated
output):
...
Last Action Performed :ConfigureAddRcvryRDFPair
Last Action Status :Successfull
Last Action timestamp :03/15/2008_12:29:37
R1 device migration
Before you can migrate an R1 device to a new array, you must create a temporary concurrent SRDF configuration with the new
array as one of the R2 sites.
This section describes the steps to complete an R1 migration, including:
● Configure a temporary SRDF group and R1 device to enable the migration.
● Establish a concurrent SRDF relationship to transfer data from the old R1 device to the device that will become the
new R1.
● Replace the R1 device with the newly populated device in the SRDF pair.
Figures: R1 migration — the initial R1 -> R2 configuration with Site C as the site for the new R1 device, and the temporary concurrent pairing (temporary pair and new pair) created to populate the new device.
Steps
1. Wait until the two R2 devices are near synchronization with the R11 device.
2. Shut down any applications writing to the source device.
3. Use the symrdf migrate -replace R1 command to replace the source device.
Figure: after migrate -replace R1, the device at Site C becomes the source R1 for the original R2.
R2 device migration
R2 device migration allows you to replace the original R2 devices with new R2 devices. The following figures show the
initial two-site topology, the migration process, and the final SRDF topology.
Figures: R2 device migration — the initial two-site R1 -> R2 topology, the concurrent relationship during migration (R11 writing to the original R2 and to the new device at Site C), and the final topology with the new R2.
The establish action creates a concurrent SRDF relationship to transfer data from the existing source device to both target
devices.
In the preceding example, the R1 becomes the R11 device writing to two target R2 devices.
● The source site continues to accept I/Os from the host.
● There is no need to shut down the applications writing to R1.
● No temporary pairing (like an R1 migration) is required.
● The source and target devices do not have to be close to synchronization.
NOTE:
It may be necessary to modify existing device group or composite group scripts to accommodate the new configuration.
Figure: final R2 migration topology — the R1 paired with the new R2 at Site C over RDFG 101.
Devices
● The new device (R1 or R2) cannot be an SRDF device before migration.
● The existing device (R1 or R2) and the replacement device cannot be diskless.
● The new R1 device cannot be larger than the existing R1 device.
● The existing R1 device cannot have any local invalid tracks.
● After migration, the R2 device cannot be larger than the R1 device.
● The existing (R1 or R2) and the new device cannot be configured for SRDF/Star.
● The existing device and the replacement device cannot be a source or a target device for TF/Mirror, TF/Snap, TF/Clone,
Open Replicator, and Federated Live Migration.
This restriction does not apply to the SRDF partner of the existing device.
● The existing R1/R2 device pair cannot be in a concurrent SRDF relationship.
Set the -config option to pair in symrdf migrate -setup to indicate that this pair is not part of such a
configuration.
● An SRDF consistency protection group must be enabled at the RDFG-name level, NOT at the composite-group level.
Otherwise, the migrate -setup command stops the monitoring/cycle switching of your composite group.
Sample procedure: migrating R1 devices explains the procedure for an SRDF consistency protection group enabled at the
composite-group level.
Figure 120. R1 migration example: Initial configuration
The preceding image shows an R1 and R2 relationship between array 43 and array 90.
After R1 migration, the devices in array 306 will become the source devices for array 90.
Truncated output showing the SRDF consistency mode for the group:
RDFG Names:
{
RDFG Name : siteb
RDF Consistency Mode : MSC
The device file (R1MigrateFile) pairs each existing R1 device with its replacement:
05A 005
056 006
R1 devices 05A and 056 in array 43 are paired with the new devices 005 and 006 in array 306.
It may be necessary to modify existing device group or composite group scripts to accommodate the temporary change of
the existing R1 devices to R11 devices.
The symrdf migrate -setup -config pair -force command establishes a concurrent SRDF relationship between the R1
devices in array 43 and the new devices in array 306 using SRDF group 17.
This is a temporary relationship to transfer data from the existing R1 to its replacement.
symrdf -sid 043 -rdfg 17 -f R1MigrateFile migrate -setup -config pair -force
NOTE: If the host is reading and writing to the R1 device during this action, a synchronized pair state may not be attainable
because the pair is operating in adaptive copy disk mode.
Figure 121. Concurrent SRDF relationship
2. Terminate any TF/Mirror, TF/Snap, TF/Clone, Open Replicator, and Federated Live Migration sessions.
3. Use the symrdf migrate -replace command to set R1 (R11) device as USR-NR, complete the final synchronization of
data between the existing and the new device, and reconfigure the devices into a new SRDF pair.
The device pairings of the replaced devices are removed. The new devices become R1 devices paired with the existing R2
devices using the original SRDF mode of the replaced pair.
NOTE:
The migrate -replace R1 command waits for synchronization to finish and may take a long time. To avoid the
locking of the SYMAPI database for this entire time, set the environment variable SYMCLI_CTL_ACCESS=PARALLEL. If
you set this variable, you may need to run the symcfg sync command after the R1 migration is complete.
In the following example, the migrate -replace R1 command specifies the new SRDF group 72 to reconfigure and connect
the new R1 devices 005 and 006 in array 306 with the R2 devices 012 and 029 in array 90:
symrdf -sid 043 -rdfg 17 -f R1migrateFile migrate -replace r1 -config pair -new_rdfg 72
Figure 122. Migrated R1 devices
The device file (R2migrateFile) contains the device pairs:
05A 005
056 006
Figure 123. R2 migration example: Initial configuration
The preceding example shows the R1 and R2 relationship between array 43 and array 90.
When migration is complete, R1 devices 05A and 056 in array 43 will be paired with the new devices 005 and 006 on array 306.
You may need to modify existing device group or composite group scripts to accommodate the temporary change of the
existing R1 devices to R11 devices.
The symrdf migrate -setup -config pair command establishes a concurrent SRDF relationship between the R1 devices
05A and 056 in array 43 and the new devices 005 and 006 in array 306 using SRDF group 17:
symrdf -file R2migrateFile -sid 043 -rdfg 17 migrate -setup -config pair
Figure 124. Concurrent SRDF relationship
After replacing R2, you must modify device groups and/or composite groups to remove all BCVs, VDEVS, TGTs from
the original R2 and then add appropriate counterparts to the new R2. You must also recreate any TF/Mirror, TF/Snap,
TF/Clone, Open Replicator, and Federated Live Migration sessions on the new R2.
In the following example, the symrdf migrate -replace R2 -config pair command uses the SRDF group 17 to
reconfigure and connect the R1 devices 05A and 056 with the new R2 devices 005 and 006:
symrdf -file R2migrateFile -sid 043 -rdfg 17 migrate -replace R2 -config pair
Figure 125. Migrated R2 devices
When migration is complete, the array 306 devices become the R2 devices and are paired with the R1 devices in array 43.
This new pair uses the original SRDF mode of the replaced pair.
Table 50. SRDF migrate -setup control operation and applicable pair states
Pair states considered (existing R1 -> R2): SyncInProg, Synchronized, Split, Failed over, Suspended, Partitioned, R1 updated, R1 updinprog, Consistent, TransmitIdle, Invalid.
The migrate -setup control operation is permitted (P) in five of these pair states.
Figure 126. R1 migration: applicable R1/R2 pair states for migrate -setup
The R1 in array A and the R2 in array B must be in one of the applicable pair states before issuing the symrdf migrate
-setup command, which establishes a concurrent SRDF relationship among the three sites.
The following image shows a sample configuration for an R2 migration:
Figure 127. R2 migration: applicable R1/R2 pair states for migrate -setup
The R1 in array A and the R2 in array B must be in one of the applicable pair states before issuing the symrdf migrate
-setup command, which establishes a concurrent SRDF relationship among the three sites.
Pair states for migrate -replace for first leg of concurrent SRDF
R1 migration: R11/R2 applicable pair states for migrate -replace (first leg) shows the SRDF pair state required before replacing
an R1, the R11 and its existing device.
R2 migration: R11/R2 applicable pair states for migrate -replace (first leg) shows the SRDF pair state required when replacing
R2, the R11 and its existing R2 device. For the purpose of this discussion, this is the first leg of the concurrent SRDF relationship
for both R1 and R2 migrations.
The following table lists the applicable pair states for symrdf migrate -replace for an R1 and an R2 migration.
Table 51. SRDF migrate -replace control operation and applicable pair states
Pair states considered (existing R1 -> R2): SyncInProg, Synchronized, Split, Failed over, Suspended, Partitioned, R1 updated, R1 updinprog, Consistent, TransmitIdle, Invalid.
The migrate -replace control operation is permitted (P) in five of these pair states.
The following image shows a sample concurrent SRDF configuration for an R1 migration:
Figure 128. R1 migration: R11/R2 applicable pair states for migrate -replace (first leg)
The R11 in array A and the R2 device in array B must be in one of the applicable pair states before issuing the symrdf
migrate -replace command.
The following image shows a sample concurrent SRDF configuration for an R2 migration:
Figure 129. R2 migration: R11/R2 applicable pair states for migrate -replace (first leg)
The R11 in array A and the R2 device in array B must be in one of the applicable pair states before issuing the symrdf
migrate -replace command.
Pair states for migrate -replace for second leg of concurrent SRDF
Before replacing an R1, the R11 and its replacement device must be in a specific SRDF pair state shown in R1 migration:
applicable R11/R2 pair states for migrate -replace (second leg). This temporary pairing was used to perform the concurrent
SRDF data transfer to the new device. When replacing R2, the R11 and the new R2 device (new pair) must also be in a
certain pair state shown in R2 migration: applicable R11/R2 pair states for migrate -replace (second leg).
The following table lists the applicable pair states for symrdf migrate -replace for an R1 and an R2 migration.
Table 52. SRDF migrate -replace control operation and applicable pair states
Pair states considered (temporary or new pairing -> R2): SyncInProg, Synchronized, Split, Failed over, Suspended, Partitioned, R1 updated, R1 updinprog, Consistent, TransmitIdle, Invalid.
The migrate -replace control operation is permitted (P) in three of these pair states.
The following image shows a sample concurrent SRDF configuration for an R1 migration.
Figure 130. R1 migration: applicable R11/R2 pair states for migrate -replace (second leg)
The R11 device in array A and the R2 device in array C must be in one of the applicable pair states before issuing the symrdf
migrate -replace command.
The following image shows a sample concurrent SRDF configuration for an R2 migration:
Figure 131. R2 migration: applicable R11/R2 pair states for migrate -replace (second leg)
The R11 in array A and the R2 device in array C must be in one of the states before issuing the symrdf migrate -replace
command.
● When running symreplicate against device groups and composite groups of type ANY:
○ Concurrent SRDF devices are not supported for device groups (DG) or composite groups (CG).
○ The following combinations of standard devices are supported when using the -consistent option:
■ All STDs are non-SRDF
■ All STDs are R1 devices
■ All STDs are R2 devices
■ STDs contain a mixture of R1s and non-SRDF devices
■ STDs contain a mixture of R2 and non-SRDF devices
NOTE:
Device external locks in the array are held during the entire symreplicate session. Locks are necessary to block other
applications from altering device states while the session executes. Manage locked devices provides more information.
Figure: single-hop configuration — a standard R1 device (0000) on SID 0001 paired with a remote R2, with a local BCV (01C0) and a remote BCV (BRBCV 0210).
SYMCLI_REPLICATE_HOP_TYPE=SINGLE
3. Use the symdg add dev command to add the devices to the device group.
4. Use the symbcv associate command to associate an equal number of R1-BCV devices of matching sizes.
5. Use the symbcv associate command to associate an equal number of BRBCV devices (remote BCVs), also of matching
sizes.
NOTE:
The symreplicate command uses composite groups (-cg) to implement single-hop or multi-hop configurations for
devices that span multiple arrays.
The following must be true before you start a symreplicate session:
● Both sets of BCV pairs must have a pairing relationship.
● The local BCV pairs must be established.
● The SRDF pairs must be in the Suspended pair state.
● The remote BCVs (BRBCVs) must be in the split pair state.
● No writes are allowed to the BRBCV by any directly attached host at the remote site.
Optionally, you can manually reproduce the single-hop replication cycle using a sequence of SRDF and TimeFinder CLI
commands.
Examples
To execute the symreplicate setup command on a device group (DevGrp1) using an options file (OpFile):
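A likely form of the command:
symreplicate setup -g DevGrp1 -options OpFile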
The first cycle of the symreplicate start -setup command puts the devices into the required pair state.
To execute the symreplicate start command with the -setup option:
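A likely form of the command:
symreplicate start -g DevGrp1 -options OpFile -setup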
-exact option
Use the -exact option to start the symreplicate session with the STD-BCV pair relationships in the exact order that they
were associated/added to the device group or composite group.
-optimize option
Use the -optimize option in conjunction with the -setup option or the setup argument to optimize the disk I/O on
standard/BCV pairs in the device or composite group.
The -optimize option splits all pairs and performs an optimized STD-BCV pairing within the specified group.
If you use the -optimize option with device groups, the device pair selection attempts to distribute I/O by pairing devices in
the group that are not on the same disk adapter.
NOTE:
Use the -optimize option with composite groups to specify the same pairing behavior for an RA group.
Use the -optimize_rag option with either the -setup option or the setup argument to configure pair assignments for RA
groups that provide remote I/O optimization (distribution by using different remote disk adapters).
Examples
● The first-hop local BCV device pairs are automatically resynchronized, and
● The split operation is reattempted.
The consistent split error recovery operation is attempted the number of times specified in the
SYMCLI_REPLICATE_CONS_SPLIT_RETRY file parameter, defined in the replicate options file.
If a value is not specified, then the recovery operation is attempted 3 times before terminating the symreplicate session.
Setting the symreplicate control parameters provides more information.
Steps
1. Wait for any ongoing establish to complete.
2. Split the BCV pairs:
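A likely form of the split, assuming device group DevGrp1 from the earlier example:
symmir -g DevGrp1 split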
NOTE:
You may have to include additional command options in some of the above steps (for example, establish -full for
BCV pairs without relationships).
Figure: multi-hop configuration — local R1 device 0040, first-hop RBCV 01A0 (itself an R1), and second-hop RRBCV 01A1.
Before you begin: setting the hop type and use final parameters
Set the replication type parameter in the replicate options file before you configure a multi-hop symreplicate session.
Set the parameter as follows:
SYMCLI_REPLICATE_HOP_TYPE=MULTI
Set the replication use final BCV parameter in the replicate options file to FALSE to prevent the final Hop 2 BCV from being
updated:
SYMCLI_REPLICATE_USE_FINAL_BCV=FALSE
Steps
1. Use the symdg create command to create an R1 device group (-g ) or composite group (-cg).
2. Use the symdg add dev command to add any number of R1 devices.
The following must be true before you start a symreplicate session without a setup operation:
● The local SRDF pairs must be synchronized
● The BCV pairs must be established
● The remote SRDF pairs must be suspended.
● If the final BCVs in the second-hop array are used, the BCVs must be in the split state.
Device pair state can be configured automatically using the symreplicate setup command or the -setup option with
the symreplicate start command.
Setting up pair states automatically provides more information.
Steps
1. Wait for any ongoing establish to complete.
2. Split the BCV pairs (2b in Automated data copy path in multi-hop SRDF ):
The -remote option specifies that the remote SRDF pairs are also established.
5. Use either a device file or the -rrbcv option to establish the BCV pairs in the second hop (2d in Automated data copy path
in multi-hop SRDF ):
or
NOTE:
To use the -rrbcv option, the SRDF BCV devices must have been previously associated with the group, using
symbcv -rrdf
6. Wait for any ongoing establish to complete.
7. Split the 2nd hop BCV pairs:
Perform Steps 5 and 7 when you want to use the final hop 2 BCVs in the replicate cycle.
Optionally, use the -preaction and -postaction options to specify scripts for symreplicate to run before and
after splitting the BCVs (step 2).
NOTE:
You may have to include additional command options in some of the above steps (such as establish -full for BCV
pairs without relationships).
Figure: SRDF/AR devices participating in the replication cycle across sid 0001, sid 0002, and sid 0003; devices 0026 and 0038 are in the copy cycle, while devices 0027 and 0039 are not.
In the image above, Devices 0027 and 0039 are not part of the SRDF/AR copy cycle.
To access these devices from the production host during the SRDF/AR copy cycle, you must define separate device files on the
host that include the standard R2 device and the R2 BCV on Hop 1 and Hop 2.
The device files are used to establish the BCV pairs, split BCV pairs, and access the BCV devices.
Example
For example, if a 1-hour copy cycle completed in 1.5 hours, the next cycle could be set to begin immediately (IMMEDIATE) or in
half an hour (NEXT).
Best practice
● Start the symreplicate session with the basic parameters set.
● Use symreplicate query to monitor session progress, and record the timing results of the initial copies.
● Adjust the various timing parameters to best accommodate the copy requirements for your needs.
The following table lists two parameter setups for an initial symreplicate session trial:
SYMCLI_REPLICATE_CYCLE=0
SYMCLI_REPLICATE_CYCLE_DELAY=60
Cycle through the first copy, then wait 60 minutes (delay), then another cycle, delay, and so on.
Syntax
Use the symreplicate stats command to display statistical information for cycle time and invalid tracks.
Options
-log
Write information to a specified log file.
-cycle
Display only cycle time statistics for the last SRDF/AR cycle time, the maximum cycle time and the
average cycle time.
-itrks
Display only invalid track statistics for the last SRDF/AR cycle, the maximum invalid tracks and the
average number of invalid tracks per SRDF/AR cycle.
-all
(default) Display both the cycle time and invalid tracks statistics.
Example
To display both cycle time and invalid track statistics for device group srdfar on SID 1123:
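A likely form of the command, combining the options above:
symreplicate stats -g srdfar -sid 1123 -all
Truncated output: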
Invalid Tracks:
---------------------------------------
Last Cycle: 12345 ( 9055.5 MB)
Maximum: 10780 ( 8502.3 MB)
Average: 11562 ( 7500.0 MB)
Clustered SRDF/AR
Clustered SRDF/AR enables you to start, stop, and restart symreplicate sessions from any host connected to any local array
participating in the symreplicate session.
In the clustered SRDF/AR environment, you can write the replication log file directly to the Symmetrix File System (SFS)
instead of the local host directory of the node that began the session.
Syntax
Use the symreplicate start command with the -sid and -log options to write the log file to the SFS. The following
options must be specified:
Options
-sid
ID of the array where the log file is to be stored at the start of the symreplicate session.
-g or -cg
Group name.
-log LogFilename
(Optional) User log filename.
Restrictions
● If the Symmetrix ID (-sid) is not specified at the start of the session, the log file is written to local disk using the default
SYMAPI log directory. This is not restartable from another node.
● If a user log file name (-log LogFilename) is specified when a session is started, the -log option must be specified for
all other commands in the session sequence.
● If only the group name (-g, -cg) is specified when a session is started:
○ The log file is given the same name as the group.
○ Specify only the -g or -cg option for all other commands in the session sequence.
HYPERMAX OS restrictions
In HYPERMAX OS 5977, the following options for the symreplicate start command are not supported, and the command
fails with the message "Illegal option".
● -vxfs
● -rdb
Example
To write the log file for device group session1 to a file named srdfar1.log at the SFS on array 201:
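A likely form of the command, combining the options above:
symreplicate start -g session1 -log srdfar1.log -sid 201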
Options
-recover
Recovers the device locks from the previously started session. Verify that no other currently running
symreplicate session is using the same devices before using the -recover option.
Example
To restart the SRDF/AR session from another local host:
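A likely form of the command:
symreplicate restart -g session1 -log srdfar1.log -sid 201 -recover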
Syntax
Use the symreplicate list command with the -sid option to display a list of the current SRDF/AR log files written to
the SFS at the specified SID.
Use the symreplicate list command with the -sort option to sort the log file list by name (default) or type.
Example
To list the log files at SID 201:
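Using the syntax above:
symreplicate list -sid 201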
Syntax
Use the symreplicate show -log LogfileName -sid SID -all command to display the information content of a
particular log file.
Dell Solutions Enabler CLI Reference Guide provides more information.
Options
-log
Required. Log filename.
-sid
Required. Symmetrix ID.
-args
Display only command line arguments.
Example
To display the log file srdfar1.log at SID 201:
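Using the syntax above:
symreplicate show -log srdfar1.log -sid 201 -all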
Syntax
Use the symreplicate delete -log LogFile.log command to delete the specified log file written to SFS.
Specify either the group name (-g, -cg) or the log filename (-log) depending on whether a user log name was specified when
the session was started.
Example
To delete log file srdfar1.log written to the SFS:
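A likely form of the command:
symreplicate delete -log srdfar1.log -sid 201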
If you specify an options file on restart, you may not change the following options:
○ SYMCLI_REPLICATE_USE_FINAL_BCV=<TRUE|FALSE>
○ SYMCLI_REPLICATE_HOP_TYPE=<RepType>
If you attempt to change these options, an error message is displayed. All other options may be changed, and the new
values take effect immediately.
NOTE:
#Comment
SYMCLI_REPLICATE_HOP_TYPE=<RepType>
SYMCLI_REPLICATE_CYCLE=<CycleTime>
SYMCLI_REPLICATE_CYCLE_OVERFLOW=<OvfMethod>
SYMCLI_REPLICATE_CYCLE_DELAY=<Delay>
SYMCLI_REPLICATE_NUM_CYCLES=<NumCycles>
SYMCLI_REPLICATE_USE_FINAL_BCV=<TRUE|FALSE>
SYMCLI_REPLICATE_LOG_STEP=<TRUE|FALSE>
SYMCLI_REPLICATE_GEN_TIME_LIMIT=<TimeLimit>
SYMCLI_REPLICATE_GEN_SLEEP_TIME=<SleepTime>
SYMCLI_REPLICATE_RDF_TIME_LIMIT=<TimeLimit>
SYMCLI_REPLICATE_RDF_SLEEP_TIME=<SleepTime>
SYMCLI_REPLICATE_BCV_TIME_LIMIT=<TimeLimit>
SYMCLI_REPLICATE_BCV_SLEEP_TIME=<SleepTime>
SYMCLI_REPLICATE_MAX_BCV_SLEEP_TIME_FACTOR=<Factor>
SYMCLI_REPLICATE_MAX_RDF_SLEEP_TIME_FACTOR=<Factor>
SYMCLI_REPLICATE_PROTECT_BCVS=<Protection>
SYMCLI_REPLICATE_TF_CLONE_EMULATION=<TRUE|FALSE>
SYMCLI_REPLICATE_PERSISTENT_LOCKS=<TRUE|FALSE>
SYMCLI_REPLICATE_CONS_SPLIT_RETRY=<NumRetries>
SYMCLI_REPLICATE_R1_BCV_EST_TYPE=<EstablishType>
SYMCLI_REPLICATE_R1_BCV_DELAY=<EstablishDelay>
SYMCLI_REPLICATE_FINAL_BCV_EST_TYPE=<EstablishType>
SYMCLI_REPLICATE_FINAL_BCV_DELAY=<EstablishDelay>
SYMCLI_REPLICATE_ENABLE_STATS=<TRUE|FALSE>
SYMCLI_REPLICATE_STATS_RESET_ON_RESTART=<TRUE|FALSE>
By default, symreplicate sleeps between 15 and 60 seconds when checking on the state of
SRDF devices, up to a maximum time of 4 hours.
SYMCLI_REPLICATE_TF_CLONE_EMULATION=<TRUE|FALSE>
Indicates whether TF/Clone emulation is enabled or disabled.
FALSE
(default) The TF/Clone emulation default is disabled.
TRUE
Clone emulation is enabled.
SYMCLI_REPLICATE_PERSISTENT_LOCKS=<TRUE|FALSE>
Allows device locks to persist in the event of a system crash or component failure.
TRUE
Causes symreplicate to acquire the device locks for the symreplicate session with the
SYMAPI_DLOCK_FLAG_PERSISTENT attribute.
FALSE
The persistent attribute will not be used to acquire the device locks for the session. If the base
daemon (storapi daemon) is running and persistent locks are not set, the base daemon will
release the device locks in the event of a failure.
SYMCLI_REPLICATE_CONS_SPLIT_RETRY=<NumRetries>
Specifies the number of error recovery attempts that will be made when a consistent split operation fails
because the timing window closed before the split operation completed.
3 (default)
Used if the SYMCLI_REPLICATE_CONS_SPLIT_RETRY option parameter is not specified when
a consistent split (-consistent) is requested.
0
No retry attempts are made
SYMCLI_REPLICATE_R1_BCV_EST_TYPE=<EstablishType>
Specifies the establish type for the local/first hop BCV devices. EstablishType specifies the way that
BCV establish operations will be executed by TimeFinder. Valid values are:
SINGULAR
BCV devices will be established one at a time; the next device will not be established until the
previous device has been established.
SERIAL
BCV devices will be established as fast as the establish requests can be accepted by the array.
PARALLEL
BCV devices establish requests will be passed in parallel to each of the servicing DA directors.
SYMCLI_REPLICATE_R1_BCV_DELAY=<EstablishDelay>
Specifies how long to wait between issuing establish requests. An <EstablishDelay> can be specified for
establish types of SINGULAR and PARALLEL through the SYMCLI_REPLICATE_R1_BCV_DELAY file parameter.
SYMCLI_REPLICATE_FINAL_BCV_EST_TYPE=<EstablishType>
Identifies the establish type for the remote/second hop BCV devices.
SYMCLI_REPLICATE_FINAL_BCV_DELAY=<EstablishDelay>
Indicates how long to wait between issuing establish requests for the remote/second hop BCV devices.
For an establish type of PARALLEL the delay value indicates how long to wait before passing the next
establish request to an individual servicing DA director. Values for EstablishDelay:
Range: Delay of 0 to 30 seconds
Default: 0
SYMCLI_REPLICATE_ENABLE_STATS=<TRUE|FALSE>
Enables or disables the gathering of statistics.
Recover locks
Use the symreplicate start or restart command with the -recover option to recover the device locks and restart the
session.
NOTE:
Device locks can be recovered as long as exactly the same devices are still locked under the lock holder ID of the previous
symreplicate session.
Release locks
Optionally, you can release the device external locks held in the array for a terminated SRDF/AR session.
Locks may need to be released manually if a session is terminated unexpectedly due to a system crash or component failure.
Device locks for a terminated session can be released manually for a device group, composite group or log file without restarting
the session.
Syntax
Use the symreplicate release command to release any device external locks associated with devices in the specified
device group that are still held from when they were locked from the terminated SRDF/AR session.
Restrictions
● The SRDF/AR session for the targeted devices must not be active.
● Devices must have been locked by the previous session and the lock holder ID must match the previous session's ID.
● The number of devices to be unlocked must be less than or equal to the total number of devices in the previous SRDF/AR
session.
The force (-force) option is required to release device locks in the following situations:
● If the release action is requested in a clustered SRDF/AR environment on a host that did not initiate the session and the
status of the session cannot be determined.
● If any of the device lock holder IDs in the targeted SRDF/AR session do not match the session's lock holder ID, and the user
wants to release the devices locked with the session's lock holder ID.
Example
To release device locks on a terminated session for device group prod on array 35002:
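A likely form of the command:
symreplicate release -g prod -sid 35002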
Multi-hop operations
You can manage various compounded remote configurations using both the TimeFinder and SRDF components of SYMCLI.
You can also multi-hop to a second level SRDF where Remote site G functions as a remote mirror to the standard devices of site
A and Remote site I remotely mirrors Site A's BCV.
In addition, you can also create a cascaded SRDF configuration, where tertiary site B functions as a remote partner to the R21
device at Site C, which is the remote partner of the local RDF standard device at Site A; and tertiary site D functions as a
remote partner to the R21 device at Site E, which is the remote partner of the local BCV device at Site A.
For details on multi-hop operations, see section Various remote multihop configurations in the Dell EMC Solutions Enabler
TimeFinder Family (Mirror, Clone, Snap, VP Snap) Version 8.2 and higher CLI User Guide.
Steps
1. Use the symdg create command to create an empty device group:
2. Use the symdg add dev command to add devices to the new device group:
3. Use the symbcv associate commands to associate the devices with a local BCV, and remote BCVs:
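A minimal sketch of the sequence, assuming the group prod shown in the following figure; the device numbers are illustrative, and device addition is shown here with symld (an assumption alongside the symdg add dev form named above):
symdg create prod -type RDF1
symld -g prod add dev 0026
symbcv -g prod associate dev 01C0
symbcv -g prod associate dev 0210 -rdf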
All devices must be established with the symmir and symrdf commands.
Figure: device group prod (type RDF1) on Symmetrix 344402 — standard device DEV001 (R1) paired with local BCV001 via symmir, with remote RBCV001 reached over RA group 1 (symrdf) and remote BRBCV001 over RA group 2 (symrdf -bcv) across the SRDF hop 1 links.
16. Splits the BCV-associated hop 2 device pair:
symmir -f <> -sid 056 split
or
symmir -g <> -rrbcv
Figure: multi-hop establish and split sequence across the sites (including Site C, SID 056, and Site E), showing the numbered establish and split (X) operations on the standard, R1/R2, and BCV devices.
You must have established SRDF device groups before you perform any symmir and symrdf operations.
Perform operations such as establish and restore in the same manner for remote sites.
Dell Solutions Enabler TimeFinder SnapVX CLI User Guide provides more information.
Examples
To split the BCV pair within Site A:
or
EMC VMAX3 Family Product Guide for VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS, Dell EMC VMAX All Flash
Product Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS , and Dell PowerMax Family Product Guide provide
detailed information about TimeFinder SnapVX.
Examples
The following examples use the configuration shown in SnapVX and Cascaded SRDF:
● Create, and link a SnapVX snapshot (named LocalSnap) on the local array:
● Create, and link a SnapVX snapshot (named Hop1Snap) on the remote array at Hop 1:
● Create, and link a SnapVX snapshot (named Hop2Snap) on the remote array at Hop 2:
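The exact commands depend on how the devices are grouped; a sketch assuming storage-group-based SnapVX management (the SID and group names are illustrative):
symsnapvx -sid 123 -sg LocalSG establish -name LocalSnap
symsnapvx -sid 123 -sg LocalSG -snapshot_name LocalSnap link -lnsg LinkSG
The Hop 1 and Hop 2 snapshots (Hop1Snap, Hop2Snap) would follow the same pattern against the corresponding remote arrays.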
● Create, activate, and link a SnapVX snapshot (named SiteBSnap) of devices in RDF group SiteB at remote array 197300078:
● Create, activate, and link a SnapVX snapshot (named SiteCSnap) on devices in RDF group SiteC at the remote array
197300238:
Figure: a primary R1 site replicating to a secondary R2 site over a synchronous or asynchronous link, with a gold copy at the R2 site.
In a basic recovery environment, a primary R1 site replicates to the secondary R2 site over a synchronous or asynchronous link.
A gold copy (BCV or clone) can be built on the R2 site to augment recovery restart strategies.
Restart restrictions
NOTE: symrecover options file parameters provides a complete list of parameters and optional recovery actions to be set in
the symrecover options file.
● A recovery fails if monitoring a leg that has an R22 device when the other SRDF mirror of the R22 is read/write (RW) on the
link (such states as synchronized, syncinprog, or consistent).
● The recovery does not start when the -restart_group_on_startup parameters are specified, and an R22 device has
another SRDF mirror that is already RW on the link.
If you are managing using device groups, symrecover can be started at other sites.
Syntax
Use the following syntax to launch SRDF Automated Recovery operations:
symrecover [-h]
symrecover [-env | -version]
Options
NOTE:
Either a device group (-g DgName) or a composite group (-cg CgName) must be specified.
-g DgName
Specifies a device group.
-cg CgName
Specifies a composite group.
-mode {SYNC | ASYNC}
Specifies the SRDF session type, either synchronous or asynchronous. There is no default; this option
must be specified.
-out LogPath
Specifies an alternate fully-qualified directory location for the log file.
-options FileName
Specifies the fully-qualified name of the file that contains program options. See symrecover options file
parameters for a list of possible settings.
Restrictions
● You can define devices in groups on the R2 side with a corresponding partner but symrecover cannot start in this
environment. You cannot monitor groups on the R2 side when the remote partner is concurrent. You must monitor these
groups from the host.
● The symrecover command does not support the monitoring or recovery of a device group or composite group that is set
with an ANY group type.
● Any options specified on the command line take precedence over the options specified by -options FileName.
● In a cascaded SRDF environment:
○ Specify the target composite group.
○ Do not use the -mode option.
Examples
To start a recovery in a basic SRDF/S environment:
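A likely form of the command, assuming a composite group named ProdCG (illustrative):
symrecover -cg ProdCG -mode SYNC -options cg_mon_opts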
where the cg_mon_opts options file includes the following settings and default values for a BCV gold copy:
Syntax
To recover a cascaded SRDF environment, add the following parameter settings to the options file in the previous example:
cascaded_monitor_both_hops = 1
goldcopy_location = All
Options
cascaded_monitor_both_hops = 1
Allows recovery on both hops.
goldcopy_location = All
Builds gold copies at the R21 and R2 sites.
The hop2 (R21->R2 link) restarts quickly and safely in ADCOPY mode, during the R2 resynchronization period.
email_addr_source= e_addr1 Specifies an address that will be used as the 'from' field for
any e-mails that symrecover sends.
No validity checks are done for the e-mail address. If this
setting is not specified, then a default value is generated
based on the array's hostname and current user account.
goldcopy_type=CopyType Specifies the type of gold copy to create on the R2 side. Valid
(case-insensitive) values are:
NONE = No gold copy is desired. All other goldcopy_*
options are ignored.
BCV = BCV gold copy on the R2 side is created. This is the
default.
CLONE = Clone gold copy on the R2 is created.
NOTE: For the BCV gold copy, the R2 BCVs must be
paired with the R2 devices before starting symrecover.
For the clone gold copy, the target devices must have
a clone session with the R2 devices before starting
symrecover.
goldcopy_state_startup= State Specifies the desired state of the R2 gold copy upon routine
startup. Valid (case-insensitive) values are:
ESTABLISH = The devices must be established (BCV gold
copy only).
SPLIT = The devices must be split (BCV gold copy only).
ACTIVATED = The devices must be in the copied state (clone
gold copy only).
CREATED = The devices must be in the precopy state (clone
gold copy only).
NONE = The devices must be unchanged. This is the default.
NOTE: If the gold copy type is BCV and the default
state of the BCVs is ESTABLISH, this is likely to increase
SRDF/A session drops.
goldcopy_clone_list= List For a clone gold copy, this option tells symrecover which
list within the device group or the composite group to search
for clone devices. Valid (case-insensitive) values are:
TGT = Uses the TGT list.
BCV = Uses the BCV list.
run_once= [0|1] Specifies to check the status of the group once; if the group
needs recovery actions, perform them. Exit after one check.
This option ignores the setting of restart_max_attempts.
Valid values are:
0 = Disable the option. This is the default.
1 = Enable status check.
NOTE: monitor_only, run_once, and
run_until_first_failure are mutually exclusive
options.
run_until_first_failure= [0|1] Specifies to monitor the group until the first failure occurs
and then exit without performing any recovery action. This
restart_max_attempts= attempts Specifies the maximum number of restart attempts that are
performed within the restart_window interval. After this
limit is reached the program terminates.
The range is from 0 to maxint. The value of 0 specifies to
attempt indefinitely. The default value is 5 attempts.
restart_max_wait_state_change= statetime Specifies the length of time (in seconds) during a restart for
a program to wait for a group to change to a desired state
(once requested).
Valid values are 0 to maxint. The value of 0 specifies to wait
forever. The default is 0.
restart_max_wait_warn_interval= warntime Specifies the length of time (in seconds) to display a progress
warning message while waiting for a state change to occur
during a restart.
Valid values are 0 and 30 to maxint. The value of 0 specifies
to wait forever. The default is 600 seconds.
restart_rdfa_min_cycle_warn_value= warntime Specifies the threshold (in seconds) above which a warning
message is triggered, indicating that the SRDF/A minimum
cycle time has exceeded this value.
Valid values are 0 and 30 to maxint. The value of 0 means
this feature is turned off, which is the default.
restart_state_syncinprog_wait_time= time Specifies the maximum length of time (in seconds) that,
during a group syncinprog state, a sleep is done before
rechecking the group status.
Valid values are 30 to maxint. The default is 120 seconds.
restart_state_transmit_warn_interval= time Specifies the interval of time (in seconds) that while a
group remains in a transmit idle state, to generate a warning
message.
Valid values are 0 to maxint. The default is 300 seconds.
restart_state_transmit_wait_time= transwaittime Specifies the maximum length of time (in seconds) that,
during a group transmit idle state, a sleep is done before
rechecking the group status.
Valid values are 30 to maxint. The default is 120 seconds.
restart_window= time Specifies a time window (in seconds) during which no more
than restart_max_attempts failures and accompanying
restart attempts will be tolerated before monitoring is
terminated. The window begins at the time of the first failure
and ends restart_window seconds later. A new window
begins with a failure after expiration of the previous window.
log_level= level The desired logging level. Valid values are:
0 = Off
1 = Only errors are reported
2 = Errors and warnings are reported
3 = Errors, warnings, and informational messages are reported
(default)
4 = All messages are reported