NetWorker Cloning Integration Guide
Release 8.0
Copyright © 2011-2012 EMC Corporation. All rights reserved. Published in the USA.
Published June 2012
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the
EMC online support website.
CONTENTS
Preface
Chapter 1
Introduction
Revision history
Cloning integration feature
Staging integration feature
Benefits of cloning and staging
  Additional data protection
  Performance
  Storage optimization
Licensing
Version requirements
NetWorker components
  NetWorker server
  NetWorker clients
  Storage node
  NetWorker Management Console
  Volumes
  Pools
  Save sets
  NetWorker repositories
Cloning example
Staging data example
Chapter 2
Planning and Practices
Cloning requirements
Cloning policy
Consider the application
Consider the recovery scenario
Consider the browse and retention policies
Chapter 3
Software Configuration
Filesystem configuration
Storage nodes
  Determining the read and write source
  Criteria for reading the clone data
  Criteria for writing the clone
  Directing a clone from one storage node to another storage node
  Directing clones from all storage nodes to a single storage node
Chapter 4
Cloning Procedures
Cloning data
  NetWorker release 7.6 SP1 and later
  NetWorker releases prior to 7.6 SP1
Cloning options
Automated cloning
  Configuring auto-clone
Schedule cloning
  Scheduling clone operations
  Starting scheduled clone operations manually
  Monitoring scheduled clone operations
  Viewing the clone status of a save set
Volume cloning
Save set cloning
Scripted cloning
  NetWorker 7.6 Service Pack 1 enhancements
  NetWorker 7.5 enhancements
  nsrclone option descriptions
  Using the nsrclone options
  Using the nsrclone command to specify a browse and retention policy
Cloning archived data
  Scheduling a clone session for archive data
  Cloning an archive volume manually
Considerations to improve cloning performance
Cloning validation
Displaying the backup versions in the GUI
Chapter 5
Chapter 6
Staging
Staging overview
  Staging example
The destination
Working with staging policies
  Creating a staging policy
  Editing a staging policy
  Copying a staging resource
  Deleting a staging policy
Staging from the NetWorker Management Console
Staging from the command line
  Finding the clone ID of a save set
Chapter 7
PREFACE
As part of an effort to improve its product lines, EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not
be supported by all versions of the software or hardware currently in use. The product
release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not
function as described in this document.
Note: This document was accurate at publication time. New versions of this document
might be released on the EMC online support website. Check the EMC online support
website to ensure that you are using the latest version of this document.
Purpose
This document contains planning, practices, and configuration information for using the
NetWorker cloning feature.
Audience
This document is part of the EMC NetWorker documentation set, and is intended for use
by system administrators. It contains planning, practices, and configuration information
for using the NetWorker cloning feature.
Readers of this document should be able to identify the different hardware and software components that comprise the NetWorker datazone.
This guide has been written for NetWorker release 8.0 unless specified otherwise.
Related documentation
The following documentation resources provide more information about NetWorker
software:
Technical notes and white papers provide in-depth technical reviews of products
regarding business requirements, applied technologies, and best practices.
NOTICE is used to address practices not related to personal injury.
Note: A note presents information that is important, but not hazard-related.
IMPORTANT
An important notice contains information essential to software or hardware operation.
Typographical conventions
EMC uses the following type style conventions in this document: Normal, Bold, Italic, Courier, Courier bold, and Courier italic. Courier is used for:
System output, such as an error message or script
URLs, complete paths, filenames, prompts, and syntax when shown outside of running text
The symbols < >, [ ], { }, and ... follow standard command syntax conventions: angle brackets enclose variable values supplied by the user, square brackets enclose optional values, braces enclose required choices, and ellipses indicate nonessential information omitted from an example.
Technical support
For technical support, go to EMC online support and select Support. On the Support page, you will see several options, including one to create a service request. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications. Send your opinions of this document to:
[email protected]
CHAPTER 1
Introduction
This chapter includes the following sections:
Revision history
Cloning integration feature
Staging integration feature
Benefits of cloning and staging
Licensing
Version requirements
NetWorker components
Cloning example
Staging data example
Revision history
The following table lists the revision history of this document.
Table 1 Revision history

Revision   Date
A01        June 2012
Cloning integration feature
The NetWorker cloning feature creates a copy of backup data. The data to be cloned can be selected by:
Save set
Volume
Pool
Further selection criteria can also be used to specify particular types of data or clients.
Although the clone operation creates a copy of the original backup data, it is not an exact copy: only the data within the backup is identical. Some metadata is changed so that the clone copy can be managed as a separate and independent copy from the original. This capability allows the clone copy to be used for subsequent operations without any dependency on the original.
Multiple clone copies can be created so that a backup can be protected against
corruption, local damage, site disaster, or loss. Cloning also provides a mechanism that
can be used to move data from one storage type to another. For example, for offsite
storage you can move data from disk to tape.
Clone operations can be configured to be run by:
A schedule
A customized script
Information about the volumes, status, and history of cloning operations can be viewed
and monitored from the NetWorker Administration window. Clone-related messages are
also logged to the NetWorker message file and the savegrp log file, which are located in
the NetWorker_install_dir\logs directory.
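For example, on a UNIX NetWorker server you could review recent clone-related messages with a command such as the following (the log path assumes a default installation; nsr_render_log converts the raw daemon log to readable text):

   nsr_render_log /nsr/logs/daemon.raw | grep -i clone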
Staging integration feature
The NetWorker staging feature moves backup data from one storage medium to another. The movement of data can be driven by a:
Calendar-based process, such as keeping the save set for 30 days on the staging device before moving the data to the next device.
Event-based process, such as when available space in the staging pool drops below a set threshold. When this happens, the oldest save sets are moved until available space reaches a preset upper threshold.
Staging does not affect the retention policy of backup data. Therefore, staged data is still
available for recovery.
When the stage process encounters an error after successfully cloning specified save sets, it deletes only those successfully cloned save sets from the source volume before the program stops. This ensures that after staging, only a single set of save sets exists, in either the source volumes or the clone volumes.
Benefits of cloning and staging
Additional data protection
A NetWorker cloning operation is performed only after a successful backup, which provides the following benefits:
Allows the backup process to complete at maximum performance without any impact
to speed due to multiple write acknowledgements, delays, or retries on one or more
devices. Cloning limits the impact to the client, while providing data protection as
quickly as possible.
Ensures that the backup is successful, that the data is valid, and that the clone
operation will complete successfully.
Ensures that the storage requirements have been determined and that the appropriate
storage is made available.
Allows cloning to be scheduled and prioritized outside of the backup window when
resources are less constrained.
Allows for recoveries to be initiated easily because the backup operation has already
completed.
Note: You cannot use the NetWorker software to create an instant clone by writing to two
devices simultaneously. This operation is also referred to as parallel cloning, twinning, or
inline copy. Where parallel cloning or twinning is required, consider using the NetWorker
cloning feature. Using cloning will help ensure that the initial backup completes
successfully. Additional data protection can also be implemented by using the optimum
devices and bandwidth available for the backup environment.
Validates that the original backup data can be read successfully, which provides additional assurance that the data can be recovered. It also validates that the media where the backup resides is intact.
With cloning, multiple copies of the data are available. One copy can be shipped offsite for vaulting, which provides faster data rates than backing up directly to tape. This copy can be made available for recovery at the original site or offsite.
Performance
Performance is one benefit of staging. Data is backed up to near-line storage, usually a backup-to-disk device. The data can then be migrated to tape later, based on the staging policy settings for the disk device.
Storage optimization
The storage device that is used for the initial backup is often a compromise between a number of factors, which include the following:
Location
Availability
Capacity
Speed
Cost
As a result, the backup data on the initial storage device is unlikely to be on the ideal or optimum storage for the entire duration of the data's retention period.
Cloning and staging can help to use the storage devices more effectively by moving data
between different types of devices. This ability provides the following benefits:
Backups that are stored on local tape devices can be copied to other devices in remote
locations without impact to the initial backup performance.
Backups from disk devices can be copied to tape to facilitate offsite or long term
storage.
By moving data from disk to tape, you can use the storage capacity more effectively. Once backups have been cloned to other storage devices, the original backups can be deleted so that the initial storage space, including space on deduplicated disk, can be reclaimed for new backups.
Tape
Tape is still the most commonly used backup storage medium, but several issues might be encountered when using it.
Note: Use backup-to-disk where high performance backups are required within a short
backup window. The data can be staged to tape for longer term retention.
Disk devices
Disk devices are becoming more cost-effective and offer advantages when deduplicating and replicating data. However, disk devices have limited capacity and can sometimes require considerable management effort.
Licensing
In most cases, the functionality used for cloning or staging is incorporated into the
existing NetWorker base product and requires no additional licenses or enablers.
However, there are some devices that offer additional functionality and these might
require additional licenses and enablers in order for this functionality to be used for
cloning or staging, or for additional capacity to be made available.
To ensure that the appropriate capacity and functionality licensing is applied and enabled
for the devices that are being used, refer to the EMC NetWorker Licensing Guide.
Version requirements
NetWorker clients and servers that support cloning must meet the following version requirements:
NetWorker server must be installed with NetWorker 7.6 Service Pack 1 (SP1) or later
software.
NetWorker components
The NetWorker software has a number of components that allow for flexibility in the deployment of NetWorker datazone configurations, and that allow a datazone to scale with the amount of data and the number of clients it supports.
This section includes the following topics:
Volumes on page 17
Pools on page 18
NetWorker server
The NetWorker server is the main component that manages the other components that
comprise the backup infrastructure.
A datazone comprises a NetWorker server and the group of components and client data that the server manages. A customer site may have one or multiple datazones, depending on its size, distribution, and departmental organization.
NetWorker clients
NetWorker clients are computers, workstations, or file servers whose data can be backed
up and restored with the NetWorker software. Each NetWorker client requires that the
NetWorker software be installed and that the client is configured on the NetWorker server.
The software also enables interaction with the NetWorker Application Modules.
In some cases, additional software is also installed that allows for local storage devices
and dedicated storage nodes.
Storage node
The NetWorker storage node is a system that has a NetWorker storage device attached and
is able to store backup data.
The storage node can be one of two types:
Note: Use dedicated systems for shared storage nodes and to direct all client data to the
dedicated storage node.
Volumes
NetWorker devices use data volumes to store and manage the backup data.
Every volume must belong to a pool, which allows multiple volumes to be used and managed as a group. In the case of tape cartridges, this ensures that the correct volume and storage node are always used.
Pools
A pool can be used to group the backups together so that data of a similar type or profile
can be stored together. For example, you can create a pool for Sunday Full backups.
Pools also allow data to be directed to specific storage nodes or locations, which helps to organize the data for optimum storage and recovery. Pools are also used during cloning sessions.
Save sets
The backup data consists of one or more save sets, each a single session or thread of data that has been generated by a NetWorker client or a NetWorker module. A save set contains at least one file and is located on a NetWorker volume.
Save set attributes allow the NetWorker software to ensure that the data is managed according to the policies and configuration settings applied. These attributes also allow you to determine the status and type of each save set.
NetWorker repositories
NetWorker software uses two separate repositories to manage data that has been backed up by using the save command. The following repositories record metadata irrespective of NetWorker client, NetWorker module, or data type:
Media database
Client file index
Media database
Information on the save set is stored in the media database. This database contains all of
the records for all of the save sets that are currently under the control of the NetWorker
software, and that have the potential to be used for recovery purposes.
The media database contains limited details about what is inside each save set: the names and attributes of the files within the save set are stored in separate client indexes.
Unlike client index entries, media database entries are relatively small and require only a small amount of space for each save set. As such, the disk space requirements for the media database are generally small, and the required disk size depends on the number of volumes and save sets.
Client index
There is a separate client index repository for each unique NetWorker client configured
with the NetWorker software. The client indexes contain references to the save set IDs and
record each file that was included in the backup of a given NetWorker client.
The entries in the client file index record the following information for filesystem backups:
Filename
Note: For NetWorker module backups, the client file index includes metadata about the
individual application objects.
Because some save sets might contain many files (100,000 or more), the information stored in the client indexes can grow. This growth impacts the amount of disk space required to store them. The save set browse policy allows customers to manage index space for save sets.
Cloning example
In this example, three save sets are created by a backup of a client with three data drives.
These save sets are stored on a volume that is accessible through Storage Node A. Once a
cloning action occurs, the copies of these save sets are sent to a clone pool on Storage
Node B. Figure 1 on page 20 illustrates a cloning environment.
In Figure 2 on page 21, the staging action results in the deletion of the original save sets on volume A1 once they have been successfully staged (cloned) to volume B1. The Xs indicate that once a successful clone copy has completed, the original save sets are deleted. This is the difference between a clone and a stage operation: the save sets appear to move from one storage location to another. The resulting save set is identical to the original, but in a different location.
CHAPTER 2
Planning and Practices
This chapter includes the following sections:
Cloning requirements
Cloning policy
Consider the application
Consider the recovery scenario
Consider the browse and retention policies
Cloning requirements
The following requirements apply when performing clone operations:
A minimum of two storage devices must be enabled: one to read the existing data and one to write the cloned data.
If libraries with multiple devices are used, the NetWorker server automatically
mounts the volumes required for cloning.
If stand-alone devices are used, mount the volumes manually. A message displays
in the Alert tab of the Monitoring option that indicates which volumes to mount.
The destination volume must be a different volume from the source volume, and must
belong to a clone pool.
Note: Only one clone of a particular save set can reside on a single volume. If three clones
of the same save set are specified, the NetWorker software will ensure that each clone is
written to a separate volume.
Cloning policy
Cloning data has many benefits and can be used to protect and maximize the data
protection infrastructure.
The following section lists some of these benefits, describes common scenarios, and provides advice on data selection.
Note: Ensure that all target volumes do not already contain the same clone save sets.
Volumes that contain failed clone save sets might prevent additional clone sessions from
completing.
Ensure that the volume that is used for cloning does not already contain a copy of the
save set. Only one instance of a save set can exist on the same volume or pool.
As with backup pools, there can be multiple clone pools. Clone pools can be used to sort
data by type, retention period, or location.
A clone pool can also be associated with one or more devices to limit the number or type
of devices that are used. By using clone pools, you can expire the original save sets and
reclaim the space on the initial or primary storage device while also maintaining the data
for future recoveries. This extends the retention periods within clone storage pools and
devices.
Save set clones have their own retention and browse periods which allow them to be
managed independently from the original backup.
Note: The retention policy specified in a clone pool will be overwritten if a retention policy
is specified in a scheduled clone operation or through the nsrclone command.
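For example, a command such as the following sketch (with a hypothetical pool name and save set ID) overrides the clone pool's retention policy by assigning an explicit retention time to the clone:

   nsrclone -b OffsiteClones -y "6 Months" -S 4294967297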
Save sets
NetWorker save sets have various status conditions that allow you to determine whether a save set is eligible for cloning.
Clone attributes
Clone attributes include the following:
Browsable: Select if the save set still has an entry in the client file index.
Recyclable: Select if all save sets have passed both the browse and retention policy
time periods. The volume might be available for automatic relabeling and overwriting
provided that all of the save sets on the volume are recyclable.
Recoverable: Select if the entry for the save set has been removed from the client file
index, but is still available for recovery from the media. That is, the volume has not
passed its retention policy.
In-progress: Select if the save set is currently in the process of being backed up.
IMPORTANT
In-progress save sets cannot be cloned.
Aborted: Select if the save set was either aborted manually by the administrator
during a backup, or because the computer crashed.
IMPORTANT
Aborted save sets cannot be cloned.
IMPORTANT
Suspect save sets are not cloned. The following error messages appear:
nsrclone: skipping suspect save set <ssid> cloneid <cloneid>
nsrclone: error, no complete save sets to clone
Multiplexed backups
Multiplexed save sets can be cloned. Clone copies of multiplexed save sets are written as a single contiguous data stream on the target media (demultiplexed). This behavior can be an advantage, since multiplexed backups have a read and recovery overhead. By cloning multiplexed save sets, you remove this overhead, which allows recoveries from the clone to be read faster than from the original backup.
When cloning multiplexed save sets, note that only one save set will be cloned to the same target at the same time. However, multiple clone sessions can be started at the same time from the same source, provided that they all have separate target volumes.
Use the EMC Data Domain backup-to-disk and optimized cloning feature with Data
Domain devices.
Plan ahead to ensure that the volumes are available and that they are read in an
optimum sequence.
A custom, scripted solution that uses the nsrclone command can be created and used to
manage save set spanning.
Cloning or moving save sets from tape to disk, or from disk to a Virtual Tape Library (VTL), is no different than cloning data between like devices. This allows devices to be used efficiently and effectively in the right places.
Example
Advanced file type device (AFTD) disk devices can be used for the initial backups because of their speed and versatility.
Tape devices can be used to clone the data. This allows for an extended retention period
without increasing the disk space requirements.
The use of deduplication can also provide efficient use of storage. Cloning to or from
deduplication devices can ensure that these devices are used effectively.
"Clone resources that are created with the nsradmin program" on page 29 provides more information.
Clone operations that mix save sets from different source devices, such as Data
Domain devices, AFTD devices, or Network Data Management Protocol (NDMP)
devices, may be written to different cloning target volumes. The full set of clone
volumes should be shipped offsite.
Note: Although this behavior is by design and is recommended as a best practice, it is
possible to write all save sets in the clone operation to the same clone volume.
It is a best practice not to mix normal data and NDMP data, because the way in which the data is written to tape differs: the number of filemarks and the positioning are different for NDMP data.
If the clone operation includes save sets from different devices, and you want all of
the save sets to be written to the same volume, include only one volume in the clone
target pool.
This issue does not occur when the storage node is on the NetWorker server, because the storage node is not remote.
You can use these resources to edit the clone item as a scheduled clone resource in the
GUI. The corresponding NSR task resource must have its name and action attributes
specified as follows:
name: "clone.nsrclone_resource_name"
action: "NSR clone:nsrclone_resource_name"
For example, if the NSR clone resource was named TestClone1, the name and action attributes of the NSR task resource would be:
name: "clone.TestClone1"
action: "NSR clone:TestClone1"
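For reference, a minimal nsradmin session that creates such a task resource might look like the following sketch (the attribute list is abbreviated; a complete NSR clone resource requires additional attributes, such as the target pool and the save set selection criteria):

   nsradmin> create type: NSR task; name: clone.TestClone1; action: "NSR clone:TestClone1"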
On a deduplication device within the same data center (within a Data Domain
environment)
In some cases, more copies may be required to ensure that all of the recovery scenarios can be accommodated while maintaining the expected return on investment. This requirement may not apply to all clients and all data, or may not be practical. However, consider the reasons why cloning is being used, to ensure that the actions that are being proposed or performed meet the requirements or expectations.
Additional protection can also be achieved by changing the target or moving tapes to a
second location once the cloning operation is complete.
The retention policy determines the length of time that the data remains available for
recovery on the NetWorker media database.
The browse policy determines how long the details of the data remain available for
browsing and selection on the NetWorker client index.
Both the browse and retention policies impact the amount of disk space required by the NetWorker server. The recovery procedure is likely to be different if one or both of these policies has elapsed. The browse and retention policies should be equal to or greater than the client or data requirements and should allow for the expected recovery conditions.
The NetWorker software is very versatile when recovering data because of how it handles
the data on the media. When determining data recovery options, consider:
The data is written in a format that is self-describing. This allows data to be read and recovered by using different NetWorker instances or versions.
The data remains on the media until the save set data has expired and the media is relabeled (or staged, in the case of an AFTD).
Up until the point when the media is relabeled, recoveries are still possible, regardless of the browse policy or expiration status, and even if the volume is no longer known to the NetWorker software in the media database. A command sketch follows this list.
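For example, if an expired volume must be read back after its entries are gone from the media database, the scanner program can rebuild the media database and client file index entries directly from the media (the device path here is hypothetical):

   scanner -i /dev/rmt/0cbn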
While this versatility can be relied upon when unexpected events occur, it does not replace the requirement to plan and manage the data appropriately. Give care and consideration to selecting browse and retention policies. Also consider the location and number of copies of volumes and save sets. This ensures that the data is available at the most likely time by using the simplest procedures.
Browse policy
For every backup that is created by using the NetWorker software, you must assign two policies that determine how long the data should be maintained and available for recovery. The most important policy, from an ease-of-recovery perspective, is the browse policy.
The browse policy determines how long the backup remains browsable, so that you can review and select data for recovery. This policy determines how long index data is maintained in the respective client index; for example, a browse policy of seven days removes data from the client index after seven days have elapsed. This allows different clients, different data types, or even different groups of clients to have different browse periods.
Once the browse policy for a save set has expired, it is possible to regenerate the index for a given save set. "Restoring save sets that are not in the media database" on page 81 provides details.
Note: The browse policy is limited by the retention policy. The browse period cannot exceed the time set for the retention policy.
Retention policy
As with the browse policy, the retention policy is also assigned for every NetWorker
backup, regardless of its source or type. The policy lets the NetWorker software know how
long the data within a save set is expected to be maintained for recovery.
By having separate browse and retention policies, the user is not bound by the retention period for client index information. This is useful because the recovery of data is most likely to occur within a short period of time after the backup was made, whereas the need to retain the data for business or regulatory reasons is likely to exceed this period. It is therefore possible to have a browse period that is long enough to accommodate the most likely recovery scenarios, while maintaining a retention period that satisfies the business or regulatory criteria. This approach keeps the disk space required by the client index at a more acceptable level, without the overhead of large disk space requirements and the associated performance and scaling concerns.
Example
Figure 3 on page 32 shows how browse and retention policies can be used to maintain the
data available for recovery while minimizing the disk space required for client indexes and
maximizing the storage space available. By having this cascading retention period, you
can free the space on the immediate or high performance devices, and still maintain the
recovery options from the less costly, lower performance devices.
CHAPTER 3
Software Configuration
This chapter includes the following sections:
Filesystem configuration
Storage nodes
Filesystem configuration
Before you start to configure cloning, consider the type of data that is being cloned. This section describes a basic cloning operation that uses a standard filesystem backup, where a client, pool, or volume that has one or more filesystem save sets needs to be cloned to a second device, typically in a different location.
For specific application data cloning considerations, see NetWorker Module for
Databases and Applications on page 95 and the NMM section in the NetWorker
Procedure Generator.
Figure 4 on page 34 illustrates the principle for all cloning operations.
In this figure:
The clone operation reads the data from the volume on storage node A, or from another storage node that has access to the same volumes.
The data is then directed to a different device. The data can be accessed in one of three ways:
Through the same storage node
From a storage node in a different location
By using a different device (storage node B)
Most of the configuration principles in this section apply to all cloning operations.
Storage nodes
When performing clone operations, you can select the storage node that is used for the
source and target.
This section describes the criteria that you can use to determine:
The storage node from which the clone data is read (read source).
The storage node to which the clone data is written (write source).
Selecting the appropriate storage nodes ensures that the clone copies are created from, and reside in, the appropriate locations or media formats.
Figure 6 on page 37 illustrates the storage node selection criteria for reading the clone
data.
Figure 6 Storage node selection criteria for reading the clone data
If the source volume is mounted, then the storage node of the device on which the volume is mounted is used as the read source.
If the FORCE_REC_AFFINITY environment variable is set to Yes:
The selection criteria in step a on page 38 are ignored.
The selection criteria behave as though the volume is not mounted, as described in step a on page 38.
The Clone Storage Node attribute of the read source storage node is used as the write
source.
If the read source host does not have a Client resource, the Storage Nodes attribute of
the NetWorker server is used as the write source.
No matter where the cloned data is directed, the client file index and the media database
entries for the cloned save sets still reside on the NetWorker server. This ensures that the
browse and retention policies are handled consistently regardless of where the clone data
is directed.
If the source volume is mounted, then the storage node of the device on which the volume is mounted is used as the read source.
If the FORCE_REC_AFFINITY environment variable is set to Yes:
The selection criteria in step a on page 38 are ignored.
The selection criteria behave as though the volume is not mounted, as described in step a on page 38.
When cloning is used in a VTL environment such as an EMC CLARiiON Disk Library
(CDL), the NetWorker software behaves as if the FORCE_REC_AFFINITY environment
variable is set to Yes.
In cases where tape is used as a secondary storage tier, selected data is cloned to tape for offsite storage or for extended data retention periods. This allows disk devices to be used for the initial backup, where their speed and flexibility can be most effectively used for fast backup and recovery performance.
In cases where tape is used as the primary backup media, there are still benefits in creating clone copies, including:
Secondary copy at different location or for offsite storage
Data validation
Unlike disk-based devices, tape devices read data in a serial format. While multiplexing is beneficial from a backup streaming perspective, it is not beneficial for recovery.
If recovery speed is important, the use of clone copies as the source is likely to result in faster recovery throughput.
Tape clone copies are often the preferred method for reading data in a disaster recovery situation. The ability to acquire, install, and configure a tape unit to read data is often the first task in a disaster recovery plan.
By creating a copy of the backup on tape, you can eliminate the need for appliances such as VTLs or disk systems, which often take longer to acquire, install, and configure. However, ensure that the tape copy is a full and complete copy, without dependence on other backups or deduplication appliances to complete the restore operation.
Plenty of time to create further copies to tape or other disk-based devices for longer
term retention.
For file type devices, automatic and manual cloning begins only after all the save sets
in a savegroup have been backed up.
For AFTD, automatic cloning begins only after all the save sets in a savegroup have
been backed up.
Note: You can begin manually cloning a save set as soon as it has finished its backup.
AFTD devices allow a maximum of two concurrent clone operations. One clone
operation can use the writable device and the other clone operation can use the
read-only device.
Begin the manual cloning process while the other two larger save sets are still being
backed up.
Launch the cloning process for that save set as each save set is backed up.
You can also output the backup data of Avamar deduplication nodes to tape volumes. "Backup-to-tape for Avamar deduplication clients" on page 43 provides more information.
IMPORTANT
For disaster recovery, you must replicate the client data to another Avamar deduplication
node. You must also clone the metadata. Both the metadata and the client data are
required to recover backed-up client data.
Example
Clients mars, venus, and jupiter have been configured as deduplication clients and
assigned to a backup group named Dedupe backups. This group is scheduled for a daily
level full backup.
To get a monthly tape backup of these clients:
1. Create another instance of the mars, venus, and jupiter clients.
IMPORTANT
Do not select the Deduplication backup checkbox on the Apps & Modules tab of the
Create Client resource.
2. On the General tab of the Create Client resource, assign mars, venus, and jupiter to a
backup group named Tape backups.
3. Schedule this group for a monthly full backup on one day of the month, and skip all of the other days of the month.
Note: The Avamar documentation describes the tape out options that are available for
Avamar.
Clone formats
Data stored on a Data Domain device may be cloned by the NetWorker software in one of two formats, depending on the type of media on which the clone copy will be stored:
In deduplicated format on another Data Domain device, by using clone-controlled replication (an optimized clone)
In regular, non-deduplicated format on conventional media, such as tape
Clone requirements
To clone data from one Data Domain device to another by NetWorker clone-controlled replication (optimized cloning), ensure that the following requirements are met. These requirements assume that a clone target pool, named newclonepool in this example, has already been created:
1. Ensure that both the source and target storage nodes are clients of the same
NetWorker server.
2. Ensure that the Data Domain systems are properly licensed, including a replication
license, which is required to create optimized clones.
3. Ensure that the Client resource for the NetWorker server and both storage nodes
specify, in their Aliases attribute, all of their names in use.
For example:
Fully-qualified name
Short name
Aliases
IP address
Note: If an nsrclone command or script is used to perform an optimized clone from a
host that is not the NetWorker server, then this command must specify the NetWorker
server by its primary hostname as listed in the NMC Enterprise view. Otherwise, a
regular clone might be produced instead of an optimized clone.
4. Ensure that a target pool (for example, newclonepool) has been created for Backup
Clone type with the Media type required attribute set to Data Domain.
With this setting, if a Data Domain device is not available for a clone operation in the
specified target pool, then NMC displays a "Media waiting" message.
Note: The Default Clone pool does not allow any modification. The required media
type cannot be set in that pool.
5. Ensure that the Client resource for the source storage node specifies, in its Clone
Storage Node attribute, the target storage node hostname:
If the Clone storage node attribute is not specified, then the NetWorker server
becomes the storage node for the clone operation.
If the Clone storage node attribute lists a storage node for a volume that is not Data
Domain, and media type required is not set to Data Domain in the target clone
pool, then only regular clones may be stored on those volumes.
Note: This setting is not required if the target storage node is on the NetWorker server.
6. Ensure that the source Data Domain device is mounted and available on the source
storage node.
If the source device is not mounted, then a regular, non-deduplicated clone will be
performed, except if the specified target pool is of Backup Clone type with the Media
type required attribute set to Data Domain.
7. Ensure that the target Data Domain device is labeled and mounted on the target
storage node. The pool selected for the device label operation (for example,
newclonepool) must be of Backup Clone pool type.
8. Verify that the target clone pool (for example, newclonepool) is properly specified or
selected:
For CLI clone operations, use the nsrclone -b newclonepool command (a command sketch follows this list).
For scheduled clone operations, in the Write clone data to pool attribute of the
Clone resource, select newclonepool.
For auto-clone operations for a group, in the Clone pool attribute of the Group
resource, select newclonepool.
For clones of entire volumes, Cloning by pools on page 46 provides details.
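For example, a CLI clone operation into this pool might look like the following sketch (hypothetical server name and save set ID; the -S option selects save sets by ssid, and the server is specified by its primary hostname, as noted above):

   nsrclone -s nwserver.example.com -b newclonepool -S 4294967297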
Cloning by pools
To copy save sets from Data Domain storage to another device, a special pool known as a "clone pool" must be specified. A clone pool must be assigned to a device on the target Data Domain system, where it will be available for use.
There are two main purposes for a clone pool:
To copy existing deduplicated VTL or CIFS/NFS AFTD save sets to a Data Domain
device.
To copy the existing save sets from one Data Domain device to another Data Domain
device, typically at a remote location for disaster recovery purposes.
Cloning of save sets in virtual tape libraries addresses these points:
Allows a single embedded storage node to clone from VTLs in either disk library engine.
Allows a single embedded storage node to clone from other virtual tapes if the other disk library engine is down.
Target tape library (PTL) connected to the same disk library engine
However, the embedded storage node cannot see the virtual tape library in the other disk library engine. To work around this limitation:
Engine A, which functions as a Fibre Channel initiator, is treated as a SAN client of engine B, which functions as a Fibre Channel target.
Engine B provides, to engine A, the same VTL that is used by the production node.
Each disk library engine becomes a SAN client of the other, just like any other SAN-connected NetWorker storage node. This requires each embedded storage node to have this capability.
In the previous scenarios, one or more virtual tape libraries were created within the disk
library and assigned to the NetWorker storage nodes. These virtual tape libraries can be
used by a single or multiple production storage nodes, or by the embedded storage node.
For the embedded storage node to access these virtual tape libraries for cloning
operations, the virtual tape libraries must also be assigned to the NetWorker storage node
which is the SAN client in the Disk Library Console program. This allows the embedded
storage node to:
Read from the virtual tapes that were created by the production storage nodes.
Write to either virtual or physical tape devices that are attached to the disk library or to
a second or remote storage node.
The NetWorker software allows a tape library to be shared by two or more storage nodes.
This can occur in two instances both of which are supported by the embedded storage
node cloning capability:
Dynamic Drive Sharing (DDS), where one or more of the tape drives in a tape library are shared by two or more storage nodes.
Without DDS, where the tape drives in a library are dedicated to individual storage nodes and are not shared.
Cloning node affinity for all disk library virtual tape libraries
By default, the NetWorker software determines which storage node will read a source
volume in a clone operation by first considering if the source volume is already mounted.
While this is an efficient choice for many situations, it is not preferred for environments
where clone operations are to be performed by the disk library embedded storage node.
NetWorker version 7.4 SP1 and later incorporates a feature where the mounted status of a
source volume is ignored when determining the read source for virtual tape libraries.
Note: Use this feature when performing any cloning operation that involves the disk library
embedded storage node. Note that this feature is applied to the NetWorker server, not the
embedded storage node that is running inside the disk library.
The NetWorker software also automatically activates a feature that ignores the mounted tape status when determining cloning node affinity for all disk library VTLs that have the virtual jukebox attribute set to Yes in the Jukebox resource. This same functionality is available, but not automatically enabled, for all other non-VTL jukeboxes.
To enable this feature on non-VTL jukeboxes on the NetWorker server:
1. Set the environment variable FORCE_REC_AFFINITY to Yes.
2. Restart the NetWorker processes.
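For example, on a UNIX NetWorker server the variable could be set in the environment from which the daemons are started (a sketch; paths and the startup mechanism vary by platform):

   FORCE_REC_AFFINITY=Yes
   export FORCE_REC_AFFINITY
   nsr_shutdown
   /etc/init.d/networker start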
The NetWorker software can clone from virtual tape in the disk library through a
production storage node to a SAN-attached tape library to produce copies of save
sets. This operation is a standard NetWorker cloning procedure.
For the disk library, a virtual tape drive works in conjunction with a SAN-attached
target tape device to complete the cloning process.
Cloning from a production storage node to a second storage node can also be
performed over IP.
IMPORTANT
Do not use a production storage node to perform cloning operations when the embedded
storage node cloning capability is present.
Advantages
The advantages of cloning data to physical tapes include the following:
Cloning can occur with the disk libraries under NetWorker control with standard
NetWorker policy support. Multiple retentions policies for different cloned copies of
data can be used.
Copying can occur from one tape type (virtual) to another tape type (target tape
library), also known as tape conversion.
Copying can occur from multiple virtual tapes to a single tape, also known as tape
stacking.
Disadvantages
The disadvantages of cloning data to physical tapes include the following:
Consumes SAN bandwidth, as data must be read from virtual tape over the SAN and written to a target device on the SAN.
CHAPTER 4
Cloning Procedures
This chapter includes the following sections:
Cloning data
Cloning options
Automated cloning
Schedule cloning
Volume cloning
Save set cloning
Scripted cloning
Cloning archived data
Considerations to improve cloning performance
Cloning validation
Displaying the backup versions in the GUI
Cloning data
NetWorker clone operations can be configured by using several different methods. Each method is suited to different environments and storage needs. You may need to use multiple or mixed cloning approaches to achieve the required control and flexibility.
Clone operations can be configured to be run by:
A schedule. Scheduled cloning combines the flexibility of using the nsrclone command and avoids some of the performance limitations that were often associated with the legacy automatic cloning method.
A customized script.
Automated clone operations. These are linked to regular backup group operations and are enabled through the Backup Group resource.
Cloning options
Table 2 on page 52 lists the cloning options and describes how and when they are typically used.
Table 2 Cloning options

Cloning option   Description
Automated        Runs immediately after the backup group completes; enabled through the Backup Group resource.
Scheduled        Runs at predetermined times; configured through the NMC Clone resource.
Volume           Clones all of the save sets that reside on a specified volume.
Save set         Clones individually selected save sets.
Scripted         Uses the nsrclone command within a customized script.
Automated cloning
Note: Use the scheduled cloning GUI instead of using the auto-clone option.
Save sets can be automatically cloned when the backup group that contains them is
completed. Because the cloning occurs immediately after the group completes, this clone
method is suitable for smaller environments, or a small number of clients, where the clone
operations need to be completed quickly and immediately within the backup window.
Unlike scheduled cloning, automated cloning runs immediately after the backup group
completes. This ensures that the backup data is cloned as quickly as possible. However, it
also means that the cloning operation is likely to interfere with the backup window and
might vary in start and end times.
Configuring auto-clone
To configure auto-cloning:
1. In the NetWorker Administration window, select Configuration.
2. Create a Group resource and then select Properties.
3. Specify the Clones option.
4. Select the clone pool to which the clone data will be directed.
The EMC NetWorker Administration Guide provides details on creating a clone pool.
Figure 7 on page 54 displays the auto-clone action. Once the backup of the three save
sets has completed, the clone of the save sets automatically starts. This action provides
two copies of the backup on completion of the savegroup.
A savegroup that has the auto-clone attribute enabled starts a cloning session after the backup is complete. If the savegroup is aborted or stopped after the backup is complete, the auto-clone session does not take place and the following occurs:
A message appears in the logs to indicate that the save set cloning session failed.
Because the group is marked as successful in the NMC, the restart option is not enabled on the savegroup.
To start the savegroup again, select Start on the savegroup in NMC. The backup session begins again with auto-clone enabled.
Schedule cloning
NetWorker scheduled clone operations can be configured and run in NMC according to a
schedule for predetermined clients, pools, save sets, and devices.
This method is suitable for environments where copies of save sets need to be provided regularly, typically as part of a well-defined maintenance cloning window that runs independently of the main backup operation.
Figure 8 on page 55 shows the schedule pane for a clone session.
9. Use the Pool attribute to ensure that only certain media types are used to hold clone
data. Pools direct backups to specific media volumes.
For example, to ensure that this clone session replicates only to:
A certain type of disk, such as a Data Domain type disk, select a clone pool that
uses only Data Domain type disks.
Tape (tape out), select a clone pool that uses only tape devices.
10. Select Continue on save set error to force the NetWorker software to skip invalid save
sets and to continue the clone operation.
If this option is not selected (default setting), an error message results and the clone
operation stops if an invalid save set or invalid volume identifier is encountered.
11. To restrict the number of clone instances that can be created for any save set that is
included in the particular scheduled clone operation:
a. Type a value in the Limit number of save set clones attribute.
A value of zero (0) means that an unlimited number of clones might be created for this scheduled clone operation. The NetWorker software allows only one copy of a save set on any given volume, and only one clone is created for each run of a scheduled clone operation.
b. Consider limiting the number of save set clones in cases where the clone operation
has not completed and is being retried.
For example, if you type a value of 1 in this attribute and then retry a partially
completed clone operation, only the save sets that were not successfully cloned
the first time will be eligible for cloning. In this way, unnecessary clone instances
will not be created.
Regardless of the value in this attribute, the NetWorker software always limits the
number of save set clone instances to one per volume. A clone pool can have
multiple volumes. This attribute limits the number of save set clone instances that
can be created for a clone pool in a particular scheduled clone operation.
12. Select Enable to allow the clone session to run at its scheduled time.
13. In the Start Time attribute, perform either of the following:
Click the up and down arrows to select the time to start the clone session.
or
Type the time directly into the attribute fields.
14. From the Schedule Period attribute:
a. Select Weekly by day or Monthly by day depending on how you want to schedule
the clone session.
b. Select the days of the week or month on which the scheduled clone is to occur.
15. To repeat the clone session within a day, specify an Interval time in hours.
For example, if the start time is 6 a.m., and the interval is 6 hours, then the clone
session will run at 6 a.m., 12 p.m., and 6 p.m.
16. If the Limit the number of save set clones value is set, then the repeat clone session
skips those save sets in the pool for which the specified number of clones already
exists.
17. Click the Save Set Filters tab to specify the save sets to be included in this scheduled
clone session.
To limit save sets by various filter criteria, perform either of the following:
Select Clone save sets that match selections, to filter the save sets by the criteria that you specify.
or
Select Clone specific save sets, to explicitly identify the save sets to be cloned.
18. Click OK to save the scheduled clone session.
Displaying a list of the save sets that will be cloned based on the filter criteria
To display a list of the save sets that will be cloned based on the filter criteria that you
specified, select Preview Save Set Selection. The filter criteria include the following:
Groups (savegroups)
Filter save sets by name (save set name as specified in the Client resource)
Include save sets from the previous (save sets from the past number of days, weeks,
months, or years)
Completion status of each save set that is included in the scheduled clone
Volume cloning
Volume cloning is the process of reproducing complete save sets from a storage volume to
a clone volume. You can clone save set data from backup or archive volumes.
Volume cloning uses nsrclone with the volume name as an argument. However, cloning
in the NetWorker software operates at the save set level and does not duplicate a tape
volume exactly. This might result in multiple volumes being used.
The volume cloning process is as follows:
1. The operation instructs nsrclone to clone all of the save sets that exist on a particular
volume.
2. When volumes are cloned, a list of all of the save sets that reside on them is created,
and these save sets are then cloned in turn.
3. Save sets that begin on a specified volume (header sections of continued save sets)
are copied completely:
Volumes may be requested during the cloning operation in addition to those
specified on the command line.
Save sets that reside on the specified volumes but begin elsewhere (middle
sections or tail sections of continued save sets) are not cloned.
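For example, a minimal command sketch, assuming an illustrative volume name mars.001
and the Default Clone pool:

nsrclone -b "Default Clone" mars.001

This clones every save set that begins on volume mars.001 to the Default Clone pool,
requesting any additional volumes that hold the remainder of continued save sets.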
Note: A long date range might result in too many selected save sets. This can increase
response time or even require that you close and reopen the browser connection to
the NetWorker Console.
7. Use the Status attribute to limit the search to save sets that have a particular status.
The values that can be selected include the following:
All
Browsable
Recyclable
Scanned-in
Recoverable
Suspect
8. Use the Maximum Level attribute to limit the search to save sets of a particular backup
level.
The level All is specified by default. All the levels up to and including the selected level
are displayed. For example:
If you select level 5, save sets backed up at levels full, 1, 2, 3, 4, and 5 are
displayed.
If you select level Full, only those save sets backed up at level full are displayed.
If you select All, save sets for all levels are displayed.
9. Click the Save Set List tab. The save sets that fit the criteria appear in the Save Sets
list.
10. From the Save Set list, select the save sets to clone.
11. From the Media menu, select Clone.
12. From the Target Clone Media Pool list, select a clone pool.
13. Click OK, then click Yes on the confirmation screen.
Scripted cloning
As of NetWorker 7.6 SP1, most of the functionality provided by the nsrclone.exe command
is available in the NMC Clone resource user interface.
However, in some situations the use of the nsrclone.exe command within a script can
still have advantages. For example, a scripted cloning solution could be used for any of
the following scenarios (a minimal script sketch follows the list):
To control the conditions before cloning occurs. For example, following a specific
event or test, or as part of a workflow.
To control the actions after cloning has been successful. For example, deleting files, or
moving data as part of a workflow.
To create multiple clones. For example, clone 1 on disk, clone 2 to tape, each with
specific dependencies, timing, and logic.
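The following minimal shell sketch illustrates the multiple-clone scenario. The group
name, pool names, and time range are illustrative assumptions, not defaults:

#!/bin/sh
# Sketch: clone the last day of save sets for one group, first to a disk
# clone pool, then to tape. Assumed names: Accounting, DiskClone, TapeClone.
GROUP=Accounting
# First copy to disk; stop if this clone fails.
nsrclone -S -g "$GROUP" -t yesterday -e now -b "DiskClone" || exit 1
# The disk copy succeeded; now make the tape copy.
nsrclone -S -g "$GROUP" -t yesterday -e now -b "TapeClone"

The -t and -e options select save sets by save time, and -b directs each copy to a
different clone pool.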
Note: When using the scripted cloning feature, use the latest versions of NetWorker
software. This will minimize the complexity of the logic in the cloning script.
The following nsrclone options are useful for selecting save sets in cloning scripts:
-C x
Specifies the upper non-inclusive integer limit such that only save sets with a lesser number
of clone copies in the target clone pool are considered for cloning. This option is useful when
retrying aborted clone operations. Because the target is a clone pool, each save set's
original copy or clone is never considered when counting the number of copies of the save
set. Likewise, any AFTD read-only mirror clone is not considered, because its read/write
master clone is counted and there is only one physical clone copy between the related clone
pair. Recyclable, aborted, incomplete, and unusable save sets or clones are excluded from
the counting. This option can be used only with the -t or -e option.
-l level or range
Specifies the level or the n1-n2 integer range (0 to 9) for save sets that are considered for
cloning. You can use manual for ad hoc or client-initiated save sets, full for level full save
sets, incr for level incremental save sets, and the integers 0 through 9, where 0 also means
full. More than one level can be specified by using multiple -l options and the -l n1-n2
range format. This option can be used only with the -t or -e option.
-N save set name
Specifies the save set name for save sets that are considered for cloning. More than one
save set name can be specified by using multiple -N options. This option can be used only
with the -t or -e option.
-c client name
Specifies the save sets in the particular client. More than one client name can be specified
by using multiple -c options. This option can be used only with the -t or -e option.
-g group name
Specifies the save sets in the particular group. More than one group name can be specified
by using multiple -g options. This option can be used only with the -t or -e option.
2. Copy all save sets that were not copied to the default clone pool in a prior partially
aborted nsrclone session:
nsrclone -S -e now -C 1
3. Copy all save sets that were not copied to the default clone pool in a previous partially
aborted nsrclone session and with extended retention and browse periods:
nsrclone -S -e now -C 1 -y 12/12/2010 -w 12/12/2009
Use the nsrclone command with the -y option when creating a clone save set.
Specify a retention policy for an existing clone save set by using the nsrmm -e
command.
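For example, a hedged sketch, where the SSID/cloneid pair and the date are illustrative
assumptions:

nsrmm -e "12/31/2013" -S 4278951844/1292314893

This resets the retention time of that specific clone instance. Use mminfo to obtain the
ssid/cloneid pair for the clone.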
For a given group, the NetWorker software runs only one clone at a time, regardless of
the parallelism setting.
Other groups that have auto-clone configured are able to run in parallel. However, they
will also run only one clone at a time, assuming that there is no contention for
volumes or devices.
Cloning validation
Clone data does not require validation because the data is read from the source in its
native and self-describing form and then it is written to the target. The action of creating a
clone validates the ability to read the source data from the media. Therefore, subsequent
clone operations based on the clone will also be validated as further copies are created.
If follow-on actions are expected after a clone operation, then some form of validation is
advisable. This is especially important if the follow-on action is destructive or
irreversible, such as the deletion of the source data through expiration or relabeling.
For individual save sets, use the mminfo command to check that the clone save set is valid
and not in an aborted or error state.
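For example, a minimal sketch, where the SSID is an illustrative assumption:

mminfo -q "ssid=4278951844" -r "ssid,cloneid,volume,sumflags,ssflags"

In the output, an a in the sumflags column indicates an aborted copy and a c indicates a
complete copy.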
Additional clone copies can also be used to:
Although a check of individual save sets may help confirm a successful clone operation, it
does not confirm that recovery is possible:
Always ensure that all save sets have been identified and cloned successfully.
Application-based backups are a particular example where multiple save sets may be
required.
If the application object is present in the most recent backup, you can view versions
for that application object. The versions are not cached, so a newly scanned version
should be detected if present:
a. From the View Versions pane, look for the savetime that the data was scanned.
b. If the savetime is found, choose this savetime as the new browse time to proceed.
c. Use the Change Browse Time attribute to set the time slightly ahead of the most
recent save set that was scanned.
If the scanned backup version does not appear in the NetWorker User program,
validate the rollover save set.
CHAPTER 5
Recovering Data from Clones
This chapter includes the following sections:
Clones recovery....................................................................... 68
Recovery scenarios ................................................................... 68
Required save sets and volumes for recovery of cloned data ........................... 70
Recovery tasks ....................................................................... 75
Clones recovery
When using cloning, ensure that you can recover the cloned save sets for all of the
recovery scenarios that are expected to occur. These recovery scenarios and the steps to
recover the cloned save sets are likely to be specific to the situation. Recovery scenarios
on page 68 provides details.
To ensure that cloned data can be recovered:
Verify that all relevant recovery scenarios have been accounted for as described in
Recovery scenarios on page 68. For example, if you expect to rely on the clone copy
for recovery, then you must ensure that the recovered save sets come from the clone
copy and not from the original volume. This is important for situations where both or
all copies are available, as well as when the original is not. Selecting clone volumes
to recover data from on page 74 provides details.
Ensure that all the required save sets and volumes are available for recovery.
Required save sets and volumes for recovery of cloned data on page 70 provides
detailed information.
Ensure that recovery procedures are in place and have been regularly tested as
described in Recovery tasks on page 75.
Recovery scenarios
When a recovery operation is initiated, there are two common assumptions about the
recovery operation:
That the recovery will be performed within a short period of time after the backup
(hours or days).
That the recovery will use the original data volumes and that the backup server will be
fully operational. You can use the standard NetWorker recovery procedure because
the backups are present in both the client file indexes and the media database.
Required save sets and volumes for recovery of cloned data on page 70 provides
details.
However, if the recovery operation occurs after the NetWorker browse or retention periods
have expired or following a site or building loss, then the volumes may not be readily
available and additional actions might be required. Table 4 on page 69 details the
recovery scenarios and necessary actions.
The recovery scenario descriptions in Table 4 include the following:
The backups are present and browsable in both the client file
indexes and in the media database.
If the volume has been recycled, then the media entries have
been purged. In this case the data is no longer available to recover
from, and alternative recovery sources will need to be used.
If the clone volumes do not contain all of the necessary data for
the recovery, then the number of available recovery options
might be limited.
Data might be able to be recovered, if:
The original bootstrap (media and client index) information is
available.
The original volumes still exist and can be used for recovery.
A recovery of that data might not be possible in cases where:
No bootstrap backups exist for the period of time where the
recovery is requested.
The original data volumes are missing or have been recycled.
Restoring recoverable save sets to the client file index on page 77 and Restoring save
sets that are not in the media database on page 81 provide detailed information.
When clone volumes are being used, ensure that all the clone save sets are available for
recovery. Selecting clone volumes to recover data from on page 74 provides details.
1. To generate a list of the required save sets, use a command of the following form:
mminfo -S -s NW_server_name -c NW_client_name -q
"group=group_name,savetime>date1,savetime<date2" > output.txt
where:
NW_server_name is the name of the NetWorker server host.
NW_client_name is the name of the NetWorker client host.
group_name is the name of the group which contained the NetWorker client when
the backup occurred.
date1 is at least one day before the date range of the NetWorker clone to be
restored.
date2 is at least one day after the date range of the NetWorker clone to be restored.
For example, to list the save set details that reside on a NetWorker server called
krkr-pdc.krkr.local, for an NMM client named krkr8x64.krkr.local in a group called grupa2,
between Dec 14 13:48:00 2010 and Dec 15 13:57:00 2010, use the command:
mminfo -S -s krkr-pdc.krkr.local -c krkr8x64.krkr.local -q
"group=grupa2,savetime>12/14/2010 13:48:00,savetime<12/15/2010 13:57:00" > output.txt
2. Edit the output.txt file, which resides in the same directory where the mminfo
command was run.
If the output file contains the following message, the media database does not
contain the NetWorker save sets for the client or query options specified:
mminfo: no matches found for the query
An r in the ssflags output denotes that a save set is recoverable and that it has
exceeded its defined browse policy.
An E in the ssflags output denotes a save set that is eligible for recycling and that it
has exceeded its defined retention policy. This is also referred to as an expired save
set.
In the case of incremental or differential save sets, the ssflags value will contain an E only
when all dependent incremental, differential, or full backups have also exceeded their
defined retention policy period.
When all save sets on a volume are eligible for recycling, the volume can be overwritten.
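To inspect these flags, a query sketch such as the following can be used, where the
client name is an illustrative assumption:

mminfo -avot -q "client=client_name" -r "ssid,cloneid,ssflags,sumflags,name"

Save sets that show r in ssflags are recoverable; those that show E are eligible for
recycling.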
If you know the list of required save sets that are to be recovered, perform the
steps outlined in Using the backup time to list all of the save sets on page 72.
If you do not know all of the required save sets that are to be recovered, perform the
steps outlined in Using savetime to determine the full set of save sets to recover on
page 73.
To determine the full set of save sets to recover, use a command of the following form:
mminfo -ot -q "group=group_name,savetime>date1,savetime<date2" > output.txt
where:
group_name is the name of the group which contained the NetWorker client when the
backup occurred.
date1 is at least one day before the date range of the NetWorker clone to be restored.
date2 is at least one day after the date range of the NetWorker clone to be restored.
This query will return all of the save sets for the group in the time range specified. The -ot
flag sorts the save sets by time and the information is stored in a file called output.txt.
This file resides in the same directory from which the mminfo command was run.
Note: If you experience restore issues using this easier method, use the procedures in
Generating a media database listing of all of the save sets on page 71 to validate the
output.
For example, use the following command to restore a backup that occurred on
4/28/2010:
mminfo -S -s bv-nwsvr-1 -c bv-accounting-1 -q
"group=BV-accounting-1_Group,savetime>4/27/2010,savetime<4/29/2010"
where:
bv-accounting-1 is the NetWorker client.
bv-nwsvr-1 is the NetWorker server.
BV-accounting-1_Group is the group.
4. Identify the most recent full backup for the filesystem save set in the mminfo report.
The full backup can be identified as a rollover save set, which has:
The save set name of the filesystem.
No K in the ssflags of the save set.
5. Obtain the *snap_sessionid from the full backup.
The volumes required for recovery appear in the Required Volumes window of the
NetWorker User program. The EMC NetWorker Administration Guide provides information
on viewing volumes that are required for data recovery.
You can also run the scanner program on a clone volume to rebuild entries in the client file
index, the media database, or both. After you re-create the entries, normal recovery is
available. The EMC NetWorker Administration Guide provides information on restoring a
save set entry in the online indexes.
Recovery tasks
This section discusses recovery from cloned save sets to help you identify what save sets
are required, and ensure that these save sets are in a state that can be used for recovery:
Restoring cloned data that is browsable in the client file index on page 75
Restoring save sets that are not in the media database on page 81
For each backup, repeat the tasks listed in this section until all of the incremental
backups that occurred after the full backups have been recorded.
To restore multiple clients, repeat the recovery tasks for each client.
1. For each save set, reset the browse and retention times with a command of the
following form:
nsrmm -e time1 -w time2 -S SSID/cloneid
where:
time1 is the required retention time.
time2 is the required browse time.
SSID is the save set value recorded for each save set from the output of the mminfo
command.
If the cloneid is not identified with the -S option, the following error message appears:
Save set ssid cannot be marked as notrecyclable. Please specify the
ssid/cloneid of the particular clone instance.
2. For each save set, use its associated SSID and cloneid that is recorded in the List
Required Save sets section to reset the save set to expired/recoverable:
nsrmm -o notrecyclable -S SSID/cloneid
3. Repopulate the client file index with the save set information:
nsrck -L 7 -t date client 1>nsrck.txt 2>&1
where:
date is a date after the completion of the latest save set that will be restored.
client is the name of the NetWorker client.
Note: Ensure that the volume containing the index backup is available for mounting.
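For example, using the client name krkr8x64 that appears elsewhere in this guide and an
illustrative date:

nsrck -L 7 -t "12/16/2010" krkr8x64 1>nsrck.txt 2>&1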
4. Review the output in nsrck.txt for errors once the command has completed:
If the following messages are reported, type the following command:
nsrck -L 2 client
File attribute messages such as the following will not impact the restore and can be
safely ignored:
32222:uasm: Warning: Some file attributes were not recovered:
C:\Program
Files\Legato\nsr\index\clientname\db6\tmprecov\C\Program
Files\Legato\nsr\index\clientname\db6\
If the nsrck command fails with the error "xxxxx", the index backup might no longer
be referenced in the media database.
Use the following command to scan all SSIDs recorded for each save set:
scanner -i -S SSID device
where:
SSID is the save set id of the save set that will be restored.
device is the device containing the volume for the save set to be restored.
5. Ensure that the NetWorker User program is closed on the NMM clients before running
the scanner command. If the program is open while scanner is run, the scanner
command may fail with the following errors:
For NetWorker 7.6.1 and earlier:
"Index error, flush Failed"
6. For each save set, modify the browse times of the existing save sets if the browse
and retention times set by scanner do not allow enough time to complete the
recovery procedures:
nsrmm -s NetWorker_server_name -w time2 -S SSID
where:
NetWorker_server_name is the name of the NetWorker server.
time2 is the new browse time.
SSID is the save set value recorded for each save set.
7. Ensure that the new browse dates for the save sets are far enough in the future to
allow sufficient time for the restore to complete.
8. Restore the data. The EMC NetWorker Administration Guide provides detailed
information.
1. For each save set, reset the retention time with a command of the following form:
nsrmm -s NetWorker_server_name -e time1 -S SSID
where:
NetWorker_server_name is the name of the NetWorker server.
time1 is the new retention time.
SSID is the save set value recorded for each save set.
Note: Ensure that the new browse and retention dates for the save sets are far enough
in the future to allow sufficient time for the restore operation to complete.
2. Repopulate the client file index on the NetWorker server with the save set information:
nsrck -L 7 -t date client 1>nsrck.txt 2>&1
where:
date is a date after the completion of the latest save set that will be restored.
client is the name of the NetWorker client.
Note: Ensure that the volume containing the index backup is available for mounting.
3. Review the output in nsrck.txt for errors once the command has completed.
Consider the following:
If the following messages are reported, run the following command:
nsrck -L 2 client
File attribute messages such as the following will not impact the NetWorker restore
and can be safely ignored:
32222:uasm: Warning: Some file attributes were not recovered:
C:\Program
Files\Legato\nsr\index\clientname\db6\tmprecov\C\Program
Files\Legato\nsr\index\clientname\db6\
If the nsrck command fails with the error "xxxxx", the index backup might no longer
be referenced in the media database. Use the following command to scan all SSIDs
recorded for each save set:
scanner -i -S SSID device
where:
SSID is the save set id of the save set that will be restored.
device is the device containing the volume for the save set to be restored.
4. Ensure that the NetWorker User program is closed on the NMM clients before running
the scanner command. If the program is open while scanner is run, the scanner
command may fail with the following errors:
For NetWorker 7.6.1 and earlier:
"Index error, flush Failed"
5. For each save set, modify the browse times of the existing save sets if the browse
and retention times set by scanner do not allow enough time to complete the
recovery procedures:
nsrmm -s NetWorker_server_name -w time2 -S SSID
where:
NetWorker_server_name is the name of the NetWorker server.
time2 is the new desired browse time.
SSID is the save set value recorded for each save set.
Note: Ensure that the new browse dates for the save sets are far enough in the future
to allow sufficient time for the restore to complete.
6. Restore the data. The EMC NetWorker Administration Guide provides detailed
information.
1. For each save set, reset the browse and retention times with a command of the
following form:
nsrmm -e time1 -w time2 -S SSID/cloneid
where:
time1 is the required retention time.
time2 is the required browse time.
SSID is the save set value recorded for each save set from the output of the mminfo
command.
If the cloneid is not identified with the -S option, the following error message appears:
Save set ssid cannot be marked as notrecyclable. Please specify the
ssid/cloneid of the particular clone instance.
2. For each save set, use its associated SSID and cloneid that is recorded in the List
Required Save sets section to reset the save set to expired/recoverable:
nsrmm -o notrecyclable -S SSID/cloneid
3. Repopulate the client file index with the save set information:
nsrck -L 7 -t date client 1>nsrck.txt 2>&1
where:
date is a date after the completion of the latest save set that will be restored.
client is the name of the NetWorker client.
Note: Ensure that the volume containing the index backup is available for mounting.
4. Review the output in nsrck.txt for errors once the command has completed:
If the following messages are reported, type the following command:
nsrck -L 2 client
File attribute messages such as the following will not impact the NetWorker restore
and can be safely ignored:
32222:uasm: Warning: Some file attributes were not recovered:
C:\Program
Files\Legato\nsr\index\clientname\db6\tmprecov\C\Program
Files\Legato\nsr\index\clientname\db6\
If the nsrck command fails with the error "xxxxx", the index backup might no longer
be referenced in the media database. Use the following command to scan all SSIDs
recorded for the save sets:
scanner -i -S SSID device
where:
SSID is the save set id of the save set that will be restored.
device is the device containing the volume for the save set to be restored.
5. Ensure that the NetWorker User program is closed on the NMM clients before running
the scanner command. If the program is open while scanner is run, the scanner
command may fail with the following errors:
For NetWorker 7.6.1 and earlier:
"Index error, flush Failed"
6. Modify the browse times of the existing save sets if the browse and retention times
set by scanner do not allow enough time to complete the recovery procedures:
nsrmm -s NetWorker_server_name -w time2 -S SSID
where:
NetWorker_server_name is the name of the NetWorker server.
time2 is the new browse time.
SSID is the save set value recorded for each save set.
Note: Ensure that the new browse dates for the save sets are far enough in the future
to allow sufficient time for the restore to complete.
7. Restore the data. The EMC NetWorker Administration Guide provides detailed
information.
Task 1: Identify the clone volumes that are required for scanning on page 81
Task 3: Recover the clone save sets that do not exist in the media database on
page 82
Task 5: Scan the required save sets into the media database and the client file index
on page 84
Task 6: Validate that the save sets are in the client file index on page 85
Task 7: Generate a media database listing of all of the save sets on page 85
Task 1: Identify the clone volumes that are required for scanning
The scanning procedure is used to rebuild index and media database entries:
When restoring from a full backup, the volumes from the date of the full backup are
required to recover the data.
When restoring from an incremental backup, the volumes from the day of the
incremental backup back to the most recent full backup are required to recover the
data.
Selecting clone volumes to recover data from on page 74 provides information on how
to ensure that the recovery comes from the clone copy and not the original in situations
where both or all of the copies are available.
Note: If other volumes are required to be scanned, review Selecting clone volumes to
recover data from on page 74 to identify what save sets are missing so that the
additional volumes can be retrieved.
Task 3: Recover the clone save sets that do not exist in the media database
If the NetWorker clone save sets that are required for a restore operation are no longer in
the media database, you must scan the clone volumes to regenerate the media and index
database for these save sets. You can use the scanner command to scan the volumes.
To scan the required volume:
1. Mount the volume containing the clone save sets into the drive.
Note: If the volume itself is no longer in the NetWorker media database, choose the
option load without mount while loading the tape.
2. From a command prompt on the NetWorker server, generate a report of the save sets
on the clone volume. Use the following command:
scanner -v device 1>scanner_output.txt 2>&1
or
scanner -v \\.\Tape0 1>scanner_output.txt 2>&1
3. Ensure that the NetWorker User program is closed on the NMM clients before running
the scanner command. If the program is open while scanner is run, the scanner
command may fail with the following errors:
For NetWorker 7.6.1 and earlier:
"Index error, flush Failed"
4. Open the scanner_output.txt file, which resides in the same directory the scanner
command was run from.
5. If the scanner_output.txt file displays only the following message:
scanner: SYSTEM error: Cannot stat <device_name>: No such file or
directory
a. Check the device name specified in the scanner command for errors.
b. Retry the scanner command with the correct device name.
Task 5: Scan the required save sets into the media database and the client file index
Depending on your IT procedures and urgency of the restore request, you might choose to
scan individual save sets from the clone volumes. Scanning should be run to regenerate
both the media database and client file index entries.
Consider:
It is not possible to specify the scanning order when save sets are supplied through
the -S parameter to scanner.
The end-to-end process of recovering from scanned clones might take several days, so
resetting the browse and retention times to a sufficient point-in-time in the future will
help to ensure that the scanned save sets do not prematurely expire before you are
finished restoring the data.
To scan the required save sets into the media database and the client file index:
1. Use the following command to scan the save sets:
scanner -i -S SSID device 1>scanneri.txt 2>&1
where:
SSID is the SSID recorded for save set.
device is the device with media that contains the save set.
2. Ensure that the NetWorker User program is closed on the NMM clients before running
the scanner command. If the program is open while scanner is run, the scanner
command may fail with the following errors:
For NetWorker 7.6.1 and earlier:
"Index error, flush Failed"
IMPORTANT
It is critical that the cover save sets be scanned first.
3. Review the output of the scanneri.txt file for errors.
Task 6: Validate that the save sets are in the client file index
For each save set that was scanned, you can use the nsrinfo command to validate that the
data has been repopulated in the client file index.
To validate that the save sets are in the client file index:
1. During the inspection of the scanner output, review the savetime recorded for the save
sets:
2. Run the nsrinfo command against each savetime to confirm that the client file index
was populated with the necessary save set details:
nsrinfo -t exact_savetime client
where:
exact_savetime is the savetime recorded from the scanner output.
client is the name of the NetWorker client.
For example:
nsrinfo -t 1292314893 krkr8x64
scanning client `krkr8x64' for savetime 1292314893(14.12.2010
09:21:33) from the backup namespace
C:\LG_PLACEHOLDER_1492021383
1 objects found
3. Repeat the nsrinfo command for all of the recorded savetimes to confirm that the
client file index was populated with the necessary save set details.
CHAPTER 6
Staging
This chapter includes the following sections:
Staging overview...................................................................... 88
The destination....................................................................... 89
Working with staging policies......................................................... 89
Staging from the NetWorker Management Console ........................................ 92
Staging from the command line......................................................... 93
Staging overview
NetWorker staging is a separate process but relies on the cloning mechanism.
Save set staging is the process of transferring data from one storage medium to another
medium, and then removing the data from its original location. For example, the initial
backup data can be directed to a high performance file type or advanced file type device.
In this way, the backup time is reduced by taking advantage of a file or advanced file type
device. At a later time, outside of the regular backup period, the data can be moved to a
less expensive but more permanent storage medium, such as magnetic tape. After the
backup data is moved, the initial backup data can be deleted from the file or advanced file
type device so that sufficient disk space is available for the next backup.
Staging example
In Figure 9 on page 88, the staging action results in the deletion of the original save
sets on volume A1 once they have been successfully staged (cloned) to volume B1.
The Xs indicate that after a successful clone copy completes, the original save sets
are deleted. This is the difference between a clone and a stage operation: the save sets
appear to move from one storage volume to another. The resulting save set is identical
to the original, but in a different location.
Figure 9 Staging
The destination
A save set can be staged from one disk to another as many times as required. For
example, a save set could be staged from disk 1, to disk 2, to disk 3, and finally to a
remote tape device or cloud device. Once the save set is staged to a tape or cloud device,
it cannot be staged again. However, you can still clone the tape or cloud volume.
Staging can be driven by any of the following:
Calendar-based process, such as keeping the save set for 30 days on the staging
device before moving the data to the next device.
Event-based process, such as when available space in the staging pool drops below a
set threshold. When this happens, the oldest save sets are moved until available
space reaches a preset upper threshold.
Staging does not affect the retention policy of backup data. Therefore, staged data is still
available for recovery.
When the stage process encounters an error after successfully cloning specified save sets,
it deletes only those successful save sets from the source volume before the program is
aborted. This ensures that after staging only a single set of save sets exists in either the
source volumes or clone volumes.
The EMC NetWorker Administration Guide provides information on file type device (FTD)
and advanced file type device (AFTD) configuration.
7. In the Devices attribute, select the file type and adv_file type devices as the source
devices for staging.
Note: The adv_file device and its corresponding _AF_readonly device will both be
selected automatically, even if only one device was selected as the source of staging.
You can assign multiple devices to the staging policy, but a given device cannot be
controlled by more than one staging policy.
8. For the Destination Pool attribute, select the destination pool for the staged data.
Note: The Default volume can only be staged to the Default or Default Clone pool.
Similarly, the Default Clone volume can only be staged to the Default or Default Clone
pool and Archive data can only be staged to the Archive Clone pool. The other volume
types can be staged to any pool. If the Clone pool that you have selected is restricted
to storage node devices, you will also need to modify the Clone Storage Node attribute.
9. In the High-Water Mark (%) attribute, type or select a number.
This value is the point at which save sets should be staged, measured as the
percentage of space used on the filesystem partition that the file device is on.
Staging continues until the low-water mark is reached (see step 10).
Note: The high-water mark must be greater than the low-water mark.
10. In the Low-Water Mark (%) attribute, type or select a number. This is the point at which
the staging process stops, measured as the percentage of space used on the
filesystem partition that the file device is on. For example, with a high-water mark of
90 and a low-water mark of 70, staging begins when the partition is 90 percent full
and continues until usage drops to 70 percent.
11. From the Save Set Selection attribute, select from the list to determine the save set
selection criteria for staging.
12. In the Max Storage Period attribute, type the number of hours or days for a save set to
be in a volume before it is staged to a different storage medium.
Note: The Max Storage Period attribute is used in conjunction with the filesystem
Check Interval attribute. Once the Max Storage Period value is reached, staging does
not begin until the next filesystem check.
13. In the Max Storage Period Unit attribute, select Hours or Days.
14. In the Recover Space Interval attribute, type the number of minutes or hours between
recover space operations for save sets with no entries in the media database from file
or advanced file type devices.
15. In the Recover Space Interval Unit attribute, select Minutes or Hours.
16. In the File System Check Interval attribute, type the number of minutes or hours
between filesystem checks.
Note: At every File System Check interval, if either the High-Water Mark or Max Storage
Period has been reached, a staging operation is initiated.
17. In the File System Check Interval Unit attribute, select Minutes or Hours.
18. To invoke the staging policy immediately, complete this step. Otherwise, skip this
step:
a. Click the Operations tab.
b. In the Start Now attribute, select one of these operations:
Recover space: Recovers space for save sets that have no entries in the media
database and deletes all recycled save sets.
Check file system: Checks the filesystem and stages data, if necessary.
Stage all save sets: Stages all save sets to the destination pool.
The selected operation applies to all devices associated with this policy.
Note: The choice you make takes effect immediately after clicking OK. After the staging
operation is complete, this attribute returns to the default setting (blank).
19. When all the staging attributes are configured, click OK.
The EMC NetWorker Command Reference Guide and the UNIX man pages provide
information about the nsrstage and mminfo commands.
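For example, a minimal command-line sketch, where the volume name, pool, and SSID
are illustrative assumptions:

mminfo -a -r "ssid" -q "volume=aftd.001"
nsrstage -b "Default Clone" -m -S 4278951844

The first command lists the save sets on the source volume; the second stages one of
those save sets to the destination pool and removes it from the source volume.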
CHAPTER 7
NetWorker Module for Databases and
Applications
This chapter includes the following sections:
All subsequent level 1 incremental backups that are dependent on the level 0 backup.
The EMC NetWorker Module for Databases and Applications Administration Guide
provides details on NMDA support of full and incremental Oracle backups.
IMPORTANT
NMDA does not support save set bundling for regular manual backups or EMC PowerSnap
snapshot backups. NMDA performs save set bundling for regular scheduled Oracle
backups only.
Save set bundling automatically enables the following for Oracle:
Improved staging
Oracle-aware staging causes NMDA Oracle save sets that have a dependency on each
other to be staged together:
During automatic staging, if the staging criteria determine that a particular NMDA
save set should be staged and the save set is part of a save set bundle, the
NetWorker server stages the entire save set bundle.
During manual staging with the nsrstage command, if one or more save sets being
staged are from a save set bundle, all the save sets in the bundle are staged.
Policy uniformity
Policy uniformity is enabled automatically whenever you enable save set bundling. If
you do not want to use save set bundling, you can still enable policy uniformity
separately. NMDA policy uniformity on page 99 provides more details.
Note: After a staging operation during which all the save sets in a bundle are staged, the
resulting available space on the staging device might exceed the low-water mark
specified in the staging policy.
The EMC NetWorker Administration Guide provides details on how to work with staging
policies and perform automatic and manual staging operations through the NetWorker
server.
The EMC NetWorker Module for Databases and Applications Administration Guide
provides information on how to configure save set bundling for NMDA scheduled backups.
If an error occurs during save set bundling, the bundling operation fails but the scheduled
backup can finish successfully. Information about the bundling failure is printed to the
savegrp output and to the NMDA debug file.
The NetWorker server cannot simultaneously stage all the save sets from a save set
bundle if some of the save sets were backed up to separate volumes. The server
simultaneously stages save sets only if they are located on the same staging volume.
Example 3 on page 98 provides more information.
To ensure the proper staging of all the save sets from a save set bundle, do not split
the backup between different staging volumes. If required, split the backup into
different backup cycles, with each cycle going to a separate volume.
NetWorker staging policies must not cause the save sets of an NMDA backup cycle to
be staged before the cycle is complete. For example, if a 1-week NMDA cycle starts on
Sunday, the staging policy must not cause the partially complete save set bundle to
be staged before the final backup of the cycle occurs on Saturday.
To prevent a staging operation from splitting an NMDA backup cycle, adjust the
NetWorker staging policy accordingly. For example, adjust the policy so that older save
sets are staged before new ones, or adjust the high-water and low-water marks.
The EMC NetWorker Administration Guide provides details on how to work with staging
policies and perform automatic and manual staging operations through the NetWorker
server.
The nsrdasv program connects to the Oracle database by attempting to use the login
and password from the RMAN script.
If a login and password are not available from the script, the program uses the
ORACLE_SID value from the NMDA configuration file to search the nwora.res file for
the NSR_ORACLE_CONNECT_FILE parameter, and uses the connection strings from the
specified connection file.
After connecting to the Oracle database, the nsrdasv program obtains all the required
information about the backups by using the V$ views. The EMC NetWorker Module for
Databases and Applications Administration Guide provides more details on the
nwora.res file and the requirements of save set bundling.
The nsrdasv program creates a save set bundle for each incremental level 0 backup.
The program adds the save sets from subsequent incremental backups to the bundles
of the level 0 backups they are dependent on. Example 1 on page 98 and Example 2
on page 98 illustrate different scenarios for how the save set bundle is formed.
The name that the nsrdasv program assigns to a save set bundle is the save time of
the oldest save set in the bundle.
After a scheduled backup, the NetWorker server stores the save set bundle name and
the list of save sets it contains in the media database.
You can view the bundle information by using the mminfo command, as described in
Save set bundling information in the media database on page 98.
Example 1 Save set bundling for a 1-week scheduled backup cycle of a tablespace
Example 2 Combining existing bundles into a new save set bundle
This example illustrates a scenario where NMDA combines existing bundles into a new
save set bundle. Two save set bundles are created by separate level 0 backups of files A
and B. Then a level 1 backup of both files A and B is performed. Because the new backup
is dependent on both of the preceding level 0 backups, NMDA combines all three backups
into the same save set bundle.
Example 3 Splitting a save set bundle across volumes
In this example, a save set bundle is split across multiple volumes. A level 0 backup of file
A is performed to volume A. An incremental backup of file A is then performed to volume
B. Although both backups are recorded as belonging to the same save set bundle, the
save set bundle is split across volumes. During staging, only the save sets on the same
volume can be staged together.
The mminfo -r command can display the name of the bundle associated with a save
set. For example, the following command displays a list of all save sets and their
bundles:
mminfo -a -r "ssid,ssbundle"
The mminfo -q command can display all the save sets in a specific bundle. For
example, the following command displays all the save sets in the bundle named
12983479182:
mminfo -a -q "ssbundle=12983479182"
The EMC NetWorker Command Reference Guide and the UNIX man pages provide more
information on the mminfo command and its available options.