Release 10.2
NetBackup™ Snapshot Client Administrator's Guide
Document version: 10.2
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
The latest documentation is available on the Veritas website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Legal Notice
Copyright © 2023 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, Veritas Alta, and NetBackup are trademarks or registered trademarks
of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may
be trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
http://www.veritas.com
Instant Recovery    Makes the backups available for recovery from snapshot.
NetBackup for Hyper-V    Backs up and restores Windows and Linux Hyper-V virtual machines (guest operating systems).
NetBackup for VMware    Backs up and restores Windows and Linux VMware virtual machines (guest operating systems).
Hardware snapshot support for VMware    Hardware snapshot support for the VMware datastore on the Network Attached Storage (NAS) host.
Block level incremental backup (BLIB)    Enables NetBackup to back up only the changed data blocks of VMware virtual machines and Oracle or DB2 database files.
Snapshot Management Server    Servers to manage arrays for discovery of devices and snapshot life cycle management.
About snapshots
A snapshot is a point-in-time, read-only, disk-based copy of a client volume. After
the snapshot is created, NetBackup backs up data from the snapshot, not directly
from the client’s primary or original volume. Users and client operations can access
the primary data without interruption while data on the snapshot volume is backed
up. The contents of the snapshot volume are cataloged as if the backup was
produced directly from the primary volume. After the backup is complete, the
snapshot-based backup image on storage media is indistinguishable from a
traditional, non-snapshot backup image.
All the features of Snapshot Manager (including off-host backup, FlashBackup, and
Instant Recovery) require the creation of a snapshot.
You can use Snapshot Manager to take snapshots of your images. You can protect all the on-premises storage arrays that Snapshot Manager supports.
The following table describes the underlying snapshot management tasks.
Tasks    Description
Configure the Snapshot Manager server in NetBackup.    You can configure the Snapshot Manager server as a snapshot management server. To configure the Snapshot Manager server in NetBackup, you need to add the credentials of the Snapshot Manager server in the NetBackup UI.
Configure the Snapshot Manager plug-ins in NetBackup.    The Snapshot Manager plug-ins that are installed on the Snapshot Manager server must be registered and configured in the associated NetBackup server.
Configure a standard policy to use the VSO snapshot method.    See “Configuring a Snapshot Manager policy” on page 50. See “Selecting the snapshot method” on page 58.
If the client data is in a VxFS file system over a VxVM volume, NetBackup could
create the snapshot with a file system method. Alternatively, NetBackup can create
a snapshot of the same data with a volume manager method, such as VxVM or
FlashSnap. Between VxVM and FlashSnap, only FlashSnap supports the
Persistent FastResync feature of VxVM mirror volumes. To take advantage of the
Persistent FastResync feature, you would have to select the FlashSnap method.
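For illustration, if you plan to use a mirror-based method such as FlashSnap or VxVM, the client volume is typically prepared with a snapshot mirror ahead of time. The following sketch assumes a VxVM disk group and volume named clientdg and vol01 (placeholder names); consult the Veritas Volume Manager documentation for the exact procedure on your VxVM version:

# Add a snapshot mirror (plex) to the volume; VxVM synchronizes it in the background.
vxassist -g clientdg snapstart vol01
# Check the state of the volume and its snapshot mirror.
vxprint -g clientdg vol01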
Off-host backup shifts the burden of backup processing to a backup agent, greatly
reducing the effect of the backup on the client's computing resources. The backup
agent sends the client's data to the storage device.
Figure 1-2 shows a backup agent. The NetBackup master server and the NetBackup
client are connected over the LAN/WAN; the backup agent moves the client data
that resides on disks on the SAN to a robot on the SAN.
Note: NetBackup for NDMP add-on software is required, and the NAS vendor must
support snapshots.
As in a copy-on-write, transactions are allowed to finish and new I/O on the primary
disk is briefly halted. When the mirror image is brought up-to-date with the source,
the mirror is split from the primary. After the mirror is split, new changes can be
made to the primary but not to the mirror. The mirror can now be backed up (see
next diagram).
If the mirror is to be used again it must be brought up-to-date with the primary
volume (synchronized). During synchronization, the changes that were made to
the primary volume—while the mirror was split—are written to the mirror.
Since mirroring requires a complete copy of the primary on a separate device (same
size as the primary), it consumes more disk space than copy-on-write.
See “Benefits of copy-on-write versus mirror” on page 23.
Benefits of copy-on-write:
■ It consumes less disk space: no need for secondary disks containing complete copies of source data.
■ Relatively easy to configure (no need to set up mirror disks).
■ Creates a snapshot much faster than one created by a large, unsynchronized mirror, because mirror synchronization can be time consuming.
Benefits of mirror:
■ It has less effect on the performance of the host being backed up (NetBackup client), because the copy-on-write mechanism is not needed.
■ Allows for faster backups: the backup process reads data from a separate disk (mirror) operating independently of the primary disk that holds the client's source data. Unlike copy-on-write, disk I/O is not shared with other processes or applications. Apart from NetBackup, no other applications have access to the mirror disk. During a copy-on-write, other applications as well as the copy-on-write mechanism can access the source data.
The following off-host backup methods (data movers) are available:
Data mover: NetBackup media server (UNIX clients only)    A NetBackup media server reads raw data from the client snapshot and writes it to a storage device, using mapping information that the client provides.
Data mover: Network Attached Storage    An NDMP (NAS) host performs the snapshot-only backup for Instant Recovery only.
Data mover: Third-Party Copy Device data mover (UNIX clients only)    A third-party copy device reads raw data from the client snapshot and writes the data to a storage device. To do so, the third-party copy device uses the Extended Copy command and mapping information from the client. Many kinds of devices, such as routers and disk arrays, are designed as third-party copy devices.
Data mover: NDMP    Use to replicate NDMP snapshots. Select this agent in a policy that uses NDMP with Replication Director.
The mapping methods are installed as part of the NetBackup Snapshot Manager
product. Depending on whether the backup data is configured over physical devices,
logical volumes, or file systems, NetBackup automatically selects the correct
mapping method.
In this configuration, the primary client and the alternate client share data through
mirrors or replication; the alternate client sends the snapshot data to the media
server, which writes it to storage.
The figure shows the following phases in the alternate client backup process:
Phase Action
Phase 1    The primary client and the alternate client collaborate to create the snapshot on the alternate client.
Phase 2    The alternate client sends the snapshot data to the media server.
Phase 3    The media server reads the snapshot data from the alternate client.
Note: The mirror disk need not be visible to the primary client, only to the alternate
client.
Figure 1-7 Alternate client and split mirror: primary client and alternate client
share data through mirroring.
Figure 1-8 shows the media server and alternate client on the same host.
A single alternate client can handle backups for a number of primary clients, as
shown in the following diagram.
Multiple clients can share an alternate backup client of the same operating system
type.
Figure 1-10 Multiple clients with SSO: alternate client performs backup for
multiple primary clients with NetBackup SSO option on a SAN
and is used to complete the backup. After the backup, the snapshot volume is
unmounted. The mirror is resynchronized with the replicating volume, and the
replication is resumed.
Figure 1-11 Replication: primary client and alternate client share data through
replication
The NetBackup client's primary volume is replicated on the alternate client (to a
replicating volume with a mirror volume); the media server backs up the data from
the alternate client to storage.
Only the VVR snapshot method for UNIX clients supports this configuration. This
configuration requires the Veritas Volume Manager (VxVM version 3.2 or later) with
the VVR license.
Figure 1-12 Alternate client split-mirror backup with FlashBackup policy type
Note: For a multi-ported SCSI disk array, a Fibre Channel SAN is not required.
Snapshot Manager requirements
■ To use Snapshot Manager to back up a VxFS file system, the client’s VxFS file
system has to be patched with the dynamic linked libraries.
■ For the VxVM snapshot method, all clients must have VxVM 3.1 or later.
■ For the FlashSnap and VVR snapshot methods, all clients must have VxVM 3.2
or later. Each method requires its own add-on license to VxVM.
■ For the disk array snapshot methods, assistance may be required from the disk
array vendor.
■ To use the snapshot and off-host backup features of NetBackup Snapshot
Manager with a NetBackup Oracle policy, UNIX clients must have Oracle8i or
later installed.
■ HP clients must use the OnlineJFS file system, not the default JFS.
■ Backup of an AIX 64-bit client with the NetBackup media server (data mover)
method and the VxVM or VxFS_Checkpoint snapshot method may fail with
NetBackup status code 11. This failure may occur if the client volumes are
configured with Storage Foundation 5.0 MP3. A NetBackup message similar to
the following appears in the job's Detailed Status tab:
This error occurs because the required VxVM libraries for 64-bit AIX are not
installed in the correct location. The libraries should be installed in
/opt/VRTSvxms/lib/map/aix64/. To copy the libraries to the correct location, run
a command similar to the following:
cp /usr/lpp/VRTSvxvm/VRTSvxvm/5.0.3.0/inst_root/opt/VRTSvxms/lib/map/aix64/* /opt/VRTSvxms/lib/map/aix64/
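After the copy completes, a quick listing such as the following (shown only as an illustrative check) confirms that the mapping libraries are present in the expected directory:

ls /opt/VRTSvxms/lib/map/aix64/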
■ For off-host backups that use the NDMP data mover option to replicate
snapshots, see the NetBackup Replication Director Solutions Guide for a list of
limitations.
■ In a clustered environment, Instant Recovery point-in-time rollback is not
supported for the backups that were made with a disk array snapshot method.
The disk array snapshot methods are described in the chapter titled Configuration
of snapshot methods for disk arrays.
See “About the new disk array snapshot methods” on page 146.
■ For the TimeFinder, ShadowImage, or BusinessCopy legacy snapshot methods
(when you use the NetBackup media server or Third-Party Copy Device backup
methods): The NetBackup clients must have access to the mirror (secondary)
disk containing the snapshot of the client’s data. The NetBackup clients must
also be able to access the primary disk. The NetBackup media server only needs
access to the mirror (secondary) disk.
■ For the TimeFinder, ShadowImage, or BusinessCopy legacy snapshot methods,
a Volume Manager disk group must consist of disks from the same vendor.
■ The NetBackup media server off-host backup method does not support the
clients that use client deduplication. If the client is enabled for deduplication,
you must select Disable client-side deduplication on the policy Attributes
tab.
■ For the NetBackup media server or Third-Party Copy Device backup method:
The disk must return its SCSI serial number in response to a serial-number
inquiry (serialization), or the disk must support SCSI Inquiry Page Code 83.
■ Multiplexing is not supported for Third-Party Copy Device off-host backups.
■ For alternate client backup: The user and the group identification numbers (UIDs
and GIDs) for the files must be available to the primary client and the alternate
backup client.
■ Inline Tape Copies (called Multiple Copies in Vault) is not supported for
Third-Party Copy Device off-host backups.
■ For media servers running AIX (4.3.3 and higher), note the following:
■ Clients must be Solaris, HP, or AIX.
■ Requires the use of tape or disk LUNs to send the Extended copy commands
for backup.
■ The tape must be behind a third-party-copy-capable FC-to-SCSI router. The
router must be able to intercept Extended Copy commands that are sent to
the tape LUNs.
■ The mover.conf file must have a tape path defined, not a controller path.
Snapshot Manager terminology
Term Definition
Alternate client backup    The alternate client performs a backup on behalf of another client.
Backup agent (see also Third-Party Copy Device)    A general term for the host that manages the backup on behalf of the NetBackup client. The agent is either another client, the NetBackup media server, a third-party copy device, or a NAS filer.
BCV    The mirror disk in an EMC primary-mirror array configuration (see mirror). BCV stands for Business Continuance Volume.
Bridge    In a SAN network, a bridge connects SCSI devices to Fibre Channel. A third-party copy device can be implemented as part of a bridge or as part of other devices. Note that not all bridges function as third-party copy devices.
Cache Copy-on-write snapshot methods need a separate working area on disk during the lifetime
of the snapshot. This area is called a cache. The snapshot method uses the cache to
store a copy of the client’s data blocks that are about to change because of file system
activity. This cache must be a raw disk partition that does not contain valuable information:
when you use the cache, the snapshot method overwrites any data currently stored there.
Copy-on-write In NetBackup Snapshot Manager, one of two types of supported snapshots (see also
mirror). Unlike a mirror, a copy-on-write does not create a separate copy of the client’s
data. It creates a block-by-block "account" from the instant the copy-on-write was activated.
The account describes which blocks in the client data have changed and which have
not. The backup application uses this account to create the backup copy. Other terms
and trade names sometimes used for copy-on-write snapshots are space-optimized
snapshots, space-efficient snapshots, and checkpoints.
Data movement A copy operation as performed by a third-party copy device or NetBackup media server.
Data mover The host or entity that manages the backup on behalf of the NetBackup client. The data
mover can be either the NetBackup media server, a third-party copy device, or a NAS
filer.
device A general term for any of the following: LUN, logical volume, vdisk, and BCV or STD.
Disk group A configuration of disks to create a primary-mirror association, using commands unique
to the disks’ vendor. See mirror and volume group.
Extent A contiguous set of disk blocks that are allocated for a file and represented by three
values:
■ Device identifier
■ Starting block address (offset in the device)
■ Length (number of contiguous blocks)
The mapping methods in Snapshot Manager determine the list of extents and send the
list to the backup agent.
FastResync (VxVM) Formerly known as Fast Mirror Resynchronization or FMR, VxVM FastResync performs
quick and efficient resynchronization of mirrors. NetBackup’s Instant Recovery feature
uses FastResync to create and maintain a point-in-time copy of a production volume.
Fibre Channel A type of high-speed network that is composed of either optical or copper cable and
that employs the Fibre Channel protocol. NetBackup Snapshot Manager supports both
arbitrated loop and switched fabric (switched Fibre Channel) environments.
File system Has two meanings. For a product, such as UFS (Sun Solaris) or VxFS (Veritas) file
systems, file system means the management and the allocation schemes of the file tree.
Regarding a file tree component, file system means a directory that is attached to the
UNIX file tree by means of the mount command. When a file system is selected as an
entry in the NetBackup Backup Selections list, this definition applies.
Instant Recovery A restore feature of a disk snapshot of a client file system or volume. Client data can be
rapidly restored from the snapshot, even after a system restart.
Mapping Converting a file or raw device (in the file system or Volume Manager) to physical
addresses or extents for backup agents on the network. NetBackup Snapshot Manager
uses the VxMS library to perform file mapping.
Mapping methods A set of routines for converting logical file addresses to physical disk addresses or extents.
NetBackup Snapshot Manager includes support for file-mapping and volume-mapping
methods.
Mirror
■ A disk that maintains an exact copy or duplicate of another disk. A mirror disk is often
called a secondary, and the source disk is called the primary. All writes to the primary
disk are also made to the mirror disk.
■ A type of snapshot that is captured on a mirror disk. At an appropriate moment, all
further writes to the primary disk are held back from the mirror, which "splits" the
mirror from the primary. As a result of the split, the mirror becomes a snapshot of the
primary. The snapshot can then be backed up.
NetBackup media server An off-host backup method in which the NetBackup media server performs the data
method movement.
Off-host backup The off-loading of backup processing to a separate backup agent executing on another
host. NetBackup Snapshot Manager provides the following off-host backup options:
Alternate Client, NetBackup media server, Third-Party Copy Device, and Network Attached
Storage.
Primary disk In a primary-mirror configuration, client applications read and write their data on the
primary disk. An exact duplicate of the primary disk is the mirror.
Raw partition A single section of a raw physical disk device occupying a range of disk sectors. The
raw partition does not have a file system or other hierarchical organization scheme (thus,
a "raw" stream of disk sectors). On some operating systems, such as Solaris and HP-UX,
a raw partition is different from a block device over which the file system is mounted.
Recovery Manager (RMAN) Oracle's backup and recovery program. RMAN performs backup and restore by making
requests to a NetBackup shared library.
RMAN Proxy Copy An extension to the Oracle8i media management API which enables media management
software such as NetBackup to perform data transfer directly.
SAN (storage area network) A Fibre Channel-based network connecting servers and storage devices. The storage
devices are not attached to servers but to the network itself, and are visible to all servers
on the network.
Snapshot A point-in-time, read-only, disk-based copy of a client volume. A snapshot is created with
minimal effect on other applications. NetBackup provides several types, depending on
the device where the snapshot occurs: copy-on-write, mirror, clone, and snap.
Snapshot method A set of routines for creating a snapshot. You can select the method, or let NetBackup
select it when the backup is started (auto method).
Snapshot mirror A disk mirror created by the Veritas Volume Manager (VxVM). Snapshot mirror is an
exact copy of a primary volume at a particular moment, reproduced on a physically
separate device.
Snapshot source The entity (file system, raw partition, or logical volume) to which a snapshot method is
applied. NetBackup automatically selects the snapshot source according to the entries
in the policy’s Backup Selections list.
Snapshot volume A mirror that has been split from the primary volume or device and made available to
users. Veritas Volume Manager (VxVM) creates snapshot volumes as a point-in-time
copy of the primary volume. Subsequent changes in the primary volume are recorded in
the Data Change Log. The recorded changes can be used to resynchronize with the
primary volume by means of VxVM FastResync. The changes that were made while the
snapshot volume was split are applied to the snapshot volume to make it identical to the
primary volume.
Standard device Refers to the primary disk in an EMC primary-mirror disk array (see primary disk).
Storage Checkpoint (VxFS) Provides a consistent and a stable view of a file system image and keeps track of modified
data blocks since the last checkpoint. Unlike a mirror, a VxFS Storage Checkpoint does
not create a separate copy of the primary or the original data. It creates a block-by-block
account that describes which blocks in the original data have changed from the instant
the checkpoint was activated.
A Storage Checkpoint stores its information in available space on the primary file system,
not on a separate or a designated device. (Also, the ls command does not list Storage
Checkpoint disk usage; you must use the fsckptadm list command instead.)
Third-party copy device
■ A backup agent on the SAN that operates on behalf of backup applications. The
third-party copy device receives backup data from a disk that is attached to Fibre
Channel and sends it to a storage device. The third-party copy device uses the SCSI
Extended Copy command. The third-party copy device is sometimes called a Copy
Manager, third-party copy engine, or data mover. In SAN hardware configurations, a
third-party copy device can be implemented as part of a bridge, router, or storage
device. The third-party copy device may or may not be the device to which the storage
units are connected.
■ An off-host backup method in NetBackup Snapshot Manager that allows backups to
be made by means of a backup agent on the SAN.
UFS file system The UNIX file system (UFS), which is the default file system type on Sun Solaris. The
UFS file system was formerly the Berkeley Fast File System.
VxMS (Veritas Federated Mapping Services)    A library of routines (methods) used by NetBackup Snapshot Manager to obtain the physical addresses of logical disk objects such as files and volumes.
Volume A virtual device that is configured over raw physical disk devices (not to be confused with
a NetBackup Media and Device Management volume). Consists of a block and a character
device. If a snapshot source exists over a volume, NetBackup automatically uses a
volume mapping method to map the volume to physical device addresses.
Volume group A logical grouping of disks, created with the Veritas Volume Manager, to allow more
efficient use of disk space.
VxFS The Veritas extent-based File System (VxFS), designed for high performance and large
volumes of data.
VxVM The Veritas Volume Manager (VxVM), which provides the logical volume management
that can also be used in SAN environments.
Snapshot Manager help from the NetBackup Administration Console    For help creating a policy, click the primary server name at the top of the left pane and click Create a Snapshot Backup Policy.
Snapshot Manager assistance from the web    For a document containing additional Snapshot Manager assistance, see the tech note NetBackup Snapshot Client Configuration. This document may be accessed from the following link:
http://www.veritas.com/docs/000081320
This document includes the following:
Compatibility list    For a complete list of supported platforms, snapshot methods, data types, and database agents, and supported combinations of platforms and snapshot methods, see the NetBackup 7.x Snapshot Manager Compatibility document:
http://www.netbackup.com/compatibility
NDMP information on the web    The Veritas Support website has a PDF document on supported NDMP operating systems and NAS vendors. The document also contains configuration and troubleshooting help for particular NAS systems:
http://www.veritas.com/docs/000027113
The document's title is NetBackup for NDMP Supported OS and NAS Appliance Information.
■ For Instant Recovery using the VxFS_Checkpoint method, the VxFS file system
with the Storage Checkpoints feature must be installed on clients.
■ For devices in LVM, if multiple paths are used, ensure that the multipathing
service is enabled.
■ Junction mount points on the filer or array are not supported for snapshot operations
with the VSO FIM snapshot method.
■ For client-side nested mount points, rollback of parent mount points is not
supported.
UNIX
■ NetBackup Snapshot Manager is installed with the NetBackup client software.
Every NetBackup server includes the NetBackup client software, by default. So,
you can use NetBackup Snapshot Manager on a NetBackup server or client, if
the Snapshot Manager is supported on that platform.
■ For NetBackup 10.2, Snapshot Manager for Solaris is supported on SPARC
computers only.
■ Encryption and compression are supported, but are applied only to the backup
copy that is written to a storage unit. The snapshot itself is neither compressed
nor encrypted.
■ FlashBackup policies do not support encryption or compression.
■ BLIB with Snapshot Manager (Perform block level incremental backups
option on the policy Attributes tab): BLIB is supported with NetBackup for
Oracle, NetBackup for DB2, and with VMware.
If you choose the Perform block level incremental backups option on the
policy Attributes tab, the other features of Snapshot Manager are grayed out.
■ Ensure that the number of LUNs exposed to the HBAs does not reach the
maximum limit during any snapshot-related operations.
■ For all other cases, select Standard for UNIX clients and MS-Windows
for Windows clients.
■ For all other cases, select Standard for UNIX clients.
4 Select a storage unit, storage unit group, or a storage lifecycle policy as the
Policy storage.
5 Select a storage lifecycle policy as the Policy storage.
6 Make sure Perform snapshot backups is selected.
Note: When you select Perform snapshot backups, the Bare Metal Restore
option is disabled.
Note: Perform snapshot backups must be selected for the policy to reference
any storage lifecycle policy with a Snapshot destination.
■ Wildcards are permitted if the wildcard does not correspond to a mount point or
a mount point does not follow the wildcard in the path.
Note: This is applicable to a Storage Lifecycle Policy that has snapshot as the
first operation and does not contain any backup or replicate operation.
For example, in the path /a/b, if /a is a mounted file system or volume, and
/a/b designates a subdirectory in that file system: the entry /a/b/*.pdf causes
NetBackup to make a snapshot of the /a file system and to back up all pdf files
in the /a/b directory. But, with an entry of /* or /*/b, the backup may fail or
have unpredictable results, because the wildcard corresponds to the mount
point /a. Do not use a wildcard to represent all or part of a mount point.
In another example, /a is a mounted file system which contains another mounted
file system at /a/b/c (where c designates a second mount point). A Backup
Selections entry of /a/*/c may fail or have unpredictable results, because a
mount point follows the wildcard in the path.
Information is available on the Cross mount points policy attribute.
See “Snapshot tips” on page 75.
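To summarize the wildcard examples above (where /a is a mounted file system and c in /a/b/c is a second mount point), the following sketch contrasts an entry that follows the rule with entries to avoid; all paths are the hypothetical ones used in the examples:

/a/b/*.pdf    Valid: the wildcard matches files only, not a mount point.
/*            Avoid: the wildcard corresponds to the mount point /a.
/*/b          Avoid: the wildcard corresponds to the mount point /a.
/a/*/c        Avoid: a mount point (c) follows the wildcard in the path.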
■ For a raw partition backup of a UNIX client, specify the /rdsk path, not the /dsk
path. You can specify the disk partition (except on AIX) or a VxVM volume.
Examples:
On Solaris: /dev/rdsk/c0t0d0s1
/dev/vx/rdsk/volgrp1/vol1
On HP: /dev/rdsk/c1t0d0
/dev/vx/rdsk/volgrp1/vol1
On Linux: /dev/sdc1
On AIX clients, backing up a native disk partition is not supported. A raw partition
backup must specify a VxVM volume, such as /dev/vx/rdsk/volgrp1/vol1.
Note that /dev/vx/dsk/volgrp1/vol1 (without the "r" in /rdsk) does not work.
Off-host backup configuration options
■ VMware backup host
To back up a virtual machine that does not have a NetBackup client installed
on it, you must select this option. If a NetBackup client is installed on the virtual
machine, you can back up the virtual machine in the same way as an ordinary
physical host (a snapshot-based backup is not required).
The VMware backup host option requires the FlashBackup-Windows or
MS-Windows policy type.
See the NetBackup for VMware Administrator’s Guide for further information:
https://www.veritas.com/docs/DOC5332
Note: The VMware backup host is not displayed when you select the Retain
snapshots for Instant Recovery or SLP management check box as VMware
backup is not supported for Instant Recovery.
■ Alternate Client
Select this option to designate another client (alternate client) as the backup
agent.
An alternate client saves computing resources on the original client. The alternate
client handles the backup I/O processing on behalf of the original client, so the
backup has little effect on the original client.
Enter the name of the alternate client in the Machine field.
See “About using alternate client backup” on page 71.
■ Data Mover
Select this option to designate the backup agent as a NetBackup media server,
a third-party copy device that implements the SCSI Extended Copy command,
or a NAS filer (Network Attached Storage).
The Data Mover option requires the Standard, FlashBackup, or MS-Windows
policy type.
Select the type of data mover in the Machine drop-down:
Network Attached Storage    An NDMP host (NAS filer) performs the backup processing, by means of the NAS_Snapshot method. NetBackup for NDMP software is required on the NetBackup server. This option is required for NAS snapshots.
NetBackup Media Server    A Solaris, HP, or AIX media server performs the backup processing (for Solaris, HP, and AIX clients only).
Third-Party Copy Device    A third-party copy device handles the backup processing. For Solaris, HP, AIX, and Linux clients only.
http://www.veritas.com/docs/000081320
■ If the policy had been configured for a particular snapshot method, click the
Snapshot Manager Options option and set the snapshot method to auto.
NetBackup then selects a snapshot method when the backup starts.
Use of the auto method does not guarantee that NetBackup can select a snapshot
method for the backup. NetBackup looks for a suitable method according to the
following factors:
■ The client platform and policy type.
■ The presence of up-to-date software licenses, such as VxFS and VxVM.
■ How the client data is configured. For instance:
■ Whether a raw partition has been specified for a copy-on-write cache.
See “Entering the cache” on page 129.
■ Whether the client’s data is contained in the VxVM volumes that were
configured with one or more snapshot mirrors.
Note: The auto method cannot select a snapshot method that is designed for a
particular disk array, such as EMC_TimeFinder_Clone or HP_EVA_Vsnap. You
must select the disk array method from the drop-down list on the Snapshot Options
dialog box.
6 In the pull-down menu, select VSO as the Snapshot method for the policy,
to manage snapshots using the Snapshot Manager.
■ Choose auto if you want NetBackup to select the snapshot method.
See “Automatic snapshot selection” on page 57.
■ The available methods depend on how your clients are configured and
which attributes you selected on the Attributes tab.
Only one snapshot method can be configured per policy.
Configure each policy for a single method and include only clients and backup
selections for which that snapshot method can be used. For example, for the
nbu_snap method (which applies to Solaris clients only), create a policy that
includes Solaris clients only. The snapshot method you select must be
compatible with all items in the policy’s Backup Selections list.
Snapshot methods
Table 3-1 describes each snapshot method (not including the disk array methods).
See “Disk array methods at a glance” on page 150.
Method Description
VSS VSS uses the Volume Shadow Copy Service of Windows and
supports Instant Recovery. VSS is for local backup or alternate
client backup.
http://www.veritas.com/docs/000081320
For alternate client backup, the client data must reside on: either
a disk array such as EMC, HP, or Hitachi with snapshot capability,
or a Veritas Storage Foundation for Windows 4.1 or later volume
with snapshots enabled. VSS supports file system backup of a
disk partition (such as E:\) and backup of databases.
Method Description
Note that all files in the Backup Selections list must reside in
the same file system.
VxVM For any of the following types of snapshots with data configured
over Volume Manager volumes, for clients on Solaris, HP, AIX,
Linux, or Windows. Linux and AIX clients require VxVM 4.0 or
later.
Method Description
This setting overrides a cache that is specified on Host Properties > Clients >
Client Properties dialog > UNIX Client > Client Settings.
See “Entering the cache” on page 129.
Do not specify wildcards (such as /dev/rdsk/c2*).
A complete list of requirements is available.
See “Cache device requirements” on page 126.
If the client is restarted, the snapshots that have been kept must be remounted
before they can be used for a restore. You can use the bpfis command to discover
the images.
See the bpfis man page or the NetBackup Commands manual.
Note however that NetBackup automatically remounts Instant Recovery snapshots.
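For example, a query of the following general form lists the snapshot images on a UNIX client; treat the exact invocation as an assumption and check the bpfis man page for the options that apply to your release:

/usr/openv/netbackup/bin/bpfis query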
If the snapshot is on an EMC, Hitachi, or HP disk array, and you want to use
hardware-level restore, important information is available.
See the Warning under Hardware-level disk restore in the NetBackup Snapshot
Manager Configuration document. This document may be accessed from the
following location:
http://www.veritas.com/docs/000081320
0-auto (the default) If the policy is not configured for Instant Recovery, you can
select this option. The auto option attempts to select the
available provider in this order: Hardware, Software, System.
3-hardware Use the hardware provider for your disk array. A hardware
provider manages the VSS snapshot at the hardware level
by working with a hardware storage adapter or controller. For
example, if you want to back up an EMC CLARiiON or HP
EVA array by means of the array’s snapshot provider, select
3-hardware.
0-unspecified If the policy is not configured for Instant Recovery, you can
select this option.
Snapshot Resources
To configure the disk array methods, see the chapter titled Configuration of snapshot
methods for disk arrays:
See “Disk array configuration tasks” on page 152.
METHOD=USER_DEFINED
DB_BEGIN_BACKUP_CMD=your_begin_script_path
DB_END_BACKUP_CMD=your_end_script_path
In this example, the script shutdown_db.ksh is run before the backup, and
restart_db.ksh is run after the snapshot is created.
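Putting the pieces together, a configuration that runs the scripts mentioned above might look like the following; the script locations are hypothetical and only illustrate the directives already shown:

METHOD=USER_DEFINED
DB_BEGIN_BACKUP_CMD=/usr/scripts/shutdown_db.ksh
DB_END_BACKUP_CMD=/usr/scripts/restart_db.ksh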
About using alternate client backup
VxFS 3.4 or later (VxFS 3.3 for HP, VxFS 4.0 for AIX and Linux)    VxFS, at the same level as the primary client or higher.
VxVM 3.2 or later (UNIX); VxVM 3.1 or later (Windows)    VxVM, at the same level as the primary client or higher. Note: For the VVR method, the alternate client must be at exactly the same level as the primary client. For VxVM on Windows, use VxVM 3.1 or later with all the latest VxVM service packs and updates.
■ VSS, for snapshots using the Volume Shadow Copy Service of Windows.
This method is for Windows clients, where the client data is stored on a
disk array such as EMC or Hitachi, or in a Veritas Storage Foundation for
Windows 4.1 or later volume. Supports Exchange.
■ The disk array-related snapshot methods.
Configuration    Description
Client data is on a Dell EMC disk array    To run the backup on an alternate client: choose Standard as the policy type; select Perform snapshot backups, Perform off-host backup, and Use alternate client; then select the alternate client. On the Snapshot Options display, specify the VSO snapshot method.
Client data is replicated on a remote host    To run the backup on the replication host (alternate client): choose Standard as the policy type; select Perform snapshot backups, Perform off-host backup, and Use alternate client; then select the alternate client (the replication host). On the Snapshot Options display, specify the VVR snapshot method.
Client data is on a JBOD array in VxVM volumes with snapshot mirrors configured    To run the backup on the alternate client: choose Standard (for a UNIX client) or MS-Windows (for a Windows client) as the policy type; select Perform snapshot backups, Perform off-host backup, and Use alternate client; then select the alternate client. On the Snapshot Options display, specify the FlashSnap method.
Snapshot tips
Note the following tips:
■ In the Backup Selections list, be sure to specify absolute path names. Refer
to the NetBackup Administrator’s Guide, Volume I for help specifying files in the
Backup Selections list.
■ If an entry in the Backup Selections list is a symbolic (soft) link to another file,
Snapshot Manager backs up the link, not the file to which the link points. This
NetBackup behavior is standard. To back up the actual data, include the file
path to the actual data.
■ On the other hand, a raw partition can be specified in its usual symbolic-link
form (such as /dev/rdsk/c0t1d0s1). Do not specify the actual device name
that /dev/rdsk/c0t1d0s1 points to. For raw partitions, Snapshot Manager
automatically resolves the symbolic link to the actual device.
■ The Cross mount points policy attribute is not available for the policies that
are configured for snapshots. This option is not available because NetBackup
does not cross file system boundaries during a backup of a snapshot. A backup
of a high-level file system, such as / (root), does not back up the files residing
in lower-level file systems. Files in the lower-level file systems are backed up if
they are specified as separate entries in the Backup Selections list. For
instance, to back up /usr and /var, both /usr and /var must be included as
separate entries in the Backup Selections list.
For more information on Cross mount points, refer to the NetBackup
Administrator’s Guide, Volume I.
■ On Windows, the \ must be entered in the Backup Selections list after the drive
letter (for example, D:\).
See “Configuring a FlashBackup policy for NetBackup Administration Console”
on page 88.
Phase 2    The mirror was synchronized with the primary and split at 8:24 pm.
Phase 4    The backup of the mirror was completed at 10:14 pm; the file access time on the mirror is reset to 8:01 pm.
The client's data resides on the NDMP host (NAS filer) and is mounted on the
NetBackup client over the LAN/WAN by means of a CIFS or NFS mount; the
snapshot of the client volume is made on the NDMP host.
Note: Windows pathnames must use the Universal Naming Convention (UNC).
NetBackup creates snapshots on the NAS-attached disk only, not on the storage
devices that are attached to the NetBackup server or the client.
■ NetBackup clients that are used to perform backups must have Snapshot
Manager installed.
■ On NetBackup clients for Oracle: NetBackup for Oracle database agent
software must be installed on all clients.
Note: Contact the Veritas sales team to ensure that you have the appropriate
licensing for this option.
■ The NAS host must support NDMP protocol version V4 and the NDMP V4
snapshot extension, with additional changes made to the snapshot extension.
The NetBackup Snapshot Manager Configuration online pdf contains a list of
NAS vendors that NetBackup supports for NAS snapshots. This online pdf
includes requirements specific to your NAS vendor.
See: http://www.veritas.com/docs/000081320
■ NetBackup must have access to each NAS host on which a NAS snapshot is
to be created. To set up this authorization, you can use either of the following:
■ In the NetBackup Administration Console: the Media and Device
Management > Credentials > NDMP Hosts option or the NetBackup Device
Configuration Wizard.
OR
■ The following command:
■ The client data must reside on a NAS host and be mounted on the client by
means of NFS on UNIX or CIFS on Windows. For NFS mounts, the data must
not be auto-mounted, but must be hard (or manually) mounted.
■ For NAS snapshot, you must create a NAS_Snapshot policy.
See “Setting up a policy for NAS snapshots” on page 83.
■ On Windows clients, to restore files from a NAS_Snapshot backup, the
NetBackup Client Service must be logged in as the Administrator account. The
NetBackup Client Service must not be logged in as the local system account.
The Administrator account allows NetBackup to view the directories on the
NDMP host to which the data is to be restored. If you attempt to restore files
from a NAS_Snapshot and the NetBackup Client Service is logged in as the
local system account, the restore fails.
8 Select Data Mover from the Use list and Network Attached Storage from the
Machine list.
When the policy executes, NetBackup automatically selects the NAS_Snapshot
method for creating the snapshot.
As an alternative, you can manually select the NAS_Snapshot method from
the Options dialog from the policy Attributes display.
9 On the Schedule Attributes tab, select the following:
■ Instant Recovery
Choose Snapshots only. The other option (Snapshots and copy
snapshots to a storage unit) does not apply to NAS_Snapshot.
■ Override policy storage unit
If the correct storage unit was not selected on the Attributes tab, select it
here.
10 For the Backup Selections list, specify the directories, volumes, or files from
the client perspective, not from the NDMP host perspective. For example:
■ On a UNIX client, if the data resides in /vol/vol1 on the NDMP host nas1,
and is NFS mounted to /mnt2/home on the UNIX client: specify /mnt2/home
in the Backup Selections list.
■ On a Windows client, if the data resides in /vol/vol1 on the NDMP host
nas1, and is shared by means of CIFS as vol1 on the Windows client,
specify \\nas1\vol1.
■ Windows path names must use the Universal Naming Convention (UNC),
in the form \\server_name\share_name.
■ The client data must reside on a NAS host. The data must be mounted on
the client by means of NFS on UNIX or shared by means of CIFS on
Windows. For NFS mounts, the data must be manually mounted by means
of the mount command, not auto-mounted.
■ For a client in the policy, all paths must be valid, or the backup fails.
■ The ALL_LOCAL_DRIVES entry is not allowed in the Backup Selections
list.
11 On the policy Attributes tab: if you click Apply or OK, a validation process
checks the policy and reports any errors. If you click Close, no validation is
performed.
NAS snapshot naming scheme
The NAS snapshot name uses the following format:
NAS+NBU+PFI+client_name+policy_name+sr+volume_name+date_time_string
For example:
NAS+NBU+PFI+sponge+NAS_snapshot_pol1+sr+Vol_15G+2005.05.31.13h41m41s
Where:
Client name = sponge
Policy name = NAS_snapshot_pol1
sr = indicates that the snapshot was created for a NAS snapshot.
Volume name = Vol_15G
Date/Time = 2005.05.31.13h41m41s
Chapter 5
FlashBackup configuration
This chapter includes the following topics:
■ About FlashBackup
■ FlashBackup restrictions
About FlashBackup
FlashBackup is a policy type that combines the speed of raw-partition backups with
the ability to restore individual files. The features that distinguish FlashBackup from
other raw-partition backups and standard file system backups are these:
■ Increases backup performance as compared to standard file-ordered backup
methods. For example, a FlashBackup of a file system completes faster than
other types of backup in the following case:
■ the file system contains a large number of files
■ most of the file system blocks are allocated
■ For a complete list of supported platforms, snapshot methods, and data types,
see the NetBackup Snapshot Manager Compatibility document:
http://www.netbackup.com/compatibility
FlashBackup restrictions
Note the following restrictions:
■ FlashBackup policies do not support file systems that HSM manages.
■ FlashBackup policies for UNIX clients do not support Instant Recovery.
■ FlashBackup does not support VxFS storage checkpoints that the
VxFS_Checkpoint snapshot method uses.
■ FlashBackup supports the following I/O system components: ufs, VxFS, and
Windows NTFS file systems, VxVM volumes and LVM volumes, and raw disks.
Other components (such as non-Veritas storage replicators or other non-Veritas
volume managers) are not supported.
■ FlashBackup on Linux supports only the VxFS file system on VxVM volumes.
For Linux clients, no other file system is supported, and VxFS file systems are
not supported without VxVM volumes.
■ FlashBackup on AIX supports only the VxFS file system, with VxVM or LVM
volumes. For AIX clients, no other file system is supported, and the data must
be over a VxVM or LVM volume.
■ Note these restrictions for Windows clients:
■ The use of FlashBackup in a Windows Server Failover Clustering (WSFC)
environment is supported, with the following limitation: Raw partition restores
can only be performed when the disk being restored is placed in extended
maintenance mode or removed from the WSFC resource group.
■ FlashBackup-Windows and Linux policies do not support a Client Direct
restore.
■ FlashBackup-Windows policies support Instant Recovery, but only for backup
to a storage unit (not for snapshot-only backups).
■ FlashBackup-Windows policies do not support the backup of Windows
system-protected files (the System State, such as the Registry and Active
Directory).
■ FlashBackup-Windows policies do not support the backup of Windows OS
partitions that contain the Windows system files (usually C:).
Configuring a FlashBackup policy for NetBackup Administration Console
3 In the All Policies pane, right-click and select New Policy... to create a new
policy.
4 On the Attributes tab, select the Policy type: FlashBackup for UNIX clients,
or FlashBackup-Windows for Windows clients.
FlashBackup-Windows supports the backup and restore of NTFS files that are
compressed.
The files are backed up and restored as compressed files (they are not
uncompressed).
5 Specify the storage unit.
FlashBackup and FlashBackup-Windows policies support both tape storage
units and disk storage units.
6 Select a snapshot method in one of the following ways:
■ Click Perform snapshot backups on the Attributes tab.
For a new policy, NetBackup selects a snapshot method when the backup
starts.
For a copy of a policy that was configured for a snapshot method, click the
Snapshot Client Options option and set the method to auto. NetBackup
selects a snapshot method when the backup starts.
■ Click Perform snapshot backups, click the Snapshot Client Options
option and select a snapshot method.
See “Selecting the snapshot method” on page 58.
7 Windows only: to enable the backup for Instant Recovery, select Retain
snapshots for Instant Recovery or SLP management.
Instant Recovery is not supported for FlashBackup with UNIX clients.
8 UNIX only: if you selected nbu_snap or VxFS_Snapshot as the snapshot
method, specify a raw partition as cache, in either of these ways:
■ Use the Host Properties node of the Administration Console to specify the
default cache device path for snapshots. Click Host Properties > Clients,
select the client, then Actions > Properties, UNIX Client > Client Settings.
■ Use the Snapshot Client Options dialog to specify the cache.
See “Entering the cache” on page 129.
The partition to be used for the cache must exist on all clients that are
included in the policy.
10 To reduce backup time when more than one raw partition is specified in the
Backup Selections list, select Allow multiple data streams.
11 Use the Schedules tab to create a schedule.
FlashBackup policies support full and incremental types only. User backup and
archive schedule types are not supported.
A full FlashBackup backs up the entire disk or raw partition that was selected
in the Backup Selections tab (see next step). An incremental backup backs
up individual files that have changed since their last full backup, and also backs
up their parent directories. The incremental backup does not back up files in
parent directories unless the files have changed since the last full backup.
For incremental backups, a file is considered “changed” if its Modified Time
or Create Time value was changed.
Note on FlashBackup-Windows: The NTFS Master File Table does not update
the Create Time or Modified Time of a file or folder when the following changes
are made:
■ Changes to file name or directory name.
■ Changes to file or directory security.
■ Changes to file or directory attributes (read only, hidden, system, archive
bit).
12 On the Backup Selections tab, specify the drive letter or mounted volume
(Windows) or the raw disk partition (UNIX) containing the files to back up.
For Windows, specify the drive letter or mounted volume.
For UNIX, the Backup Selections tab must specify the raw (character) device
that corresponds to the block device over which the file system is mounted. For
example, to back up /usr, mounted on /dev/dsk/c1t0d0s6, enter the raw device
/dev/rdsk/c1t0d0s6. Note the r in /rdsk.
HP examples: /dev/rdsk/c1t0d0 and /dev/vx/rdsk/volgrp1/vol1
Note: Wildcards (such as /dev/rdsk/c0*) are not permitted. Specifying the actual
device file name, such as /devices/pci@1f,0/pci@1/scsi@3/sd@1,0:d,raw, is
not supported.
■ To use multiple data streams, other directives had to be added to the policy’s
Backup Selections (file) list.
The following procedure and related topics explain how to configure a FlashBackup
policy with a CACHE= entry in the policy’s Backup Selections list. This means of
configuration is provided for backward compatibility.
To configure FlashBackup policy for backward compatibility (UNIX only)
1 Leave Perform snapshot backups deselected on the policy Attributes tab.
NetBackup uses nbu_snap (snapctl driver) for Solaris clients or VxFS_Snapshot
for HP.
2 On the policy’s Backup Selections tab, specify at least one cache device by
means of the CACHE directive. For example:
CACHE=/dev/rdsk/c0t0d0s1
This cache partition is for storing any blocks that change in the source data
while the backup is in progress. CACHE= must precede the source data entry.
Note the following:
■ Specify the raw device, such as /dev/rdsk/c1t0d0s6. Do not specify the
block device, such as /dev/dsk/c1t0d0s6.
■ Do not specify the actual device file name. For example, the following is
not allowed:
/devices/pci@1f,0/pci@1/scsi@3/sd@1,0:d,raw
CACHE=/dev/rdsk/c1t4d0s0
/dev/rdsk/c1t4d0s7
CACHE=/dev/rdsk/c1t4d0s1
/dev/rdsk/c1t4d0s3
/dev/rdsk/c1t4d0s4
Note: CACHE entries are allowed only when the policy’s Perform snapshot
backups option is deselected. If Perform snapshot backups is selected,
NetBackup attempts to back up the CACHE entry and the backup fails.
All entries must specify the raw device, such as /dev/rdsk/c0t0d0s1. Do not use the
actual file name; you must use the link form cxtxdxsx.
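For instance, using a hypothetical Solaris device, the rule works out as follows:

/dev/rdsk/c0t0d0s1    Correct: raw (character) device in the cxtxdxsx link form.
/dev/dsk/c0t0d0s1     Incorrect: block device.
/devices/pci@1f,0/pci@1/scsi@3/sd@1,0:d,raw    Incorrect: actual device file name.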
Note: Only one data stream is created for each physical device on the client. You
cannot include the same partition more than once in the Backup Selections list.
The directives that you can use in the Backup Selections list for a FlashBackup
policy are as follows:
■ NEW_STREAM
■ UNSET_ALL
Each backup begins as a single stream of data. The start of the Backup Selections
list up to the first NEW_STREAM directive (if any) is the first stream. Each NEW_STREAM
entry causes NetBackup to create an additional stream or backup.
Note that all file paths that are listed between NEW_STREAM directives are in the same
stream.
Table 5-1 shows a Backup Selections list that generates four backups:
Solaris example:
1  CACHE=/dev/rdsk/c1t3d0s3
   /dev/rdsk/c1t0d0s6
2  NEW_STREAM
   /dev/rdsk/c1t1d0s1
3  NEW_STREAM
   UNSET CACHE
   CACHE=/dev/rdsk/c1t3d0s4
   /dev/rdsk/c1t2d0s5
   /dev/rdsk/c1t5d0s0
4  NEW_STREAM
   UNSET CACHE
   CACHE=/dev/rdsk/c0t2d0s3
   /dev/rdsk/c1t6d0s1
HP example:
1  CACHE=/dev/cache_group/rvol1c
   /dev/vol_grp/rvol1
2  NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol2c
   /dev/vol_grp/rvol2
3  NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol3c
   /dev/vol_grp/rvol3
   /dev/vol_grp/rvol3a
4  NEW_STREAM
   UNSET CACHE
   CACHE=/dev/cache_group/rvol4c
   /dev/vol_grp/rvol4
The backup streams are issued as follows. The following items correspond in order
to the numbered items in Table 5-1:
1. The first stream is generated automatically and a backup is started for
/dev/rdsk/c1t0d0s6 (Solaris) or /dev/vol_grp/rvol1 (HP). The CACHE=
entry sets the cache partition to /dev/rdsk/c1t3d0s3 (Solaris) or
/dev/cache_group/rvol1c (HP).
8 Click Next.
9 NetBackup automatically displays the most recent backup. To select a different
date range, click Edit.
10 Select Use backup history, and then select the image file. Then click Apply.
11 Select the mount files or folders that you want to restore. Then click Next.
12 Select the restore target options that you want for the recovery target.
13 Select the recovery options that you want for the restore. Then click Next.
The page shows a summary of the options that you selected for the recovery.
14 Click Start recovery.
If you need to restore raw partition backups, follow the procedure below:
Recovery using FlashBackup policy with raw partition backups
1 Follow Step 1 through Step 4 of the previous procedure.
2 Select Raw partition backups from the Restore type drop-down list.
3 Follow Step 6 through Step 10 of the previous procedure.
4 Select the volume files or folders that you want to restore. Then click Next.
Make sure that the root node of the hierarchy is selected.
Before you select the volume files, unmount the file system.
To unmount the volume use the following command:
/opt/VRTS/bin/vxumount -o mntunlock=VCS /fbu or umount /fbu
Note: For raw partition restores, select the special drive that uses the \\.\E:
convention. Even though drive E is also visible, do not select it, because it is not
applicable for raw partition restores.
Chapter 6
Instant Recovery
configuration
This chapter includes the following topics:
■ Modifying the VxVM or FlashSnap resync options for point in time rollback
■ Supports NetBackup clients on Linux and Windows. The primary server can be
on any supported operating system.
■ Uses snapshot technologies to create disk images.
■ Can create a snapshot and a backup to tape or disk, from one policy.
■ Enables random-access (non-sequential) restores of dispersed files from full
backups.
■ Enables block-level restore and file promotion from VxFS_Checkpoint snapshots
(UNIX) and file promotion from NAS_Snapshot. Also enables Fast File Resync
from VxVM and FlashSnap snapshots on Windows.
■ Enables rollback from the backups that were created using the following:
VxFS_Checkpoint, VxVM, FlashSnap, NAS_Snapshot, or disk array methods.
■ Enables rollback from the backups that were created using the VSS method.
■ Can restore to a different path or host.
■ Provides resource management by means of a rotation schedule.
■ Supports Oracle, Microsoft Exchange, DB2, SAP, and SQL-Server databases.
Note: Even if each policy has its own separate snapshot devices, conflicts can
occur when you browse for restore. Among the available snapshots, it may be
difficult to identify the correct snapshot to be restored. It is therefore best to
configure only one policy to protect a given volume when you use the Instant
Recovery feature of NetBackup.
2 Make sure that the media server is listed under Additional Servers, not under
Media Servers.
Note: On UNIX, this procedure places a SERVER = host entry in the bp.conf
file for each host that is listed under Additional Servers. In the bp.conf file,
the media server must not be designated by a MEDIA_SERVER = host entry.
Note: NetBackup Instant Recovery retains the snapshot. The snapshot can be
used for restore even if the client has been restarted.
In this figure, the next Instant Recovery backup overwrites the snapshot that was
made at 12:00 noon.
■ The serial number of the array is specified in the Array Serial # field. Contact
your array administrator to obtain the disk array serial numbers and designators
(unique IDs) for the array.
■ The unique ID of the snapshot resource or source LUN that contains the primary
data is specified in the Source Device field.
■ The maximum number of snapshots to retain is determined by the number of
configured devices in the Snapshot Device(s) field. For example, if you enter
two devices, only two snapshots can be retained. The above example specifies
three devices (0122;0123;0124), so three snapshots can be retained. When the
maximum is reached, the fourth snapshot overwrites the first one.
■ The particular devices to use for the snapshots are those named in the Snapshot
Device(s) field.
■ The order in which the devices are listed in the Snapshot Device(s) field
determines their order of use. Device 0122 is used for the first snapshot, 0123
for the second, and 0124 for the third.
Preconfiguration of the snapshot devices may be required.
See the appropriate topic for your disk array and snapshot method.
Note: For Windows clients using the VSS method on disk arrays that are configured
for clones or mirrors: you must synchronize the clones or mirrors with their source
before you run the backup.
For Instant Recovery backups, it is good practice to set the backup retention level
to infinite. A retention period that is too short can interfere with maintaining a
maximum number of snapshots for restore.
The disk array snapshot methods: See the appropriate topic for your disk array and
snapshot method.
10 To enter the files and folders to be backed up, use the Backup Selections
tab.
■ When backing up Oracle database clients, refer to the NetBackup for Oracle
System Administrator’s Guide for instructions.
■ Snapshot Manager policies do not support the ALL_LOCAL_DRIVES entry
in the policy’s Backup Selections list, except for the policies that are
configured with the VMware method.
Note: If the cache runs out of space, the snapshot may fail.
■ HP_EVA_Snapshot
■ HP_EVA_Vsnap
■ Hitachi_CopyOnWrite
For raw partitions: Cache size = volume size * the number of retained snapshots
For file systems: Cache size = (consumed space * the number of retained snapshots)
+ approximately 2% to 5% of the consumed space in the file system
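As an illustration of the file system formula, with hypothetical numbers: for a file system
with 100 GB of consumed space and three retained snapshots, the cache size is
approximately (100 GB * 3) + 2 GB to 5 GB, or roughly 302 GB to 305 GB. Size the cache
from the actual consumed space and snapshot count in your environment.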
1 For a volume that is associated with a drive letter, enter:
vxassist snapstart X:
where X is the drive letter. This command creates a snapshot mirror of the
designated drive.
2 For a volume that is not associated with a drive letter, enter:
This command shows information for the specified disk group, including the
names of the volumes that are configured for that group.
■ Create the snapshot by entering the following:
Where:
■ Brackets [ ] indicate optional items.
=SNAP_vol1_NBU/cache=NBU_CACHE
Where:
■ Brackets [ ] indicate optional items.
■ make volume specifies the name of the volume snapshot.
The number for nmirror should equal the number for ndcomirror.
Note: For Linux, the init value should be init=active instead of init=none.
For Solaris 10 with Storage Foundation 5.1, the init value should be
init=active instead of init=none.
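The fragment =SNAP_vol1_NBU/cache=NBU_CACHE shown above appears to be the tail
of a VxVM instant snapshot command. The following is a minimal sketch, assuming the
standard vxsnap syntax and the example snapshot and cache object names; substitute
your own disk group, source volume, snapshot, and cache object names:
vxsnap -g disk_group make source=vol1/newvol=SNAP_vol1_NBU/cache=NBU_CACHE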
3 Set the Maximum Snapshots (Instant Recovery only) value on the NetBackup
Snapshot Client Options dialog.
1 Create the following file:
/usr/openv/netbackup/SYNC_PARAMS
2 In the file, enter the numeric values for the options, on one line. The numbers
apply to the options in the bulleted list above, in that order.
For example:
6 3 1000
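For example, a minimal way to create the file with the values shown above (a sketch;
substitute the values appropriate for your environment):
echo "6 3 1000" > /usr/openv/netbackup/SYNC_PARAMS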
Use the Storage > Storage Lifecycle Policies node of the NetBackup
Administration Console. Click Actions > New > Storage Lifecycle Policies.
Click Add.
■ For snapshots, select Snapshot on the New Storage Destination dialog.
You can specify a retention period appropriate for snapshots (such as two
weeks). Click OK.
■ For backup copies to disk, select Backup on the New Storage Destination
dialog. Specify a disk storage unit and a longer retention period (such as
six months). Click OK.
■ For backup copies to tape, select Duplication on the New Storage
Destination dialog. Specify a tape storage unit and a longer retention period
(such as five years). Click OK and finish creating the lifecycle policy.
2 Create a policy for snapshots. (Use the Policies node of the Administration
Console.)
On the policy Attributes tab:
■ You can specify the lifecycle policy in the Policy storage unit / lifecycle
policy field. You can later change the lifecycle policy in the schedule, as
explained later in this procedure.
■ Select Perform snapshot backups.
■ On the Snapshot Options dialog box, the Maximum Snapshots (Instant
Recovery only) parameter sets the maximum number of snapshots to be
retained at one time. When the maximum is reached, the next snapshot
causes the oldest job-complete snapshot to be deleted.
A snapshot is considered to be job complete once all its configured
dependent copies (for example, Backup from Snapshot, Index, Replication)
are complete.
Note that if you also set a snapshot retention period of less than infinity in
the lifecycle policy, the snapshot is expired when either of these settings
takes effect (whichever happens first). For example, if the Maximum
Snapshots value is exceeded before the snapshot retention period that is
specified in the lifecycle policy, the snapshot is deleted.
The same is true for the Snapshot Resources pane on the Snapshot
Options dialog box. If the snapshot method requires snapshot resources,
the maximum number of snapshots is determined by the number of devices
that are specified in the Snapshot Device(s) field. For example, if two
devices are specified, only two snapshots can be retained at a time. Either
the Snapshot Device(s) field or the snapshot retention period in the lifecycle
policy can determine the retention period.
Error 156 can result from several different problems. Some of them are listed below,
with the log to check in parentheses:
■ VxVM fails to get the version of the disk group. Run the appropriate VxVM command
outside of NetBackup to see whether you can get the version information for the
disk group in use. (bpfis log)
■ The device to be backed up by this process is in use by another process. Check
whether any other process is holding the same device. (bpfis log)
■ Policy validation fails for a valid backup selection. If the filer’s volume is mounted
on a Windows client, run the NetBackup Client Service on the client and the alternate
client with valid credentials to access the CIFS share, and check that the filers are
up and that the volume is mounted on the Windows client. (bpfis log)
■ For a Windows client, live browse from the snapshot fails with the following error
message. Make sure that the NetBackup Client Service on the client and the alternate
client is running with valid credentials to access the CIFS share. (bpfis log)
ERROR: permissions denied by client during rcmd.
■ Snapshot backup for a Windows client fails with status 55. Make sure that the
NetBackup Client Service on the client and the alternate client is running with valid
credentials to access the CIFS share. (bpfis log)
■ A live browse or ‘backup from snapshot’ operation for a Windows client fails with
error 43 or status 156. Enable create_ucode and convert_ucode on the primary
volume. (bpfis and NBUAdapter logs)
About nbu_snap
The nbu_snap snapshot method is for Solaris clients only. It is for making
copy-on-write snapshots for UFS or VxFS file systems.
The information in this section applies to either Standard or FlashBackup policy
types.
nbu_snap is not supported in clustered file systems, either as the selected snapshot
method or as the default snapctl driver when you configure FlashBackup in the
earlier manner.
See “Configuring FlashBackup policy for backward compatibility (UNIX only)”
on page 92.
Warning: Choose a cache partition carefully! The cache partition’s contents are
overwritten by the snapshot process.
■ Specify the raw partition as the full path name of either the character special
device file or the block device file. For example:
/dev/rdsk/c2t0d3s3
Or
/dev/dsk/c2t0d3s3
Or
/dev/vx/dsk/diskgroup_1/volume_3
/usr/openv/netbackup/bin/driver/snapon /omo_cat3
/dev/vx/rdsk/zeb/cache
Example output:
■ Id of each snapshot
■ Size of the partition containing the client file system
■ Amount of file system write activity in 512-byte blocks that occurred during
the nbu_snap snapshot (under the cached column).
The more blocks that are cached as a result of user activity, the larger the
cache partition required.
snapcachelist shows each cache device in use and what percentage has
been used (busy). For each cache device that is listed, busy shows the total
space that is used in the cache. This value indicates the size of the raw partition
that may be required for nbu_snap cache.
More details are available on the snap commands.
See “nbu_snap commands” on page 289.
The snap commands can be used in a script.
If the cache partition is not large enough, the backup fails with status code 13,
"file read failed." The /var/adm/messages log may contain errors such as the
following:
5 Using the information that snaplist and snapcachelist provide, you have
several options:
■ Specify a larger (or smaller) partition as cache, depending on the results
from snaplist and snapcachelist.
■ Reschedule backups to a period when less user activity is expected.
■ If multiple backups use the same cache, reduce the number of concurrent
backups by rescheduling some of them.
6 When you are finished with the snapshot, you can remove it by entering the
following:
/usr/openv/netbackup/bin/driver/snapoff snapid
where snapid is the numeric id of the snapshot that was created earlier.
NetBackup policies do not control any snapshots that you create manually with
the snapon command. When snapon is run manually, it creates a copy-on-write
snapshot only. The snapshot remains on the client until it is removed by entering
snapoff or the client is restarted.
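The snap commands that this topic refers to can be combined in a simple script. The
following sketch reuses the example mount point and cache device shown earlier in
this topic; substitute your own paths and the snapshot ID that snaplist reports:
# Create a copy-on-write snapshot of /omo_cat3, using /dev/vx/rdsk/zeb/cache as the cache:
/usr/openv/netbackup/bin/driver/snapon /omo_cat3 /dev/vx/rdsk/zeb/cache
# List the active snapshots and the cache usage, and note the snapshot ID:
/usr/openv/netbackup/bin/driver/snaplist
/usr/openv/netbackup/bin/driver/snapcachelist
# Remove the snapshot when it is no longer needed (replace snapid with the ID from snaplist):
/usr/openv/netbackup/bin/driver/snapoff snapid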
About VxFS_Checkpoint
The VxFS_Checkpoint snapshot method is for making copy-on-write snapshots.
This method is one of several snapshot methods that support Instant Recovery
backups. Note that for VxFS_Checkpoint, the Instant Recovery snapshot is made
on the same disk file system that contains the client’s original data.
For VxFS_Checkpoint, VxFS 3.4 or later with the Storage Checkpoints feature must
be installed on the NetBackup clients. HP requires VxFS 3.5; AIX and Linux require
VxFS 4.0.
Note: On the Red Hat Linux 4 platform, VxFS_Checkpoint snapshot method supports
Storage Foundation 5.0 MP3 RP3 HF9 or later versions.
Note: Off-host backup is not supported for a VxFS 4.0 multi-volume system.
Block-Level restore
If only a small portion of a file system or database changes on a daily basis, full
restores are unnecessary. The VxFS Storage Checkpoint mechanism keeps track
of the data blocks that were modified since the last checkpoint was taken. Block-level
restores take advantage of this feature by restoring only changed blocks, not the
entire file or database. The result is faster restores when you recover large files.
See “About Instant Recovery: block-level restore” on page 233.
About VxFS_Snapshot
The VxFS_Snapshot method is for making copy-on-write snapshots of local Solaris
or HP clients. Off-host backup is not supported with this snapshot method.
Note the following:
■ VxFS_Snapshot supports the FlashBackup policy type only.
■ The VxFS_Snapshot method can only be used to back up a single file system.
If multiple file systems are specified in the policy’s Backup Selections list when
you use this method, the backup fails.
■ In a FlashBackup policy, if the Backup Selections list contains CACHE= entries,
FlashBackup does support the backup of multiple file systems from a single
policy. For each file system, a separate cache must be designated with the
CACHE= entry. Make sure you create a separate policy for each file system.
See “Configuring FlashBackup policy for backward compatibility (UNIX only)”
on page 92.
■ You must designate a raw partition to be used for copy-on-write cache.
Raw partition example:
Solaris: /dev/rdsk/c1t0d0s3
Or
/dev/dsk/c1t0d0s3
HP: /dev/rdsk/c1t0d0
Or
/dev/dsk/c1t0d0
About VxVM
The VxVM snapshot method is for making mirror snapshots with Veritas Volume
Manager 3.1 or later snapshot mirrors. (On Windows, make sure that VxVM has
the latest VxVM service packs and updates.)
Note: On the Red Hat Linux 4 platform, VxVM snapshot method supports Storage
Foundation 5.0 MP3 RP3 HF9 or later versions.
The VxVM snapshot method works for any file system that is mounted on a VxVM
volume. However, before the backup is performed, the data must be configured
with either of the following: a VxVM 3.1 or later snapshot mirror or a VxVM 4.0 or
later cache object. Otherwise, the backup fails.
Note the following:
■ See “Creating a snapshot mirror of the source” on page 133.
Or refer to your Veritas Volume Manager documentation.
■ Help is available for configuring a cache object.
See “About VxVM instant snapshots” on page 134.
Or refer to your Veritas Volume Manager documentation.
■ For Instant Recovery backups of the data that is configured on VxVM volumes
on Windows, the VxVM volume names must be 12 characters or fewer.
Otherwise, the backup fails.
■ VxVM and VxFS_Checkpoint are the only snapshot methods in Snapshot
Manager that support the multi-volume file system (MVS) feature of VxFS 4.0.
■ Since VxVM does not support fast mirror resynchronization on RAID 5 volumes,
VxVM must not be used with VxVM volumes configured as RAID 5. If the VxVM
snapshot method is selected for a RAID 5 volume, the backup fails.
where:
■ disk_group is the Volume Manager disk group to which the volume belongs.
■ volume_name is the name of the volume that is designated at the end of
the source volume path (for example, vol1 in /dev/vx/rdsk/dg/vol1).
■ fmr=on sets the Fast Mirror Resynchronization attribute, which
resynchronizes the mirror with its primary volume. This attribute copies only
the blocks that have changed, rather than performing a full
resynchronization. Fast mirror resynchronization can dramatically reduce
the time that is required to complete the backup.
Fast Mirror Resynchronization (FMR) is a separate product for Veritas
Volume Manager.
3 With the Media Server or Third-Party Copy method, the disks that make up
the disk group must meet certain requirements.
See “Disk requirements for Media Server and Third-Party Copy methods”
on page 226.
About FlashSnap
FlashSnap uses the Persistent FastResync and Disk Group Split and Join features
of Veritas Volume Manager (VxVM).
The FlashSnap snapshot method can be used for alternate client backups only, in
the split mirror configuration.
See “Alternate client backup split mirror examples” on page 28.
FlashSnap supports VxVM full-sized instant snapshots, but not space-optimized
snapshots. Additionally, FlashSnap supports VxVM volumes in a shared disk group.
For supported configurations, refer to the NetBackup 7.x Snapshot Client
Compatibility List.
See the Volume Manager Administrator’s Guide for more information on deporting
disk groups.
The following steps are described in more detail in the Veritas FlashSnap
Point-In-Time Copy Solutions Administrator’s Guide.
To test volumes for FlashSnap on UNIX
1 On the primary host:
■ Add a DCO log to the volume:
■ Move the disks containing the snapshot volume to a separate (split) disk
group:
If the volume has not been properly configured, you may see an error similar
to the following:
■ Re-examine the layout of the disks and the volumes that are assigned to
them, and reassign the unwanted volumes to other disks as needed.
Consult the Veritas FlashSnap Point-In-Time Copy Solutions Administrator’s
Guide for examples of disk groups that can and cannot be split.
■ Deport the split disk group:
3 After this test, you must re-establish the original configuration to what it was
before you tested the volumes.
■ Deport the disk group on the alternate client.
■ Import the disk group on the primary client.
■ Recover and join the original volume group.
See “Identifying and removing a left-over snapshot” on page 270.
To test volumes for FlashSnap on Windows
1 On the primary host:
■ If not already done, create a snapshot mirror:
■ Move the disks containing the snapshot volume to a separate (split) disk
group.
The disk group is also deported after this command completes:
vxassist rescan
■ Import the disk group that was deported from the primary:
About VVR
The VVR snapshot method (for UNIX clients only) relies on the Veritas Volume
Replicator, which is a licensed component of VxVM. The Volume Replicator
maintains a consistent copy of data at a remote site. Volume Replicator is described
in the Veritas Volume Replicator Administrator’s Guide.
The VVR snapshot method can be used for alternate client backups only, in the
data replication configuration.
See “Alternate client backup through data replication example (UNIX only)”
on page 31.
VVR makes use of the VxVM remote replication feature. The backup processing
is done by the alternate client at the replication site, not by the primary host or client.
VVR supports VxVM instant snapshots.
See “About VxVM instant snapshots” on page 134.
3 On the secondary host, receive the IBC message from the primary host:
About NAS_Snapshot
NetBackup can make point-in-time snapshots of data on NAS (NDMP) hosts using
the NDMP V4 snapshot extension. The snapshot is stored on the same device that
contains the NAS client data. From the snapshot, you can restore individual files
or roll back a file system or volume by means of the Instant Recovery feature.
Note: NetBackup for NDMP software is required on the server, and the NAS vendor
must support the NDMP V4 snapshot extension.
You can control snapshot deletion by means of the Maximum Snapshots (Instant
Recovery Only) parameter. This parameter is specified on the Snapshot Options
dialog of the policy.
For detailed information about NAS snapshots, setting up a policy for NAS snapshots,
and the format of NAS snapshot names, see the ‘Network Attached Storage (NAS)
snapshot configuration’ chapter of this guide.
See “Means of controlling snapshots” on page 107.
About VSS
VSS uses the Volume Shadow Copy Service of Microsoft Windows and supports
Instant Recovery. VSS is for local backup or alternate client backup.
For the most up-to-date list of Windows operating systems and disk arrays supported
by this method, see the NetBackup 7.x Snapshot Client Compatibility List document
available on the Veritas support site:
http://www.netbackup.com/compatibility
For alternate client backup, the client data must reside on either a disk array such
as EMC, HP, or Hitachi with snapshot capability, or a Veritas Storage Foundation
for Windows 4.1 or later volume with snapshots enabled. VSS supports file system
backup of a disk partition (such as E:\) and backup of databases.
■ If the data to back up includes Windows system files, that volume cannot be
backed up with the VSS snapshot method.
■ Does not support the backup of Windows system database files (such as RSM
Database and Terminal Services Database).
Chapter 8
Support for Cluster
Volume Manager
Environments (CVM)
This chapter includes the following topics:
■ About enabling the NetBackup client to execute VxVM commands on the CVM
master node
The following snapshot methods support only English locale. They do not support
I18N (internationalization).
■ EMC_CLARiiON_Snapview_Clone
■ EMC_CLARiiON_Snapview_Snapshot
■ EMC_TimeFinder_Clone
■ EMC_TimeFinder_Mirror
■ EMC_TimeFinder_Snap
■ Hitachi_ShadowImage
■ Hitachi_CopyOnWrite
■ HP_EVA_Vsnap
■ HP_EVA_Snapshot
■ HP_EVA_Snapclone
■ HP_XP_BusinessCopy
■ HP_XP_Snapshot
■ IBM_DiskStorage_FlashCopy
■ IBM_StorageManager_FlashCopy
http://www.netbackup.com/compatibility
Note: Some disk array vendors use the term snapshot to refer to a certain kind of
point-in-time copy made by the array. In other chapters of this guide, however,
snapshot refers more generally to all kinds of point-in-time copies, disk-array based
or otherwise. Refer to your array documentation for the definition of array vendor
terminology.
Note: The following array methods support Veritas Volume Manager (VxVM)
volumes: Hitachi_CopyOnWrite and Hitachi_ShadowImage. The
IBM_DiskStorage_FlashCopy method (on the IBM DS6000) supports VxVM on
the AIX platform.
Warning: If you make other changes to the snapshot resources, the NetBackup
catalog may be invalidated. For instance, restores may fail from backups
consisting of the snapshots that have been deleted outside the view of
NetBackup.
HBA configuration
The supported HBAs are Emulex and QLogic. The JNI HBA is not supported.
Note: Persistent target bindings are not needed if you use Leadville drivers on
Solaris.
Note: The sd.conf file does not have to be modified if you use Leadville drivers.
Veritas recommends that you add LUNs 0-15 for all disk array targets on which
snapshots are to be created. This creates 16 host-side LUNs on each target that
can be used for importing the snapshots (clones, mirrors, and copy-on-write
snapshots) required for backups. If 16 host-side LUNs are not enough for a particular
disk array target, add more LUNs for that target. Note that snapshots are imported
to a NetBackup client in sequential order starting with the lowest unused host-side
LUN number. The host-side LUN number pool is managed on the disk array. The
disk array cannot determine which host-side LUN numbers have been configured
in sd.conf. The array can only determine which host-side LUN number it has not
yet assigned to the host. If the array adds a device at a host-side LUN number that
has not been configured in sd.conf, that device is not visible on the host. Also, if
alternate client backups are being used, be sure to properly configure sd.conf on
the alternate client.
You must restart after modifying sd.conf.
Symmetrix arrays pre-assign host-side LUN numbers (that is, the LUN numbers
are not set at the time the device is imported). These pre-selected LUN numbers
must be entered into sd.conf for the Symmetrix target number.
Note: If you use EMC Control Center interface (ECC) to determine Symmetrix
host-side LUN numbers, note that ECC shows host-side LUN numbers in
hexadecimal format. Since the LUN entries in sd.conf must be in decimal format,
convert the hexadecimal value to decimal before adding it to sd.conf.
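For example, a host-side LUN number that ECC displays as 0x43 is decimal 67 and
must be entered in sd.conf as 67.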
If the Symmetrix array was persistently bound at target 5, and the host-side LUN
numbers of the Symmetrix devices are 65, 66, 67, then the following entries should
be added to sd.conf.
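Assuming the standard Solaris sd.conf entry format, the additions for target 5 and
host-side LUNs 65, 66, and 67 would look similar to the following (verify the format
against the existing entries in your /kernel/drv/sd.conf):
name="sd" class="scsi" target=5 lun=65;
name="sd" class="scsi" target=5 lun=66;
name="sd" class="scsi" target=5 lun=67;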
If the line is not present, add it to the modprobe.conf and enter the following:
#mv /boot/initrd-linux_kernel_version.img
/boot/initrd-linux_kernel_version.img.bak
#mkinitrd -v /boot/initrd-linux_kernel_version.img
linux_kernel_version
where the linux_kernel_version is the value that is returned from uname -r (for
example, 2.6.9-34.ELsmp).
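For example, if uname -r returns 2.6.9-34.ELsmp (as in the example above), the
substituted commands would be:
#mv /boot/initrd-2.6.9-34.ELsmp.img /boot/initrd-2.6.9-34.ELsmp.img.bak
#mkinitrd -v /boot/initrd-2.6.9-34.ELsmp.img 2.6.9-34.ELsmp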
/usr/openv/netbackup/bin/nbfirescan
■ Windows
This command queries the host’s SCSI bus for all the SCSI (or Fibre) attached
devices that are visible.
Note the following regarding CLARiiON:
■ If there are LUNs in the client’s CLARiiON storage group, the LUNs are included
in the output.
■ If there are no LUNs visible but the array is zoned to allow the host to see it, the
output includes the entry DGC LUNZ. This entry is a special LUN that the
CLARiiON uses for communication between the client and the array. The LUNZ
entry is replaced by another disk entry as soon as one is put in the storage group
which has been presented to the client.
Example Solaris output, followed by a description:
DevicePath Represents the actual access point for the device as it exists on
the client host.
Ctl,Bus,Tgt,Lun Controller, bus, target, and LUN numbers are the elements that
designate a particular physical or virtual disk from the perspective
of the client host computer.
Note: For backup of a disk array using the Windows VSS snapshot method with
Instant Recovery, be sure to configure NetBackup disk array credentials (if required
by the array) before you run the backup. A Point in Time Rollback fails if NetBackup
did not have credentials to access the array during the backup.
Symmetrix: You must associate the source device in the array with the target
device(s) that are to be used for the differential (copy-on-write) or plex-based
(clone or mirror) backup.
Note: For Symmetrix arrays, NetBackup supports VSS with
differential (copy-on-write) backup but not plex-based (clone
or mirror) backup.
EMC TimeFinder Snap: See “Creating EMC disk groups for VSS differential snapshots
that use EMC TimeFinder Snap” on page 160.
2 To display information on all existing snapshots on the client, enter the following
command:
vshadow.exe -q
Example output:
vshadow.exe -da
If the SRC <=> TGT value reads CopyOnWrite, the snapshot was created
successfully.
The figure for this topic shows a UNIX client that runs NetBackup Snapshot Client and
uses the Navisphere Secure CLI to create snapshots and to restore data on the EMC
CLARiiON array. The Navisphere Agent registers the host with the array, which runs
the FLARE operating system.
■ On Windows:
If the command fails, you must address the problem before you do any further
array configuration.
This problem could be due to the following:
Note: On AIX or some UNIX hosts, snapshot creation can fail for the
EMC_CLARiiON array if the Navisphere Secure CLI location entries are
incorrect in the /usr/openv/lib/vxfi/configfiles/emcclariionfi.conf
file.
For example, on the AIX host naviseccli is found at the following location
/usr/lpp/NAVICLI/naviseccli. Verify the correct naviseccli path and
add the following file path and name entries to the
/usr/openv/lib/vxfi/configfiles/emcclariionfi.conf file.
■ "FILEPATH_NAVISEC_EXE"="filepath"
■ "FILENAME_NAVISEC_EXE"="filename"
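For example, for the AIX location shown above, the entries would look similar to the
following. (This is a sketch; the [CLI_TOOL_INFO] section name follows the format that
is shown for this file elsewhere in this chapter, and the path must match where
naviseccli is actually installed on your host.)
[CLI_TOOL_INFO]
"FILEPATH_NAVISEC_EXE"="/usr/lpp/NAVICLI"
"FILENAME_NAVISEC_EXE"="naviseccli"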
Note: You must also enter credentials by means of the Disk Array Hosts dialog
box in the NetBackup Administration Console. The disk array host name is not
provided in the Navisphere security file.
Warning: Veritas strongly recommends that every NetBackup client be given its
own CLARiiON storage group on the array. Data corruption could result if more
than one client (host) exists in a single storage group. If it is necessary to have
multiple hosts in a single storage group, you must make certain that only one host
in the storage group is actually using the device at any given time. (Only one host
should mount the disk.) A Windows host may actually write to a LUN masked device
even if the device is not mounted. Therefore, a Windows host should always be in
its own storage group.
Step 1: The array administrator creates clone private LUNs. See “Creating a clone
private LUN with the EMC Navisphere Web interface” on page 170.
Step 2: The array administrator creates a clone group and selects a LUN as source.
See “Creating a clone group and selecting a LUN as source” on page 170.
Step 3: The array administrator adds clone LUNs to the clone group. See “Adding
clone LUNs to the clone group” on page 171.
Step 4: The array administrator supplies source and target devices. See “Obtaining
the device identifier for each source and clone LUN” on page 173.
Note: For Windows clients and the VSS method, you must synchronize the clone
with its source.
Note: These steps are separate from those taken by NetBackup to create the
backup. When the backup begins, NetBackup synchronizes the clones with the
source (if necessary) and splits (fractures) the clones to make them available for
the backup.
For more information on the EMC array terminology in this section, see your EMC
CLARiiON documentation.
Creating a clone private LUN with the EMC Navisphere Web interface
You must configure a clone private LUN for each CLARiiON storage processor that
owns a clone source LUN. Clone private LUNs store the portions of the client’s data
that incoming write requests change while the clone is in use. Clone private LUNs
are used while a clone LUN is fractured and when a synchronization occurs.
A clone private LUN can be any bound LUN that is at least 250,000 blocks in size.
To create a clone private LUN with the EMC Navisphere Web interface
1 Right-click the array name.
2 Right-click the Snapview node and select Clone Feature Properties.
3 Choose the LUNs you want to label as Clone Private LUNs.
Choose a clone private LUN for each storage processor that contains clone
source LUNs. (You must know which storage processor owns a given LUN.)
Only one clone private LUN is required per storage processor. You can add
more clone private LUNs later if more space is needed.
3 When you click Apply, Navisphere begins to copy data from the source LUN
to the LUN you have selected, creating a clone LUN.
Any previous data on the clone LUN is lost.
Obtaining the device identifier for each source and clone LUN
The NetBackup policy requires entry of the array’s Unique ID. If your array
administrator provided LUN numbers for the devices, you must convert those LUN
numbers into Unique IDs for entry in the NetBackup policy Snapshot Resources
pane. You can obtain the LUN Unique IDs in either of two ways, as follows.
To obtain the device identifier for each source and clone LUN
1 Enter the following command on the NetBackup client:
2 Note the exact UID string that this command returns. This UID is the unique
ID of the LUN.
For example, to obtain the unique ID of LUN 67, enter:
Example output:
UID: 60:06:01:60:C8:26:12:00:4F:AE:30:13:C4:11:DB:11
3 To obtain the number of the LUN to use on the naviseccli command, find the
clone group and examine the LUN list.
4 Copy the unique ID into the NetBackup policy, as follows:
■ If the LUN specified on the naviseccli command is the source LUN for the
clone group, copy the unique ID into the Source Device field of the Add
Snapshot Resource dialog box of the NetBackup policy. Help is available
for that dialog box.
See “Configuring a policy using EMC_CLARiiON_Snapview_Clone method”
on page 176.
■ If the LUN specified on the naviseccli command is a clone LUN, copy the
unique ID into the Snapshot Device(s) field.
■ LUNs in the reserved LUN pool are private LUNs, which cannot belong to a
storage group. The storage processor manages its reserved LUN pool and
automatically assigns one or more private LUNs to a source LUN. This
assignment is based on how much snapshot activity takes place in the source
LUN. This activity can result from one busy snapshot or multiple snapshots.
■ While the snapshot is active, client write activity on the source consumes more
space in the reserved LUN pool. Adding more LUNs to the reserved LUN pool
increases the size of the reserved LUN pool. The storage processor automatically
uses a LUN if one is needed.
■ All snapshots share the reserved LUN pool. If two snapshots are active on two
different source LUNs, the reserved LUN pool must contain at least two private
LUNs. If both snapshots are of the same source LUN, the snapshots share the
same private LUN (or LUNs) in the reserved LUN pool.
EMC_CLARiiON_Snapview_Snapshot method
In the Snapshot Options dialog box of the policy, you can set the Maximum
snapshots (Instant Recovery only) parameter for the
EMC_CLARiiON_Snapview_Snapshot method. The maximum value is 8.
See “Maximum Snapshots parameter” on page 108.
EMC Solutions Enabler (on NetBackup clients): For versions used in test configurations,
see Veritas NetBackup Snapshot Client Configuration, at:
http://www.veritas.com/docs/000081320
Symmetrix Solutions Enabler license (on NetBackup clients): For versions used in test
configurations, see Veritas NetBackup Snapshot Client Configuration, at:
http://www.veritas.com/docs/000081320
HKEY_LOCAL_MACHINE\SOFTWARE\EMC\ShadowCopy
Registry details:
■ Name: EnforceStrictBCVPolicy
■ Type: REG_SZ
Possible values include:
■ TRUE: Indicates that EMC VSS Provider enforces a strict BCV rotation policy,
where a BCV should only be used if it is not currently part of a snapshot.
■ FALSE: Indicates that EMC VSS Provider does not enforce a BCV rotation
policy, leaving enforcement to the VSS requestor.
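As one way to set this value from an elevated command prompt (an illustrative sketch;
the key path and value name are taken from the description above, and you can also
use the Registry Editor instead):
reg add "HKLM\SOFTWARE\EMC\ShadowCopy" /v EnforceStrictBCVPolicy /t REG_SZ /d TRUE /f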
A VCMDB is a virtual LUN database that keeps track of which LUNs the client
can see. A gatekeeper is a small disk that the DMX uses to pass commands
between the client and the array.
Example output:
Symmetrix ID : 000292603831
Device Masking Status : Success
Symmetrix ID : 000492600276
Device Masking Status : Success
0050 0060
Make sure the temp_file name matches the temp_file name you used above.
4 In the output, look for Synchronized under the State column. When the pair
enters the synchronized state, it is ready to be used for backup.
To verify that the clone is complete before doing a point in time rollback
1 Create a temporary file that contains only the source and target device IDs
separated by a space.
For example, if the source device ID is 0050 and the target (clone) device ID
is 0060, the temporary file should contain the following:
0050 0060
2 Check the status of the clone with the symclone command. For example:
3 In the output, look for Copied under the State column. When the clone pair is
in the copied state, it is ready for point-in-time rollback.
■ For EMC_TimeFinder_Clone, the target devices are the STD devices that
were allocated to be used as clones.
11 Enter source and target device IDs exactly as they appear on the Symmetrix.
For example, if device 4c appears as 004C, then enter it as 004C (case does
not matter). The symdev show command can be used to determine how a
device ID appears on Symmetrix. Refer to your SymCLI documentation for
more information on this command.
For Instant Recovery backups, the Snapshot Device(s) entries determine where
and in what order the snapshots are retained.
See “Snapshot Resources pane” on page 107.
SSSU for HP StorageWorks Command View EVA (CLI), on NetBackup clients: For versions
used in test configurations, see Veritas NetBackup Snapshot Client Configuration, at:
http://www.veritas.com/docs/000081320
HP StorageWorks Command View EVA Web interface, on the HP Command View EVA
server: For versions used in test configurations, see Veritas NetBackup Snapshot Client
Configuration, at:
http://www.veritas.com/docs/000081320
Contact Hewlett Packard Enterprise for the required software and versions. HP
supplies this software as a bundle, to ensure that the software components are at
the right level and function correctly.
Note that the open support policy for VSS providers is not applicable to Instant
Recovery. To use VSS along with the NetBackup Instant Recovery feature, refer
to the NetBackup 7.x Snapshot Client Compatibility List for the components that
NetBackup supports for Instant Recovery with the array. The compatibility list is
available at the following URL:
http://www.netbackup.com/compatibility
/opt/hp/sssu/sssu_sunos
Example output:
3 Verify that you can see the EVA arrays that are managed by the host:
NoSystemSelected> ls cell
Example output:
The cause of the error message is the CLI path, which is different from the default
CLI path.
To fix the policy validation, add the following entry into the hpevafi.conf file:
[CLI_TOOL_INFO]
"FILEPATH"="/opt/hp/sssu"
"FILENAME"="sssu_hpux_parisc"
After you manually add these inputs to the hpevafi.conf file, the validation is
successful.
HP EVA restrictions
Note the following array configuration restrictions. In essence, you cannot use two
or more EVA snapshot methods for a given source disk.
Table 9-6
Array Restrictions
Table 9-7 Software that is required for IBM DS6000 and DS8000
2 Repeat step 1 for each NetBackup client or alternate client that uses the array.
3 Create a volume group and associate the volume group with the NetBackup
host you have defined on the array. For details, refer to your IBM
documentation.
4 Create logical volumes (or logical drives) for the volume group. This step makes
the volumes or drives visible to the NetBackup client. For details, refer to your
IBM documentation.
Example:
2 Find the volumes presented to this volume group and to the host:
Example:
/usr/openv/netbackup/bin/nbfirescan
To use the IBM Storage Manager web interface to obtain the device identifiers
1 In the Storage Manager, click Real-time manager > Manage hardware >
Host systems.
2 Click the host for which you need to find the volumes presented.
The volume groups that are associated with the host are displayed.
3 Click the volume group to get the list of the logical volumes that are configured
in this volume group.
The Number column indicates the LUN ID.
7 Enter the unique ID for the source LUN in the Source Device field.
8 Enter the unique IDs for the clone LUNs in the Snapshot Device(s) field. To
enter multiple IDs, place a semicolon between them.
Note the following:
■ The clone LUNs must be unmasked to the client (or alternate client) before
you start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
Table 9-8 New snapshot method for the IBM DS4000 disk array
■ Install the disk array and its software, including appropriate licenses. See your
array documentation. See “IBM 4000 software requirements” on page 197.
■ Zone the client HBAs through the Fibre Channel switch, so the array is visible to
the primary client and to any alternate clients. See your Fibre Channel
documentation.
■ Install NetBackup, and array vendor snapshot management software, on the
NetBackup primary client and any alternate clients. See the appropriate installation
documentation.
■ Create and configure the Access Logical Drive for the host connection at the array.
Configure logical drives on the array and make them visible to the host. See your
array documentation.
/opt/IBM_DS4000/
/opt/IBM_DS4000/
/usr/openv/netbackup/bin/nbfirescan
This command queries the host’s SCSI bus for all the SCSI (or Fibre) attached
devices that are visible.
Example output from an AIX host, for Hitachi and IBM arrays, followed by a
description:
Ctl,Bus,Tgt,LUN Controller, bus, target, and LUN numbers are the elements
that designate a particular physical or virtual disk from the
perspective of the client host computer.
2 Repeat step 1 for each NetBackup client or alternate client that uses the array.
3 For every client and host group added, map an Access Logical Drive on LUN
number 7 or 31.
4 Create logical drives and map them to the host group. This step makes the
logical drives visible to the NetBackup client. For details, refer to your IBM
documentation.
Repository % of Base (100 for Instant Recovery): Determines the size of the IBM
repository logical drive as a percentage of the primary device (base logical drive). The
size can range from 1% to 100%. The more write activity that occurs on the primary
drive, the more space the repository logical drive requires.
If the size of the primary is 500 GB and you set this parameter to 30%, the repository
drive is set to 150 GB (30% of 500).
For more details about the repository logical drive, refer to the IBM System Storage
DS4000 Series and Storage Manager document.
/usr/lib/RMLIB/bin/whatrmver
/usr/lib/RMLIB/bin/whatrmver
Example output:
Model :RAID-Manager/LIB/Solaris
Ver&Rev:01-12-03/04
Pair status must be PSUS: After creating volume pairs, you must split each pair and
leave the status of the pair at PSUS.
/usr/openv/netbackup/bin/nbfirescan
Example output:
Obtaining the Hitachi array serial number and the unique device
identifiers
The NetBackup policy requires the Hitachi array's serial number and the unique
IDs (device identifiers) for the source and clone LUNs. Use the following procedure
to obtain that information.
To obtain the Hitachi array serial number and the unique device identifiers
Enter the following command:
/usr/openv/netbackup/bin/nbfirescan
Example output:
In the output, the Enclosure ID is the array’s serial number and the Device ID is the
unique identifier of the device.
Note: The term "clone LUNs," as used in this procedure, refers to the
Hitachi_ShadowImage method. For the Hitachi_CopyOnWrite method, the term
"clone LUNs" can be replaced with "snapshot LUNs."
See “Obtaining the Hitachi array serial number and the unique device identifiers”
on page 203.
5 In the Add Snapshot Resource dialog box, enter the array's serial number in
the Array Serial # field.
6 Enter the unique ID for the source LUN in the Source Device field.
The ID must be entered without leading zeroes. For example, if the LUN ID is
0110, enter 110 in the Source Device field.
7 Enter the unique IDs for the clone LUNs (for Hitachi_ShadowImage method)
or the snapshot LUNs (for Hitachi_CopyOnWrite) in the Snapshot Device(s)
field. To enter multiple IDs, place a semicolon between them.
The ID must be without leading zeroes. For example, if the LUN ID is 0111,
enter 111 in the Snapshot Device(s) field.
Note the following:
■ The LUNs must be unmasked to the client (or alternate client) before you
start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
/usr/lib/RMLIB/bin/whatrmver
Configure command devices on the NetBackup client and alternate client: The HP-XP
command devices must be visible to the NetBackup client as well as to any alternate
client. To configure command devices, refer to your HP-XP documentation.
/usr/openv/netbackup/bin/nbfirescan
/usr/openv/netbackup/bin/nbfirescan
Note: The term "clone LUNs," as used in this procedure, refers to the
HP_XP_BusinessCopy method. For the HP_XP_Snapshot method, the term "clone
LUNs" can be replaced with "snapshot LUNs."
The ID must be without leading zeroes. For example, if the LUN ID is 0111,
enter 111 in the Snapshot Device(s) field.
Note the following:
■ The LUNs must be unmasked to the client (or alternate client) before you
start a backup.
■ For Instant Recovery backups, the Snapshot Device(s) entries determine
where and in what order the snapshots are retained.
In this example, an HP-EVA snapshot was not found on the backup host. The
/kernel/drv/sd.conf file probably has insufficient lun= entries. Add lun= entries
for the HP-EVA target in sd.conf and restart the system. More information is
available about LUN entries in sd.conf.
See “About Solaris sd.conf file” on page 156.
Issue: Backups fail and the following message appears in the bpfis log:
emcclariionfi: WARNING: Unable to import any login credentials for any appliances.
Explanation/Recommended action: Credentials must be added for the CLARiiON array
by means of the NetBackup Administration Console.
See “Configuring NetBackup to access the CLARiiON array” on page 167.
Issue: Backups fail and one or both of the following messages appear in the bpfis log:
emcclariionfi: The host hostname was not found in any storage groups. To import a
snapshot to host hostname, hostname must be in a Clariion storage group.
emcclariionfi: LUN masking failed. Could not find a storage group containing the
hostname [hostname].
Explanation/Recommended action: NetBackup searches the CLARiiON’s storage groups
for the import host. (For a local backup, the import host is the host where the source
device is mounted. For an off-host backup, the import host is the alternate client.)
When the host is found, the snapshot device is assigned to that storage group, thus
making it visible to the import host where the backup can proceed. If the import host
is not in any storage groups, the backup fails.
Table 9-13 Issues with NetBackup and EMC CLARiiON arrays (continued)
Issue: Backups fail and the following message appears in the bpfis log:
emcclariionfi: No more available HLU numbers in storage group. LUN LUN number
cannot be LUN masked at this time
Explanation/Recommended action: The device cannot be imported to the host, because
the maximum number of devices from the array is already imported to this host. Expire
any unneeded backup images.
Issue: EMC_CLARiiON_Snapview_Clone backups fail and the following message appears
in the bpfis log:
emcclariionfi: Could not find LUN LUN number in clonegroup clonegroup name
Explanation/Recommended action: The clone target device does not exist in the clone
group belonging to the source device. Either correct the target list in the policy or use
Navisphere to add the target device to the source device’s clone group.
Issue: Both types of CLARiiON backups fail with the following in the bpfis log:
emcclariionfi: CLIDATA: Error: snapview command failed
emcclariionfi: CLIDATA: This version of Core Software does not support Snapview
Explanation/Recommended action: These messages appear when the Snapview software
is not installed on the CLARiiON array. Snapview must be installed on the array before
CLARiiON clone or snapshot backups can succeed. See the array documentation or
contact EMC for more information.
Issue: Backups fail and the following message appears in the bpfis log:
execNAVISECCLI: CLI Command [CLI command] failed with error [error number]
Explanation/Recommended action: NetBackup uses naviseccli to send commands to
the CLARiiON array. If naviseccli encounters an error, it is captured and placed in the
bpfis log. The lines immediately following the above line should contain the output
from naviseccli that indicates why the command failed.
Issue: After a point-in-time rollback from a Windows VSS backup that was made with
the EMC CLARiiON Snapview Clone snapshot provider, all clones are fractured (split
from the primary).
Explanation/Recommended action: As a best practice, avoid performing a point-in-time
rollback from a Windows VSS backup that was made with the EMC CLARiiON Snapview
Clone snapshot provider if one of the clones that is configured for the policy has not
been used for an Instant Recovery backup. After a rollback, all the clones are placed
in a “fractured” state. (Fractured clones are no longer synchronized with the primary.)
As a result, any clone that had not already been used for a backup is no longer available
for a future Instant Recovery backup.
Issue: Policy validation fails for a Standard policy with the following message:
Incorrect snapshot method configuration or snapshot method not compatible for
protecting backup selection entries.
Explanation/Recommended action: Policy validation for a Standard policy that was
created with the EMC_CLARiiON_Snapview_Snapshot method fails with error 4201.
The policy validation fails when the CLI is installed at a location where NetBackup
fails to identify it. The CLI must be installed in /sbin/naviseccli. If the CLI is installed
at another location, NetBackup fails to identify that location and policy validation fails.
To fix the policy validation, add the following entry to the
/usr/openv/lib/vxfi/configfiles/emcclariionfi.conf file:
[CLI_TOOL_INFO]
"FILEPATH_NAVISEC_EXE"="/opt/Navisphere/bin"
"FILENAME_NAVISEC_EXE"="naviseccli"
Issue: Point in time rollback fails and the following message appears in the bpfis log.
Explanation/Recommended action: See “Verifying that the clone is complete before
doing a point in time rollback” on page 182.
Table 9-14 Issues with NetBackup and EMC Symmetrix arrays (continued)
Issue: If all Save Device space is consumed on the Symmetrix, a backup with
EMC_TimeFinder_Snap or EMC_TimeFinder_Clone fails with the following error in the
bpfis log:
An internal Snap or Clone error has occurred. Please see the symapi log file
Explanation/Recommended action: Check the symapi log (often found at /var/symapi/log
on UNIX) to determine the exact error. If the log indicates there is no Save Device space,
add Save Devices to the Save Device pool on your Symmetrix array.
Issue: EMC_TimeFinder_Mirror backups fail and the following message appears in the
bpfis log:
emcsymfi: Invalid STD-BCV pair state
Explanation/Recommended action: This message indicates that the STD-BCV pair is
not in a state that allows the mirror to be created. Verify that the pair was fully
synchronized before the backup attempt.
See “Fully synchronizing STD/BCV mirror pairs” on page 181.
Issue: Backups fail with the following warning message in the bpfis log:
WARNING: No credentials found for HP HSV
Explanation/Recommended action: Credentials must be added for the EVA array by
means of the NetBackup Administration Console.
Issue: The snapshot job fails when the client has VxVM software installed, but the
underlying disk in the Snapshot Manager backup is not configured on the stack. The
following error message is displayed:
Explanation/Recommended action: Uninstall the VxVM software from the client.
Table 9-16 Issues encountered with Snapshot (NetBackup status code 156)
Issue: The snapshot device (clone) is not visible (unmasked) to the NetBackup client
or alternate client.
Recommended action: Make the clone device visible to the NetBackup client or alternate
client before you retry the backup. Contact IBM technical support or refer to your IBM
array documentation.
Issue: The snapshot device (clone) is also a source device in another device pair. The
following message may appear in the
/usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Reconfigure source and clone devices so that the clone required
for this backup is not a source device for another clone. Contact IBM technical support
or refer to your IBM array documentation.
Issue: The snapshot device (clone) and source device are not of equal size. The following
message may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date
log:
Recommended action: Reconfigure source and clone devices to be identical in size.
Contact IBM technical support or refer to your IBM array documentation.
Issue: The IBM FlashCopy license is not installed. The following message may appear
in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Install the FlashCopy license on the storage subsystem. Contact
IBM technical support or refer to your IBM array documentation.
Issue: The FlashCopy relationship is not recording enabled. The following message
may appear in the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
CMUN03027E resyncflash: FlashCopy operation failure: action prohibited by current
FlashCopy state. Contact IBM technical support for assistance
Recommended action:
■ Make sure a FlashCopy relationship exists for the device pair.
■ If the FlashCopy relationship is not recording enabled, remove the FlashCopy
relationship and then re-run the backup.
Issue: A FlashCopy relationship does not exist. The following message may appear in
the /usr/openv/netbackup/logs/bpfis/ibmtsfi.log.date log:
Recommended action: Verify that a FlashCopy pair does not exist, and then re-execute
the backup.
Table 9-17 Explanations and recommended actions for status code 156
The array does not FlashCopy logical drives are created under the same logical array as is the base or primary
have enough free logical drive. The storage subsystem might have free space, but if the logical array has
space. insufficient space, the FlashCopy operation fails.
Mon Mar 31 2008 14:25:23.036588 <Pid - 1065104 / Thread id - 1> FlashCopy could not be
created. command [create FlashCopyLogicalDrive baseLogicalDrive="drive-claix11-1"
userLabel="drive-claix11-1_flcp";]. Mon Mar 31 2008 14:25:23.037164 <Pid - 1065104 /
Thread id - 1> OUTPUT=[Unable to create logical drive "drive-claix11-1_flcp" using the Create
FlashCopy Logical Drive command at line 1. Error - The operation cannot complete because
there is not enough space on the array. The command at line 1 that caused the error is:
create FlashCopyLogicalDrive baseLogicalDrive="drive-claix11-1"
userLabel="drive-claix11-1_flcp";
Recommended action: Make sure that the array has enough space available for the snapshot. Delete any FlashCopies that NetBackup did not create.
Issue: The Access Logical Drive is not mapped for the NetBackup client or alternate client at LUN 31 or 7.
Explanation: On the IBM DS4000, the Access Logical Drive communicates with the storage subsystem. Any client that is connected to and needs to communicate with the storage subsystem should have an Access Logical Drive mapped to it. If an Access Logical Drive is not mapped to the client, the client is unable to communicate with the array. As a result, any NetBackup client operation involving the array fails.
Recommended action: Create and map an Access Logical Drive. Contact IBM technical support or refer to your IBM array documentation.
Issue: The DAR driver is not functional.
Recommended action: Make sure that the RDAC package is installed on the AIX host.
Issue: Look for the following error in the /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log:
Library RMLIB init failed
Explanation/Recommended action: Make sure that the RMLIB 64-bit library is installed. This requirement applies when you upgrade from a 6.5.x system (requires 32-bit RMLIB) to a 7.1 system, and when you install a fresh 7.1 system.
Issue: The Hitachi command device is not unmasked. See the sample log messages in the next row.
Explanation/Recommended action: Refer to Hitachi documentation for creating and unmasking command devices.
Issue: The Hitachi command device is unmasked but is not visible to the client, or the enclosure ID specified in the policy’s Snapshot Resources is invalid.
Explanation/Recommended action: Make sure that the command device is recognized by the operating system and that the enclosure ID is entered correctly in the policy's Snapshot Resources.
Issue: A mismatch exists between the policy’s snapshot method and the type of LUNs specified for the Snapshot Devices. For example, if you select the Hitachi_ShadowImage method but specify snapshot LUNs instead of clone LUNs for the Snapshot Devices, an error occurs.
Explanation/Recommended action: Specify the correct snapshot method or snapshot devices.
Issue: A disk pair was not created for the source device and snapshot device specified in the NetBackup policy’s Snapshot Resources. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log may contain messages similar to the following.
Explanation/Recommended action: Set up a disk pair (primary and secondary) for the source device and snapshot device that are specified in the policy’s Snapshot Resources. Refer to the Hitachi documentation.
Issue: In the policy’s Snapshot Resources, the device identifier for the source device or snapshot device is invalid. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log may contain messages similar to the following:
Fri Mar 21 2008 16:26:49.173893 <Pid - 9477 / Thread id - 1> getrminfo failed. Fri Mar 21 2008 16:26:49.173893 <Pid - 9477 / Thread id - 1> operation failed with error number <> with message <msg>'.
Recommended action: Make sure that the identifiers are correctly entered in the policy’s Snapshot Resources. Specify source and snapshot IDs without leading zeros.
See “Configuring a NetBackup policy for Hitachi_ShadowImage or Hitachi_CopyOnWrite” on page 204.
Issue: The RAID Manager library libsvrrm.so software is not installed in the /usr/lib/ directory.
Recommended action: Install the RAID Manager package in /usr/lib/. See the Hitachi documentation.
Issue: The installed version of RAID Manager library libsvrrm.so is not supported. The /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log contains the following message:
Recommended action: Look for the Library RMLIB version message in the /usr/openv/netbackup/logs/bpfis/hitachi.log.<date> log.
Issue: The default array controller of the source device is not the same as the controller of the snapshot device. Use the Storage Navigator interface to verify.
Recommended action: Make sure that the clone (or snapshot) device has the same default controller as the source device. See the Hitachi documentation.
Chapter 10
Notes on Media Server and Third-Party Copy methods
This chapter includes the following topics:
■ The disk must be able to return its SCSI serial number in response to a
serial-number inquiry (serialization). Or, the disk must support SCSI Inquiry
Page Code 83.
Solaris: /dev/rdsk/c1t3d0s3
HP: /dev/rdsk/c1t0d0
■ Restoring a large number of files in a clustered file system (VxFS on UNIX Only)
Automatic backup: The most convenient way to back up client data is to configure a policy and then set up schedules for automatic, unattended backups. To use NetBackup Snapshot Manager, you must enable snapshot backup as described in the appropriate configuration chapter of this guide. To add new schedules or change existing schedules for automatic backups, you can follow the guidelines in the NetBackup Administrator’s Guide, Volume I.
User-directed backup and archive: From a NetBackup client, the user can execute a Snapshot Manager backup. The NetBackup administrator must configure an appropriate snapshot policy with a schedule.
Note: In the Backup, Archive, and Restore interface, set the policy type to
FlashBackup for UNIX clients and FlashBackup-Windows for Windows clients.
■ An entire raw partition can be restored from a full backup only. FlashBackup
incremental backups only support individual file restores.
■ Ensure that the device file for the raw partition exists before the restore.
■ The overwrite option must be selected for raw partition restores. The device file
must exist and the disk partition is overwritten.
■ To restore a very large number of files (when individual file restore would take
too long), you can do a raw-partition restore. Redirect the restore to another
raw partition of the same size and then copy individual files to the original file
system.
File promotion (for VxFS_Checkpoint or NAS_Snapshot snapshots): See “About Instant Recovery: file promotion” on page 233.
Fast File Resync for Windows (for VxVM and FlashSnap snapshots): See “About Instant Recovery: Fast File Resync (Windows clients only)” on page 235.
Rollback (for VxFS_Checkpoint, VxVM, VSS, FlashSnap, or NAS_Snapshot snapshots, OST_FIM, and the disk array methods): See “About Instant Recovery: point in time rollback” on page 236.
To activate block-level restore, create the following file on the client:
/usr/openv/netbackup/PFI_BLI_RESTORE
After this file is created, all subsequent restores of the client’s data use
block-level restore.
To deactivate block-level restore
Delete (or rename) the PFI_BLI_RESTORE file.
When block-level restore is activated, it is used for all files in the restore.
Block-level restore may not be appropriate for all of the files. It may take longer
to restore a large number of small files, because they must first be mapped.
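For example, a minimal sketch from a shell on the client (this assumes, per the description above, that only the file's presence matters):
touch /usr/openv/netbackup/PFI_BLI_RESTORE    # activate block-level restore
rm /usr/openv/netbackup/PFI_BLI_RESTORE       # deactivate block-level restore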
Notes on rollback
Note the following.
Warning: Rollback deletes all files that were created after the creation-date of the
snapshot that you restore. Rollback returns a file system or volume to a given point
in time. Any data changes or snapshots that were made after that time are lost.
Also, if there are multiple logical volumes on a single disk or volume group and you perform a Point in Time Rollback of a specific logical volume, the entire disk or volume group is restored to that point in time.
■ Rollback can be done only from the backups that were enabled for Instant Recovery and made with one of the following methods: VxFS_Checkpoint, VxVM, FlashSnap, NAS_Snapshot, or the disk array methods.
Rollback can also be done from the backups that were enabled for Instant Recovery and made with the VSO snapshot method.
■ If the backup was made with the EMC_TimeFinder_Clone method and the clone
is not fully created, a rollback cannot succeed.
To verify that the clone is complete before you do a rollback:
See “Verifying that the clone is complete before doing a point in time rollback”
on page 182.
■ For the backups that were made with the VxFS_Checkpoint method, rollback
requires the VxFS File System 4.0 or later and Disk Layout 6. For
NAS_Snapshot, the file system requirements depend on the NAS vendor.
■ Rollback deletes any VxFS_Checkpoint snapshots or NAS_Snapshot snapshots
(and their catalog information) that were created after the creation-date of the
snapshot that you restore.
Rollback deletes any snapshots (and their catalog information) that were created
after the creation-date of the snapshot that you restore.
■ If the primary file system is mounted and the snapshot resides on a disk array,
the rollback attempts to unmount the file system. Any I/O on the primary device
is forcibly stopped if the unmount succeeds. To be safe, make sure that no I/O
occurs on the primary device before a rollback.
If the attempt to unmount the primary file system fails, the rollback does not
succeed. You should halt I/O on the device and retry the rollback. For example,
if a terminal session has accessed the file system through the cd command,
change to a directory outside the file system and retry the rollback.
■ Rollback is available only when you restore the file system or volume to the
original location on the client.
■ When a file system rollback starts, NetBackup verifies that the primary file system
has no files that were created after the snapshot was made. Otherwise, the
rollback aborts.
■ Rollback from OST_FIM type snapshot can be done from copy one only.
Rollback from VSO_FIM type snapshot can be done from copy one only.
■ For the rollback from OST_FIM type snapshot, refer to the NetBackup Replication
Director Solutions Guide.
See “About Instant Recovery: point in time rollback” on page 236.
/usr/openv/netbackup/bin/jbpSA &
You can select root level or mount points (file systems or volumes), but not
folders or files at a lower level.
6 In the Directory Structure list, click the check box next to the root node or a
mount point beneath root.
You can select a file system or volume, but not lower-level components.
7 Click the Restore option.
If the volume that you have selected belongs to a storage array consistency group or volume set, you can see a list of volumes that are restored along with the selected volume, because they belong to the same array volume set or consistency group. In the dialog box that opens:
■ To recover all the listed volumes, click OK.
■ To cancel and select another restore method for single volume recovery, click Cancel.
For more details, refer to NetBackup status code 4311 in the NetBackup Status Codes Reference Guide.
The only available destination option is Restore everything to its original
location.
8 For file systems, you can choose to skip file verification by placing a check in
the Skip verification and force rollback option.
Warning: Click Skip verification and force rollback only if you are sure that
you want to replace all the files in the original location with the snapshot.
Rollback deletes all files that were created after the creation-date of the
snapshot that you restore.
If the checks do not pass, the rollback aborts and the Task Progress tab states
that rollback could not be performed because file verification failed.
9 For volumes that belong to a consistency group or volume set, select the Force rollback even if it destroys the consistency group’s state on the storage array option to ignore any mismatch in the consistency group configuration on the storage arrays during the restore. NetBackup completes the restore with warning messages.
Note: Select this option only when it is safe to roll back all the devices in the concerned storage array consistency group.
The above figure shows a simple application server where the application data (/mnt1 and /mnt2) resides on five storage array devices (D1, D2, D3, D4, and D5), which are part of a storage array consistency group. Consider that your NetBackup policy is configured to use the VSO snapshot method and that the backup selection leads to /mnt1 and /mnt2. The snapshot is taken at the consistency group level. So, at the time of a PIT rollback restore, if you choose only /mnt1 for such a restore, you get a warning stating partial selection. This is because the snapshot that is referred to for the restore describes a group snapshot consisting of devices D1, D2, D3, D4, and D5, where /mnt1 resides on D1, D2, and D3 only; reverting at the consistency group level affects all devices in this group.
The above figure shows a simple application server where the application data (/mnt1 and /mnt2) resides on three devices (D1, D2, D3), which are part of a storage array consistency group. The consistency group is comprised of four devices (D1, D2, D3, and D4). Consider that your NetBackup policy is configured to use the VSO snapshot method and that the backup selection leads to /mnt1 and /mnt2. The snapshot is taken at the consistency group level. So, at the time of a PIT rollback restore, if you select both /mnt1 and /mnt2 for such a restore, the restore fails due to a mismatch of the devices: the group consists of four devices, whereas only three devices are requested for the restore. Reverting at the consistency group level affects all devices in this group.
You can select root level or mount points (file systems or volumes), but not
folders or files at a lower level.
5 In the All Folders pane, click the check box next to the root node or a mount
point beneath root.
You can select a file system or volume, but not lower-level components.
Warning: Click Skip verification and force rollback only if you are sure that
you want to replace all the files in the original location with the snapshot.
Rollback deletes all files that were created after the creation-date of the
snapshot that you restore.
If the exclude list is changed after the backup occurred, NetBackup honors the
latest version of the exclude list during the restore. Any of the files that are listed
in the current exclude list are not restored. Also, as noted in the previous item,
the exclude list on the alternate client takes precedence over the exclude list
on the primary client.
For example: If the current version of the exclude list has the entry *.jpg, and
some .jpg files were included in the backup, the .jpg files can be selected for
the restore but are not in fact restored. To restore the files, you must change
the exclude list on the primary (or alternate) client.
Note: For ordinary backups (not based on snapshots), any files that were
included in the exclude list are not backed up. For snapshot-based backups,
however, all files are included in the snapshot. The exclude list is consulted only
when a storage unit backup is created from the snapshot. If the snapshot is
retained after the backup (for the Instant Recovery feature) and the snapshot
is available at the time of the restore, NetBackup restores files from the snapshot.
Since all files are available in the snapshot (including those that would be
excluded from a storage unit backup), NetBackup incorrectly consults the current
exclude list on the client or alternate client. Any files in the exclude list are
skipped during the restore.
[Figure: restore over the LAN. Components: a media server with SCSI-attached storage and a client with SCSI-attached disks, connected over the LAN / WAN.]
The following table describes the phases that are illustrated in the diagram.
Phase 2: Media server sends the data to the client over the LAN.
Phase 3: Client restores the data to disk (disk can be locally attached or on SAN).
About restoring over the SAN to a host acting as both client server
and media server
This type of restore requires the FORCE_RESTORE_MEDIA_SERVER option in
the server’s bp.conf file.
See the NetBackup Administrator’s Guide, Volume I, for details on the
FORCE_RESTORE_MEDIA_SERVER option.
[Figure: restore over the SAN. Components: a client/media server and a media server, connected over the LAN / WAN; the client/media server accesses storage over the SAN.]
The following table describes the phases that are illustrated in the diagram.
Phase 1: Client/media server reads data from tape over the SAN.
Phase 2: Client restores the data to disk (disk can be locally attached or on SAN).
Note: Unless the backup was made with the Instant Recovery feature, you cannot
restore from a snapshot by means of the Backup, Archive, and Restore interface.
You must perform the restore manually at the command line.
/usr/openv/netbackup/bin/bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
2 For each snapshot identifier, enter bpfis query again, specifying the snapshot
ID:
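For example, a hedged sketch (the -id option is an assumption here; check the bpfis man page for the exact syntax):
/usr/openv/netbackup/bin/bpfis query -id snapshot_id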
This returns the path of the original file system (snapshot source) and the path
of the snapshot file system. For example:
/tmp/_vrts_frzn_img_26808/mnt/ufscon
OPTIONS:ALT_PATH_PREFIX=/tmp/_vrts_frzn_img_26808,FITYPE=MIRROR,
MNTPOINT=/mnt/ufscon,FSTYPE=ufs
INF - EXIT STATUS 0: the requested operation was successfully
completed
In this example, the primary file system is /mnt/ufscon and the snapshot file
system is /tmp/_vrts_frzn_img_26808/mnt/ufscon.
3 Copy the files from the mounted snapshot file system to the original file system.
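For example, a minimal sketch using the paths from the previous step (the file name under the mount point is illustrative):
cp -p /tmp/_vrts_frzn_img_26808/mnt/ufscon/somefile /mnt/ufscon/somefile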
umount original_file_system
umount snapshot_image_file_system
vxdg list
SPLIT-primaryhost_diskgroup
If vxdg list does not show the disk group, the group might have been
deported. You can discover all the disk groups, including deported ones,
by entering:
The disk groups in parentheses are not imported on the local system.
■ Deport the VxVM disk group:
3 Import and join the VxVM disk group on the primary (original) client:
4 Start the volume and snap back the snapshot volume as follows, using the -o
resyncfromreplica option:
To restore the entire secondary disk if the snapshot was made on an EMC,
Hitachi, or HP disk array
WITH CAUTION, you can use hardware-level restore to restore the entire
mirror or secondary disk to the primary disk.
If the disk is shared by more than one file system or VxVM volume, there may
be unintended results. Read the following:
/usr/openv/netbackup/bin/bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
2 For each snapshot identifier, enter bpfis query again, specifying the snapshot
ID:
This returns the path of the original file system (snapshot source) and the GUID
(Global Universal Identifier) representing the snapshot volume. For example:
In this example the snapshot file system is H:\ and the GUID is
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\.
3 To restore individual files from the snapshot volume:
■ Mount the GUID to an empty NTFS directory:
mountvol C:\Temp\Mount
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\
■ Copy the file to be restored from the temporary snapshot mount point (in this example, C:\Temp\Mount) to the primary volume.
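When the copy is complete, a sketch of removing the temporary mount point (mountvol /D removes the drive path only; it does not delete the snapshot volume):
mountvol C:\Temp\Mount /D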
vxdg list
SPLIT-primaryhost_diskgroup
2 Import and join the VxVM disk group on the primary (original) client:
vxassist rescan
vxdg -g split_diskgroup import
vxdg -g split_diskgroup -n diskgroup join
Task: Configure the Snapshot Manager server in NetBackup.
Description: You can configure the Snapshot Manager server as a snapshot management server. To configure the Snapshot Manager server in NetBackup, you need to add the credentials of the Snapshot Manager server.
Task: Configure the Snapshot Manager plug-ins in NetBackup.
Description: The Snapshot Manager plug-ins that are installed on the Snapshot Manager server must be registered and configured in NetBackup with the associated Snapshot Manager server.
Task: Configure a Standard policy to use the VSO snapshot method.
Description: See “Configuring a Snapshot Manager policy” on page 50.
Task: Backup and restore procedures.
Description: See “About performing a backup” on page 229.
Considerations
Consider the following when integrating NetBackup with Snapshot Manager:
■ Deletion of Snapshot Manager host entry and its associated plug-ins is not
supported in NetBackup.
Using Snapshot Manager, if you delete plug-ins that are configured in NetBackup,
the images of snapshots that are taken using Snapshot Manager will be
unusable.
■ Backup from snapshot and index from snapshot are supported. Replication operations are not supported.
■ For backup from snapshot and index from snapshot, you can use any available storage unit.
■ Post-integration, all the related operations must be performed from NetBackup. Results of operations that are performed outside NetBackup are not visible in NetBackup. Snapshot Manager has its own RBAC, where NetBackup is one of the users, and only the operations that are performed through NetBackup are visible in NetBackup. For example, even though you can add a Snapshot Manager plug-in from Snapshot Manager, you must add the plug-in from NetBackup, or the plug-in may not be visible in NetBackup.
■ HP-UX Native volume group Version 2.0 and later is not supported.
■ Raw device and raw partitions are not supported.
■ File system verification is not supported. You must skip the verification step and
perform a force rollback. In the Restore Marked Files dialog box, select the
Skip verification and force rollback check box.
■ Consider all the Snapshot Manager limitations.
Refer to the Veritas Snapshot Manager Installation and Upgrade Guide.
■ NetBackup integration is not supported with the Snapshot Manager freemium
version.
■ It is recommended not to reinstall the Snapshot Manager server but to upgrade
it. However, if you reinstall the Snapshot Manager server you need to reconfigure
the Snapshot Manager server and the associated Snapshot Manager plug-ins.
■ Perform the following tasks for RHEL:
Recent RHEL versions changed how the partition delimiter is used: based on the last character in the device path, the OS decides whether a partition delimiter must be added. To address the change, update the udev rule to add the partition delimiter 'p' on RHEL:
■ Open the /lib/udev/rules.d/62-multipath.rules file.
■ Update the existing line:
To
■ For devices in LVM, if multiple paths are used, ensure that multipathing service
is enabled.
■ On SUSE Linux, ensure that the automount service is disabled.
■ A junction mount point on the filer or array is not supported for snapshot operations with the VSO FIM snapshot method.
■ For client-side nested mount points, rollback of parent mount points is not
supported.
■ The HP 3PAR array is loaded and multiple WS API operations are running.
Workaround
Retry snapshot creation after some time.
Workaround
When you encounter this error, run the lsscsi command to verify if the device
selected in the backup selection is available on the client.
If the device is not listed, verify the connectivity issue between the array and host
or get in touch with your system administrator.
■ Backup from snapshot parent job fails with error 4213: Snapshot import failed
■ Snapshot job fails and the snapshot command does not recognize the volume
name
■ Snapshot creation fails when the same volume is mounted on multiple mount
points of the same host
■ Policy validation fails if the specified CIFS share path contains a forward slash
■ An NDMP snapshot policy for wildcard backup fails with error 4201
For explanations of NetBackup job status codes, refer to the NetBackup Status Codes Reference Guide.
To create the log folders, run the following command:
install_path\NetBackup\logs\mklogdir.bat
The default location of the logs is:
C:\Program Files\Veritas\NetBackup\logs
Since a different path can be set during installation, the paths that are listed in this topic are shown as install_path\NetBackup\logs.
Note: To create detailed logs, set the Global logging level to a high value, in the
Logging dialog, under both Master Server Properties and Client Properties.
The log folders can eventually require a lot of disk space. Delete them when you
are finished troubleshooting and set the logging level on primary and client to a
lower value.
Note: If you have run the NetBackup mklogdir command, the VxMS log
directory already exists.
Note: If the VxMS log location is changed, the Logging Assistant does not
collect the logs.
Note: If you have run the NetBackup mklogdir.bat command, the VxMS log
directory already exists.
Note: You can use NTFS compression on VxMS log folders to compress the
log size. The new logs are written in compressed form only.
Note: If the VxMS log location is changed, the Logging Assistant does not
collect the logs.
Note: Logging levels higher than 5 cannot be set in the Logging Assistant.
Note: Logging levels higher than 5 should be used in very unusual cases only. At
that level, the log files and metadata dumps may place significant demands on disk
space and host performance.
Level Description
0 No logging.
1 Error logging.
4 Same as level 3.
5 Highly verbose (includes level 1) + auxiliary evidence files (.mmf, .dump, VDDK
logs, .xml, .rvpmem).
You can set the logging level for the VDDK messages.
■ For NetBackup Media Server or Third-Party Copy Device, a disk must support
serialization or support SCSI Inquiry Page Code 83.
■ For Third-Party Copy Device or NetBackup Media Server, a particular storage
unit or group of storage units must be specified for the policy. Do not choose
Any_available. Configuration instructions are available:
See “Configuring a Snapshot Manager policy” on page 50.
■ The storage_unit_name portion of a mover.conf.storage_unit_name file name
must exactly match the storage unit name (such as nut-4mm-robot-tl4-0) that
you have defined for the policy.
Help is available for creating a mover.conf.storage_unit_name file. See the
NetBackup Snapshot Client Configuration document:
http://www.veritas.com/docs/000081320
Similarly, the policy_name portion of a mover.conf.policy_name file name
must match the name of the policy that the third-party copy device is associated
with.
■ For the legacy disk array methods (TimeFinder, ShadowImage, or
BusinessCopy), the client data must reside in a device group. The data must
be on the primary disk and be synchronized with a mirror disk. Assistance from
the disk array vendor may also be required.
For information on disk configuration requirements for the legacy array methods,
see the NetBackup Snapshot Client Configuration document:
http://www.veritas.com/docs/000081320
■ If the Keep snapshot after backup option is changed from yes to no, the last
snapshot that is created for that policy must be deleted manually before the
backup is rerun. Use the bpfis command to delete the snapshot. Refer to the
man page for bpfis.
■ During a third-party copy device backup, if tape performance is slow, increase
the buffer size. To do so, create one of the following files on the media server:
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.policy_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.storage_unit_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC
For third-party copy backup, the size of the data buffer is 65536 bytes (64K),
by default. To increase it, put a larger integer in the SIZE_DATA_BUFFERS_TPC
file. For a buffer size of 96K, put 98304 in the file (see the sketch after this list). If the value is not an exact multiple of 1024, the value that is read from the file is rounded up to a multiple of 1024.
The file name with no extension (SIZE_DATA_BUFFERS_TPC) applies as a
default to all third-party copy backups, if neither of the other file-name types
exists. A SIZE_DATA_BUFFERS_TPC file with the .policy_name extension
applies to backups that the named policy runs. The .storage_unit_name extension
applies to backups that use the named storage unit. If more than one of these
files applies to a given backup, the buffers value is selected in this order:
SIZE_DATA_BUFFERS_TPC.policy_name
SIZE_DATA_BUFFERS_TPC.storage_unit_name
SIZE_DATA_BUFFERS_TPC
As soon as one of these files is located, its value is used. A .policy_name file
that matches the name of the executed policy overrides the value in both the
.storage_unit_name file and the file with no extension. The .storage_unit_name
file overrides the value in the file with no extension.
You can set the maximum buffer size that a particular third-party copy device
can support.
A third-party copy device is not used if it cannot handle the buffer size that is
set for the backup.
■ Replication Director is a feature that makes use of an OpenStorage application
to manage snapshot replication. The snapshots are stored on the storage
systems of partnering companies.
Note: Replication Director uses the NetApp DataFabric Manager server for data
movement and not the media server as in most cases.
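As a sketch of the buffer-size tuning described earlier in this list, the following sets a 96K data buffer for all third-party copy backups on the media server (the value and file name are taken from the discussion above):
echo 98304 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC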
you have tried to install the Snapshot Manager software before you install the base
NetBackup software.
/usr/openv/netbackup/bin/driver/snaplist
2 For each snapshot that is listed, run the following to make sure a bpbkar
process is associated with it:
/usr/openv/netbackup/bin/driver/snapoff snapn
not have VSS writer involvement, the data in the file may not be consistent. This
file should not be used for restore from either tape or snapshot.
A proper way to protect this file is to have a file backup with Shadow Copy
Component file list directive specified. In this case, a VSS snapshot with VSS writer
is taken.
/usr/openv/netbackup/bin/bpfis query
This command returns the IDs (FIS IDs) of all current snapshots. For example:
If bpfis removed the snapshot, you can skip the rest of this procedure.
3 Solaris, HP, AIX, Linux: if bpfis could not remove the snapshot, enter the
following (on the client or alternate client) when no backups are running:
df -k
This command displays all mounted file systems, including any snapshots of
a mounted file system.
If a snapshot backup is currently running, the snapshot should not be deleted.
NetBackup deletes it when the backup completes.
Here are two snapshots from a df -k listing:
/tmp/_vrts_frzn_img__filesystemname_pid
4 Solaris, HP, AIX, Linux: unmount the unneeded snapshot file systems (on the
client or alternate client, depending on the type of backup).
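For example, using the mount-point format shown above (the exact path comes from the df -k output on your system):
umount /tmp/_vrts_frzn_img__filesystemname_pid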
The next step depends on the type of snapshot.
5 For nbu_snap (Solaris only):
■ Enter the following to display leftover snaps:
/usr/openv/netbackup/bin/driver/snaplist
6 For VxVM (Solaris, HP, AIX, Linux) and VVR (Solaris and HP):
Do the following on the client for VxVM, and on the alternate client for VVR:
■ Enter the following to display unsynchronized mirror disks:
vxprint -g diskgroup
Note: file_system is the mount point of the primary file system that was
backed up, NOT the snapshot file system that was unmounted in a previous
step.
For example, if the snapshot file system that was unmounted is the following:
/tmp/_vrts_frzn_img__vm2_1765
the original file system, which should be specified on the fsckptadm list
command, is the following:
/vm2
Example entry:
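(A sketch of the entry, assuming that fsckptadm list takes the primary mount point identified above:)
fsckptadm list /vm2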
Output:
/vm2
NBU+2004.04.02.10h53m22s:
ctime = Fri Apr 02 10:53:23 2004
mtime = Fri Apr 02 10:53:23 2004
flags = removable
For example:
■ For more detail on removing VxFS clones, refer to the recommended actions
for NetBackup status code 156 in the NetBackup Troubleshooting Guide.
where LdevName is the logical device name of the standard device. For
Hitachi and HP arrays (ShadowImage, BusinessCopy):
vxdg list
SPLIT-primaryhost_diskgroup
If vxdg list does not show the disk group, the group might have been
deported. You can discover all the disk groups, including deported ones,
by entering:
The disk groups in parentheses are not imported on the local system.
■ Deport the VxVM disk group:
■ On the primary (original) client, import and join the VxVM disk group:
■ On the primary (original) client, start the volume and snap back the snapshot
volume:
Example:
In this example, chime is the primary client and rico is the alternate client. lhddg is the name of the original disk group on chime. chime_lhddg is the split group that was imported on rico and must be rejoined to the original group on the primary chime.
On alternate client rico, enter:
vxdg list
SPLIT-primaryhost_diskgroup
■ On the primary (original) client, import and join the VxVM disk group:
vxassist rescan
vxdg -g split_diskgroup import
vxdg -g split_diskgroup -n diskgroup join
In this case, you must use the bpdgclone command with the -c option to remove
the clone. Then resynchronize the mirror disk with the primary disk.
The following commands should be run on the client or alternate client, depending
on the type of backup.
vxdg list
NAME STATE ID
rootdg enabled 983299491.1025.turnip
VolMgr enabled 995995264.8366.turnip
wil_test_clone enabled 1010532924.21462.turnip
wil_test enabled 983815798.1417.turnip
In this example, the name suffix indicates wil_test_clone was created for a
snapshot backup that was configured with an array-specific snapshot method.
If a backup failed with log entries similar to those in this example, the clone
must be manually deleted.
2 To remove the clone, enter the following:
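A hedged sketch of the command (the option letters are an assumption; confirm them in the bpdgclone man page):
/usr/openv/netbackup/bin/bpdgclone -g wil_test -v vol01 -c wil_test_clone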
where wil_test is the name of the disk group, vol01 is the name of the VxVM
volume, and wil_test_clone is the name of the clone. Use the Volume
Manager vxprint command to display volume names and other volume
information.
For more information, refer to the bpdgclone man page.
For assistance with vxprint and other Volume Manager commands, refer to
the Veritas Volume Manager Administrator’s Guide.
3 To verify that the clone has been removed, re-enter vxdg list.
Sample output:
NAME STATE ID
rootdg enabled 983299491.1025.turnip
VolMgr enabled 995995264.8366.turnip
wil_test enabled 983815798.1417.turnip
user 0m0.047s the CPU cycle time for the command in user mode.
sys 0m0.024s the CPU cycle time for the command in kernel mode.
Note: If the total time is greater than 155 seconds, the snapshot fails with error 4220.
The backup of NFS share mounted by two different mount points for OST_FIM is
not supported in this release.
The VxFS_Snapshot method can be used to back up a single file system only. If multiple file systems are backed up using the same policy, the backup fails. Make sure that you create a separate policy for each file system.
Note: Using the client and the alternate client on the same host is not supported.
Veritas recommends that you use the backslash when you specify the backup selection. For example, \\NASFiler1\dataShare1 and C:\backup\testdir are valid paths.
The administrators of the HP-UX 11iv3 host machines should ignore the log
messages if they encounter them during backups with NetBackup.
nbu_snap commands
The following commands relate to the nbu_snap snapshot method.
snapon command
snapon starts an nbu_snap snapshot (copy-on-write).
Execute this command as root:
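(A sketch of the syntax, inferred from Example 2 below:)
/usr/openv/netbackup/bin/driver/snapon snapshot_source cache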
where:
■ snapshot_source is the partition on which the client’s file system (the file system
to be "snapped") is mounted
■ cache is the raw partition to be used as copy-on-write cache.
Example 1:
Example 2:
/usr/openv/netbackup/bin/driver/snapon /dev/vx/rdsk/omo/tcp1
/dev/vx/rdsk/omo/sncache
The snapshot is created on disk, and remains active until it is removed with the
snapoff command or the system is restarted.
snaplist command
snaplist shows the amount of client write activity that occurred during an nbu_snap
snapshot. Information is displayed for all snapshots that are currently active.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snaplist
ident A unique numeric identifier of the snapshot. ident is the pid of the
process that created the snapshot.
size The size of the client’s snapshot source in 512-byte blocks. The
snapshot source is the partition on which the client’s file system (the
file system being backed up) is mounted.
Note: size is not a reliable guide to the size of the cache that is
needed for the snapshot. The user write activity during the snapshot is
what determines the size of the cache needed. See the cached column
of this output.
cached The number of 512-byte blocks in the client file system that were
changed by user activity while the snapshot was active. Before being
changed, these blocks were copied to the cache partition. The more
blocks that are cached as a result of user activity, the larger the cache
partition required. However, additional overhead—which is not shown
in this cached value—is required in the cache. To see the total space
that is used in a particular cache partition, use the snapcachelist
command.
minblk In the partition on which the file system is mounted, minblk shows the lowest numbered block that is monitored for write activity while the snapshot is active. Only FlashBackup policies use minblk.
device The raw partition containing the client’s file system data to back up
(snapshot source).
cache The raw partition used as cache by the copy-on-write snapshot process.
Make sure that this partition is large enough to store all the blocks likely
to be changed by user activity during the backup.
snapcachelist command
snapcachelist displays information about all partitions currently in use as nbu_snap
caches. This command shows the extent to which the caches are full.
Note: snaplist and snapcachelist can also be used to monitor an nbu_snap snapshot
that a NetBackup policy started. Note that once the backup completes, NetBackup
removes the snapshot. As a result, the snaplist and snapcachelist commands no
longer return any output.
/usr/openv/netbackup/bin/driver/snapcachelist
Description of output:
busy The number of 512-byte blocks in the client data that changed while
the snapshot was active. Before being changed, these blocks were
copied to the cache partition by the nbu_snap copy-on-write process.
For each cache device that is listed, busy shows the total space that
was used in the cache.
You can use this value as a sizing guide when setting up raw partitions
for nbu_snap cache. When a cache is full, any additional change to the
client data causes the copy-on-write to fail: the snapshot is no longer
readable or writable. Reads or writes to the client data continue (that
is, user activity is unaffected). The failed snapshot, however, is not
terminated automatically and must be terminated using snapoff.
snapstat command
snapstat displays diagnostic information about the snap driver.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapstat
snapoff command
snapoff terminates an nbu_snap snapshot.
Execute this command as root:
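(A sketch of the syntax, based on the snapoff usage shown earlier in this guide; the snap IDs are the ident values that snaplist reports:)
/usr/openv/netbackup/bin/driver/snapoff snap1 ... snapn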
snapshot 1 disabled
snapshot 2 disabled
...
snapshot n disabled
Warning: Do not terminate a snapshot while the backup is active: corruption of the
backup image may result.
Appendix B
Overview of snapshot operations
This appendix includes the following topics:
Step Action
Note: Steps 1, 2, and 6 in Table B-1 apply only to databases, such as those requiring NetBackup for Oracle.
2 Finish transactions.
3 Quiesce acknowledge.
4 Quiesce stack; in quiesce mode, trigger the snapshot.
5 Unquiesce.
6 Out of quiesce mode. Continue processing.
source block that is about to change for the first time. Then it copies the block’s
current data to cache, and records the location and identity of the cached blocks.
Then the intercepted writes are allowed to take place in the source blocks.
Figure B-2 shows the copy-on-write process.
[Figure B-2: The copy-on-write process. Panels: Phase 1, source data blocks s0-s10; Phase 2, writes delayed; Phase 3, cache blocks c0-c4; Phase 5, source data blocks s0-s10.]
The following table lists the phases that have been depicted in the diagram:
Phase 2: New write requests to s4, s7, s8 are held by the copy-on-write process (see arrows).
Phase 3: The copy-on-write process writes the contents of blocks s4, s7, and s8 to cache. These blocks are written to cache only once, no matter how many times they change in the source during the snapshot.
The immediate results of the copy-on-write are the following: a cached copy of the
source blocks that were about to change (phase 3), and a record of where those
cached blocks are stored (phase 4).
The copy-on-write does not produce a copy of the source. It creates cached copies
of the blocks that have changed and a record of their location. The backup process
refers to the source data or cached data as directed by the copy-on-write process.
Figure B-3 shows the process for backing up a copy-on-write snapshot.
[Figure B-3: Backing up a copy-on-write snapshot. Panels alternate between the source blocks (s0-s10) and the cache blocks (c0-c3), illustrating where the backup reads from the source and where it reads from the cache.]
The following table lists the phases that have been depicted in the diagram:
Phase 4: At s7 and s8, copy-on-write tells the backup to read c1 and c2 instead of s7 and s8.
Symbols B
156 status code 266 Backup
3pc.conf file 43 Archive
Restore 237
A backup
agent 19, 38
abnormal termination 270, 289
automatic 229
access time not updated 76
local 24, 33
activate block-level restore 233
logs 260, 262
Active Directory 87, 142
manual 230
and FlashBackup 87
off-host
actual device file name 91–92
configuration 53, 55
Administrator account and NAS_Snapshot 82
prerequisites 226
AIX
raw partition 54, 227
and VxFS 44
retention period
media servers, restrictions 38
NAS_Snapshot 140
ALL_LOCAL_DRIVES entry 36, 53, 111, 227
scripts 69
Allow multiple data streams 75
techniques (overview) 21
alternate client
types supported 91
defined 38
user-directed 230
Alternate client backup
backup agent 55
configuring 71
backup retention level 108
alternate client backup 19, 26, 56, 90
Backup Selections list 53
and FlashSnap 135
ALL_LOCAL_DRIVES entry 36, 53, 111, 227
and split mirror 28
and Instant Recovery 111
and VVR 138
block vs. character device 227
introduction 27
directives 95
requirements 71
FlashBackup 92
restrictions 71
symbolic link 75
testing setup 135, 139, 141
BLIB 50
Any_available storage unit 57
block device file (vs character) 227
APP_NBU_VVR 139
block level incremental backup 50
arbitrated loop 39
block-level restore 232–233
archive bit
how to activate 233
incremental backup 71
restriction 233
archives 230
bp.conf file 260
auto option (Provider Type for VSS) 66
bpbkar
auto snapshot selection 57, 60, 72, 109
log 260
automatic backup 229
process 269
bpbrm log 260–261
bpdgclone command 277
K mirror (continued)
Keep snapshot after backup 64, 270 fast resynch 134
restoring from image 246–247 overview 22
kernel messages 261 preparing 114
rotation 106
VxVM snapshot 41, 133
L mklogdir script 260
left over snapshot 270, 289 mklogdir.bat 261
Limit jobs per policy 75 modprobe.conf file (Linux) 157
limitations 36 mover.conf file 43
links (in Backup Selections list) 75 and AIX 38
Linux multi-volume system (VxFS) 36, 130, 133
and VxFS 44 Multiple Copies (Vault) 38
Local Host multiple data streams 75
network configuration for 33 configuring 75
local host backup method multiplexing 38, 227
network configuration for 24 MVS 36, 130, 133
local system account 82
lock file system 297
logging N
directories to create 259 NAS
VxMS 263 off-host backup 26
logical volume (as raw partition) 126 NAS filer
logs 259–262 as backup agent 56
creating for UNIX 260 NAS_Snapshot 21, 110–111, 140, 232
creating for Windows 261 access for NetBackup 82
loop (Fibre Channel) 39 backup retention period 140
ls command 41 licensing 81
LVM 87 logging info 260, 262
name 85
notes
M requirements 81
manual backup 230 NAS_Snapshot method 60
mapping naviseccli 166, 173
defined 40 Navisphere 164
Maximum jobs per client 76 nbfirescan 157
Maximum multiplexing per drive 76 NBU_CACHE 115
maximum pathname length 53, 74 nbu_snap method 61, 125, 288
Maximum Snapshots (Instant Recovery only) 65, 108, with VxVM shared disk group 126
117, 140 NDMP 21
Media multiplexing 75 access web info 43
media server (see NetBackup Media Server) 55–56 licensing 81
messages file 261 NDMP host
method as backup agent 56
selecting off-host backup 53, 55 NDMP protocol version 82
selecting snapshot 59 NDMP snapshot 140
mirror 22 ndmp unified log (VxUL) 260, 262
access time 76 NDMP V4 110
compared to copy-on-write 24 NetBackup Client Service 82
defined 40
NetBackup Media Server 26, 40 PFI_BLI_RESTORE file (for block-level restore) 233
and storage units 57, 227 physical device (as raw partition) 126
network diagram of 34 platform requirements 35
selecting 55–56 platforms supported 44
NetBackup Replication Director 16 plex option (Snapshot Attribute for VSS) 67
Network Attached Storage 26 point-in-time snapshots 21
Network Attached Storage (data mover) 56 policy
network interface cards 266 for NAS snapshot 83
NEW_STREAM directive 95 how to select type of 51
NIC cards and full duplex 266 storage unit 57
no-data Storage Checkpoint 102 Policy dialog 51, 89
policy_name (on mover.conf file) 267
O primary vs. alternate client 27
promotion
off-host backup 53, 55
file 20, 233, 235
and multiplexing 227
provider 18, 266
NAS 26
Provider Type (for VSS) 66
overview 25
prerequisites for 226
raw partition 227 Q
type of disk (SCSI vs. IDE) 266 query snapshot 248, 251, 270
with data mover 43 quiesce 295, 298
Open File Backup
disabling 77 R
license 43
RAID 5 133
operating system
raw partition 92, 95
patches 35
as snapshot source 75
Oracle 36
backup 54
OST_FIM method 61
block vs. character device 227
overview of snapshot operations 294
defined 40
overwriting
not supported with VxFS_Checkpoint 130
raw partition restores 231
restore 230
fsck needed after vxfs restore 231
P specifying for cache 126
page code 83 38, 227, 267 recovery procedure 270, 289
pairresync command 273 Registry 87, 142
pairsplit (Hitachi) 202 and FlashBackup 87
pairsplit (HP-XP) 206 remote snapshot (see alternate client backup) 27
parameters for snapshots 63 removing
partitions clone 275
Windows 87 snapshots 270, 289
patch for VxFS 36 replicated host 32
patches 35, 266 replication
pathname length 53, 74 for alternate client backup 138
Perform block level incremental backups 50 testing setup for alternate client backup 139
Perform snapshot backups 288 Replication Director 26, 35, 61, 268
performance requirements for NetBackup 35
increasing tape 267 restore 230
peripherals (latest info on web) 43 and fsck 231
V VxVM (continued)
VCMDB (Volume Configuration Management mirror 133
Database) 179 preparing for Instant Recovery 114
vendors (latest info on) 43 provider
VERBOSE setting for logs 260 with VSS 66
Veritas Federated Mapping Services 42 required version 60, 71, 132, 151
Veritas Volume Manager 62, 133 shared disk group 126, 135
Veritas Volume Manager cluster. See CVM Volume Manager 36, 133
Veritas Volume Replication 111 volume name restriction 102, 114, 133
virtual machine proxy 55 volume sets 102
VMware 55, 61 vxvm method 62, 110, 132–133, 235
ALL_LOCAL_DRIVES 53, 111 vxvol 115, 133, 136
volume
defined 42 W
sets (VxVM) 102 web access to recent Snapshot Manager info 43
vshadow.exe 161 whatrmver 201
VSS wildcards in Backup Selections list 54
disk array credentials and rollback 159 Windows
VSS method 61, 140 open file backup 43
VVR 36, 111 OS partitions 87
VVR method 32, 62, 139 System database files 88, 142
preparing for alternate client backup 138 Windows Shadow Copy Service 141
vxassist 115–117, 133, 136–137, 271–272
vxassist snapstart 114
vxdg command 114, 136, 138, 249, 252
vxdg list command 277
vxdisk command 249
VxFS clone
removing 272
VxFS file system 35, 61, 87, 125
and AIX
Linux 44
patch for library routines 36
restoring 231
VxFS multi-volume file system 36, 130, 133
VxFS_Checkpoint method 62, 87, 110, 130, 233–234
VxFS_Snapshot method 62, 132
vxibc command 139
vxmake 116
VxMS 40, 42
VxMS logging 263
vxprint 115–116, 271
vxprint command 272
vxrecover command 249
vxsnap 117
VxVM
and RAID 5 133
clone of disk group 275
instant snapshots 62, 116, 134