Isilon OneFS: Backup and Recovery Guide
OneFS
Version 7.2.1
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other
countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America 1-866-464-7381)
www.EMC.com
Configure a library
Create a data media pool
Label tape devices
Create a metadata media pool
Create a client
Configuring NDMP backup with Symantec NetBackup
Add an NDMP host
Configure storage devices
Create a volume pool
Inventory a robot
Create a NetBackup policy
Configuring NDMP backup with CommVault Simpana
Add a NAS client
Add an NDMP library
Create a storage policy
Assign a storage policy and schedule to a client
Configuring NDMP backup with IBM Tivoli Storage Manager
Initialize an IBM Tivoli Storage Manager server for an Isilon cluster
Configure an IBM Tivoli Storage Manager server for an Isilon cluster
Help with online support: For questions specific to EMC Online Support registration or
access, email [email protected].
tape devices. If you back up a SnapshotIQ snapshot, OneFS does not create another
snapshot for the backup.
Note
To prevent permissions errors, make sure that ACL policy settings are the same across
source and target clusters.
You can create two types of replication policies: synchronization policies and copy
policies. A synchronization policy maintains an exact replica of the source directory on
the target cluster. If a file or sub-directory is deleted from the source directory, the file or
directory is deleted from the target cluster when the policy is run again.
You can use synchronization policies to fail over and fail back data between source and
target clusters. When a source cluster becomes unavailable, you can fail over data on a
target cluster and make the data available to clients. When the source cluster becomes
available again, you can fail back the data to the source cluster.
A copy policy maintains recent versions of the files that are stored on the source cluster.
However, files that are deleted on the source cluster are not deleted from the target
cluster. Failback is not supported for copy policies. Copy policies are most commonly
used for archival purposes.
Copy policies enable you to remove files from the source cluster without losing those files
on the target cluster. Deleting files on the source cluster improves performance on the
source cluster while maintaining the deleted files on the target cluster. This can be useful
if, for example, your source cluster is being used for production purposes and your target
cluster is being used only for archiving.
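For example, the two policy types could be created from the OneFS command line as
follows. This is a minimal sketch: the policy names, paths, and target host are
hypothetical, and the exact isi sync policies create syntax should be verified against
your OneFS version.
isi sync policies create archivePolicy copy /ifs/data/source target.example.com /ifs/data/archive
isi sync policies create mirrorPolicy sync /ifs/data/source target.example.com /ifs/data/mirror --schedule "every day at 22:00"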
After creating a job for a replication policy, SyncIQ must wait until the job completes
before it can create another job for the policy. Any number of replication jobs can exist on
a cluster at a given time; however, no more than 50 replication jobs can run on a source
cluster at the same time. If more than 50 replication jobs exist on a cluster, the first 50
jobs run while the others are queued to run.
There is no limit to the number of replication jobs that a target cluster can support
concurrently. However, because more replication jobs require more cluster resources,
replication will slow down as more concurrent jobs are added.
When a replication job runs, OneFS generates workers on the source and target cluster.
Workers on the source cluster send data while workers on the target cluster write data.
OneFS generates no more than 8 workers per node per replication job. For example, in a
five-node cluster, OneFS would create no more than 40 workers for a replication job.
You can replicate any number of files and directories with a single replication job. You
can prevent a large replication job from overwhelming the system by limiting the amount
of cluster resources and network bandwidth that data synchronization is allowed to
consume. Because each node in a cluster is able to send and receive data, the speed at
which data is replicated increases for larger clusters.
CAUTION
Changes to the configuration of the target cluster outside of SyncIQ can introduce an
error condition that effectively breaks the association between the source and target
cluster. For example, changing the DNS record of the target cluster could cause this
problem. If you need to make significant configuration changes to the target cluster
outside of SyncIQ, make sure that your SyncIQ policies can still connect to the target
cluster.
differential replication the next time the policy is run. You can specify the type of
replication that SyncIQ performs.
During a full replication, SyncIQ transfers all data from the source cluster regardless of
what data exists on the target cluster. A full replication consumes large amounts of
network bandwidth and can take a very long time to complete. However, a full replication
is less strenuous on CPU usage than a differential replication.
During a differential replication, SyncIQ first checks whether a file already exists on the
target cluster and then transfers only data that does not already exist on the target
cluster. A differential replication consumes less network bandwidth than a full
replication; however, differential replications consume more CPU. Differential replication
can be much faster than a full replication if there is an adequate amount of available CPU
for the replication job to consume.
Note
File-operation rules might not work accurately for files that can take more than a second
to transfer and for files that are not predictably similar in size.
Replication reports
After a replication job completes, SyncIQ generates a replication report that contains
detailed information about the job, including how long the job ran, how much data was
transferred, and what errors occurred.
If a replication report is interrupted, SyncIQ might create a subreport about the progress
of the job so far. If the job is then restarted, SyncIQ creates another subreport about the
progress of the job until the job either completes or is interrupted again. SyncIQ creates a
subreport each time the job is interrupted until the job completes successfully. If multiple
subreports are created for a job, SyncIQ combines the information from the subreports
into a single report.
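Replication reports can also be listed and inspected from the command line. The
following is a sketch, assuming the isi sync reports commands available in this release;
the policy name and job ID are hypothetical.
isi sync reports list
isi sync reports view newPolicy 1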
SyncIQ routinely deletes replication reports. You can specify the maximum number of
replication reports that SyncIQ retains and the length of time that SyncIQ retains
replication reports. If the maximum number of replication reports is exceeded on a
cluster, SyncIQ deletes the oldest report each time a new report is created.
You cannot customize the content of a replication report.
Note
If you delete a replication policy, SyncIQ automatically deletes any reports that were
generated for that policy.
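Report retention can also be adjusted from the command line. The following sketch
assumes the --report-max-age and --report-max-count options of isi sync settings
modify; confirm the option names for your release.
isi sync settings modify --report-max-age 1Y --report-max-count 2000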
Replication snapshots
SyncIQ generates snapshots to facilitate replication, failover, and failback between Isilon
clusters. Snapshots generated by SyncIQ can also be used for archival purposes on the
target cluster.
SyncIQ generates target snapshots to facilitate failover on the target cluster regardless of
whether a SnapshotIQ license has been configured on the target cluster. Failover
snapshots are generated when a replication job completes. SyncIQ retains only one
failover snapshot per replication policy, and deletes the old snapshot after the new
snapshot is created.
If a SnapshotIQ license has been activated on the target cluster, you can configure
SyncIQ to generate archival snapshots on the target cluster that are not automatically
deleted when subsequent replication jobs run. Archival snapshots contain the same data
as the snapshots that are generated for failover purposes. However, you can configure
how long archival snapshots are retained on the target cluster. You can access archival
snapshots the same way that you access other snapshots generated on a cluster.
Note
Data failover is not supported for compliance SmartLock directories. However, data
failover is supported for enterprise SmartLock directories. Data failback is unsupported
for all SmartLock directories.
Data failover
Data failover is the process of preparing data on a secondary cluster to be modified by
clients. After you fail over to a secondary cluster, you can redirect clients to modify their
data on the secondary cluster.
Before failover is performed, you must create and run a replication policy on the primary
cluster. You initiate the failover process on the secondary cluster. Failover is performed
per replication policy; to migrate data that is spread across multiple replication policies,
you must initiate failover for each replication policy.
You can use any replication policy to fail over. However, if the action of the replication
policy is set to copy, any file that was deleted on the primary cluster will be present on
the secondary cluster. When the client connects to the secondary cluster, all files that
were deleted on the primary cluster will be available to the client.
If you initiate failover for a replication policy while an associated replication job is
running, the failover operation completes but the replication job fails. Because data
might be in an inconsistent state, SyncIQ uses the snapshot generated by the last
successful replication job to revert data on the secondary cluster to the last recovery
point.
If a disaster occurs on the primary cluster, any modifications to data that were made after
the last successful replication job started are not reflected on the secondary cluster.
When a client connects to the secondary cluster, their data appears as it was when the
last successful replication job was started.
Data failback
Data failback is the process of restoring clusters to the roles they occupied before a
failover operation. After data failback is complete, the primary cluster hosts clients and
replicates data to the secondary cluster for backup.
The first step in the failback process is updating the primary cluster with all of the
modifications that were made to the data on the secondary cluster. The next step in the
failback process is preparing the primary cluster to be accessed by clients. The final step
in the failback process is resuming data replication from the primary to the secondary
cluster. At the end of the failback process, you can redirect users to resume accessing
their data on the primary cluster.
To update the primary cluster with the modifications that were made on the secondary
cluster, SyncIQ must create a SyncIQ domain for the source directory.
You can fail back data with any replication policy that meets all of the following criteria:
l The source directory is not a SmartLock directory.
l The policy has been failed over.
l The policy is a synchronization policy.
l The policy does not exclude any files or directories from replication.
Note
If you replicate data to a SmartLock directory, do not configure SmartLock settings for that
directory until you are no longer replicating data to the directory. Configuring an
autocommit time period for a SmartLock directory that you are replicating to can cause
replication jobs to fail. If the target directory commits a file to a WORM state, and the file
is modified on the source cluster, the next replication job will fail because it cannot
update the file.
If you are replicating a SmartLock directory to another SmartLock directory, you must
create the target SmartLock directory prior to running the replication policy. Although
OneFS will create a target directory automatically if a target directory does not already
exist, OneFS will not create a target SmartLock directory automatically. If you attempt to
replicate an enterprise directory before the target directory has been created, OneFS will
create a non-SmartLock target directory and the replication job will succeed. If you
replicate a compliance directory before the target directory has been created, the
replication job will fail.
If you replicate SmartLock directories to another EMC Isilon cluster with SyncIQ, the
WORM state of files is replicated. However, SmartLock directory configuration settings are
not transferred to the target directory.
For example, if you replicate a directory that contains a committed file that is set to expire
on March 4th, the file is still set to expire on March 4th on the target cluster. However, if
the directory on the source cluster is set to prevent files from being committed for more
than a year, the target directory is not automatically set to the same restriction.
If you back up data to an NDMP device, all SmartLock metadata relating to the retention
date and commit status is transferred to the NDMP device. If you restore data to a
SmartLock directory on the cluster, the metadata persists on the cluster. However, if the
directory that you restore to is not a SmartLock directory, the metadata is lost. You can
restore to a SmartLock directory only if the directory is empty.
backup device. However, although you can deduplicate data on a target Isilon cluster,
you cannot deduplicate data on an NDMP backup device.
Shadows stores are not transferred to target clusters or backup devices. Because of this,
deduplicated files do not consume less space than non-deduplicated files when they are
replicated or backed up. To avoid running out of space, you must ensure that target
clusters and tape devices have enough free space to store deduplicated data as if the
data had not been deduplicated. To reduce the amount of storage space consumed on a
target Isilon cluster, you can configure deduplication for the target directories of your
replication policies. Although this will deduplicate data on the target directory, it will not
allow SyncIQ to transfer shadow stores. Deduplication is still performed by deduplication
jobs running on the target cluster.
The amount of cluster resources required to back up and replicate deduplicated data is
the same as for non-deduplicated data. You can deduplicate data while the data is being
replicated or backed up.
Note
By default, all files and directories under the source directory of a replication policy are
replicated to the target cluster. However, you can prevent directories under the source
directory from being replicated.
If you specify a directory to exclude, files and directories under the excluded directory are
not replicated to the target cluster. If you specify a directory to include, only the files and
directories under the included directory are replicated to the target cluster; any
directories that are not contained in an included directory are excluded.
If you both include and exclude directories, any excluded directories must be contained
in one of the included directories; otherwise, the excluded-directory setting has no effect.
For example, consider a policy with the following settings:
l The root directory is /ifs/data
l The included directories are /ifs/data/media/music and /ifs/data/
media/movies
l The excluded directories are /ifs/data/archive and /ifs/data/media/
music/working
In this example, the setting that excludes the /ifs/data/archive directory has no
effect because the /ifs/data/archive directory is not under either of the included
directories. The /ifs/data/archive directory is not replicated regardless of whether
the directory is explicitly excluded. However, the setting that excludes the /ifs/data/
media/music/working directory does have an effect, because the directory would be
replicated if the setting was not specified.
In addition, if you exclude a directory that contains the source directory, the exclude-
directory setting has no effect. For example, if the root directory of a policy is /ifs/
data, explicitly excluding the /ifs directory does not prevent /ifs/data from being
replicated.
Any directories that you explicitly include or exclude must be contained in or under the
specified root directory. For example, consider a policy in which the specified root
directory is /ifs/data. In this example, you could include both the /ifs/data/
media and the /ifs/data/users/ directories because they are under /ifs/data.
Excluding directories from a synchronization policy does not cause the directories to be
deleted on the target cluster. For example, consider a replication policy that
synchronizes /ifs/data on the source cluster to /ifs/data on the target cluster. If
the policy excludes /ifs/data/media from replication, and /ifs/data/media/
file exists on the target cluster, running the policy does not cause /ifs/data/
media/file to be deleted from the target cluster.
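On the command line, included and excluded directories map to policy options. The
following sketch uses the example paths from this section; the option names are
assumptions based on the SyncIQ CLI and should be verified for your release.
isi sync policies modify mediaPolicy --source-include-directories /ifs/data/media/music,/ifs/data/media/movies --source-exclude-directories /ifs/data/media/music/working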
Note
A file-criteria statement can include one or more elements. Each file-criteria element
contains a file attribute, a comparison operator, and a comparison value. You can
combine multiple criteria elements in a criteria statement with Boolean "AND" and "OR"
operators. You can configure any number of file-criteria definitions.
Configuring file-criteria statements can cause the associated jobs to run slowly. It is
recommended that you specify file-criteria statements in a replication policy only if
necessary.
Modifying a file-criteria statement will cause a full replication to occur the next time that a
replication policy is started. Depending on the amount of data being replicated, a full
replication can take a very long time to complete.
For synchronization policies, if you modify the comparison operators or comparison
values of a file attribute, and a file no longer matches the specified file-matching criteria,
the file is deleted from the target the next time the job is run. This rule does not apply to
copy policies.
Date accessed
Includes or excludes files based on when the file was last accessed. This option is
available for copy policies only, and only if the global access-time-tracking option of
the cluster is enabled.
You can specify a relative date and time, such as "two weeks ago", or specific date
and time, such as "January 1, 2012." Time settings are based on a 24-hour clock.
Date modified
Includes or excludes files based on when the file was last modified. This option is
available for copy policies only.
You can specify a relative date and time, such as "two weeks ago", or specific date
and time, such as "January 1, 2012." Time settings are based on a 24-hour clock.
File name
Includes or excludes files based on the file name. You can specify to include or
exclude full or partial names that contain specific text.
Note
Alternatively, you can filter file names by using POSIX regular-expression (regex) text.
Isilon clusters support IEEE Std 1003.2 (POSIX.2) regular expressions. For more
information about POSIX regular expressions, see the BSD man pages.
Wildcard character Description
*
Matches any string in place of the asterisk.
For example, m* matches movies and m123.
[ ]
Matches any characters contained in the brackets, or a range of characters
separated by a dash.
For example, b[aei]t matches bat, bet, and bit.
You can exclude characters within brackets by following the first bracket with
an exclamation mark.
For example, b[!ie] matches bat but not bit or bet.
You can match a bracket within a bracket if it is either the first or last
character.
For example, [[c]at matches cat and [at.
You can match a dash within a bracket if it is either the first or last character.
For example, car[-s] matches cars and car-.
?
Matches any character in place of the question mark.
For example, t?p matches tap, tip, and top.
Path
Includes or excludes files based on the file path. This option is available for copy
policies only.
You can specify to include or exclude full or partial paths that contain specified text.
You can also include the wildcard characters *, ?, and [ ].
Size
Includes or excludes files based on their size.
Note
Type
Includes or excludes files based on one of the following file-system object types:
l Soft link
l Regular file
l Directory
Note
This option will affect only policies that specify the target cluster as a SmartConnect
zone.
3. Specify which nodes you want replication policies to connect to when a policy is run.
Option Description
Connect policies to all nodes on a source cluster.
Click Run the policy on all nodes in this cluster.
Connect policies only to nodes contained in a specified subnet and pool.
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
Note
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause
it to fail.
4. Click Submit.
This applies only if you target a different cluster. If you modify the IP or domain name
of a target cluster, and then modify the replication policy on the source cluster to
match the new IP or domain name, a full replication is not performed.
l Target directory
Option Description
Run jobs only when manually initiated by a user.
Click Only manually.
Run jobs automatically according to a schedule.
a. Click On a schedule.
b. Specify a schedule.
If you configure a replication policy to run more than once a day, you cannot
configure the interval to span across two calendar days. For example, you cannot
configure a replication policy to run every hour starting at 7:00 PM and ending
at 1:00 AM.
c. To prevent the policy from being run when the contents of the source directory
have not been modified, click Only run if source directory contents are modified.
d. To create OneFS events if a specified RPO is exceeded, click Send RPO alerts
after... and then specify an RPO.
For example, assume you set an RPO of 5 hours; a job starts at 1:00 PM and
completes at 3:00 PM; a second job starts at 3:30 PM; if the second job does not
complete by 6:00 PM, SyncIQ will create a OneFS event.
Run jobs automatically every time that a change is made to the source directory.
a. Click Whenever the source is modified.
b. To configure SyncIQ to wait a specified amount of time after the source directory
is modified before starting a replication job, click Change-Triggered Sync Job
Delay and then specify a delay.
4. Specify which nodes you want the replication policy to connect to when the policy is
run.
Option Description
Connect the policy to all nodes in the source cluster.
Click Run the policy on all nodes in this cluster.
Connect the policy only to nodes contained in a specified subnet and pool.
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
Note
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause
it to fail.
2. In the Target Directory field, type the absolute path of the directory on the target
cluster that you want to replicate data to.
CAUTION
If you specify an existing directory on the target cluster, make sure that the directory
is not the target of another replication policy. If this is a synchronization policy, make
sure that the directory is empty. All files are deleted from the target of a
synchronization policy the first time that the policy is run.
If the specified target directory does not already exist on the target cluster, the
directory is created the first time that the job is run. We recommend that you do not
specify the /ifs directory. If you specify the /ifs directory, the entire target cluster
is set to a read-only state, which prevents you from storing any other data on the
cluster.
If this is a copy policy, and files in the target directory share the same name as files in
the source directory, the target directory files are overwritten when the job is run.
3. If you want replication jobs to connect only to the nodes included in the SmartConnect
zone specified by the target cluster, click Connect only to the nodes within the target
cluster SmartConnect Zone.
After you finish
The next step in the process of creating a replication policy is to specify policy target
snapshot settings.
For example, the following snapshot naming pattern is valid:
%{PolicyName}-on-%{SrcCluster}-latest
This pattern produces snapshot names similar to the following:
newPolicy-on-Cluster1-latest
3. (Optional) To modify the snapshot naming pattern for snapshots created according to
the replication policy, in the Snapshot Naming Pattern field, type a naming pattern.
Each snapshot generated for this replication policy is assigned a name based on this
pattern.
For example, the following naming pattern is valid:
%{PolicyName}-from-%{SrcCluster}-at-%H:%M-on-%m-%d-%Y
This pattern produces snapshot names similar to the following:
newPolicy-from-Cluster1-at-10:30-on-7-12-2012
2. (Optional) From the Log Level list, select the level of logging you want SyncIQ to
perform for replication jobs.
The following log levels are valid, listed from least to most verbose:
l Fatal
l Error
l Notice
l Info
l Copy
l Debug
l Trace
Replication logs are typically used for debugging purposes. If necessary, you can log
in to a node through the command-line interface and view the contents of
the /var/log/isi_migrate.log file on the node.
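For example, after logging in to a node over SSH, you could view the most recent log
entries with a standard command such as the following:
tail /var/log/isi_migrate.log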
3. (Optional) If you want SyncIQ to perform a checksum on each file data packet that is
affected by the replication policy, select the Validate File Integrity check box.
If you enable this option, and the checksum values for a file data packet do not
match, SyncIQ retransmits the affected packet.
4. (Optional) To increase the speed of failback for the policy, click Prepare policy for
accelerated failback performance.
Selecting this option causes SyncIQ to perform failback configuration tasks the next
time that a job is run, rather than waiting to perform those tasks during the failback
process. This will reduce the amount of time needed to perform failback operations
when failback is initiated.
5. (Optional) To modify the length of time SyncIQ retains replication reports for the
policy, in the Keep Reports For area, specify a length of time.
After the specified expiration period has passed for a report, SyncIQ automatically
deletes the report.
Some units of time are displayed differently when you view a report than how they
were originally entered. Entering a number of days that is equal to a corresponding
value in weeks, months, or years results in the larger unit of time being displayed. For
example, if you enter a value of 7 days, 1 week appears for that report after it is
created. This change occurs because SyncIQ internally records report retention times
in seconds and then converts them into days, weeks, months, or years.
6. (Optional) Specify whether to record information about files that are deleted by
replication jobs by selecting one of the following options:
l Click Record when a synchronization deletes files or directories.
l Click Do not record when a synchronization deletes files or directories.
This option is applicable for synchronization policies only.
3. In the Domain Root Path field, type the path of a source directory of a replication
policy.
4. From the Type of domain list, select SyncIQ.
5. Ensure that the Delete domain check box is cleared.
6. Click Start Job.
Note
You can assess only replication policies that have never been run before.
Procedure
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row of a replication policy, from the Actions
column, select Assess Sync.
3. Click Data Protection > SyncIQ > Summary.
4. After the job completes, in the SyncIQ Recent Reports table, in the row of the
replication job, click View Details.
The report displays the total amount of data that would have been transferred in the
Total Data field.
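The same assessment can be run from the command line. The following is a sketch,
assuming the --test option of isi sync jobs start is available in your release and using a
hypothetical policy name.
isi sync jobs start newPolicy --test
isi sync reports list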
Paused
The job has been temporarily paused.
Policy Name
The name of the associated replication policy.
Started
The time the job started.
Elapsed
How much time has elapsed since the job started.
Transferred
The number of files that have been transferred, and the total size of all transferred
files.
Source Directory
The path of the source directory on the source cluster.
Target Host
The IP address or fully qualified domain name of the target cluster.
Actions
Displays any job-related actions that you can perform.
Procedure
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a policy, select Delete Policy.
3. In the confirmation dialog box, click Delete.
Note
The operation will not succeed until SyncIQ can communicate with the target cluster;
until then, the policy will not be removed from the SyncIQ Policies table. After the
connection between the source cluster and target cluster is reestablished, SyncIQ will
delete the policy the next time that the job is scheduled to run; if the policy is
configured to run only manually, you must manually run the policy again. If SyncIQ is
permanently unable to communicate with the target cluster, run the isi sync
policies delete command with the --local-only option. This will delete the
policy from the local cluster only and not break the target association on the target
cluster. For more information, see the OneFS CLI Administration Guide.
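For example, the following command, using a hypothetical policy name, deletes the
policy from the local cluster only:
isi sync policies delete newPolicy --local-only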
Note
If you disable a replication policy while an associated replication job is running, the
running job is not interrupted. However, the policy will not create another job until the
policy is enabled.
Procedure
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a replication policy, select either Enable
Policy or Disable Policy.
If neither Enable Policy nor Disable Policy appears, verify that a replication job is not
running for the policy. If an associated replication job is not running, ensure that the
SyncIQ license is active on the cluster.
State
Whether the policy is enabled or disabled.
Schedule
When the next job is scheduled to run. A value of Manual indicates that the job can
be run only manually. A value of When source is modified indicates that the job will
be run whenever changes are made to the source directory.
Source Directory
The path of the source directory on the source cluster.
Actions
Any policy-related actions that you can perform.
Description
Describes the policy. For example, the description might explain the purpose or
function of the policy.
Enabled
Determines whether the policy is enabled.
Action
Determines how the policy replicates data. All policies copy files from the source
directory to the target directory and update files in the target directory to match files
on the source directory. The action determines how deleting a file on the source
directory affects the target. The following values are valid:
Copy
If a file is deleted in the source directory, the file is not deleted in the target
directory.
Synchronize
Deletes files in the target directory if they are no longer present on the source.
This ensures that an exact replica of the source directory is maintained on the
target cluster.
Run job
Determines whether jobs are run automatically according to a schedule or only when
manually specified by a user.
Last Started
Displays the last time that the policy was run.
Included Directories
Determines which directories are included in replication. If one or more directories
are specified by this setting, any directories that are not specified are not replicated.
Excluded Directories
Determines which directories are excluded from replication. Any directories specified
by this setting are not replicated.
Target Host
The IP address or fully qualified domain name of the target cluster.
Target Directory
The full path of the target directory. Data is replicated to the target directory from the
source directory.
Capture Snapshots
Determines whether archival snapshots are generated on the target cluster.
Snapshot Expiration
Specifies how long archival snapshots are retained on the target cluster before they
are automatically deleted by the system.
Log Level
Specifies the amount of information that is recorded for replication jobs.
More verbose options include all information from less verbose options. The
following list describes the log levels from least to most verbose:
l Fatal
l Error
l Notice
l Info
l Copy
l Debug
l Trace
Replication logs are typically used for debugging purposes. If necessary, you can log
in to a node through the command-line interface and view the contents of
the /var/log/isi_migrate.log file on the node.
Note
The following replication policy fields are available only through the OneFS command-line
interface.
Source Subnet
Specifies whether replication jobs connect to any nodes in the cluster or if jobs can
connect only to nodes in a specified subnet.
Source Pool
Specifies whether replication jobs connect to any nodes in the cluster or if jobs can
connect only to nodes in a specified pool.
Password Set
Specifies a password to access the target cluster.
Note
Disabling this option could result in data loss. It is recommended that you consult
Isilon Technical Support before disabling this option.
Resolve
Determines whether you can manually resolve the policy if a replication job
encounters an error.
CAUTION
After a replication policy is reset, SyncIQ performs a full or differential replication the
next time the policy is run. Depending on the amount of data being replicated, a full or
differential replication can take a very long time to complete.
Procedure
1. Click Data Protection > SyncIQ > Local Targets.
2. In the SyncIQ Local Targets table, in the row for a replication policy, select Break
Association.
3. In the Confirm dialog box, click Yes.
Policy Name
The name of the replication policy.
Source Host
The name of the source cluster.
Coordinator IP
The IP address of the node on the source cluster that is acting as the job coordinator.
Updated
The time when data about the policy or job was last collected from the source
cluster.
Target Path
The path of the target directory on the target cluster.
Status
The current status of the replication job.
Actions
Displays any job-related actions that you can perform.
created. This change occurs because SyncIQ internally records report retention times
in seconds and then converts them into days, weeks, months, or years for display.
3. In the Number of Reports to Keep Per Policy field, type the maximum number of
reports you want to retain at a time for a replication policy.
4. Click Submit.
Status
Displays the status of the job. The following job statuses are possible:
Running
The job is currently running without error.
Paused
The job has been temporarily paused.
Finished
The job completed successfully.
Failed
The job failed to complete.
Started
Indicates when the job started.
Ended
Indicates when the job ended.
Duration
Indicates how long the job took to complete.
Transferred
The total number of files that were transferred during the job run, and the total size
of all transferred files. For assessed policies, Assessment appears.
Source Directory
The path of the source directory on the source cluster.
Target Host
The IP address or fully qualified domain name of the target cluster.
Action
Displays any report-related actions that you can perform.
Note
Depending on the amount of data being synchronized or copied, full and differential
replications can take a very long time to complete.
differential replication the next time the policy is run. Resetting a replication policy
deletes the latest snapshot generated for the policy on the source cluster.
CAUTION
Depending on the amount of data being replicated, a full or differential replication can
take a very long time to complete. Reset a replication policy only if you cannot fix the
issue that caused the replication error. If you fix the issue that caused the error, resolve
the policy instead of resetting the policy.
Procedure
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a policy, select Reset Sync State.
Managing changelists
You can create and view changelists that describe the differences between two
snapshots. You can create a changelist for any two snapshots that have a common root
directory.
Changelists are most commonly accessed by applications through the OneFS Platform
API. For example, a custom application could regularly compare the two most recent
snapshots of a directory to identify the files that changed between them.
Create a changelist
You can create a changelist to view the differences between two snapshots.
Procedure
1. (Optional) Record the IDs of the snapshots.
a. Click Data Protection > SnapshotIQ > Snapshots.
b. In the row of each snapshot that you want to create a changelist for, click View
Details, and record the ID of the snapshot.
2. Click Cluster Management > Job Operations > Job Types.
3. In the Job Types area, in the ChangelistCreate row, from the Actions column, select
Start Job.
4. In the Older Snapshot ID field, type the ID of the older snapshot.
5. In the Newer Snapshot ID field, type the ID of the newer snapshot.
6. Click Start Job.
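The ChangelistCreate job can also be started from the command line. The following is a
sketch using hypothetical snapshot IDs; verify the parameter names for your release.
isi job jobs start ChangelistCreate --older-snapid 2 --newer-snapid 6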
View a changelist
You can view a changelist that describes the differences between two snapshots. This
procedure is available only through the command-line interface (CLI).
Procedure
1. View the IDs of changelists by running the following command:
isi_changelist_mod -l
Changelist IDs include the IDs of both snapshots used to create the changelist. If
OneFS is still in the process of creating a changelist, inprog is appended to the
changelist ID.
2. View the contents of a changelist by running the isi_changelist_mod command with
the -a option. For example, the following command displays the contents of a
changelist named 2_6:
isi_changelist_mod -a 2_6
Changelist information
You can view the information contained in changelists.
The following information is displayed for each item in the changelist when you run the
isi_changelist_mod command:
st_ino
Displays the inode number of the specified item.
st_mode
Displays the file type and permissions for the specified item.
st_size
Displays the total size of the item in bytes.
st_atime
Displays the POSIX timestamp of when the item was last accessed.
st_mtime
Displays the POSIX timestamp of when the item was last modified.
st_ctime
Displays the POSIX timestamp of when the item was last changed.
cl_flags
Displays information about the item and what kinds of changes were made to the
item.
01
The item was added or moved under the root directory of the snapshots.
02
The item was removed or moved out of the root directory of the snapshots.
04
The path of the item was changed without being removed from the root directory
of the snapshot.
10
The item either currently contains or at one time contained Alternate Data
Streams (ADS).
20
The item is an ADS.
40
The item has hardlinks.
Note
These values are added together in the output. For example, if an ADS was added,
the code would be cl_flags=021.
path
The absolute path of the specified file or directory.
Note
Data failover and failback are not supported for compliance SmartLock directories.
However, failover and failback are supported for enterprise SmartLock directories.
Although you cannot fail over compliance SmartLock directories, you can recover
compliance directories on a target cluster. Also, although you cannot fail back SmartLock
compliance directories, you can migrate them back to the source cluster.
Note
Data failover is not supported for compliance SmartLock directories. However, data
failover is supported for enterprise SmartLock directories.
Complete the following procedure for each replication policy that you want to fail over.
Procedure
1. On the secondary Isilon cluster, click Data Protection > SyncIQ > Local Targets.
2. In the SyncIQ Local Targets table, in the row for a replication policy, from the Actions
column, select Allow Writes.
3. On the primary cluster, modify the replication policy so that it is set to run only
manually.
This step will prevent the policy on the primary cluster from automatically running a
replication job. If the policy on the primary cluster runs a replication job while writes
are allowed to the target directory, the job will fail and the policy will be set to an
unrunnable state. If this happens, modify the replication policy so that it is set to run
only manually, resolve the policy, and complete the failback process. After you
complete the failback process, you can modify the policy to run according to a
schedule again.
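As a command-line alternative to step 2, writes can be allowed on the secondary cluster
with the isi sync recovery allow-write command; the policy name here is hypothetical.
isi sync recovery allow-write newPolicy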
Failover reversion is useful if the primary cluster becomes available before data is
modified on the secondary cluster or if you failed over to a secondary cluster for testing
purposes.
Before you begin
Fail over a replication policy.
Reverting a failover operation does not migrate modified data back to the primary cluster.
To migrate data that clients have modified on the secondary cluster, you must fail back to
the primary cluster.
Note
Failover reversion is not supported for SmartLock directories.
Complete the following procedure for each replication policy that you want to revert.
Procedure
1. Run the isi sync recovery allow-write command with the --revert option.
For example, the following command reverts a failover operation for newPolicy:
isi sync recovery allow-write newPolicy --revert
Note
Data failback is not supported for compliance SmartLock directories. However, data
failback is supported for enterprise SmartLock directories.
Procedure
1. On the primary cluster, click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a replication policy, from the Actions
column, select Resync-prep.
SyncIQ creates a mirror policy for each replication policy on the secondary cluster.
SyncIQ names mirror policies according to the following pattern:
<replication-policy-name>_mirror
3. On the secondary cluster, replicate data to the primary cluster by using the mirror
policies.
You can replicate data either by manually starting the mirror policies or by modifying
the mirror policies and specifying a schedule.
4. Prevent clients from accessing the secondary cluster and then run each mirror policy
again.
To minimize impact to clients, it is recommended that you wait until client access is
low before preventing client access to the cluster.
5. On the primary cluster, click Data Protection > SyncIQ > Local Targets.
6. In the SyncIQ Local Targets table, from the Actions column, select Allow Writes for
each mirror policy.
7. On the secondary cluster, click Data Protection > SyncIQ > Policies.
8. In the SyncIQ Policies table, from the Actions column, select Resync-prep for each
mirror policy.
After you finish
Redirect clients to begin accessing the primary cluster.
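The same failback sequence can be sketched on the command line, assuming a policy
named newPolicy and the mirror policy that SyncIQ generates for it; verify command
availability for your release.
# On the primary cluster, create the mirror policy:
isi sync policies resync-prep newPolicy
# On the secondary cluster, run the mirror policy:
isi sync jobs start newPolicy_mirror
# On the primary cluster, allow writes to the mirror policy target:
isi sync recovery allow-write newPolicy_mirror
# On the secondary cluster, prepare the original policy for normal operation:
isi sync policies resync-prep newPolicy_mirror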
Note
Failover snapshots are named according to the following naming pattern:
SIQ-Failover-<policy-name>-<year>-<month>-<day>_<hour>-<minute>-
<second>
2. Replicate data to the target cluster by running the policies you created.
You can replicate data either by manually starting the policies or by specifying a policy
schedule.
3. (Optional) To ensure that SmartLock protection is enforced for all files, commit all files
in the SmartLock source directory to a WORM state.
Because autocommit information is not transferred to the target cluster, files that
were scheduled to be committed to a WORM state on the source cluster will not be
scheduled to be committed at the same time on the target cluster. To ensure that all
files are retained for the appropriate time period, you can commit all files in target
SmartLock directories to a WORM state.
For example, the following command automatically commits all files in /ifs/data/
smartlock to a WORM state after one minute:
isi worm domains modify /ifs/data/smartlock --autocommit-offset 1m
This step is unnecessary if you have not configured an autocommit time period for the
SmartLock directory being replicated.
4. Prevent clients from accessing the source cluster and run the policy that you created.
To minimize impact to clients, it is recommended that you wait until client access is
low before preventing client access to the cluster.
5. On the target cluster, click Data Protection > SyncIQ > Local Targets.
6. In the SyncIQ Local Targets table, in the row of each replication policy, from the
Actions column, select Allow Writes.
7. (Optional) If any SmartLock directory configuration settings, such as an autocommit
time period, were specified for the source directories of the replication policies, apply
those settings to the target directories.
8. (Optional) Delete the copy of your SmartLock data on the source cluster.
If the SmartLock directories are compliance directories or enterprise directories with
the privileged delete functionality permanently disabled, you cannot recover the
space consumed by the source SmartLock directories until all files are released from a
WORM state. If you want to free the space before files are released from a WORM
state, contact Isilon Technical Support for information about reformatting your cluster.
NDMP backup
Note
If you perform an NDMP two-way backup operation, you must assign static IP addresses
to the Backup Accelerator node. If you connect to the cluster through a data management
application (DMA), you must connect to the IP address of a Backup Accelerator node. If
you perform an NDMP three-way backup, you can connect to any node in the cluster.
You must set path to /CLUSTER and name to DATA_TX_IFS. You can set value to
a network interface name.
For example, run the following command to configure all the nodes in a cluster to use
the IPs within the network interface em1 as the preferred IPs:
Note
If you run an NDMP backup on a cluster with a SnapshotIQ license, snapshot visibility
must be enabled for SMB, NFS, and local clients for the operation to complete
successfully.
DMA Supported
Symantec NetBackup Yes
Commvault Simpana No
Dell NetVault No
ASG-Time Navigator No
Note
In a level 10 NDMP backup, only data changed since the most recent incremental
(level 1-9) backup or the last level 10 backup is copied. By repeating level 10
backups, you can be assured that the latest versions of files in your data set are
backed up without having to run a full backup.
l Token-based NDMP backups
l NDMP TAR backup type
l Dump backup type
l Path-based and dir/node file history format
l Direct Access Restore (DAR)
l Directory DAR (DDAR)
l Including and excluding specific files and directories from backup
l Backup of file attributes
l Backup of Access Control Lists (ACLs)
l Backup of Alternate Data Streams (ADSs)
l Backup Restartable Extension (BRE)
OneFS supports connecting to clusters through IPv4 or IPv6.
Supported DMAs
NDMP backups are coordinated by a data management application (DMA) that runs on a
backup server.
OneFS supports all the DMAs that are listed in the Isilon Third-Party Software and
Hardware Compatibility Guide.
Note
All supported DMAs can connect to an EMC Isilon cluster through the IPv4 protocol.
However, only some of the DMAs support the IPv6 protocol for connecting to an EMC
Isilon cluster.
l For a three-way NDMP backup operation with or without SmartConnect, initiate the
backup session using the IP addresses of the nodes that are identified for running
the NDMP sessions.
Backup Accelerator recommendations
l Assign static IP addresses to Backup Accelerator nodes.
l Attach more Backup Accelerator nodes to larger clusters. The recommended number
of Backup Accelerator nodes is listed in the following table.
Table 3 Nodes per Backup Accelerator node
Node type    Recommended number of nodes per Backup Accelerator node
NL-Series    3
S-Series     3
HD-Series    3
l Attach more Backup Accelerator nodes if you are backing up to more tape devices.
DMA-specific recommendations
l Enable parallelism for the DMA if the DMA supports this option. This allows OneFS to
back up data to multiple tape devices at the same time.
Note
" " are required for Symantec NetBackup when multiple patterns are specified. The
patterns are not limited to directories.
Unanchored patterns such as home or user1 target a string of text that might belong to
many files or directories. If a pattern contains '/', it is an anchored pattern. An anchored
pattern is always matched from the beginning of a path. A pattern in the middle of a path
is not matched. Anchored patterns target specific file pathnames, such as /ifs/data/
home. You can include or exclude either type of pattern.
If you specify both the include and exclude patterns, the include pattern is processed
first, followed by the exclude pattern.
If you specify both the include and exclude patterns, any excluded files or directories
under the included directories would not be backed up. If the excluded directories are not
found in any of the included directories, the exclude specification would have no effect.
DMA vendor
The DMA vendor that the cluster is configured to interact with.
4. Add an NDMP user account through which your DMA can access the cluster.
2. In the NDMP Administrators table, in the row for an NDMP user account, click Delete.
3. In the Confirm dialog box, click Yes.
Setting Description
State Indicates whether the device is in use. If data is currently being backed up to
or restored from the device, Read/Write appears. If the device is not in
use, Closed appears.
Product The name of the device vendor and the model name or number of the device.
Paths The name of the Backup Accelerator node that the device is attached to and
the port number or numbers to which the device is connected.
Port ID The port ID of the device that binds the logical device to the physical device.
WWPN The world wide port name (WWPN) of the port on the tape or media changer
device.
4. (Optional) To scan only a specific port for NDMP devices, from the Ports list, select a
port.
If you specify a port and a node, only the specified port on the node is scanned.
However, if you specify only a port, the specified port will be scanned on all nodes.
5. (Optional) To remove entries for devices or paths that have become inaccessible,
select the Delete inaccessible paths or devices check box.
6. Click Submit.
Results
For each device that is detected, an entry is added to either the Tape Devices or Media
Changers tables.
Setting Description
Port The name of the Backup Accelerator node, and the number of the port.
Topology The type of Fibre Channel topology that the port is configured to support.
Options are:
Point to Point
A single backup device or Fibre Channel switch directly connected to
the port.
Loop
Multiple backup devices connected to a single port in a circular
formation.
Auto
Automatically detects the topology of the connected device. This is the
recommended setting, and is required if you are using a switched-
fabric topology.
WWNN The world wide node name (WWNN) of the port. This name is the same for
each port on a given node.
WWPN The world wide port name (WWPN) of the port. This name is unique to the
port.
Rate The rate at which data is sent through the port. The rate can be set to 1 Gb/s,
2 Gb/s, 4 Gb/s, 8 Gb/s, and Auto. 8 Gb/s is available for A100 nodes only. If
set to Auto, OneFS automatically negotiates with the DMA to determine the
rate. Auto is the recommended setting.
Item Description
Elapsed How much time has elapsed since the session started.
Transferred The amount of data that was transferred during the session.
Throughput The average throughput of the session over the past five minutes.
Client/Remote The IP address of the backup server that the data management
application (DMA) is running on. If a three-way NDMP backup or
restore operation is currently running, the IP address of the remote
tape media server also appears.
Mover/Data The current state of the data mover and the data server. The first
word describes the activity of the data mover. The second word
describes the activity of the data server.
The data mover and data server send data to and receive data from
each other during backup and restore operations. The data mover is
a component of the backup server that receives data during
backups and sends data during restore operations. The data server is a
component of OneFS that sends data during backups and receives
information during restore operations.
The following states might appear:
Active
The data mover or data server is currently sending or receiving
data.
Paused
The data mover is temporarily unable to receive data. While the
data mover is paused, the data server cannot send data to the
data mover. The data server cannot be paused.
Idle
The data mover or data server is not sending or receiving data.
Listen
The data mover or data server is waiting to connect to the data
server or data mover.
Restore
Indicates that data is currently being restored from a media
server.
Mode How OneFS is interacting with data on the backup media server, as
follows:
Read/Write
OneFS is reading and writing data during a backup operation.
Read
OneFS is reading data during a restore operation.
Raw
The DMA has access to tape drives, but the drives do not
contain writable tape media.
2. To view detailed information about a specific backup context, run the isi ndmp
extensions contexts view command.
The following command displays detailed information about a backup context with an
ID of 792eeb8a-8784-11e2-aa70-0025904e91a4:
Procedure
1. Run the isi ndmp extensions contexts delete command.
The following command deletes a restartable backup context with an ID of
792eeb8a-8784-11e2-aa70-0025904e91a4:
To specify the full path of the source directory to be backed up, you must specify the
FILESYSTEM environment variable in your DMA. For example:
FILESYSTEM=/ifs/data/projects
To specify the pathname of the file list, you must specify the environment variable,
BACKUP_FILE_LIST in your DMA. The file list must be accessible from the node
performing the backup. For example:
BACKUP_FILE_LIST=/ifs/data/proj_list.txt
proj001/plan/\001File
proj001/plan/\002File
proj001/plan/\003File
proj001/plan/\004File
proj001/plan/\005File
proj001/plan/\006File
proj001/plan/\aFile
proj001/plan/\bFile
proj001/plan/\tFile
proj001/plan/\nFile
proj002/plan/\vFile
proj002/plan/\fFile
proj002/plan/\rFile
proj002/plan/\016File
proj002/plan/\017File
proj002/plan/\020File
proj002/plan/\023File
proj002/plan/\024File
proj005/plan/\036File
proj005/plan/\037File
proj005/plan/ File
proj005/plan/!File
proj005/plan/\"File
proj005/plan/#File
proj005/plan/$File
proj005/plan/%File
proj005/plan/&File
proj005/plan/'File
As shown in the example, the pathnames are relative to the full path of the source
directory, which you specify in the FILESYSTEM environment variable. Absolute file paths
are not supported in the file list.
Also as shown, the directories and files must be in sorted order for the backup to be
successful. A # at the beginning of a line in the file list indicates to skip the line.
The pathnames of all files must be included in the file list, or they are not backed up. For
example, if you only include the pathname of a subdirectory, the subdirectory is backed
up, but not the files the subdirectory contains. The exception is ADS (alternate data
streams). All ADS associated with a file to be backed up are automatically backed up.
The value of the path option must match the FILESYSTEM environment variable that was
set during the backup operation. The value that you specify for the name option is case
sensitive.
3. Start the restore operation.
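For example, if the backup was run with FILESYSTEM=/ifs/data/projects, a hedged sketch
of the restore options (the field names and where you enter them vary by DMA, and the
file name here is hypothetical) might look like this:
path=/ifs/data/projects
name=proj001/plan/budget.txt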
Supported OneFS versions
• 7.1.1
• 7.1.0.1 (and later)*
• 7.0.2.5
• 6.6.5.26
Supported DMAs
• EMC NetWorker 8.0 and later
• Symantec NetBackup 7.5 and later
Supported tape drive configurations
• Isilon Backup Accelerator with a second Backup Accelerator
• Isilon Backup Accelerator with a NetApp storage system
* The tape drive sharing function is not supported in the OneFS 7.0.1 release.
EMC NetWorker refers to the tape drive sharing capability as DDS (dynamic drive sharing).
Symantec NetBackup uses the term SSO (shared storage option). Consult your DMA
vendor documentation for configuration instructions.
LEVEL
1-9
Performs an incremental backup at the specified level.
10
Performs unlimited incremental backups.
UPDATE
N
OneFS does not update the dump dates file.
HIST
F
Specifies path-based file history.
Y
Specifies the default file history format determined by your NDMP backup
settings.
N
Disables file history.
DIRECT
N
Disables DAR and DDAR.
Procedure
1. Run the isi ndmp dumpdates delete command.
The following command deletes all snapshots created for backing up /ifs/data/
media:
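isi ndmp dumpdates delete /ifs/data/media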
Note
The steps described in the procedures are general guidelines only. They might change for
different versions of EMC NetWorker. Consult your DMA vendor documentation for the
configuration information for a specific version of EMC NetWorker.
Create a group
With EMC NetWorker, you must configure a group to manage backups from an Isilon
cluster.
Procedure
1. Connect to the NetWorker server from the NetWorker Management Console Server.
2. Click Configuration.
3. Right-click Groups and then click New.
4. In the Name field, type a name for the group.
5. Click OK.
Storage Node Name: The name of the Isilon cluster that you want to back up data from
Device Scan Type: Select ndmp
NDMP User Name: The name of an NDMP user on the Isilon cluster
Configure a library
With EMC NetWorker, you must configure the tape library that contains the tape devices
for backup and recovery operations.
Procedure
1. Connect to the NetWorker server from the NetWorker Management Console Server.
2. Click Devices.
3. Right-click Libraries and then click Refresh.
The system displays a list of tape libraries that are currently attached to the Isilon
cluster.
4. Right-click the name of the tape library you want to configure and then click Configure
Library.
5. In the Configure Library window, click Check All.
6. Click Start Configuration.
Enabled: Selected
Groups: The group that you created for the Isilon cluster
Configuration tab > Max parallelism: The maximum number of tape drives to use for
concurrent backups
d. Click OK.
4. Right-click the name of the device and then click Label.
5. In the Label window, click OK.
6. Configure a new media pool.
a. Click Media.
Enabled: Selected
Groups: The group that you created for the Isilon cluster
Save set: bootstrap, index
d. Click OK.
Create a client
With EMC NetWorker, you must create a client that specifies the data to be backed up
from an Isilon cluster.
Procedure
1. Connect to the NetWorker server from the NetWorker Management Console Server.
2. Configure the new client.
a. Click Configuration.
b. Click the name of the group you created.
c. In the right pane, right-click and then click New.
d. In the Create Client window, in the Name field, type a name for the client.
3. In the Save set field, type the full path of the directory that you want to back up.
4. From the Pool list, select the name of the data media pool you created.
5. Configure the remote user.
a. Click Apps & Modules.
b. In the Remote user field, type the name of an NDMP user you configured on the
cluster.
c. In the Password field, type the password of the NDMP user.
6. Select NDMP, and in the Backup command field, type the backup command.
Option Description
With DSA
nsrndmp_save -T -M tar
Without DSA
nsrndmp_save -T tar
7. In the Application information field, type any NDMP environment variables that you
want to specify.
The following text enables directory-based file history and direct access restores
(DAR):
DIRECT=Y
HIST=F
Note
The steps described in the procedures are general guidelines only. They might change for
different versions of Symantec NetBackup. Consult your DMA vendor documentation for
the configuration information for a specific version of Symantec NetBackup.
5. Click OK.
6. In the Add NDMP Host box, click Use the following credentials for this NDMP host on
all media servers.
7. In the Username and Password fields, type the user name and password of an NDMP
user on the cluster.
8. Click OK.
e. After the wizard completes the scan for devices on the cluster, click Next.
5. On the SAN Clients page, click Next.
6. Specify backup devices that NetBackup should use.
a. On the Backup Devices page, verify that all attached tape devices are displayed in
the table, and then click Next.
b. On the Drag and Drop Configuration page, select the tape devices that you want
NetBackup to back up data to, and then click Next.
c. In the confirmation dialog box, click Yes.
d. On the Updating Device Configuration page, click Next.
e. On the Configure Storage Units page, view the name of your storage unit and then
click Next.
f. Click Finish.
7. Specify the storage unit to associate with the backup devices.
a. Expand NetBackup Management.
b. Expand Storage.
c. Click Storage Units.
d. Double-click the name of the storage unit you created previously.
e. In the Change Storage Unit window, ensure that Maximum concurrent write drives
is equal to the number of tape drives connected to your cluster.
Results
A storage unit is created for your cluster and tape devices. You can view all storage units
by clicking Storage Units.
Inventory a robot
Before you create a NetBackup policy, you must inventory a robot with NetBackup and
associate it with a volume pool.
Procedure
1. In the NetBackup Administration Console, expand Media and Device Management.
2. Inventory a robot.
a. Expand Devices.
b. Click Robots.
c. Right-click the name of the robot that was added when you configured storage
devices, and then click Inventory Robot.
3. Associate a volume pool with the robot.
a. Click Update volume configuration.
b. Click Advanced Options.
c. From the Volume Pool list, select the volume pool you created previously.
d. Click Start.
e. Click Yes.
f. Click Close.
4. (Optional) To verify that the robot has been inventoried successfully, click the name of
the media pool you created, and ensure that all media are displayed in the table.
Note
The steps described in the procedures are general guidelines only. They might change for
different versions of CommVault Simpana. Consult your DMA vendor documentation for
the configuration information for a specific version of CommVault Simpana.
NDMP Login: The name of the NDMP user account that you configured on the Isilon cluster.
NDMP Password: The password of the NDMP user account that you configured on the Isilon
cluster.
Listen port: The number of the port through which data management applications (DMAs)
can connect to the cluster. This field must match the Port number setting on the
Isilon cluster. The default Port number on the Isilon cluster is 10000.
3. Click Detect.
The system populates the Vendor and Firmware Revision fields.
4. Click OK.
2. Specify the Isilon cluster containing the data you want to back up.
a. From the Library list, select the name of the NDMP library you configured
previously.
b. From the MediaAgent list, select the Isilon cluster you want to back up data from.
c. Click Next.
3. From the Scratch Pool list, select Default Scratch.
4. (Optional) To enable multistreaming, specify the Number of Device Streams setting as
a number greater than one.
It is recommended that you enable multistreaming to increase the speed of backup
operations.
5. Click Next.
6. Select Hardware Compression, and then click Next.
7. Click Finish.
Storage Device tab > Storage Policy: The name of the storage policy that you created (required)
Content tab > Backup Content Path: The full path of the directory that you want to back up
(required)
Note
The steps described in the procedures are general guidelines only. They might change for
different versions of IBM TSM. Consult your DMA vendor documentation for the
configuration information for a specific version of IBM TSM.
4. Define the path for the data mover by running the following command:
Specify device as the name of the device entry for the tape library on the Isilon
cluster.
The following command defines a path for the data mover created in step 2.
The following commands define four tape drives and configure TSM to automatically
detect the addresses of the tape drives.
The following commands define paths for the tape drives defined in the previous step:
The following commands create labels for the tape media in the tape library:
The following command defines a device class for the TOC for a device type called file:
The following command updates the node to the domain that you created in step 6:
The following command updates the path to the NAS library for node001:
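The exact command lines depend on the library, drive, and node names in your
environment. As a hedged sketch only, using hypothetical names (naslib for the library,
nasdrive1 through nasdrive4 for the drives, node001 for the NAS node and its data mover,
nasdomain for the policy domain, and mc001 and tape001 as OneFS device entries), the
sequence described above might look like the following; consult the IBM TSM
administrator's reference for the authoritative syntax:
define path node001 naslib srctype=datamover desttype=library device=mc001
define drive naslib nasdrive1 element=autodetect
define drive naslib nasdrive2 element=autodetect
define drive naslib nasdrive3 element=autodetect
define drive naslib nasdrive4 element=autodetect
define path node001 nasdrive1 srctype=datamover desttype=drive library=naslib device=tape001
label libvolume naslib search=yes labelsource=barcode
define devclass tocclass devtype=file
update node node001 domain=nasdomain
update path node001 naslib srctype=datamover desttype=library device=mc001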
Note
If you are restoring data to the same location that you backed up the data from, you
do not need to define a virtual filespace mapping.