PowerProtect DD Basic Administration v7.11
PARTICIPANT GUIDE
Dell PowerProtect DD Basic Administration
Hardware Verification
Verify System Information Using DD System Manager
Verify Storage Information
Viewing Chassis Information
Verifying System Information Using the CLI
Terms
You can use the DDSM to configure and manage a single PowerProtect
DD system.
You can access and manage the PowerProtect DD system using the
DDOS command line interface.
When the initial configuration completes, you can use SSH or Telnet
utilities to access the system remotely and issue CLI commands. You can
also connect to the system using a serial console, serial over LAN (SOL),
or a keyboard and monitor and access the DDOS command line interface.
A default password policy controls the requirements for user login
passwords. Administrators can manage the password policy on the
Change Login Options page. Go to Administration > Access > More
Tasks > Change Login Options.
• At least one lowercase character (always enabled): The local user
password must have at least one lowercase character.
• At least one uppercase character (always enabled): The local user
password must have at least one uppercase character.
• At least one digit (always enabled): The local user password must
have at least one digit.
• At least one special character (always enabled): The local user
password must have at least one special character.
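As a rough illustration of these four checks, here is a POSIX shell sketch. The function name and test password are hypothetical; DDOS itself enforces the real policy:

```shell
# Hypothetical helper mirroring the four always-enabled password rules above.
# DDOS enforces the actual policy; this is only an illustration.
check_password() {
  case $1 in *[a-z]*) ;; *) return 1 ;; esac          # at least one lowercase
  case $1 in *[A-Z]*) ;; *) return 1 ;; esac          # at least one uppercase
  case $1 in *[0-9]*) ;; *) return 1 ;; esac          # at least one digit
  case $1 in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac   # at least one special
  return 0
}
check_password 'Chang3me!' && echo "meets policy"
```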
Default Passwords
PowerProtect DD
On the first login, you must set a new password. The password must
comply with the password strength policy on the system.
DDVE
For DD3300 systems and for DDVE systems running in the cloud, other
initial password requirements might apply. The following are the
requirements:
System Description
You can require multifactor authentication for system login for all roles and
security officer oversight. You can Configure, Enable, Edit, and Disable
multifactor authentication in the Multifactor Authentication panel in the DD
System Manager.
With the DD Operating System (DDOS), you can require the security
officer and system administrator to enter an RSA SecurID passcode
before logging into the system. Certain destructive commands or
configuration changes require system admin and security officer logins as
well.
DDOS supports MFA login only for username and password logins in the
DD System Manager or over SSH. DDOS does not support MFA using a
certificate or a token-based login. RSA SecurID is the only supported
MFA server.
To ensure that backup applications can access the system without a
passcode, you can disable MFA login for the sysadmin user only.
Prerequisites
− Do not append the system serial number to the user IDs for Active
Directory (AD) or Network Information Service (NIS) users.
Configuration Overview
The configuration process requires that you test the connection to the
RSA SecurID server. If you do not test the connection, the sysadmin
and security officer users cannot log in to the system.
Use the session show all command to display all the active running
HTTPS, SSH, telnet, and web-service sessions in DDOS.
The output of the session show all command lists the session IDs
that you must use to terminate the session. Copy one of the session IDs.
The session show data output shows the username, remote host IP
address, and type of access (or agent).
• For HTTPS sessions, DDOS logs out the specified user immediately
from the DD System Manager.
Obtain the session ID of the session that you want to terminate. Run
session show all to display the running session IDs. Copy the
session ID and place it after the session delete session command
in the command line.
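The lookup-then-delete flow above can be sketched in shell. The sample output, column layout, and session values below are illustrative, not actual DDOS output:

```shell
# Illustrative sample shaped like `session show all` output; real columns differ.
out='ID   User      Remote-Host   Type
101  sysadmin  10.1.1.5      ssh
102  jdoe      10.1.1.77     https'

# Copy the session ID of the HTTPS session you want to terminate.
sid=$(printf '%s\n' "$out" | awk '$4 == "https" { print $1 }')

# Place the ID after the delete command, as described above.
echo "session delete $sid"
```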
For this activity, you play the part of a system administrator trying to
remove an unauthorized active session from the system. Use CLI
commands to view and delete sessions and to show session data.
Hardware Verification
To verify storage information, use the Hardware > Storage window in the
DD System Manager.
You can also use the command line to access the same information.
The OVERVIEW tab displays information about the overall state of disks
that belong to the system. An Addable Storage panel displays optional
storage enclosures that are available to be added to the system to
increase capacity. The OVERVIEW tab contains information about failed,
foreign, or absent disks.
The DISKS tab displays the disk state table with information about each
disk. You can filter the table to display all disks, disks in a specific tier, or
disks in a specific group.
Use the disk BEACON feature to identify which physical hard drive
corresponds to a disk identified in the table.
The chassis view displays the status of the following components:
• Disks
• Fans
• Power supplies
• NVRAM
• CPUs
• Memory
Chassis views show the TOP VIEW, REAR VIEW, and ENCLOSURES.
The DETAILS pane shows the description and status of Power Supply 1.
Use CLI commands to view the information found in the DDSM chassis
view:
• The enclosure show chassis [enclosure] command shows
part numbers, serial numbers, and component version numbers for
one or all enclosures.
• The enclosure show summary command lists enclosures, model
and serial numbers, state, OEM names and values, and the capacity
and number of disks in the enclosure.
LDAP functionality and user interface are similar to those already
present for another authentication method, Network Information Service
(NIS). You cannot enable LDAP and NIS simultaneously. You can use
Active Directory (AD) with either NIS or LDAP.
AD over LDAP
• You do not require or use Common Internet File System (CIFS) data
access with AD users.
• A production PowerProtect DD system is not already joined to an AD
domain.
You can use the adminaccess command to modify the parameters of
each access protocol.
You can CREATE, MODIFY, and DELETE local users in the LOCAL
USERS window. Administrators can grant user privileges, ENABLE, and
DISABLE user accounts. Administrators can view and change the user's
Management Role and Status.
Understanding the functions that these roles can perform can help you
better understand this unique environment.
You can manage local user accounts through the command line interface
(CLI) using the following commands:
The Alerts feature generates event and summary reports that the system
distributes to configurable email lists and to Dell Support.
The system sends event reports immediately. The reports provide detailed
information about a system event. The system uses notification groups to
distribute event reports.
The system sends daily summary reports that provide a summary of the
events that occurred during the last 24 hours. Summary reports include
only summary information.
By default, the system generates an ASUP once per day. An ASUP is also
generated every time the file system starts. You can also configure
ASUPs to run on a schedule.
Dell Support can use the ASUP of your system to aid in identifying and
debugging possible system problems.
The file system logs system status messages every hour. You can bundle
and send log files to Dell Support. Sending log files to Support provides
detailed system information to aid with troubleshooting system issues.
The /log directory contains messages from the alerts feature, auto
support reports, and general system messages.
You can also view a log file using the log view command. With no
argument, the log view command displays the most current messages
file.
Syslog is a way that a network device can use a standard message format
to communicate with a logging server. Syslog is designed to simplify
monitoring network devices. Devices can use a syslog agent to send out
notification messages under a wide range of specific conditions. Remote
logging with syslog uses UDP Port 514 to send system messages to a
syslog server.
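For illustration, a syslog message carries a priority value that encodes a facility and a severity. The facility, hostname, and message text below are assumptions, not DDOS output:

```shell
# Compute the syslog priority: facility * 8 + severity (RFC 3164 convention).
facility=16   # local0
severity=6    # informational
pri=$((facility * 8 + severity))

# An RFC 3164-style line such as an agent might send to UDP port 514.
msg="<${pri}>Oct 11 22:14:15 dd01 ddfs: file system status message"
echo "$msg"
```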
Configuring SNMP
SNMP must use an SNMP manager, also called an SNMP server. The
SNMP manager is sometimes a third-party application. The SNMP
manager operates as a centralized management station running an SNMP
management application.
You can also generate support bundles from the command line using the
following commands:
• The support bundle create {files-only <file-list> |
traces-only} [and-upload [transport {http|https}]]
command compresses listed files into a bundle and uploads them.
• The support bundle create default [with-files <file-
list>] [and-upload [transport {http|https}]] command
compresses default and listed files into a bundle and uploads them.
Licensing Features
Electronic Licensing
The customer chooses a feature that they want to license. The ELMS
creates a license authorization code (LAC) email. The LAC email contains
a link to the ELMS portal where you can redeem your LAC for license
keys. The license keys activate the licensed features on the system.
You can add the license onto the DDVE using either the CLI or the DD
System Manager.
From the CLI, use the following commands to manage licenses with
ELMS:
• Use elicense show [all | license | locking-id] to show
current license information. Use the licenses option to display all
licenses installed, and all to display licenses, locking-ID, and the last
modified licenses.
Only those services that depend on the component that the MDU
upgrades are disrupted. The MDU feature can prevent significant
downtime while performing other operations during certain software
upgrades.
Checking Compatibility
Upgrade Precheck
Begin the system upgrade. Go to Maintenance > System and perform the
UPGRADE PRECHECK.
The aim of the precheck is to detect potential problems early and cancel
the upgrade. If you perform the upgrade without a precheck, you might
place the system in an unusable state after an upgrade attempt.
• The partition size check verifies that the /ddr and /(root) partitions
are correct.
• The redundant array of independent disks (RAID) metagroup assembly
check verifies that all dg0 disks are available on the head unit. DDOS
ensures that there is enough available space for the file system.
• The precheck also determines whether the file system is enabled and
verifies that the numbers of MTrees and Virtual Tape Library (VTL)
pools are less than 100.
• An operations check ensures that the system is not performing file
system cleaning, cloud cleaning, or data movement operations.
You can view the status of an upgrade using the DDOS command-line
command system upgrade status. Log messages for the upgrade are
stored in /ddvar/log/debug/platform/upgrade-error.log and
/ddvar/log/debug/platform/upgrade-info.log.
PowerProtect DD Network Interface Administration
Terms
• Hostname
• Local host file
• Search domains
• Dynamic DNS
The Hardware > Ethernet > SETTINGS view displays the Host Settings,
Search Domains, Hosts Mapping, and DNS List.
The following are essential commands that administrators can use to view
and configure the IP name settings using the DDOS command line:
• The net hosts reset command clears the host list entry from the
/etc/hosts file.
• The net set dns command configures the domain name system
(DNS) server list; use net reset dns to return the DNS settings to
their default values.
Step One
Step Two
Step Three
The minimum value for the maximum transmission unit (MTU) size is 600
for IPv4 and 1280 for IPv6. The MTU maximum value is 9000, and the
default value is 1500. To change the MTU size, go to the MTU Settings
section of the panel and change the MTU value.
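The MTU bounds above can be expressed as a quick validation sketch. The helper function is hypothetical and only restates the documented limits:

```shell
# Hypothetical check against the documented MTU limits:
# minimum 600 (IPv4) or 1280 (IPv6), maximum 9000, default 1500.
check_mtu() {  # usage: check_mtu <size> <ipv4|ipv6>
  min=600
  [ "$2" = "ipv6" ] && min=1280
  [ "$1" -ge "$min" ] && [ "$1" -le 9000 ]
}
check_mtu 1500 ipv4 && echo "1500 is valid for IPv4"
check_mtu 900 ipv6 || echo "900 is below the IPv6 minimum"
```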
Step Four
Step Five
Creating Static Routes in the Hardware > Ethernet > ROUTES tab in DD
System Manager
Routes determine the path taken to transfer data to and from the localhost
(the protection system) to another network or host. Static routes define
destination hosts or networks with which the PowerProtect DD appliance
can communicate.
The system requires static routes in the main routing table to determine
which source addresses to use for connections initiated from DDOS
when the destination program does not bind to a specific IP address.
You can add and delete static routes from individual routing tables by
adding or deleting the table from the route specification.
Step One
Step Two
Step Three
Add a Network Destination IP Address, Netmask, and Gateway in the Configure Static
Routes
In the Create Routes dialog box, specify the Destination IP Address
and Netmask, or specify a destination Host.
Step Four
Review the Create Routes Summary and click FINISH. When the process
is finished, click OK. The Route Spec table displays the new route
specification.
The Fibre Channel view displays the current Fibre Channel and shows
whether Fibre Channel and N-Port ID Virtualization (NPIV) are enabled.
The Fibre Channel view also displays two tabs: Resources and Access
Groups. Resources include ports, endpoints, and initiators. An access
group holds a collection of initiator worldwide port names (WWPNs) or
aliases and the drives and changers that they are allowed to access.
Endpoints are virtual Fibre Channel ports presented to the Fibre Channel
network by the DD Operating System. On PowerProtect DD systems with
NPIV disabled, endpoints mirror the WWPN and failure status of their
associated physical ports. On PowerProtect DD systems with NPIV
enabled, endpoints present a WWPN different from that of their
associated physical port and can fail over in the event of a physical port
failure.
Reviewing Endpoints
To enable an endpoint:
1. On the Hardware > Fibre Channel page, select More Tasks >
Endpoints > Enable.
To disable an endpoint:
1. On the Hardware > Fibre Channel page, select More Tasks >
Endpoints > Disable. If all endpoints are already disabled, a message
to that effect is displayed.
2. In the Disable Endpoints dialog, select one or more endpoints from
the list and click Next.
3. Confirm that the endpoints are correct. If an endpoint is associated
with an active service, a warning appears. Select Disable and the
Disable Endpoint Status dialog box appears.
4. Monitor the status of the Disable Endpoint process and select Close
when the process completes.
Configuring Endpoints
To configure FC Endpoints:
1. Go to the Hardware > Fibre Channel > Resources tab and select the
plus sign + to expand the endpoint configuration summary table.
2. Click the green plus + icon to open the Add Endpoint dialog box. Enter
an Endpoint Name for the endpoint.
3. For Endpoint Status, select Enabled or Disabled.
4. When NPIV is enabled, select a Primary system address from the
drop-down list. The primary system address must be different from
any secondary system address.
5. If the endpoint cannot be created, the system displays an error. If there
are no errors, the system proceeds with the endpoint creation process.
Monitor the system as the endpoint is created. The system notifies you
when the endpoint creation process is complete.
Deleting Endpoints
1 node port
To enable NPIV:
To disable NPIV:
Warning: Before you can disable NPIV, all ports must have
a maximum of one endpoint.
For each connecting host, gather the hostname, FC card, World Wide
Node Name (WWNN), and each host port's World Wide Port Name
(WWPN). Use these details when you add the host WWPNs to the
PowerProtect DD as initiators.
Your system is easier to configure if you prepare and know the device
information before you begin.
Fibre Channel services, such as Virtual Tape Library (VTL) and DD Boost,
require the support of underlying components. These components are
A Fibre Channel (FC) initiator is the host port that initiates a session and
sends commands to the target endpoint for data transfers over FC. In a
storage environment, the Fibre Channel target is almost always on the
storage device. PowerProtect DD is no exception. FC targets are passive
FC entities that wait for FC initiators to request data read or write.
When you add an access group for the initiator or client, the client can
access only the devices in that access group. A client can have access
groups for multiple devices.
An access group may contain multiple initiators, but an initiator can exist in
only one access group.
Reviewing FC Initiators
Adding an FC Initiator
Deleting an FC Initiator
The Delete Initiator Button in the DD System Manager Fibre Channel Window
In non-NPIV mode, ports use the same properties as the endpoint. The
WWPN for the base port and the endpoint are the same.
In NPIV mode, the endpoint keeps its worldwide port name (WWPN) and
the system generates a new WWPN for the base physical port. This
means that you should not need to make changes to zoning if NPIV is
enabled on an existing system. The PowerProtect DD system preserves
the original WWPN on the endpoint to enable consistent switching
between NPIV modes.
You must enable channel ports before the system can identify and use
them.
When you enable an FC port, the system also enables any endpoints
using that port. With endpoint fallback, any endpoints that failed over
when the port was disabled should fail back to their primary port.
2 With NPIV, you can assign multiple N-Port IDs or Fibre Channel IDs
(FCID) over a single Fibre Channel host connection or N-Port.
In non-NPIV mode, disabling one or more target ports also disables any
endpoints using that port.
If you want to check the status of the Fibre Channel subsystem in DDOS,
go to Hardware > Fibre Channel in the DD System Manager (DDSM).
View the Fibre Channel Status that is shown near the top of the Fibre
Channel page.
You can enable or disable Fibre Channel only through the command line
interface (CLI). The roles required to perform these commands are admin
and limited-admin. Use the following commands to enable and disable
Fibre Channel in DDOS:
Review FC Ports
Enable FC Ports
Disable FC Ports
You can also use the command line command, scsitarget port
disable to disable FC ports.
Configure FC Ports
You can also use the command line interface (CLI) command,
scsitarget port modify to configure a Fibre Channel port.
Link aggregation and link failover are two types of bonding that most
PowerProtect DD systems support.
When designing link failover and link aggregation, consider the following:
Components
• The system software sends and receives data to and from the bonded
interface3. Data moves across the bonded interface in the same
manner as across a physical interface.
• The virtual network interface provides the system software with a way
to access the underlying aggregated link connection, link failover
connection, or virtual local area network (VLAN). The system views the
virtual interface as a normal physical network interface.
Bonding Modes
Topologies
Bonding Hash
When you configure link aggregation, consider the following that can affect
performance:
• The network switch and network link speeds impact performance when
data throughput exceeds the switch capacity.
− If packets originate from several ports and connect to one uplink
running at maximum speed, the switch may lose some packets.
Consider using only one switch for port aggregation coming from a
PowerProtect DD system.
• Out-of-order packets can impact performance due to the processing
time the system requires to reorder the packets.
− Round-robin link aggregation mode can result in packets arriving at
the destination out-of-order. Out-of-order packets add overhead
that can severely reduce the throughput speed.
• The number of clients can impact performance.
− The bonded interface might have VLANs or aliases on it, each
with an IP address.
Link Control
Link control does not extend beyond the directly connected device. If the
media or application server is not directly connected to the PowerProtect
DD system, the failover or aggregation functions cannot manage physical
link operations. Higher-level protocols detect any loss of connectivity.
Link Failover
3: If the system loses the carrier signal, the active interface changes to
another standby interface. An Address Resolution Protocol (ARP) call
comes from the system to indicate that the data must flow to the new
active interface.
Direct Connect
LAN Connect
Remote Connect
must go through a gateway. All packets contain the MAC addresses of the
gateway and PowerProtect DD.
Administrators can create and manage the bond interface using the
following steps:
can belong to one virtual interface. The number and type of cards on the
system determine the number of physical Ethernet interfaces available.
To disable the interface that you want to configure, perform the following
steps:
1. In the DD System Manager, select Hardware > Ethernet >
INTERFACES.
2. In the Interfaces list, disable the physical interface to which you want
to add the bonded interface. Select the interface from the list and click
No in the Enabled column.
If an error appears that warns about the dangers of disabling the interface,
verify that the interface is not in use and click OK.
3. Select the interfaces that you want to add to the failover configuration.
Click the checkboxes under Select physical interface(s) for bonding
next to each interface. Only group identical physical interfaces to
create the bonded interface. You may have only one interface group
active at a time.
VLAN Interfaces
• When you create a VLAN interface, you must provide an IP address for
the underlying physical or bonded interface.
Administering IP Aliases
Configuring IP Aliases
1. Go to the Hardware > Ethernet > INTERFACES tab and select the
interface to add the IP alias. You can choose an existing physical,
VLAN, or virtual interface.
2. Click CREATE.
3. From the CREATE menu, select the IP Alias option.
4. Specify an IP alias ID by entering a number in the IP Alias Id field.
Use any number between 1 and 4094 for the IP alias ID. You cannot
use the same IP alias ID that exists on the base interface. The
command line allows using a number range between 1 and 9999.
5. Enter an IPv4 and subnet mask or IPv6 address and prefix.
6. If you plan to use the IP alias in Windows mode, click the Dynamic
DNS Registration (DDNS) for Windows mode checkbox and click
NEXT.
7. Once the system configures the IP alias, click OK.
8. Review the details from the newly configured IP alias in the interface
table in the Hardware > Ethernet > INTERFACES tab.
9. Click FINISH to complete the configuration.
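The two ID ranges from step 4 can be sketched as a validation helper. The function is hypothetical and only restates the documented ranges (1 to 4094 in DD System Manager, 1 to 9999 in the CLI):

```shell
# Hypothetical check of an IP alias ID against the documented ranges.
valid_alias_id() {  # usage: valid_alias_id <id> <gui|cli>
  max=9999
  [ "$2" = "gui" ] && max=4094
  [ "$1" -ge 1 ] && [ "$1" -le "$max" ]
}
valid_alias_id 4500 cli && echo "4500 is accepted by the CLI"
valid_alias_id 4500 gui || echo "4500 is rejected by DD System Manager"
```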
PowerProtect DD CIFS and NFS Implementation and Administration
Administering CIFS
Evaluating CIFS Status
Evaluate and Change CIFS Status Using CLI Commands
Managing CIFS Shares
Detailed CIFS Share Information
Creating a CIFS Share
CLI Commands to Manage CIFS Shares
Configuring CIFS Options
Accessing a CIFS Share
Monitoring CIFS Status
Administering NFS
Exploring NFS
Exploring NFS Status
NFS Status Using the Command Line
Exploring NFS Exports
NFS Exports Using the Command Line
NFS Export Options
Exploring Kerberos Authentication
Exploring Kerberos in DDOS
Kerberos Authentication
Exploring Active Directory Authentication
Monitoring NFS
Monitor NFS Client Status Using the Command Line
Administering CIFS
Common Internet File System (CIFS) clients can have access to the
system directories on the PowerProtect DD system. For some backup
applications that write to network drives, you will need to create CIFS
shares to provide access to a PowerProtect DD backup location.
For administrative tasks, such as retrieving core and log files, DDOS uses
the /ddvar/core directory and its subdirectories.
In the DD System Manager, the Protocols > CIFS window displays the
CIFS status in the DD Operating System (DDOS).
As part of the initial protection system configuration, you can enable the
CIFS protocol and configure clients to access the protection system. You
can modify the initial settings on the Protocols > CIFS page. For instance, if
CIFS is not enabled, you can enable CIFS by clicking ENABLE next to the
CIFS Status on the CIFS page. With administrative privileges, you can
perform all CIFS operations such as setting authentication, managing
shares, viewing configuration, and sharing information.
In the DD System Manager, go to the Protocols > CIFS > SHARES page
to create, modify, delete, enable, and disable CIFS shares. Only admin or
limited-admin roles can perform these actions.
You can use a share name that is different from the backup directory
name. For example, you might create a backup directory path
/data/col1/backup2 and then name the share for backup2 HR to
better identify the share assignment.
Assign client access in the Clients field. To make a share available to all
clients, use the wildcard *. To make the share available to only specific
clients, use client names or IP addresses.
Do not mix a wildcard with client names or IP addresses. The system does
not apply any other client entries when an * is present in the list.
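The wildcard rule above can be sketched as a simple list check. The helper is hypothetical; DDOS applies the actual rule itself:

```shell
# Hypothetical check: a client list may be "*" alone, or specific entries,
# but not a mix of "*" with names or IP addresses.
valid_clients() {
  case $1 in
    '*') return 0 ;;        # wildcard alone: share open to all clients
    *'*'*) return 1 ;;      # "*" mixed with other entries: others are ignored
    *) return 0 ;;          # specific names or IP addresses only
  esac
}
valid_clients '10.1.1.5,10.1.1.6' && echo "specific client list: ok"
valid_clients '10.1.1.5,*' || echo "mixed list: wildcard would override"
```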
Among other functions, the cifs share command can create, destroy,
enable, disable, modify, and show the configurations of CIFS shares:
• The cifs share create command creates a share.
• The cifs share destroy command deletes a share.
• The cifs share disable command disables a share.
• The cifs share enable command enables a share.
• The cifs share modify command modifies a share configuration.
• The cifs share show command displays a list of share
configurations for all shares.
The Configure Options dialog box enables you to modify three areas:
• User indicates the user operating the system that is connected with the
PowerProtect DD system.
• Open Files shows the number of open files for each session.
• Connect Time shows the connection length in minutes.
• Idle Time is the time since last activity of the user.
The Open Files area of the Connection Details dialog box contains
additional information about CIFS connections:
• User shows the name of the system and the user on that system.
• Mode displays the following values and each value has a
corresponding permission:
− 0 – No permission
− 1 – Perform
− 2 – Write
− 3 – Perform and Write
− 4 – Read
− 5 – Read and Perform
− 6 – Read and Write
− 7 – All Permissions
• Locks displays the number of file locks.
• Files displays the file location.
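The Mode values above form a permission bitmask (Read = 4, Write = 2, Perform = 1). A small sketch that decodes a value; the function name is hypothetical:

```shell
# Decode a CIFS Mode value into permission names (Read=4, Write=2, Perform=1).
decode_mode() {
  m=$1; out=""
  [ $((m & 4)) -ne 0 ] && out="${out}Read "
  [ $((m & 2)) -ne 0 ] && out="${out}Write "
  [ $((m & 1)) -ne 0 ] && out="${out}Perform"
  echo "${out:-None}"
}
decode_mode 5   # mode 5 combines Read (4) and Perform (1)
```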
Administrators can use the command line interface (CLI) to monitor CIFS
activity with the following command:
Administering NFS
Administering NFS
Exploring NFS
The Network File System (NFS) is a distributed file system protocol. NFS
is an open standard defined by requests for comments (RFCs); anyone
can implement the NFS protocol. NFS client system users access files
over a network in a manner similar to how they access local storage. NFS,
like many other protocols, builds on the open network computing remote
procedure call (ONC RPC) system.
NFS clients can have access to the system directories or MTrees on the
PowerProtect DD system. The /ddvar directory contains PowerProtect
DD system, core, and log files. The /data/col1/backup directory is the
default destination for deduplicated backup data.
For administrative tasks, such as retrieving core and log files, DDOS
makes the /ddvar directory available as an NFS mount point by default.
In the CLI, the command nfs status indicates whether NFS is enabled
or disabled. If it is not active, nfs enable starts the NFS server.
Use the following CLI commands to enable, disable, and check the NFS
status:
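Based on the commands named in this section, a minimal CLI session might look like the following (output omitted):

```
nfs status    # report whether the NFS server is enabled
nfs enable    # start the NFS server if it is not active
nfs disable   # stop the NFS server
```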
You must create and specify the path that NFS clients can access. The
/ddvar directory contains the PowerProtect DD system, core, and log
files. The /data/col1/backup folder is the default destination for
deduplicated backup data.
NFS assigns and removes client access for each export separately. For
example, you can remove a client from /ddvar, and that client can still
access /data/col1/backup.
You should consider these additional client access rules. A single asterisk
* indicates a wildcard entry. A wildcard allows you to use all backup
servers as clients. Clients with access to the /data/col1/backup
directory can access the entire directory. Clients with access to a
subdirectory under the /data/col1/backup only have access to that
subdirectory.
You can use the command line to manage NFS exports. Administrators
with admin role credentials can run the following commands:
• The nfs export add command adds a client or list of clients to one
or more exports.
• The nfs export del command removes a client or a list of clients
from existing exports.
• The nfs export create command creates a named export and
adds a path.
• The nfs export destroy command deletes one or more NFS
exports.
• The nfs export modify command modifies existing client entries
on an export or set of exports.
Use the CLI to configure the default options for the export path. The
options are the following:
• The rw option enables read and write permissions. rw is the default
value.
• The no_root_squash option turns off root squashing.
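As an illustration of these options, an export might be created and a client added like this. The export name, path, and client address are examples, and the exact argument syntax may vary by DDOS release:

```
nfs export create backup2_export path /data/col1/backup2
nfs export add backup2_export clients 10.1.1.5 options rw,no_root_squash
```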
Kerberos uses User Datagram Protocol (UDP) port 88 by default. You can
configure Kerberos from the DD System Manager in the Network File
System (NFS) window.
Kerberos Authentication
You can use DD System Manager to manage access to the system for
users and groups in Windows Active Directory, Windows Workgroup, and
NIS. Kerberos authentication is an option for CIFS and NFS clients.
5. Enter the full realm name for the system. For example, use
domain1.local. Include the username, and password for the system.
Then click NEXT.
6. Select the default CIFS server name, or select Manual and enter a
CIFS server name.
7. Select domain controllers. You can select Automatically assign, or
select Manual and enter up to three domain controller names.
8. Select an organizational unit (OU). You can choose Use default
Computers, or select Manual and enter an OU name. Click NEXT.
9. Click FINISH.
10. Click ENABLE.
Monitoring NFS
Monitoring NFS
In the DD System Manager, the Protocols > NFS > ACTIVE CLIENTS
tab displays any configured NFS clients and the related mount paths that
have been connected in the past 15 minutes. NFS clients and related
mount paths that have been connected for more than 15 minutes are not
displayed.
You can use the following command line interface (CLI) commands to
monitor NFS client status:
• The nfs show active command lists the clients that have been
active in the past 15 minutes and the mount path for each client.
• The nfs show clients command lists NFS clients, mount path, and
NFS options for each client that has access to the PowerProtect DD
system.
• The nfs show detailed-stats command displays NFS cache
entries and status to facilitate troubleshooting.
You can perform these commands with the admin, limited-admin, user,
backup-operator, security, tenant-user, and tenant-admin roles.
PowerProtect DD File System and Data Management
Exploring MTrees
MTree Structure
You can create subdirectories within all MTrees, including the default
MTree. The DDOS reports the cumulative data that is contained within the
MTree.
MTree Benefits
The following are some of the benefits of using MTrees to organize the
data on your protection system:
• Space and deduplication rate reporting
− MTrees provide finer granular reporting for space and deduplication
rates than directories or collections. With MTrees, you can manage
your data with finer detail. Use a data snapshot to record the state
of data stored on the device at any given moment. You can
preserve that snapshot as a guide for restoring the storage device
or a portion of the data. Snapshots are used extensively with
MTrees as a part of the PowerProtect DD data restoration process.
• Independent storage
− Administrators can organize MTrees into individual departments,
geographies, or customers each with their own independent
storage location.
• Retention lock
− Administrators can apply retention lock at the MTree level. DD
Retention lock is an optional feature that the PowerProtect DD
system uses to securely retain saved data for an extended length of
time. DD Retention lock protects data from accidental or malicious
deletion during its retention time.
• Quotas
MTree Limits
The following table shows MTree limits for all PowerProtect DD systems:
MTree Quotas
MTree quotas allow you to set limits on the amount of logical
(pre-compression) space that an MTree can consume.
You can set quotas on user-created MTrees, but not the default /backup
MTree.
Quotas are independent of protocol. You can set quotas for MTrees used
by CIFS, NFS, PowerProtect DD VTL, or DD Boost data.
There are two types of quotas: soft limits and hard limits. When a soft limit
is reached, the system generates an alert, but operations continue as
normal. When you set a hard limit on an MTree and the amount of data in
the MTree reaches the hard limit, all write operations fail. You must
remove data from the MTree before write operations resume.
Administrators may set either soft, hard, or both soft and hard limits.
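The soft- and hard-limit behavior described above can be sketched as a small, illustrative Python function. This is not DDOS code; the function name, units, and return strings are hypothetical, chosen only to show the decision logic.

```python
# Illustrative sketch (not DDOS internals) of MTree quota behavior:
# reaching the soft limit raises an alert but writes continue;
# reaching the hard limit causes writes to fail.

def check_quota(pre_comp_bytes: int, soft_limit: int, hard_limit: int) -> str:
    """Return the behavior an administrator would observe for a write."""
    if hard_limit and pre_comp_bytes >= hard_limit:
        # Hard limit reached: writes fail until data is removed,
        # the limit is raised, or hard limits are disabled.
        return "write-fails"
    if soft_limit and pre_comp_bytes >= soft_limit:
        # Soft limit reached: the system generates an alert,
        # but operations continue as normal.
        return "alert-writes-continue"
    return "ok"

GiB = 1024 ** 3
print(check_quota(90 * GiB, soft_limit=80 * GiB, hard_limit=100 * GiB))
# alert-writes-continue
print(check_quota(100 * GiB, soft_limit=80 * GiB, hard_limit=100 * GiB))
# write-fails
```

Passing 0 for either limit models leaving that limit unset, matching the option of configuring only soft, only hard, or both limits.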
Creating MTrees
If you use the command line to create an MTree, type mtree create
<mtree-path>.
Use Quota Settings to set storage space restrictions for an MTree, storage
unit, or DD VTL pool to prevent it from consuming excess space.
Set the Pre-Comp Soft Limit, Pre-Comp Hard Limit, and combined
limits for the selected MTree.
• When setting quotas from the Quota tab, select Data Management >
Quota.
• Set Quota Enforcement to Enabled.
• You can apply MTree quotas to DD VTL, DD Boost, CIFS, and NFS
protocols that you assign to an MTree.
• Snapshots do not count towards the quota of the MTree.
• You cannot set quotas on the /data/col1/backup directory.
• The maximum allowed quota value is 4096 PB of pre-compressed
data.
Data Management
The summary also displays compression ratios for the last 24 hours, the
last seven days, and the current weekly average compression.
MTree Alerts
In the DD System Manager, the Health > Alerts window displays MTree
quota alerts. The system displays alerts in CURRENT ALERTS, ALERTS
HISTORY, NOTIFICATION, and DAILY ALERT SUMMARY tabs.
When you enable quota limits for MTrees, and capacity reaches its soft
limit, the system generates an alert, but operations continue as normal.
The Severity level is Warning.
When you enable quota limits for MTrees, and capacity reaches its hard
limit, two things happen:
• Any further data backing up to this MTree fails.
• The system generates an alert and an out-of-space error.
− The alert appears in the CURRENT ALERTS tab of the Health >
Alerts window. The Severity level is CRITICAL. The system also
reports the error to the backup application.
To resume backup operations after the system reaches a hard limit quota,
you can take one of three actions:
• You can delete sufficient content in the MTree.
• You can increase the hard limit quota.
• You can disable hard limit quotas for the MTree.
The system reports the same alerts in the Home > Dashboard > Alerts
window.
Exploring Snapshots
In the following example, the snapshot copies only the metadata pointers
to the production data for a specific point in time. In this case, 22:24 GMT.
The copy is quick and places a minimal load on the production systems. If
needed, the snapshot can be later used as a restore point.
Snapshots are a point in time view of a file system. You can use
snapshots to recover previous versions of files and also to recover files
that are accidentally deleted.
After you take a snapshot, changes to the data can still occur. When data
changes, the live file system points to the new data segments, while the
snapshot metadata pointers continue to reference the original segments.
The system retains the original data segments that the snapshot metadata
pointers reference, so the snapshot continues to present the data as it
existed at the original point in time.
With the snapshot, you can retrieve all data from the time you took the
snapshot. The system does not overwrite the original data; it adds new
pointers for the changed data.
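The pointer behavior above can be sketched with ordinary Python dictionaries. This is a conceptual model only, not DDOS internals: the segment store, path names, and segment IDs are invented for illustration.

```python
# Conceptual sketch: a snapshot freezes metadata pointers while the
# live file system moves on to new segments. Originals are never
# overwritten; changed data gets new segments and new pointers.
segments = {"s1": b"original"}           # deduplicated segment store
live = {"/backup/file1": "s1"}           # live metadata: path -> segment id

snapshot = dict(live)                    # snapshot copies only the pointers

# A later change writes a NEW segment; the original segment remains.
segments["s2"] = b"changed"
live["/backup/file1"] = "s2"

print(segments[snapshot["/backup/file1"]])  # b'original' (point-in-time view)
print(segments[live["/backup/file1"]])      # b'changed'  (current view)
```

Because the snapshot holds only pointers, taking it is quick and places minimal load on the system, as the example in the text describes.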
Snapshot Operations
• The system creates the .snapshot directory for each directory under
/data/col1/backup with the name of each snapshot, snap001,
snap002, and so on, in that directory.
• The system adds snapshots for the MTree in /data/col1/backup
as /data/col1/backup/.snapshot.
• Each MTree where you create snapshots contains the same type of
structure. An MTree /HR has a system-created directory
/data/col1/HR/.snapshot.
You can use the snapshot feature to take images of an MTree, manage
MTree snapshot organization and schedules, and display information
about the status of snapshots.
Creating a Snapshot
You can set up and manage a series of snapshots that automatically take
snapshots at regular intervals.
Schedules View
Schedule Details
Provide a Name and a Snapshot Name Pattern for the snapshot schedule on
the Schedule Details page.
• In the Name field, enter the name you want to give the schedule.
• In the Snapshot Name Pattern field, enter a name pattern.
• Enter a string of characters and variables that translates to a snapshot
name. For example, to name a snapshot for April 12, 2024 at 17:33,
you might enter scheduled-2024-04-12-17-33. Use alphabetic
characters, numbers, _, -, and variables that translate into current
values.
• Click Next.
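To illustrate how a name pattern with date and time variables can expand into a concrete snapshot name, here is a small Python sketch. The strftime-style variable syntax shown here is an assumption for illustration; the actual DDOS pattern variables may differ.

```python
# Hypothetical sketch of expanding a snapshot name pattern with
# date/time variables into a concrete name. The %Y/%m/%d/%H/%M
# variable syntax is an assumption, not the documented DDOS syntax.
from datetime import datetime

def expand_pattern(pattern: str, when: datetime) -> str:
    """Translate pattern variables into current values."""
    return when.strftime(pattern)

t = datetime(2024, 4, 12, 17, 33)
print(expand_pattern("scheduled-%Y-%m-%d-%H-%M", t))
# scheduled-2024-04-12-17-33
```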
Schedule Execution
Select the time of day when you want to perform the schedule:
• If you want the snapshot to occur at specific timed intervals, select At
Specific Times and click Add.
− The Time dialog appears.
• Enter the time in the format hh:mm, and click OK.
• If you want the snapshot to occur in specific intervals, select In
Intervals. Click the drop-down menus to select the Start Time and End
Time in hh:mm format and AM or PM. Click the Interval drop-down
menu to choose the number of snapshots and then the hours or
minutes of the interval.
• Click Next.
Associate MTrees
The Associate MTree(s) page displays a list of Available MTree(s) and a list of
Selected MTree(s).
Choose the MTrees that you want to associate with the snapshot schedule
you are creating.
• Select an MTree from the Available MTree(s) list and move it to the
Selected MTree(s) list.
• Remove any MTree you do not want to associate with the current
snapshot schedule.
• Click Next.
Summary
Review the Summary window and click Finish to complete the schedule.
You can also use the command line interface (CLI) command, snapshot
schedule create <name> [mtrees <mtree-list>] [days
<days>] time <time> [,<time>...] [retention <period>]
[snap-name-pattern <pattern>] to configure a snapshot schedule.
Monitoring Snapshots
Use fast copy operation to retrieve data that is stored in a snapshot. Fast
copy makes a read/write copy of your backed-up data on the same
PowerProtect DD system. The copied data is the same as the original as
long as data in the source and destination directories do not change while
the fast copy completes.
Fast copy makes a copy of the pointers to data segments and structure of
a source to a target directory on the same PowerProtect DD system.
Administrators can use the fast copy operation to retrieve data stored in
snapshots.
Evaluate the following when using fast copy for data recovery:
• The fast copy operation can be used as part of a data recovery
workflow by using a snapshot of the data you want to recover.
− You cannot view snapshot content using a common Internet File
System (CIFS) share or Network File System (NFS) mount. You
can view all data from a fast copy of the same snapshot.
Administrators can recover lost data without disturbing normal
backup operations and production files by using a fast copy on a
share or mount.
• Fast copy makes a destination equal to the source, but not at a
particular point in time.
− The contents of the source and the destination may not be equal if
either is changed during the copy operation.
• You must manually identify and delete the fast-copied data to free up
space, and then run file system cleaning to reclaim the space that the
fast copy holds.
A fast copy operation clones files and directory trees of a source directory
to a target directory on a protection system.
1. Select Data Management > File System > SUMMARY and click Fast
Copy.
2. In the Source text box, enter the pathname of the directory where the
data to be copied resides. For example, /data/
col1/backup/.snapshot/snapshot-name/dir1 is an
appropriate path.
3. In the Destination text box, enter the pathname of the destination
directory. For example, /data/col1/backup/dir2 is an
appropriate path. This destination directory must be empty, or the
operation fails.
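The fast copy operation described above can be sketched conceptually: the system clones the metadata pointers of a source directory tree to an empty destination on the same system, without duplicating the underlying segments. This is a model for illustration only; the paths and segment IDs are invented, not DDOS internals.

```python
# Conceptual sketch (not DDOS internals): fast copy clones pointers
# from a source directory (here, inside a snapshot) to an empty
# destination directory on the same system.
segment_store = {"s1": b"backup-data"}
tree = {"/data/col1/backup/.snapshot/snap1/dir1/file": "s1"}

def fast_copy(tree: dict, src: str, dst: str) -> dict:
    # The destination directory must be empty, or the operation fails.
    assert not any(p.startswith(dst) for p in tree), "destination must be empty"
    copies = {p.replace(src, dst, 1): seg
              for p, seg in tree.items() if p.startswith(src)}
    tree.update(copies)          # only pointers are copied, not segments
    return tree

fast_copy(tree,
          "/data/col1/backup/.snapshot/snap1/dir1",
          "/data/col1/backup/dir2")
print("/data/col1/backup/dir2/file" in tree)  # True
```

Because only pointers are cloned, the copy is fast and consumes little additional space until the source or destination data diverges.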
PowerProtect DD Data Replication and Recovery
Configuring Replication
Discovering Replication URL Schemes
Example URL Schemes
Adding a Partner System
Creating a Replication Context
Configuring a Replication Context Using the CLI
Reviewing Replication Configuration
Reviewing Replication Configuration Using the CLI
Monitoring Replication
Creating Replication Status Reports
Creating a Replication Status Report
Replication Status Report Details
Data Recovery
Data Recovery Description
Configuring Data Recovery
Data Resynchronization
Terms
Replication Architecture
The replication process only copies information that does not exist in the
destination system. Deduplication reduces network demands during
replication because the source sends only unique data segments over the
network to the destination.
• Data recovery
You can assign multiple replication contexts to one replication pair. You
can assign a PowerProtect DD appliance as the replication source of one
context and the replication destination of a second context simultaneously.
Replication Streams
Replication Types
Replication Topologies
Replication Guidelines
Collection replication is the only form of replication that is used for true
disaster recovery. You cannot share the destination system of a collection
pair for other roles.
When you enable encryption, the encryption algorithm and the system
passphrases must match or encryption fails. The system checks
encryption parameters during the replication association phase. During
collection replication, the source system transmits the encrypted user data
along with the encrypted system encryption key. You can recover the data
at the destination because it uses the same passphrase and the same
system encryption key.
Directory Replication
The source system then sends copies of the missing segments to the
destination. This method of replication allows source and destination
systems to use bandwidth more efficiently.
Directory replication can receive backups from both common Internet file
system (CIFS) and network file system (NFS) protocol clients. You must
use separate directories for each protocol. Do not mix CIFS and NFS data
in the same directory. The directory replication source cannot be the
parent or the child of a directory that is already being replicated.
If you receive either of the following warnings, you can choose to migrate
existing directory replication contexts to MTree replication contexts.
MTree replication copies the data segments that are associated with the
entire MTree structure except for the /data/col1/backup MTree.
MTree replication clones all metadata and file data that is related to the
MTree. MTree replication uses snapshots to determine what to send to the
destination.
Replicated files do not appear out of order because the directory tree
structure is part of the data in the snapshot.
The following are guidelines for the MTree replication destination system:
Initializing Replication
Replication Initialization
The Synced As Of Time displays the local date and time of sync
completion when replication is finished.
If you must sync a large amount of data and the replication pair is
connected to a slow link, initialization can take some time. To expedite the
initial data transfer to the destination system, bring the destination and
source systems together to use a high-speed low-latency link.
Configuring Replication
The scheme types are mtree:// for MTree and managed file replication
(MFR) contexts and col:// for collection replication contexts.
The hostname portion of the replication URL is the same as the output of
the net show hostname command. The path is the logical path to the
target directory or MTree.
The path for an MTree URL starts with the hostname, followed by
/data/col1 and ends with the name of the target MTree.
The following are example URL schemes for collection and MTree
replication. Use these schemes when identifying the replication pair in the
DD System Manager Create Replication Pair window. Use the same
schemes when adding a replication pair with the replication add CLI
command.
For collection replication, use the following URL scheme, providing only
the hostname:
• col://<hostname>
For MTree replication, include the logical path to the MTree on each
system:
• mtree://<source-hostname>/data/col1/<source-mtree-
name>
• mtree://<destination-
hostname>/data/col1/<destination-mtree-name>
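The URL scheme rules above can be captured in two small helper functions. This is an illustrative sketch; the hostnames and MTree name are hypothetical examples, not values from the source.

```python
# Sketch of assembling replication URLs from the scheme rules:
# col://   -> hostname only (collection replication)
# mtree:// -> hostname plus the logical /data/col1/<mtree> path

def collection_url(hostname: str) -> str:
    return f"col://{hostname}"

def mtree_url(hostname: str, mtree_name: str) -> str:
    return f"mtree://{hostname}/data/col1/{mtree_name}"

# Hypothetical hosts and MTree name for illustration:
print(collection_url("dd-dest.example.com"))
# col://dd-dest.example.com
print(mtree_url("dd-dest.example.com", "HR"))
# mtree://dd-dest.example.com/data/col1/HR
```

The hostname portion should match the output of net show hostname on the target system, as noted earlier.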
Before you can configure replication between two systems, you must first
configure the destination or partner PowerProtect DD system, to let the
source system manage it.
If the host system cannot reach the partner system after adding it, verify
the route from the managing system to the added system.
Use the Replication > Automatic > SUMMARY > Detailed Information pane to Review a
Replication Configuration
Select a context from the list in the replication summary table to see
Detailed Information pertaining to the selected context.
The listen port is the transmission control protocol (TCP) port the
replication destination system monitors for incoming connections. If a
firewall configuration or other network issues interfere with the default
connections between the replication source and destination, you can
modify the listen port.
The listen port is a global setting. All contexts for which this system is a
destination monitor this port. All replication source systems must be
configured to connect to this particular port value.
The connection port is the TCP port that the source system uses to
communicate to the replication destination. The connection port is
configured per context. It is not a global setting. The default value for the
connection port is 2051.
Because the replication destination has a default listen port value of 2051,
each replication source must use a corresponding connection port value of
2051. In the example, the first two systems are configured with the correct
connection port, 2051. The third system is using an incorrect connection
port value, 3030. An incorrect connection port value prevents a replication
connection with the destination.
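The port-matching rule described above can be expressed as a one-line check. This is an illustrative sketch; the host names are hypothetical stand-ins for the three systems in the example.

```python
# Sketch of the rule: a source connects only when its per-context
# connection port matches the destination's global listen port.
LISTEN_PORT = 2051  # destination's listen port (default 2051)

def can_connect(connection_port: int, listen_port: int = LISTEN_PORT) -> bool:
    """True when the source's connection port matches the listen port."""
    return connection_port == listen_port

# Hypothetical source systems mirroring the example in the text:
print(can_connect(2051))  # True  - correct connection port
print(can_connect(2051))  # True  - correct connection port
print(can_connect(3030))  # False - mismatch prevents the connection
```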
To move data traffic through a specific port, you can change the
Connection Port in the Modify the Connection Settings window. Change the
current context by changing the Connection Host parameter using a
hostname that is defined in the local hosts file. Using the Connection Host
parameter allows you to change the name of the destination system
without having to destroy and recreate the replication pair. The hostname
corresponds to the destination. The host entry indicates an alternate
destination address for that host.
You can specify a non-default connection port value when you create the
context in DD System Manager. You can modify the port value after the
context is created.
Low-Bandwidth Optimization
Do not use LBO if the system requires maximum file system write
performance. Enable LBO only for replication contexts that are configured
over wide area network (WAN) links with less than 6 Mb per second of
available bandwidth.
You enable LBO on a per-context basis for all file replication jobs on a
system. You must enable LBO on both the source and destination
PowerProtect DD appliances.
You might further tune your system to improve LBO functionality. Use
bandwidth and network-delay settings together to calculate the proper
transmission control protocol (TCP) buffer size and set replication
bandwidth for replication for greater compatibility with LBO.
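The tuning described above rests on the bandwidth-delay product: the TCP buffer must hold at least the data in flight on the link. The sketch below shows the arithmetic; the 6 Mb/s figure is the LBO threshold mentioned earlier, while the 80 ms round-trip delay is a hypothetical WAN value for illustration.

```python
# Sketch of the bandwidth-delay product behind sizing the TCP buffer
# for a low-bandwidth replication link.

def tcp_buffer_bytes(bandwidth_bps: float, delay_s: float) -> int:
    """Bits in flight on the link (bandwidth x delay), as bytes."""
    return int(bandwidth_bps * delay_s / 8)

# 6 Mb/s link (the LBO threshold) with a hypothetical 80 ms delay:
print(tcp_buffer_bytes(6_000_000, 0.080))  # 60000 bytes
```

A buffer smaller than this value would leave the link underutilized; a larger one wastes memory without improving throughput.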
When using DD System Manager, you can enable LBO when you create a
replication context. You can disable LBO anytime afterward.
When you enable the encryption over wire option on a replication context,
the system must first process the data that it reads from the disk. If you
enable the data at rest encryption feature, the source system must decrypt
the data before it is processed for replication. Otherwise, the data is read
from the source system as it is stored.
The replication source encrypts the data using the encryption over wire
algorithm before the system transmits the data to the destination system.
You can also modify the encryption over wire setting after the context is
created.
When using the DD System Manager, you can enable the encryption over
wire feature when you create the context:
1. Go to the Replication > Automatic > SUMMARY tab on the source
system.
2. Select CREATE PAIR to create a replication pair.
3. Complete the configuration of the CREATE PAIR > CREATE tab.
4. Select the ADVANCED tab.
5. Select the checkbox Enable Encryption Over Wire.
6. Click OK.
To modify the amount of bandwidth used for replication, you can set
replication throttle for replication traffic.
Click ADD THROTTLE SETTING to view the Add Throttle Setting dialog.
The Add Throttle Setting dialog shows the current settings for any temporary
overrides. If you configure an override, the Add Throttle Setting dialog
displays the configured throttle rate: a specific rate, 0 (which stops all
replication traffic), or Unlimited.
The Add Throttle Setting dialog also shows the configured schedule. You
should see the time for days of the week on which scheduled throttling
occurs.
Replication Schedule
Monitoring Replication
There are two types of replication reports available for PowerProtect DD:
• The Replication Status Report
You can create a replication status report when you want to evaluate past
collected file system or replication data:
1. In the PowerProtect DD Management Center, select Reports >
Management.
2. Click ADD.
− The Add Report Template appears.
Data Recovery
Data Resynchronization
The resynchronization process adds the context back to both the source
and destination systems and starts the resync process. The
resynchronization process can take several hours and up to several days,
depending on the size of the system and current load factors.
Replication initialization
Replication initialization is the process of transferring the initial replication
data from the source system to the target system.
DD Boost Implementation and Administration
Exploring DD Boost
Exploring DD Boost Features
DD Boost Overview
DD Boost Features
DD Boost Security Options
Managing Storage Units
Exploring Distributed Segment Processing
Exploring Managed File Replication with DD Boost
Managing Load Balancing and Link Failover for DD Boost
Exploring Virtual Synthetic Backups
When to Use Virtual Synthetic Backups
When Not to Use Virtual Synthetic Backups
DD Boost Over Fibre Channel
Exploring DD Boost in High Availability Systems
DD Boost File System for Windows and Linux
BoostFS for Windows
BoostFS for Linux
Configuring DD Boost
Enabling DD Boost
Adding DD Boost Users and Clients
Creating Storage Units
Renaming, Deleting, and Restoring Storage Units
Setting DD Boost Options
Configuring DD Boost Over Fibre Channel
Creating DD Boost Access Groups
Terms
Exploring DD Boost
DD Boost Overview
• Dell NetWorker
• Dell Avamar
• Third-party DD Boost partner backup applications
DD Boost Features
1 The DD Boost plugin includes the DD Boost libraries for integrating with
the DD Boost server running on the protection system.
The effective authentication mode and encryption strength come from the
single entry that provides the highest authentication mode. The system
does not use the highest authentication mode from one entry and the
highest encryption settings from a different entry.
DD Boost supports file replication encryption. You can encrypt the data
replication stream by enabling the DD Boost file replication encryption
option. If you use DD Boost file replication encryption on a system
without the data at rest option, you must enable it on both systems.
Distributed Segment Processing (DSP) Shares Deduplication Duties with the Backup
Host.
Managed file replication (MFR) directly transfers backup data from one
PowerProtect DD system to another, one at a time on request from the
backup software. MFR uses DD Boost integration between two or more
PowerProtect DD appliances and the backup application. MFR allows
schedule replication operations and monitoring backups for both local and
remote sites. MFR simplifies the recovery from backup copies because all
copies are tracked in the backup software catalog.
You can manage the physical interfaces that connect the system to a
network and create logical interfaces to support load balancing and link
failover.
The advanced load balancing and link failover feature supports combining
multiple Ethernet links into an interface group.
The links connecting the backup hosts and the switch that connects to the
PowerProtect DD appliance are placed in an aggregated failover mode.
The backup application registers a network-layer aggregation of multiple 1
GbE or 10 GbE links. The backup server controls the aggregated links.
During a traditional full backup, the protection system copies all data from
the client to a backup host. The backup host sends the resulting image set
to the PowerProtect DD appliance. The system transfers the files even
though the data may not have changed since the last incremental or
differential backup. Copying data that has not changed since the last full
backup results in more bandwidth and time to perform a backup operation.
In contrast, a synthetic full backup combines the previous full backup with
the subsequent incremental backups on the PowerProtect DD appliance.
The combination forms a new full backup. The new full synthetic backup is
an accurate representation of the client file system at the time of the most
recent incremental backup.
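The merge that produces a synthetic full backup can be sketched conceptually: the previous full image is combined with each incremental in order, so newer versions of changed files supersede older ones. The file names and versions below are invented for illustration, not part of any real backup format.

```python
# Conceptual sketch: a synthetic full backup merges the previous full
# backup with subsequent incrementals into a new full image, without
# re-reading unchanged data from the client.
full = {"a.txt": "v1", "b.txt": "v1"}                 # previous full backup
incrementals = [{"b.txt": "v2"}, {"c.txt": "v1"}]     # later changes

synthetic_full = dict(full)
for inc in incrementals:
    synthetic_full.update(inc)   # changed/new files supersede older copies

print(synthetic_full)
# {'a.txt': 'v1', 'b.txt': 'v2', 'c.txt': 'v1'}
```

The result represents the client file system as of the most recent incremental, while the unchanged file a.txt was never re-transferred.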
The following are some important points about virtual synthetic backups:
− Synthetic backups can also reduce the traffic between the backup
hosts and the PowerProtect DD appliance. Synthetic backups
reduce traffic by performing the virtual synthetic backup assembly
on the PowerProtect DD appliance.
• Your backup hosts might not handle distributed segment processing
(DSP) well and become overburdened.
The High Availability (HA) feature for PowerProtect DD lets you configure
two protection systems as an active-standby pair.
You can install BoostFS for Windows on Windows Server 2016, Windows
Server 2019, or Windows Server 2022, and it supports several backup
and enterprise applications.
You can install BoostFS for Linux on several Linux distributions, and it
supports several backup and enterprise applications.
You can download the BoostFS for Linux installation package from the
Dell Support website. It is available in both Red Hat Package Manager
(RPM) and .deb formats. The RPM package includes the BoostFS
executable.
Before beginning the process, verify that the FUSE version on the client is
2.8 or higher.
Configuring DD Boost
Important: Open ports UDP 2049, TCP 2051, and TCP 111
if you plan to use DD Boost features through a network
firewall.
Enabling DD Boost
The DD Boost library comes included for Dell NetWorker, Dell Avamar,
and some third-party backup applications. Some third-party backup
applications require a DD Boost plug-in that you must download and install
on the backup host before enabling DD Boost. The plug-in contains the
appropriate DD Boost library for use with compatible products. To verify
compatibility with your specific software, consult the E-Lab Navigator for
PowerProtect DD products.
Enable NFS on each PowerProtect DD system that you plan to run with
DD Boost.
Using the DD System Manager (DDSM), you can add DD Boost clients
and users by going to Protocols > DD Boost > SETTINGS.
In the Allowed Clients area, click the plus + button to enable access to a
new client using the DD Boost protocol on the system. Add the client
name as a hostname or fully qualified domain name; IP addresses are not
supported. You can add an asterisk (*) to the Client field to enable
access to all clients. You can also set the Encryption Strength and
Authentication Mode when setting up clients.
Using DD System Manager (DDSM), you can rename, delete, and
recover storage units.
1. Go to Protocols > DD Boost > STORAGE UNITS and click the pencil
icon.
a. The Modify Storage Unit dialog box appears where you can
change the name, the DD Boost User, and the quota settings of a
storage unit.
In the same Storage Units window, you can delete one or more storage
units by selecting the storage units from the list and clicking the trashcan
icon.
You can retrieve any deleted storage units using the Undelete Storage
Unit menu item under the MORE TASKS button. You can recover storage
units only if file system cleaning has not occurred.
Using the DD System Manager, you can configure DD Boost over Fibre
Channel. Go to Protocols > DD Boost > FIBRE CHANNEL. The FIBRE
CHANNEL tab is where you can change the Status of DD Boost over Fibre
Channel, EDIT the Server Name, and EDIT DD Boost Access Groups.
8. Once you are satisfied, select FINISH to create the DD Boost access
group.
9. When the DD Boost access group creation process finishes, click OK.
Using the DD System Manager, you can also review or create DD Boost
access groups.
The client direct feature2 enables clients to send and receive data directly
to Data Domain advanced file type devices and DD Boost devices. Clients
must have a direct network connection or a DD Boost over a Fibre
Channel connection to the PowerProtect DD system.
2 The client direct feature supports multiple concurrent backup and restore
operations that bypass the NetWorker storage node. Bypassing the
storage node eliminates a potential bottleneck. The storage node
manages the devices that the clients use but does not handle the backup
data. The clients back up directly to the PowerProtect DD system and
deduplicate directly from the client instead of going through the backup
server or storage nodes.
Metadata for the backup is sent from the Avamar client to the Avamar
server. The metadata enables Avamar to manage the backup data stored
on a PowerProtect DD appliance.
• Microsoft Exchange
• Microsoft SQL
• Oracle RMAN
• SAP HANA
With Veritas Backup Exec, you must install the plug-in software on media
servers that access the PowerProtect DD appliance during backups.
Backup Exec is not supported for use with DD Boost over Fibre Channel.
Install the plug-in software on each media server and configure the
backup software as documented by the manufacturer.
PowerProtect DD Virtual Tape Library Implementation and Administration
Configuring VTL
Create a DD VTL in DD System Manager
Create a DD VTL Using Command Line Commands
Enabling and Disabling DD VTL
Managing VTL
Managing a VTL
DD VTL Benefits
The following are some of the benefits of using DD VTL over physical tape
libraries:
• DD VTL is a way to support companies that have invested in backup
software and infrastructure intending to write to a physical tape library.
Using DD VTL allows these companies to reduce physical tape library
limitations while still leveraging their software and infrastructure
investment.
• DD VTL integrates with an existing Fibre Channel or tape-based
infrastructure. DD VTL offers a simple integration, using existing
backup policies. DD VTL can use existing backup policies in a backup
system using a strategy of physical tape libraries.
• DD VTL enables the simultaneous use of VTL with Network Attached
Storage (NAS), Network Data Management Protocol (NDMP), and DD
Boost. PowerProtect DD appliances simultaneously support data
access methods through VTL over Fibre Channel, NDMP access over
Ethernet, NFS, CIFS, and DD Boost. This deployment flexibility means that
users can rapidly adjust to changing enterprise requirements.
• DD VTL eliminates using physical tape cartridges and the
accompanying tape-related issues for most restores. Compared to
normal tape technology, DD VTL provides resilience in storage through
the benefits of Data Invulnerability Architecture (DIA).
• PowerProtect DD appliances that are configured for VTL reduce
storage space requirements by using deduplication technology.
• Disk-based network storage provides a shorter Recovery Time
Objective (RTO) by eliminating the need for handling, loading, and
accessing physical tapes from a remote location.
If you intend to work with IBM i systems, you need an additional I/OS
license.
Ensure that you plan for appropriate user access to the system. Only the
admin role may enable and configure DD VTL on a PowerProtect DD
system. A user role can perform basic tape operations and monitoring.
DD VTL also requires an installed Fibre Channel (FC) interface card. You
must set up the interface card with initiator and port configuration.
If you choose not to use FC, you can set up DD VTL configuration to use
Network Data Management Protocol (NDMP). For NDMP, set up
communication between the backup server and a PowerProtect DD
system, using the TapeServer access group. When using NDMP, initiator
and port configuration does not apply.
VTL Limits
Before you set up a DD VTL, be aware of the function and the limits of
certain VTL components.
The minimum supported I/O size for DD VTL is 64 KB and the maximum is
1 MB.
In the VTL environment, virtual tape cartridges record and store data long
term. Virtual tape cartridges act the same as physical tape media. The
tape cartridges appear in a VTL system as a grouped data file. DD VTL
assigns virtual tape cartridges to tape pools. Each tape pool is an MTree
on the PowerProtect DD system.
DD VTL supports a maximum of 32,000 tape slots per library and 64,000
slots per PowerProtect DD system. The system automatically adds slots to
keep the number of slots equal to or greater than the number of drives.
Slot counts are typically based on the number of tapes that DD VTL uses
over a retention policy cycle.
When planning a DD VTL, determine the number of virtual tape drives you
need. A tape drive is a device that records backed-up data to a tape
cartridge. In the virtual tape world, this drive still uses the same Linear
Tape-Open (LTO) technology standards as physical tape drives.
Depending on the multiplex setting of the backup application, each drive
operates as a device that supports one or more data streams.
Important: Your backup host may not support the limits set
by the PowerProtect DD appliance you use. Ensure
compatibility between your backup host software and DD
VTL component sizing.
Important: Before you plan a DD VTL on your PowerProtect DD system,
know the requirements and capabilities of your backup software. Ensure
that the choices you make for your DD VTL are compatible with your
backup host software.
The following are key considerations and guidelines as you plan DD VTL
with your backup software.
• Set backup software to use a block size of 64 KB or larger. Larger
block sizes usually allow faster performance and better data
compression. You can experience data write problems if you change
the block size after the initial configuration. The data that the system
writes with the original selected size might become unreadable
depending on your backup application.
• Use multiple data streams from your client system to the PowerProtect
DD appliance to increase throughput efficiency and maintain
deduplication-friendly data. Each stream requires writing to a separate
virtual drive.
• Ensure that your backup software supports the changers and tape
drives that you select in the VTL configuration.
• Disable multiplexing to avoid low deduplication rates.
Verify that the backup software can support the changers and drives that
the PowerProtect DD appliance emulates.
To work with virtual tape drives, you must use the tape drivers supplied
by your backup software vendor that support the following drives:
• IBM LTO-1
• IBM LTO-2
• IBM LTO-3
• IBM LTO-4
• IBM LTO-5
• IBM LTO-7 - the default tape driver
• IBM LTO-8
• HP-LTO-3
• HP-LTO-4
To work with libraries, you must use the library drivers supplied by your
backup software vendor that support the following libraries:
• StorageTek L180 - the default library driver
• RESTORER-L180
• IBM TS3500
• I2000
• I6000
• DDVTL
Multiplexing
An Example of Multiplexing
Multiplexing is useful for clients with slow throughput, where a single
client cannot send data fast enough to keep the tape drive busy.
If you are using NetWorker with DD VTL, do the following to mitigate data
compression loss:
• Set the NetWorker tape block size on the media server to 256 KB. 256
KB is a safe block size for all operating systems and drivers.
• Set NetWorker device properties, target sessions, and maximum
sessions to 1 to avoid low deduplication rates caused by multiplexing
multiple backup streams.
The DD Virtual Tape Library (DD VTL) feature has specific requirements,
such as proper licensing, interface cards, user permissions, and
configuration. As you plan to integrate DD VTL with Fibre Channel (FC),
follow these host bus adapter (HBA) and port guidelines:
• Make all FC connections to a PowerProtect DD appliance through an
FC switch or by direct attachment to an initiator.
• Use the E-Lab Navigator to verify that the system supports the initiator
FC HBA hardware and driver.
• Upgrade the initiator HBA to its latest supported version of firmware
and software.
• Dedicate the initiator FC port to PowerProtect DD VTL devices.
• Verify that each FC port supports the speed that you configured for
each port.
• Consider spreading the backup load across multiple FC ports and
switches to avoid bottlenecks on a single port and provide increased
resiliency.
• Use either an installed FC interface card to operate VTL service or
configure VTL to use NDMP over Ethernet.
A preconfigured VTL access group lets you add devices that support
NDMP-based backup applications. The preconfigured VTL access group
is named TapeServer.
Tape Management
Larger capacity tapes pose a risk of system full conditions. Compared to
smaller tapes, large-capacity tapes are harder to expire, making the
space that they hold difficult to reclaim. A larger tape can carry more
backups, which makes expiring the entire tape difficult because it might
contain a current backup.
Consider the following tape management ideas when you configure VTL:
• Target multiple drives to write multiple streams.
• Set retention periods to no longer than what you require.
• Expire and relabel tapes to reclaim and reuse space. You must expire
all backups on a tape by policy or manually before you can make it
available for reuse.
− DD Operating System does not delete and reclaim the space on
tapes until the tape is relabeled, overwritten, or deleted. Consider a
situation in which you created a 1 TB tape on your system. That
tape represents 30% of your total system capacity. The tape fills,
and now you want to reclaim the space from that tape. You could
delete half of the content on the tape and still be unable to reclaim any
space on your system, because the tape still holds unexpired data.
− When you back up smaller files to larger-sized tapes, the tapes can
take a long time to fill. Use a larger number of smaller-capacity
tapes to reduce the chance that newer files prevent cleaning of the
older data on a larger tape.
− If backups with different retention policies exist on a single piece of
media, the youngest image prevents file system cleaning and reuse
of the tape. You can avoid this condition by initially creating and
using smaller tape cartridges.
• Begin with a tape count that can accommodate twice the pre-
compressed size of all expected backups during the retention period.
• Use caution not to create more tapes than you need. The system
capacity may fill up prematurely. Usually, backup software uses blank
tapes before recycling tapes.
• Consider that some backup applications support only specific capacity
tapes. Review your backup application support documentation to
determine correct capacity tapes.
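The tape-count guideline above (enough capacity for twice the pre-compressed size of all expected backups) reduces to simple arithmetic. A minimal Python sketch; the helper name and sample values are illustrative only:

```python
import math

def planned_tape_count(precompressed_backup_gb, tape_capacity_gb):
    """Estimate a starting tape count: enough tapes to hold twice the
    pre-compressed size of all backups expected during the retention
    period, as the guideline above suggests."""
    return math.ceil((2 * precompressed_backup_gb) / tape_capacity_gb)

# Example: 50 TB of pre-compressed backups on 100 GB tapes.
print(planned_tape_count(50_000, 100))  # 1000 tapes
```

Remember the caution that follows: avoid creating far more tapes than this estimate suggests, or system capacity may fill prematurely.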
Barcode Definitions
When creating tapes, you must provide a starting barcode to begin the
sequence.
When creating tapes for your VTL configuration, consider the following:
• If you back up large files, consider using larger-sized tapes since some
backup applications are not able to span across multiple tapes.
• Use smaller tapes across many drives for greater throughput by using
more data streams between the backup host and the PowerProtect DD
appliance.
When the VTL creates a tape, it assigns a unique identifier for the tape, a
logical, eight-character barcode. The barcode must start with six numeric
or uppercase alphabetic characters (from the set {0-9, A-Z}).
When creating the identifier, use either two or three characters as a
prefix for the group or pool to which the tapes belong. If you use a
two-character prefix, for example AA, you can then use the remaining four
numeric positions to sequence up to 10,000 tapes. If you use three
characters, you can sequence only 1,000 tapes.
The eight-character barcode ends with a two-character tag indicating the
supported tape type.
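As an illustration of this numbering scheme (a hypothetical helper, not part of DDOS), the following sketch builds barcodes from a prefix, a zero-padded sequence filling the rest of the first six characters, and a two-character type tag such as L3:

```python
def barcode_sequence(prefix, tape_type="L3", count=None):
    """Generate eight-character barcodes in the style described above:
    a 2- or 3-character prefix, a zero-padded numeric sequence in the
    remaining positions of the first six characters, and a 2-character
    tape-type tag (L3 is only an example)."""
    digits = 6 - len(prefix)      # sequence positions left in the first six chars
    limit = 10 ** digits          # 2-char prefix -> 10,000 tapes; 3-char -> 1,000
    count = limit if count is None else min(count, limit)
    return [f"{prefix}{n:0{digits}d}{tape_type}" for n in range(count)]

print(barcode_sequence("AA", count=3))  # ['AA0000L3', 'AA0001L3', 'AA0002L3']
print(len(barcode_sequence("AAA")))     # 1000
```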
NDMP Support
You must run NDMP client software on the backup server. The software
can route the server data to the related DD VTL TapeServer group on the
PowerProtect DD appliance. The DD VTL TapeServer group holds tape
drives that interface with NDMP-based backup applications. A device that
the NDMP TapeServer uses must be in the DD VTL group TapeServer
and is available only to the NDMP TapeServer.
− You can only access devices that are intended for use through
NDMP through the TapeServer access group.
− You cannot locate devices that are in the TapeServer access group
in any other VTL access groups.
− You cannot add initiators to the TapeServer access group.
IBM i Support
IBM i customers use a dedicated IBM tape library or IBM virtual tape
library (VTL) to protect their data. PowerProtect DD series and DD VTL
can emulate the IBM tape library and tape drives that IBM i systems use.
All peripheral equipment must emulate IBM equipment, including IBM tape
libraries and devices.
The hardware drivers that IBM i systems use are part of the Licensed
Internal Code (LIC)1 and IBM i operating system.
IBM i virtual libraries are not managed any differently from other operating
systems when they are licensed properly.
DD VTL supports one type of library configuration for IBM i use: an IBM
TS3500 configured with IBM LTO-3, LTO-4, LTO-5, LTO-7, and LTO-8
virtual tape drives. Virtual library management is done from the Virtual
Tape Libraries tab. Use the CREATE button to set the number of virtual
drives and the number of slots.
You can connect Fibre Channel devices directly to the host with direct-
attach. Use a Fibre Channel-arbitrated loop (FC-AL) topology or a Fibre
Channel-switched fabric (FC-SW) topology.
Configuring VTL
Configuring VTL
You can configure DD VTL using command line interface (CLI) commands
in DD Operating System (DDOS).
With an admin or limited-admin role, you can create a VTL, add VTL
drives, and show existing VTL configurations using the following CLI
commands:
• Use vtl add vtl [model model] [slots num-slots] [caps
num-caps] to add a tape library to a PowerProtect DD system. VTL
supports a maximum of 64 libraries on each PowerProtect DD
system.
• Use vtl drive add vtl [count num-drives] [model
model] to add drives to a VTL.
• Use vtl show config [vtl] to show the library name and model
and tape drive model for a single VTL or all VTLs.
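As a sketch only, a hypothetical DDOS session using these commands might look like the following. The library name (VTL1), counts, and model strings are examples; confirm exact model names against your DDOS release:

```shell
# Hypothetical example only: names, counts, and model strings are illustrative.
vtl add VTL1 model L180 slots 50 caps 5      # create a library
vtl drive add VTL1 count 4 model IBM-LTO-4   # add four virtual drives
vtl show config VTL1                         # verify the configuration
```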
DD VTL controls the operation of the VTL. License and enable DD VTL in
order to use DD VTL.
DD VTL provides the environment for virtual tape library devices to exist.
Managing VTL
Managing VTL
Managing a VTL
Select the DD Virtual Tape Libraries > VTL Service > Libraries menu
item to view summary information relating to all VTLs.
Select the DD Virtual Tape Libraries > VTL Service > Libraries >
{library-name} menu item to view summary information about the
selected VTL. The number and disposition of tapes in the VTL are also
shown. If no tapes are associated with the VTL, the system shows nothing
in the Tapes section.
Select the Changer menu item to view detail about the changer. The
changer item details the vendor, product ID, revision number, and serial
number of the changer. The changer details are all attributes that you
would expect to find with a physical changer device.
Select the Drives menu item to view detailed information about all drives
that are associated with a VTL. The details include the drive number,
vendor,
product ID, revision number, serial number, and status. If a tape is in the
drive, the system displays the barcode and the name of the tape pool to
which the tape belongs.
Clients can only access selected media changers or virtual tape drives on
a system through access groups.
When you select Access Groups > Groups in DD System Manager, the
system displays the following information:
• Group Name
− The name of the VTL group
• Initiators
− The number of initiators that are assigned to the group
• Devices
− The number of devices that are assigned to the group
With an admin or limited-admin role, you can create VTL access groups
using the following command line interface (CLI) command:
• Use vtl group create group-name to create a VTL access
group with the specified name. When you create the group, you can
then add the VTL changer, drives, and initiators to the group.
1. Select the Hardware > Fibre Channel > ACCESS GROUPS tab.
− The ACCESS GROUPS tab contains summary information about
any DD Boost access groups and VTL access groups. The
information includes the following:
o The name of the group
o The type of service
o The endpoint associated with the group
o The names of the initiators in the group
o The number of devices in the group
2. The Number of Access Groups in the Access Groups window displays
the total number of groups that are configured on the system.
3. Select View DD VTL Groups to go to the DD System Manager
Protocol > DD VTL page to access more information and
configuration tools.
1. You can select the View VTL Groups hyperlink on the Hardware >
Fibre Channel > Access Groups tab. You can also go to Protocols >
DD VTL page directly.
2. Select the Access Group menu item. To expand the list, click the plus
sign + and select an access group from the Access Groups list.
3. Select the logical unit number (LUN) in the LUNS tab.
4. Review a summary list of the various LUNs in the selected access
group.
1. You can select the View DD VTL Groups hyperlink on the Hardware
> Fibre Channel > Access Groups tab. Or you can go to Protocols >
DD VTL page directly.
2. Select the Access Groups menu item. To expand the list, click the
plus sign + next to the Groups item.
3. Select an access group from the Groups list.
4. Select the INITIATORS tab.
5. Review a summary of any initiators in the selected access group.
To delete a VTL access group, you must first remove all initiators and
logical unit numbers (LUNS) from the access group. Use the configure or
modify process to delete these objects from an access group.
1. Select Protocols > VTL > Access Groups > Groups.
2. Select More Tasks > Group > Delete.
3. In the Delete Group dialog, select the group and click NEXT.
4. In the Group Confirmation dialog, verify the deletion and click SUBMIT.
5. Click CLOSE when the Delete Groups Status displays Completed.
With an admin or limited-admin role, you can delete VTL access groups in
DD Operating System with the following Command Line Interface (CLI)
command:
• scsitarget group destroy My_Group
Managing Tapes
The PowerProtect DD system provides the tools that you would expect to
manage tapes. They include the ability to create and delete tapes. The
VTL service also enables tape import and export from and to the vault.
You can also move tapes within the VTL between the slots, drives, and
cartridge access ports (CAPs). You can search for specific tapes using
various criteria, such as location, pool, or barcode.
Create Tapes
With an admin or limited-admin role, you can create tapes using the
following CLI command:
• Use vtl tape add barcode [capacity capacity] [count
count] [pool <pool>] to add one or more virtual tapes and insert
them into the vault. Optionally, add the tapes to the specified pool.
Delete Tapes
You can delete tapes from either a library or a pool. If initiated from a
library, the system first exports the tapes, then deletes them. The tapes
must be in the vault, not in a library. On a replication destination system,
deleting a tape is not permitted.
1. Select Virtual Tape Libraries > DD VTL Service > Libraries >
library or Vault.
2. Select MORE TASKS > Tapes > Delete.
3. In the Delete Tapes dialog, enter search information about the tapes
to delete, and select Search.
4. Select the checkbox of the tape that should be deleted or the checkbox
on the heading column to delete all tapes, and select Next.
5. Select Submit in the confirmation window, and select Close.
You can also delete tapes using the following CLI command:
• Use vtl tape del barcode [count count] [pool pool] to
delete one or more tapes. You cannot delete tapes that are in a
VTL.
Import Tapes
When you create tapes for VTL, you can add them directly to a VTL or to
the vault. From the vault, you can import, export, move, search, and
remove the tapes. Importing moves existing tapes from the vault to a
library slot, drive, or CAP. The number of empty slots in the library limits
the number of tapes that you can import at one time.
With an admin or limited-admin role, you can import tapes using the
following CLI command:
• vtl import vtl barcode barcode [count count] [pool
pool] [element {drive | cap | slot}] [address addr] -
This command is used to move tapes from the vault into a slot, drive,
or CAP.
Export Tapes
Exporting a tape removes that tape from a slot, drive, or CAP and sends it
to the vault.
With an admin or limited-admin role, you can also export tapes using the
following CLI command:
• vtl export vtl {slot | drive | cap} address [count
count] - Remove tapes from a slot, drive, or cartridge-access port
(CAP) and send them to the vault.
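Combining the tape commands above, a hypothetical lifecycle session might look like the following; the barcode, pool, and library name are examples only:

```shell
# Hypothetical example only: barcode, pool, and library name are illustrative.
vtl tape add AA0000L3 capacity 100 count 10 pool Default   # create 10 tapes in the vault
vtl import VTL1 barcode AA0000L3 count 10 element slot     # move them into library slots
vtl export VTL1 slot 1 count 10                            # return them to the vault
vtl tape del AA0000L3 count 10 pool Default                # delete them from the vault
```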
Move Tapes
1. Select DD Virtual Tape Libraries > DD VTL Service > Libraries >
library. When started from a library, the Tapes panel allows tapes to
be moved only between devices.
2. Select MORE TASKS > Tapes > Move.
3. In the Move Tape dialog, enter search information about the tapes to
move, and select SEARCH.
4. From the search results list, select the tape or tapes to move.
5. Do one of the following:
a. Select the device from the Device list, for example, a slot, drive, or
CAP, and enter a beginning address using sequential numbers for
the second and subsequent tapes. For each tape to be moved, if
the specified address is occupied, the next available address is
used.
b. Leave the address blank if the tape in a drive originally came from a
slot and is to be returned to that slot. Also, leave the address blank
if you are going to move the tape to the next available slot.
6. Select Next.
7. In the Move Tape dialog, verify the summary information and the tape
listing, and select Submit.
8. Select Close in the status window.
Search Tapes
• In the Pool field, select the name of the pool in which to search for
the tape. If there are no pools, select the Default pool.
• In the Barcode field, specify a unique barcode, or leave the default
* to return a group of tapes. The Barcode selection allows the
wildcards ? and *, where ? matches any single character and *
matches zero or more characters.
• In the Count field, enter the maximum number of tapes that you
want returned to you. If you leave this blank, the system applies *.
5. Select SEARCH.
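Python's fnmatch module uses the same ? and * wildcard rules described for the Barcode field, so the search behavior can be sketched as follows (the barcodes and helper are examples, not DDOS code):

```python
from fnmatch import fnmatchcase

# Example barcodes; ? matches any single character, * matches zero or more.
barcodes = ["AA0000L3", "AA0001L3", "AB0000L3", "AA0010L3"]

def search_tapes(pattern, tapes):
    """Return the tapes whose barcode matches the wildcard pattern."""
    return [t for t in tapes if fnmatchcase(t, pattern)]

print(search_tapes("AA*", barcodes))       # every AA-prefixed tape
print(search_tapes("AA000?L3", barcodes))  # ['AA0000L3', 'AA0001L3']
```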
Review Tapes
Select the Tapes menu item associated with the VTL to review the tapes
that are assigned to it. The tapes are in a slot, drive, or CAP.
Dell Cloud Tier Implementation and Administration
Dell Cloud Tier moves data to the cloud for long-term data retention. The
PowerProtect DD appliance sends only unique, deduplicated data1 to the
cloud or retrieves it from the cloud.
1Sending only deduplicated data ensures that the data in the cloud
occupies as little space as possible.
To use cloud units, you must connect the PowerProtect DD system to the
cloud with an account for a supported cloud provider.
The system stores active data locally, while data intended for long-term
retention is stored on the cloud. Some MTree data may reside in the
active tier, while older data is in the cloud.
The system maintains file system metadata in local storage and mirrors it
to the cloud. This metadata is used in deduplication, cleaning, and
replication operations. Cloud Tier uses local storage for metadata to
minimize writes to the cloud. The metadata includes the index, the
directory manager (DM) responsible for managing the namespace, and
container metadata. Also, certain metadata, including container metadata,
is stored with the data in the cloud for disaster recovery purposes.
When files move from the active tier to the cloud tier, PowerProtect DD
deduplicates and stores the data in cloud object storage in its native
format. Moving data to the cloud results in a lower total cost of ownership
(TCO) over time for long-term cloud storage. For security, the cloud tier
supports encryption of data at rest and the DD Retention Lock feature,
thus ensuring the ability to satisfy regulatory and compliance security
policies.
Cloud tier enables you to move backup data from the active tier in a
protection system to lower cost, high-capacity object storage in a public,
private, or hybrid cloud for long-term retention.
Local storage maintains file system metadata associated with the data
stored in the cloud. The metadata is also mirrored in the cloud. The cloud
tier requires extra storage capacity to hold metadata that is associated
with the data in the cloud tier. Metadata is used for deduplication,
cleaning, and replication operations.
Dell Cloud Tier supports one or two cloud units on each PowerProtect DD
appliance. Other details about cloud units include:
• Each cloud unit has the maximum capacity of the active tier. You can
scale the cloud tier to the maximum capacity without scaling the active
tier any larger.
• Each cloud unit maps to a cloud provider. Each cloud unit can write to
a separate supported cloud provider.
• Metadata shelves store metadata for both cloud units. The number of
metadata shelves you need depends on the cloud unit physical
capacity.
• Data that is stored on the active tier provides local access to data. You
can use the active tier for operational recoveries. The cloud tier
provides long-term retention for data that is stored in the cloud.
DD Virtual Tape Library (DD VTL) supports storing the VTL vault in cloud
tier storage. Use the PowerProtect DD system and the DD VTL tape out to
cloud feature to store the VTL vault on cloud tier storage. The protection
system must have a configured cloud tier, with Dell Cloud Tier and DD
VTL licenses.
DD VTL does not require a special configuration to use cloud storage for
the vault. When you configure DD VTL, select Cloud Storage as the
Vault Location.
The following are important points to consider about deduplication and file
system cleaning with cloud tier storage.
• Each cloud unit has its own segment index and metadata, and thus
each cloud unit is a deduplication unit by itself. Deduplication does not
occur across the active tier and cloud tier.
• The cloud tier uses the same compression algorithm as the active tier.
On most PowerProtect DD appliances, the default compression
algorithm is gzfast. For legacy Data Domain systems and the
PowerProtect DD3300, the lz compression algorithm is used by
default.
• You can schedule cloud tier cleaning or perform cleaning on demand.
Because the cloud tier archives data that changes infrequently, you do
not need to schedule cleaning operations as often as active tier
cleaning. Less frequent cleaning minimizes access delays that users
can experience during data recalls.
• Set the schedule for cloud tier cleaning relative to active tier cleaning.
The schedule specifies running cloud tier cleaning after every user-
defined Nth run of active tier cleaning. By default, consider running
cloud tier after every fourth scheduled active tier cleaning.
Dell Cloud Tier supports DD Retention Lock. Consider the following when
applying DD Retention Lock features to Dell Cloud Tier:
• You can move retention-locked files from the active tier to the cloud.
• You can apply DD Retention Lock on files that are already in the cloud
tier.
• PowerProtect DD appliances using DD Retention Lock Compliance do
not allow deleting files in the cloud unit.
• You can recall locked files to the active tier. The recalled files remain
locked.
You can enable Dell Cloud Tier on one or both systems in a replication
pair.
The replication source always places the replicated files first in the active
tier of the destination system. The replication destination then copies the
files to the cloud.
The source system reads data from the cloud only if the destination
system migrates the files to the cloud tier from its active tier.
The migration process moves the active tier storage, and the locally stored
Cloud Tier metadata from the existing system to a new system. During the
Cloud Tier migration, the source system operates in a restricted mode. In
restricted mode, the active tier storage is available for backup operations,
but operations involving Cloud Tier storage are not permitted.
The cloud tier consists of a maximum of two cloud units. Each cloud unit
maps to a cloud provider, enabling multiple cloud providers per protection
system. An active cloud tier must include a PowerProtect DD system that
is connected using an account with a supported cloud service provider.
Configuration
Regions are configured at the bucket level instead of the object level. All
objects that are contained in a bucket are stored in the same region. A
region is specified when a bucket is created, and cannot be changed once
it is created.
The Alibaba Cloud user credentials must have permission to create and
delete buckets and to add, modify, and delete files within the buckets they
create. Alibaba uses Resource Access Management (RAM) users.
Procedure
Configuration
For enhanced security, Cloud Tier uses Signature Version 4 for all AWS
requests. Signature Version 4 signing is enabled by default.
The AWS user credentials must have permissions to create and delete
buckets and to add, modify, and delete files within the buckets they create.
Procedure
5. Select the Storage class and Storage region from their drop-down
lists.
6. Enter the provider Access key as password text.
7. Enter the provider Secret key as password text.
8. Ensure that you unblock port 443 in firewalls. Communication with the
AWS cloud provider occurs over HTTPS on port 443.
9. If you use an HTTP proxy server to get around a firewall for this
provider, click Configure for HTTP Proxy Server. Enter the proxy
hostname, port, user, and password.
10. Click Add.
Configuration
Procedure
2. Click Add. The system displays the Add Cloud Unit dialog.
3. Enter a Name for this cloud unit. Cloud unit names support only
alphanumeric characters.
4. For Cloud provider, select Flexible Cloud Tier Provider Framework
for S3 from the drop-down list.
5. Enter the provider Access key as password text.
6. Enter the provider Secret key as password text.
7. Specify the appropriate Storage region.
8. Enter the provider endpoint in this format:
http://<ip/hostname>:<port>. If you are using a secure
endpoint, use https:// instead.
9. For Storage class, select the appropriate storage class from the drop-
down list.
10. Ensure that port 443 (HTTPS) is not blocked in firewalls.
Communication with the S3 cloud provider occurs on port 443.
11. If you use an HTTP proxy server to get around a firewall for this
provider, click Configure for HTTP Proxy Server. Enter the proxy
hostname, port, user, and password.
12. Click Add.
Configuration
• GetObject
• PutObject
• DeleteObject
Procedure
3. Enter a Name for this cloud unit. Cloud unit names support only
alphanumeric characters.
4. For Cloud provider, select Google Cloud Storage from the drop-
down list.
5. Enter the provider Access key as password text.
6. Enter the provider Secret key as password text.
7. Storage class is set as Nearline by default. If a multiregional location
is selected then the storage class and the location constraint is set as
Nearline Multiregional. All other regional locations have the storage
class set as Nearline Regional.
8. Select the Storage region.
9. Ensure that port 443 (HTTPS) is not blocked in firewalls.
Communication with Google Cloud Provider occurs on port 443.
10. If you use an HTTP proxy server to get around a firewall for this
provider, click Configure for HTTP Proxy Server. Enter the proxy
hostname, port, user, and password.
11. Click Add.
Configuration
Procedure
Overview
In signature version 4, you use your secret access key to derive a signing
key. The derived signing key uses elements related to the date, service
type, and region. When servers receive an authenticated request, they
recompute the signature and compare it with the signature in the request.
The following are some of the customer benefits when using signature
version 4:
Once set, you cannot modify the signature version of the cloud profile.
With Dell Cloud Tier storage, the PowerProtect DD appliance holds the
metadata for the files residing in the cloud. A copy of the metadata resides
in the cloud for disaster recovery.
The cloud tier requires a local store for a local copy of the cloud metadata.
To configure Cloud Tier, you must meet the storage requirement for the
licensed capacity.
If you are creating a file system, you can enable the cloud tier at the same
time. To create a file system, select Create File System and then
configure the active tier on the system.
In Data Management > File System > SUMMARY, the main panel
displays statistics for the Active Tier and the Cloud Tier.
The statistics viewable in the DD System Manager for both the Active
Tier and the Cloud Tier are:
• Size
• Used
• Available
• Pre-Compression
• Total Compression Factor (Reduction %)
• Cleanable
• Space Usage
Data moves from the active tier to the cloud tier as detailed in your data
movement policy. You can run the cloud tier policy manually or
automatically by using a schedule. You can schedule the policy to run
daily, weekly, or monthly and at a specific time of day.
The system moves files from the active tier to the cloud tier based on the
date that the files were last modified. The Data Movement Policy establishes
the File Age in Days threshold, Age Range, and Destination.
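The age-based selection can be illustrated with a short Python sketch; the helper, threshold, and dates are hypothetical, not DDOS code:

```python
from datetime import datetime, timedelta

def eligible_for_cloud(last_modified, age_threshold_days, now=None):
    """A file qualifies for movement to the cloud tier once the time
    since it was last modified meets the policy's age threshold."""
    now = now or datetime.now()
    return (now - last_modified) >= timedelta(days=age_threshold_days)

# Example: a 90-day threshold evaluated on a fixed date.
now = datetime(2024, 6, 1)
print(eligible_for_cloud(datetime(2024, 1, 1), 90, now))   # True  (about 152 days old)
print(eligible_for_cloud(datetime(2024, 5, 25), 90, now))  # False (7 days old)
```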
You can also throttle the number of resources that the process can
consume. Throttling is an important consideration. When you allot
resources for data movement to the cloud, you have fewer resources
available for primary backup data ingest operations.
You can view the data movement schedule in the DD System Manager at
Data Management > File System > SUMMARY.
You can set the data movement schedule in DD System Manager at Data
Management > File System > CLOUD UNITS > Settings > DATA
MOVEMENT.
If the system cannot access the cloud unit when data movement runs, it
skips the cloud unit during the run. The system attempts data movement
for that cloud unit in the next run. The data movement schedule
determines the duration between two runs. If the cloud unit becomes
available and you cannot wait for the next scheduled run, you can start the
data movement manually.
Recall brings data from the cloud tier to the active tier in protection
storage. Restore recovers data from the active tier in protection storage
and makes it available to the client.
You can recall data from the cloud tier as needed, either through a
backup software interface, such as the one NetWorker provides, or
directly using DD System Manager.
You can also recall data from the cloud tier using the command line.
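As a sketch of a command-line recall, assuming the data-movement recall command available in recent DDOS releases (verify exact syntax for your release; the path matches the example used elsewhere in this section):

```shell
# Hypothetical example: recall a cloud-resident file to the active tier.
data-movement recall path /data/col1/mt11/file1.txt
# Check progress of data movement operations (syntax may vary by release).
data-movement status
```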
For nonintegrated backup applications, you must recall the data to the
active tier before you can restore it. Backup administrators must trigger a
recall or backup applications must perform a recall operation before you
can restore cloud-based backups. Once you recall a file, the system
resets aging for that file, and its time starts again from zero. You can
recall a file only to the source MTree. Integrated applications can recall a
file directly.
When no space is available on the active tier, the recall action fails to
move the file. The system checks available space before it initiates any
data movement. Recall and data movement actions occur per file. Dell
Cloud Tier checks for existing data segments on the active tier. The
system recalls from the cloud only the segments that are not present in
the active tier. This type of recall makes data movement efficient.
Select Data Management > File System > SUMMARY. In the Cloud Tier
section of the Space Usage panel, click RECALL, or expand the File
System Status panel at the bottom of the screen. Click RECALL.
In the Recall File from Cloud dialog, enter the exact file name without
using wildcards, and the full path of the file. For example, you can enter
the following: /data/col1/mt11/file1.txt. Click Recall to start the
recall process.
Only four recall jobs are active at any given time. You can queue up to
1,000 recall jobs to run automatically; the system generates the recall
queue. If you restart the system during a recall, the recall continues when
the system becomes available.
You can restore the data from the active tier when the file recall
completes.
DD Virtual Tape Library (DD VTL) supports storing the VTL vault in cloud
storage. Storing VTL vaulted tapes on cloud tier storage is called tape out
to cloud. The DD Operating System (DDOS) supports cloud storage for
use as the VTL vault. DD VTL does not support the option to store the
vault from an MTree replication destination on cloud storage.
You must license and enable the Dell Cloud Tier feature on either a
physical or virtual PowerProtect DD appliance. The appliance must also
have a VTL license.
Configure a cloud profile and cloud unit name before using the DD VTL
tape out to cloud feature.
The Fibre Channel (FC) and network interface requirements for virtual tape
library (VTL) are the same for both cloud-based and local vault storage.
DD VTL does not require a special configuration to use cloud storage for
the vault. When you configure the DD VTL, select the cloud storage as the
vault location.
The workflow for backing up and restoring data using the PowerProtect
DD VTL tape out to cloud feature is as follows:
1. Perform the backup server or client configuration and user application
setup.
The administrator applies a tape selection policy at the pool level and sets
the age threshold for data moving to the cloud. The minimum setting is 14
days. If you change the policy to user-managed, you can use a command
to select one or more tapes to move during the next scheduled data
movement. If the administrator sets the policy to none, the system moves
no tapes to the cloud.
The cloud data movement schedule defines how frequently the system
moves vaulted tapes to the cloud. Administrators can set the cloud data
movement schedule to Never, to any number of days or weeks, or to run
Manually.
Data movement for the VTL occurs at the tape volume level. You can
move individual tape volumes or collections of tape volumes to the cloud
tier but only from the vault location. You cannot move tapes in other
elements of a VTL.
Use the backup application to verify the tape volumes that move to the
cloud are marked and inventoried according to the backup application
requirements.
Manually select tapes for migration to the cloud tier. You can set the
migration to migrate immediately or at the next scheduled data migration.
You can also manually remove tapes from the migration schedule.
After the next scheduled data migration, the tapes move from the cloud
unit to the vault. You can return tapes to a library from the vault.
PowerProtect DD Data Security Implementation
Exploring DD Encryption
DD Encryption at Rest
Exploring Inline Encryption
Key Management
Key Management Considerations
Exploring Authorization Workflow
Configuring Encryption
Changing the Encryption Passphrase
Disabling Encryption
File System Locking
Locking the File System
Terms
Admin users can enable or disable all users except the sysadmin user and
users with the security role. Only security officers can enable or disable
other security officers.
During the initial configuration, the administrator chooses whether to
create a security officer. If the administrator selects No, then the
system skips creating the security officer.
After the initial system configuration, create a user with a security role
within the DD System Manager. Go to Administration > Access > Local
Users > CREATE > Create User. In the Create User window, enter the
user details and select Security in the Management Role field.
Deep Dive: You can create a user with a security role using
the command line interface. For details, see the DDOS
Command Reference Guide found on the Dell Support
website.
You can use the runtime authorization policy to update or extend retention
periods and rename MTrees.
The following are the CLI commands to view and configure the runtime
authorization policy:
• The authorization policy set security-officer
{enabled | disabled} command enables or disables runtime
authorization policy. You cannot disable the authorization policy on DD
Retention Lock Compliance systems.
• The authorization policy reset security-officer
command resets runtime authorization policy to defaults. You cannot
reset the authorization policy on DD Retention Lock Compliance
systems.
• The authorization policy show command shows the current
authorization policy configuration.
• The authorization show history [last n { hours | days
| weeks }] command views or audits past authorizations according to
the interval that the security officer provides in the command.
You must install the DD Retention Lock Compliance license to enable the
security officer authorization policy. You are not permitted to disable the
authorization policy on DD Retention Lock Compliance systems.
Most of the steps in the following procedure require both sysadmin and
security officer credentials. The following is the general flow of operations
for DD Retention Lock:
1. License and enable DD Retention Lock.
a. Enable DD Retention Lock Governance, Compliance, or both on
the PowerProtect DD systems. You must install a valid license for
each of the editions you plan to enable.
2. Commit files and MTrees.
a. Commit the files and MTrees to lock on the PowerProtect DD
system using client-side commands. Use an appropriately
configured archiving or backup application, either manually, or by
using scripts. Windows clients might download utilities for DDOS
compatibility. Dell backup applications like PowerProtect Data
Manager and Dell Cyber Recovery use the DD API to lock backups.
3. Extend retention times.
a. Optionally, you can extend the file retention times of the committed
files and MTrees.
4. Delete files.
a. Though you are not required to do so, you can delete files with
expired retention periods using client-side commands.
To perform retention locking on a file, change the last access time
(atime) of the file to the retention time of the file. The retention time
that you set is the time when the file can be deleted. Use a qualified
archive application to perform this operation.
The archiving application must set the atime value, and DD Retention
Lock must enforce it, to prevent any modification or deletion of locked
files.
• If the set atime is less than or equal to the current time plus 12 hours,
the retention time falls before the minimum retention period. The file is
not locked and the system generates no error message.
• If the set atime is greater than the current time plus 12 hours but less
than the minimum retention period, then the retention time falls before
the minimum retention period. The file is not locked and the system
generates an error message.
• If the set atime is greater than the maximum retention period, then the
file is not locked and the system generates an error message.
• If the set atime is greater than or equal to the minimum retention period
and if atime is less than or equal to the maximum retention period, then
the file is locked.
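From an NFS or CIFS client, the conventional client-side command for setting the atime is touch. The sketch below runs against an ordinary local file purely to show the mechanics; on a Retention Lock-enabled MTree, the same touch against the mounted file path commits the lock. The path and retention date are illustrative:

```shell
# Stand-in for a file on an NFS-mounted Retention Lock MTree
f=/tmp/rl_demo_file1.txt
echo "backup data" > "$f"

# Set the atime to the intended retention expiry, here 00:00 on
# 1 Jan 2030, using touch -a -t [[CC]YY]MMDDhhmm. On a Retention
# Lock MTree, DDOS interprets this future atime as the lock expiry.
touch -a -t 203001010000 "$f"

# Confirm the access time was updated
stat -c '%x' "$f"
```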
You cannot modify locked files on the PowerProtect DD system even after
the retention period for the file expires. You can copy files to another
system and then modify them. Data that you archive and retain on the
PowerProtect DD system after the retention period expires remains on the
system. You can delete the remaining files using an archiving application,
or remove them manually.
The automatic retention lock feature allows you to set automatic values for
the retention period on a per MTree basis. When you add new files to an
MTree that already has preconfigured retention lock settings, the new files
can automatically receive lock settings. Adding the new files does not
affect the other files in the MTree. Both Retention Lock Compliance and
Retention Lock Governance support automatic retention lock.
DD System Manager
Command Line
You can also manage Retention Lock using the following commands in
the CLI:
• The mtree retention-lock enable mode {compliance |
governance} mtree mtree-path command enables the specified
Retention Lock edition for the specified MTree. Enabling Retention Lock
Compliance requires security officer authorization.
• The mtree retention-lock disable mtree mtree-path
command disables Retention Lock for the specified MTree.
The mtree retention-lock disable command is allowed on
Retention Lock Governance MTrees only.
• The mtree retention-lock set {min-retention-period |
max-retention-period | automatic-retention-period |
automatic-lock-delay} period mtree mtree-path
command sets the minimum or maximum retention period, the automatic
retention period, or the automatic lock delay for the specified MTree.
The mtree retention-lock set command requires security officer
authorization when applying the command to an MTree that is enabled
with Retention Lock Compliance.
• The mtree retention-lock show {min-retention-period |
max-retention-period | automatic-retention-period |
automatic-lock-delay} mtree mtree-path command shows
the minimum or maximum retention period, the automatic retention
period, or the automatic lock delay time for the specified MTree.
• The mtree retention-lock indefinite-retention-hold
enable mtree mtree-path command enables Indefinite
Retention Hold (IRH) for the specified MTree. This command option is
allowed on Retention Lock-enabled MTrees only (Governance or
Compliance). It is not allowed on the /data/col1/backup MTree. When
IRH is enabled, all locked and expired files are protected until you
disable the hold. Revert operations on locked files for Retention Lock
Governance MTrees are not allowed. You cannot disable Retention
Lock for an MTree when IRH is enabled.
• The mtree retention-lock indefinite-retention-hold
disable mtree mtree-path command disables Indefinite
Retention Hold (IRH) for the specified MTree. You can use this
command on IRH-enabled MTrees only. You cannot apply IRH on the
/data/col1/backup MTree. You can delete expired files
immediately after disabling IRH on an MTree.
Deleting files leaves behind residual data that a person could use to
recover the deleted data. Sanitization removes any trace of deleted files,
leaving no residual data.
During sanitization, the system runs through five phases: merge, analysis,
enumeration, copy, and zero.
Enumeration: Reviews all the files in the logical space and records which
data is active.
Copy: Copies live data forward and clears the space that it used to occupy.
You can view the progress of these five phases by running the system
sanitize watch CLI command.
Exploring DD Encryption
Exploring DD Encryption
DD Encryption at Rest
Encryption of data at rest protects backup and archive data that is stored
on the system. As data is ingested, the PowerProtect DD appliance
deduplicates, compresses, and encrypts the stream using an encryption key
before writing to the redundant array of independent disks (RAID) group.
The encryption at rest feature satisfies internal governance rules and
compliance regulations.
The system can also provide message authenticity using the Galois/Counter
(GCM) mode. You can use both confidentiality and message authenticity in
GCM mode.
When using Data Security Manager (DSM), the system administrator can
select an Advanced Encryption Standard (AES) algorithm for encrypting
all data within the system. AES supports either 128-bit or 256-bit
encryption.
Key Management
To invoke the authorization policy, the security officer must log in through
the command line interface (CLI) and issue the runtime authorization
policy command authorization policy set security-officer
enabled.
Configuring Encryption
Disabling Encryption
You must set security authorization and provide a security officer login and
password to disable encryption.
You can enable the file system lock to securely transport the DD
Encryption-enabled protection system and its external storage devices.
You can also use the same feature to lock a disk when you are replacing
it. This procedure requires both security officer and system administrator
authorization. A passphrase protects the encryption key, which is stored
on disk in a form that the system encrypts with the passphrase. You
cannot retrieve this passphrase when the system is locked.
Without the encryption that file system locking provides, a thief with
forensic tools could recover the data.
a. When you are ready, you can unlock the file system using a similar
procedure.
a. If you enter the passphrase incorrectly, the file system does not
start and the system reports the error. Type the correct passphrase,
as directed in the previous step.
Dell Technologies recommends that you destroy a file system only after
careful consideration. You cannot reverse the file system destroy
operation. Destroying the file system deletes all data in the file system,
including virtual tapes. Deleted data is not recoverable.
Use the filesys destroy command to destroy the file system. You must run the
command with an admin role. The filesys destroy command runs
only with a security policy authorization on the system and with security
officer endorsement. If you configure multifactor authentication on the
system, the security officer must enter the RSA passcode to authorize this
command. The system runs some enhanced security checks before
allowing filesys destroy.
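On the DDOS CLI, the flow looks roughly like the following. This is a hedged sketch, not a full transcript — the system prompts for security officer credentials and confirmation before proceeding:

```
# Disable the file system before destroying it
filesys disable

# Destroy the file system; requires the admin role plus security
# officer authorization. This deletes all data irreversibly.
filesys destroy
```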
Client-side command
A client-side command is a command that originates and runs on the
client. DD System Manager and command-line commands are considered
server-side commands.
PowerProtect DD Secure Multi-Tenancy Implementation and Administration
Terms
• Data protection
• Network security
• Disaster recovery capabilities
SMT provides the ability to securely isolate many users and workloads in
a shared infrastructure, so that the activities of one tenant are not
apparent or visible to the other tenants.
The following cloud models are allowed for protection storage with Secure
Multitenancy (SMT):
Local Backup
For large enterprises in a private cloud, local backup occurs for multiple
business units in the same geography. Each business unit is a tenant for
SMT.
Replicated Backup
Remote offices with local backup can replicate data to a public, private, or
hybrid cloud as an SMT tenant.
Remote Backup
SMT Architecture
SMT allows tenants to manage and monitor only their own data.
Management of isolated tenant data enables chargeback information,
trend monitoring, and other reporting.
tenants. The MTrees, SUs, and VTL pools provide logical data isolation
by restricting each tenant's visibility and by granting read and write
access to data in their own tenant units only.
• The tenant configures backup and archiving applications to send data
to their configured tenant unit MTree, SUs, or VTL pool.
Tenants Using Separate Protocols Logically and Securely Isolated from Each Other
The following compares the abilities of the landlord and the tenant
administrator.
• The landlord can:
− Monitor and manage all tenants.
− View all content across the entire system.
− Set capacity and stream quotas on the system for different tenant
units.
− Generate reports on tenant unit data.
• The tenant administrator can:
In the example, Tenant A names Tenant Unit A 1.1 as the source and
Tenant Unit A 2.1 as the destination.
When using strict security mode, the UUIDs of the source and destination
tenants must be set and identical. That is, you must create the tenant
through the CLI and set the UUID using the smt tenant create
tenant-name [tenant-uuid uuid] command.
a. Administrators set the capacity quota through the command line for
any future replication MTrees.
b. Capacity quotas prevent any single tenant from creating a full
storage condition that prevents other tenants from adding data to
their own spaces.
With a local IP assigned to Tenant A, their tenant units are accessible
only by a client using the configured local IP. Without a local IP, access
to the tenant units is not restricted to a specific data access address.
With remote IP addresses, clients can access the tenants only when they
connect from a defined set of configured remote IPs. An authorized user
with a username and password, but without a remote IP assigned to their
client, cannot gain access to the system. This form of network isolation
creates an association between the management IP and a tenant unit.
Remote IP addresses provide a layer of network isolation using access
validation.
Setting local and remote IPs is only required for self-service sessions.
Creating a Tenant
There are two parts to the Multitenancy window. The first part displays a
list of all tenants and tenant units that the PowerProtect DD System
Manager manages. The second part displays a detailed overview of
selected tenants and tenant units.
When you select All Tenants, a detailed overview displays the number of
configured tenants, tenant units, and host systems.
On the Identify Host System page, identify a system that DDMC manages
with enough capacity to host your new tenant unit.
The Multitenancy page has two parts. The first part provides a list of all
tenants and tenant units in the data center. The second part displays an
overview of either the selected tenant or tenant units.
Configuring Administration
Configuring MTrees
MTrees Page to Administer MTrees for the Tenant Unit and the Summary Page to Create
the Tenant Unit
On the MTrees page, select the MTree for the tenant unit, or click ADD,
EDIT, or DELETE to modify the MTree list.
Important:
If you configure strict security mode, you must create the
tenant in the command line interface using the smt tenant
create tenant-name [tenant-uuid uuid]
command.
Default Gateway: You can configure a default gateway for tenant units
belonging to the same tenant.
The way to make tenant units work with a DIG is to configure all of the IP
addresses in the DIG as local data access IP addresses within the tenant
unit. The SMT tenant unit takes full advantage of any link that the DIG
provides.
The net filter or IP table restricts access by blocking packets that are
based on remote and local IP setups.
Each tenant unit has a firewall rule set that permits traffic only from
certain client IP addresses.
If you configure remote IPs with subnets and data ranges, the system
does not perform the tenant isolation check.
Non-SMT entities within the PowerProtect DD system may not use the
unique default gateways that the administrator assigns to a tenant.
Managing Quotas
Once the administrator sets the quotas, the tenant admin can monitor one
or all tenant units. Monitoring ensures that no single object exceeds its
allocated quotas and deprives others of system resources.
The Quota Settings Window Allows Administrators to Disable and Configure an MTree
Quota.
Tenant Self-Service
Alerts related to secure multi-tenancy are specific to each tenant unit and
differ from PowerProtect DD system alerts. When you enable tenant self-
service, the tenant-admin can choose to receive alerts about the various
system objects. A tenant-admin may only view or modify notification lists
with which they are associated.
The following are some of the CLI commands to administer CIFS and NFS
for SMT:
• The mtree create mtree-path [tenant-unit tenant-unit-
name] [quota-soft-limit n {MiB|GiB|TiB|PiB}] [quota-
hard-limit n {MiB|GiB|TiB|PiB}] command creates an
MTree in the specified path and sets the capacity soft and hard quotas
for the MTree.
• The mtree modify mtree-path tenant-unit tenant-unit-
name command assigns an MTree to a tenant-unit.
• The cifs share create share path path {max-
connections max connections | clients clients |
users users | comment comment} command creates a CIFS
share.
• The nfs show active [tenant-unit tenant-unit]
command displays the NFS clients that are active over the past 15
minutes and the mount path for each client.
Virtual tape library (VTL) access groups create a virtual access path
between a host system and PowerProtect DD VTL to achieve tenant data
isolation. The physical Fibre Channel connection between the host system
and PowerProtect DD VTL must exist.
The backup application on the host system writes to and reads from the
DD VTL tapes. DD VTL creates the tapes in a DD VTL pool, which is an
MTree formatted for VTL data. Administrators can assign pools to tenant
units. The association of VTL pools to MTrees enables SMT monitoring
and reporting.
The following are some of the CLI commands that tenant-admins can run
to view read-only information about their VTL pools:
• The mtree list command displays a list of MTrees belonging to
their tenant unit.
• The mtree show compression command displays statistics about
compression for their MTree.
• The mtree show performance command displays statistics about
performance for their MTree.
During DD Boost managed file replication (MFR), storage units are not
replicated in their entirety. Instead, the backup application selects
certain files within a storage unit for
replication. You can replicate files from a storage unit that is assigned to a
tenant unit on one system to a different storage unit assigned to a tenant
unit on another system.
Access the Tenant Unit Details lightbox by selecting a specific Tenant from
the All Tenants list and clicking the information icon.
Monitoring Quotas
Initially, you can set quotas with the Secure MultiTenancy (SMT)
configuration wizard. You can perform quota tasks using the PowerProtect
DD Management Center (DDMC), or the command line.
Landlords and tenant admins can collect usage statistics and compression
ratios for MTrees associated with their tenant-units using the following
commands:
MTree List
For landlords, use the mtree list command to list MTrees that exist on
a PowerProtect DD system. For tenant-admins, use mtree list to list
MTrees within their assigned tenant-unit.
The quota capacity show command lists capacity quotas for MTrees
and storage-units.
PCM Operations
Physical capacity measurement (PCM) provides space usage information for an MTree. You can configure
PCM using both the DD System Manager (DDSM) and the PowerProtect
DD Management Center (DDMC).
Perform the following steps to start the PCM process immediately using
DD System Manager:
1. Select Data Management > MTree > SUMMARY.
2. Scroll down to the Physical Capacity Measurements area and click
Measure Now to the right of Submitted Measurements.
3. Select a Priority as Normal or Urgent and click SUBMIT.
You can create status and usage templates to generate reports for Secure
MultiTenancy (SMT). Go to Reports > Management within PowerProtect
DD Management Center (DDMC) to create the templates.
Add a Report
From the report type page, select Multitenancy Reports and click NEXT.
From the Add Report Template Content page, provide a Name, Template,
and Sections and click NEXT.
From the Add Report Template Scope page, select Tenant or Tenant Unit,
and click NEXT.
The following is an example of the Scope page for both the Daily Status
and Usage Metrics report templates:
Add a Schedule
From the Schedule page, provide Time Span, Schedule, and Report
retention information, and click NEXT.
The following is an example of the Schedule page for both Daily Status
and Usage Metrics report templates:
From the Email page, enter the email ID and click NEXT.
Review the information on the Summary page. Confirm that you check the
Save report template and Run report checkboxes, and then click FINISH.
Report Templates
Sections Available in the Daily Status and Usage Metrics Report Templates
When you create a report from a report template, you can add sections
such as a report overview, logical and physical capacity, replication
activity, and the number of network bytes used.
The Daily Status template includes daily status for the tenant or tenant
unit as it pertains to report overview, capacity, replication, and network
bytes used.
The Usage Metrics template includes metrics for the tenant and tenant
unit as it pertains to logical and physical capacity consumption and
network bytes used.
Network isolation
Administrators can eliminate potential security problems with tenants
accessing the system over the network. They can configure specific
network clients for tenants using local and remote internet protocol (IP).
Local and Remote IPs create a layer of network isolation using access
validation.
Storage unit
A storage unit is an MTree configured for the DD Boost protocol. The
system administrator creates a storage unit and assigns it to a DD Boost
user. The DD Boost protocol permits access only to storage units
assigned to DD Boost users connected to the system.
Tenant
In SMT, a tenant is responsible for scheduling and running the backup
application for the tenant customer. Tenants are also responsible for
managing their own tenant units, including configuring backup protocols
and monitoring resources and stats within their tenant unit.
Tenant unit
A tenant unit is a partition of a PowerProtect DD system that serves as the
unit of administrative isolation between tenants. Administrators assign
MTrees to tenant units. A tenant-admin user can configure and monitor a
specific tenant unit. Tenant units may consist of one or more MTrees.
Tenant units can also span multiple PowerProtect DD systems.
Capacity and Throughput Planning and Monitoring
Appendix
Terms
Collect information about your daily backup sizes and the amount of time
to complete the backups. Calculate capacity needs using the information
that you collect about the backup system. The following factors can
affect the data capacity that the system requires to run data protection
operations successfully.
The key to space savings in a data protection system is the ability for the
system to recognize duplicate data. Data deduplication achieves space
savings by finding and reducing the number of duplicate copies of data
that are stored. The amount of duplicate data within the protection system
sets the limit on space savings that you realize through deduplication.
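The link between a deduplication ratio and the realized space savings is simple arithmetic. A minimal sketch — the ratios below are illustrative, not measurements from any particular system:

```python
def space_savings_pct(reduction_ratio: float) -> float:
    """Percent of physical space saved for a given logical:physical ratio."""
    return (1 - 1 / reduction_ratio) * 100

# Doubling the ratio from 10:1 to 20:1 adds only five points of savings,
# which is why the amount of duplicate data matters more than ratio chasing.
print(space_savings_pct(10))  # 90.0
print(space_savings_pct(20))  # 95.0
```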
The following are factors to consider when determining the capacity needs
for your protection system:
Data Size
The data size is the total physical size of the data that is backed up to a
protection system.
Data Type
Office workers create files in their normal day-to-day work. These files
are examples of data that deduplicates well. Email messages, spreadsheets,
and text documents often contain redundant data that a work community
distributes and shares. Backing up a virtual data center with thousands of
identical virtual machines might experience an overall data reduction of
1000:1.
Deduplication Rate
Change Rate
High change rates can mean low deduplication as new and unique data is
unlikely to match preexisting data on the protection system and
deduplicates poorly.
Retention Policies
Data reduction rates vary based on data types, similar data amounts, and
storage duration. It is difficult to determine exactly what rates to expect
from any given protection system and the data it protects. Protection
systems usually achieve the highest data reduction rates when they store
many full backups.
Different data types and sizes affect your data reduction results.
According to Dell Technologies, you can realize up to a 65:1 data
reduction ratio when
applying best practices for deduplicating data on a PowerProtect DD
appliance.
The following are factors that are associated with lower data deduplication
ratios:
The following are factors that are associated with higher data
deduplication ratios:
With an estimated compression of 10x, the amount of space that you need
for each incremental backup is 100 GB.
As subsequent full backups run, the backups likely yield a higher data
reduction rate. If you estimate 20x for the data reduction rate on
subsequent full backups, then 10 TB of data compresses to 500 GB.
Four daily incremental backups require 100 GB each. One weekly full
backup that uses 500 GB of space yields a burn rate of 900 GB per week.
A 900 GB weekly burn rate over the full 8-week retention period totals
an estimated 7.2 TB of storage, which includes the daily incremental
backups and the weekly full backups.
When you add the requirement for daily incremental backups to the initial
full backup, you realize a required capacity of about 9.2 TB. On a system
with 10 TB of usable capacity, the appliance operates at about 92% of
capacity. You might need more than an eight percent buffer for current
needs. You might want to consider a system with a larger capacity, or a
system to which you can add extra storage to compensate for data
growth.
You can calculate the required throughput by dividing the size of the
largest backup by the backup window time.
Divide 2 TB by 10 hours. The result is that you need a throughput rate
of at least 200 GB per hour, or about 489 Mbps, to move the data to the
protection system within the backup window.
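The worked examples above can be reproduced in a few lines. A sketch using the figures from this module — decimal units for the capacity totals, and binary units for the 2 TB backup, which is what yields the quoted 489 Mbps:

```python
# Weekly burn rate: four 100 GB incrementals plus one 500 GB weekly full
incrementals_gb = 4 * 100
weekly_full_gb = 500
burn_rate_gb = incrementals_gb + weekly_full_gb          # 900 GB/week

# Storage for the full 8-week retention period, in TB (decimal units)
retention_tb = burn_rate_gb * 8 / 1000                   # 7.2 TB

# Throughput: move a 2 TiB backup within a 10-hour window
bits = 2 * 1024**4 * 8
mbps = bits / (10 * 3600) / 1e6                          # ~489 Mbps

print(burn_rate_gb, retention_tb, round(mbps))
```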
Maximum Throughput: Up to 4.2 TB/hr | Up to 12.7 TB/hr | Up to 15 TB/hr | Up to 26 TB/hr | Up to 41 TB/hr | Up to 1.85 TB/hr
Maximum Throughput with DD Boost: Up to 7.0 TB/hr | Up to 27.7 TB/hr | Up to 33 TB/hr | Up to 57 TB/hr | Up to 94 TB/hr | Up to 4.2 TB/hr
Logical Capacity: Up to 1.6 PB | Up to 11.2 PB | Up to 18.7 PB | Up to 49.9 PB | Up to 97.5 PB | Up to 4.8 PB
Logical Capacity with Cloud Tier: Up to 4.8 PB | Up to 33.5 PB | Up to 56.1 PB | Up to 149.8 PB | Up to 293 PB | Up to 14.8 PB
Usable Capacity with Cloud Tier: Up to 96 TB | Up to 516 TB | Up to 864 TB | Up to 2.3 PB | Up to 4.5 PB | Up to 288 TB
The maximum capacity is the amount of usable data storage space for
each PowerProtect DD model. The maximum capacity assumes the maximum
number of internal and external drives that the model supports.
The number of network streams you may expect to use depends on your
hardware model.
External factors in the backup environment often impact how fast data is
sent to the PowerProtect DD appliance. External factor bottlenecks do not
affect the potential throughput of the PowerProtect DD appliance.
Simultaneous Streams
System Cleaning
Precompressed Data
RAID Rebuild
If a storage disk fails, the system replaces the failed disk with a spare
disk. The replacement maintains the full redundancy of the RAID system
and rebuilds all lost data onto the spare disk. During a RAID disk
rebuild, the PowerProtect DD system consumes resources that it otherwise
uses for other operations.
Consider the daily change rate in the data and retention period.
Allow for a minimum 20% buffer in capacity, stream count, and throughput
requirements:
• Use the required capacity divided by maximum capacity of a particular
model to calculate the capacity percentage.
• Use the required throughput divided by the maximum throughput of a
particular model to calculate the throughput percentage.
Sometimes one model provides adequate capacity but does not provide
enough throughput, or the reverse. Your model selection should
accommodate both throughput and capacity requirements with an
appropriate buffer.
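The buffer calculation in these bullets can be sketched directly. The figures below appear in the scenarios of this module — 70 TB required against a 288 TB model, and 15 TB/hr required against the 33 TB/hr DD Boost figure:

```python
def buffer_pct(required: float, maximum: float) -> float:
    """Headroom left after meeting the requirement, as a percent of maximum."""
    return (1 - required / maximum) * 100

print(round(buffer_pct(15, 33)))   # throughput buffer, ~55%
print(round(buffer_pct(70, 288)))  # capacity buffer, ~76%
```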
Scenario 1
If both Dell Cloud Tier and DD Boost are used, the customer could use
the DD3300. Otherwise, the DD6900 would be the better choice.
If Cloud Tier and DD Boost are used, the DD3300 is a possible solution:
• If DD Boost is the primary protocol that is used for backup data and 7.0
TB/hr is required, a 54% buffer for throughput is achieved.
• If Cloud Tier is used, the DD3300 provides up to 96 TB capacity. The
system capacity provides a 27% buffer.
The DD6900 is a possible solution even if Cloud Tier and DD Boost are
not used:
• The DD6900 provides up to 15 TB/hr using CIFS or NFS and up to 33
TB/hr when using DD Boost. Regardless of which protocols are used,
the DD6900 exceeds the throughput requirements.
• The DD6900 provides up to 288 TB usable capacity without Cloud
Tier. The system capacity is above the 70 TB storage requirement,
leaving a 76% buffer.
Scenario 2
A customer estimates that they require 275 TB usable storage for backups
over the next 5 years. They require at least 15 TB/hour throughput to
ensure that all data is backed up within their backup window.
| | DD3300 | DD6400 | DD6900 | DD9400 | DD9900 | DDVE |
|---|---|---|---|---|---|---|
| Maximum Throughput | Up to 4.2 TB/hr | Up to 12.7 TB/hr | Up to 15 TB/hr | Up to 26 TB/hr | Up to 41 TB/hr | Up to 1.85 TB/hr |
| Maximum Throughput with DD Boost | Up to 7.0 TB/hr | Up to 27.7 TB/hr | Up to 33 TB/hr | Up to 57 TB/hr | Up to 94 TB/hr | Up to 4.2 TB/hr |
| Logical Capacity | Up to 1.6 PB | Up to 11.2 PB | Up to 18.7 PB | Up to 49.9 PB | Up to 97.5 PB | Up to 4.8 PB |
| Logical Capacity with Cloud Tier | Up to 4.8 PB | Up to 33.5 PB | Up to 56.1 PB | Up to 149.8 PB | Up to 293 PB | Up to 14.8 PB |
| Usable Capacity with Cloud Tier | Up to 96 TB | Up to 516 TB | Up to 864 TB | Up to 2.3 PB | Up to 4.5 PB | Up to 288 TB |
If both DD Boost and Dell EMC Cloud Tier are used, the customer could
use the DD6900. Otherwise, the DD9400 would be the better choice.
If Cloud Tier and DD Boost are used, the DD6900 is a possible solution:
• If DD Boost is the primary protocol used for backup data, up to 33
TB/hr can be backed up, providing a 55% buffer for throughput.
• If Cloud Tier is used, the DD6900 provides up to 576 TB capacity. The
system capacity provides a 52% buffer.
If Cloud Tier and DD Boost are not used, the DD6900 is not a possible
solution:
• If CIFS and NFS are the primary protocols that are used for backup
data, the customer can expect up to 15 TB/hr for backup data. The
system throughput matches the requirement, but leaves no buffer for
growth.
• If Cloud Tier is not used, the DD6900 provides up to 288 TB capacity.
Although the system capacity is above the requirement, it does not
provide the recommended 20% buffer.
The DD9400 is a possible solution even if Cloud Tier and DD Boost are
not used:
• The DD9400 provides up to 26 TB/hr using CIFS or NFS and up to 57
TB/hr when using DD Boost. Regardless of which protocols are used,
the DD9400 exceeds the throughput requirement.
• The DD9400 provides up to 768 TB usable capacity without Cloud
Tier. The system capacity is above the 275 TB storage requirement,
leaving a 64% buffer.
Scenario 3
A customer estimates that they require 625 TB usable storage for backups
over the next 5 years. They require at least 36 TB/hour throughput to
ensure that all data is backed up within their backup window.
| | DD3300 | DD6400 | DD6900 | DD9400 | DD9900 | DDVE |
|---|---|---|---|---|---|---|
| Maximum Throughput | Up to 4.2 TB/hr | Up to 12.7 TB/hr | Up to 15 TB/hr | Up to 26 TB/hr | Up to 41 TB/hr | Up to 1.85 TB/hr |
| Maximum Throughput with DD Boost | Up to 7.0 TB/hr | Up to 27.7 TB/hr | Up to 33 TB/hr | Up to 57 TB/hr | Up to 94 TB/hr | Up to 4.2 TB/hr |
| Logical Capacity | Up to 1.6 PB | Up to 11.2 PB | Up to 18.7 PB | Up to 49.9 PB | Up to 97.5 PB | Up to 4.8 PB |
| Logical Capacity with Cloud Tier | Up to 4.8 PB | Up to 33.5 PB | Up to 56.1 PB | Up to 149.8 PB | Up to 293 PB | Up to 14.8 PB |
| Usable Capacity with Cloud Tier | Up to 96 TB | Up to 516 TB | Up to 864 TB | Up to 2.3 PB | Up to 4.5 PB | Up to 288 TB |
If DD Boost and Dell EMC Cloud Tier are used, the customer could use
the DD9400. Otherwise, the DD9900 would be the better choice.
If Cloud Tier and DD Boost are used, the DD9400 is a possible solution:
• If DD Boost is the primary protocol used for backup data, up to 57
TB/hr can be backed up, providing a 37% buffer for throughput.
• If Cloud Tier is used, the DD9400 provides up to 2.3 PB capacity. The
DD9400 provides a 73% buffer for capacity.
If Cloud Tier and DD Boost are not used, the DD9400 is not a possible
solution:
• If CIFS and NFS are the primary protocols that are used for backup
data, the customer can expect up to 26 TB/hr for backup data. The
maximum throughput of the DD9400 is below the requirement of 36
TB/hr.
• If Cloud Tier is not used, the DD9400 provides up to 768 TB capacity.
The system capacity provides only a 19% buffer for capacity.
The DD9900 is a possible solution even if Cloud Tier and DD Boost are
not used:
• The DD9900 provides up to 41 TB/hr using CIFS or NFS and up to 94
TB/hr when using DD Boost. Regardless of which protocols are used,
the DD9900 exceeds the throughput requirement.
• The DD9900 provides up to 1.5 PB usable capacity without Cloud Tier.
The capacity of the system is well above the 625 TB storage
requirement, leaving a 58% buffer.
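The model-selection reasoning used in these scenarios can be sketched as a small Python helper. The spec dictionary is a hypothetical abbreviation of the comparison table (throughput and usable capacity without Cloud Tier or DD Boost), and, following the scenario conclusions above, the 20% buffer is enforced on capacity while throughput must simply be met:

```python
# Hypothetical, abbreviated spec table: model -> (max throughput in TB/hr,
# usable capacity in TB), without DD Boost or Cloud Tier.
MODELS = {
    "DD6900": (15, 288),
    "DD9400": (26, 768),
    "DD9900": (41, 1500),
}

def pick_model(required_tb, required_tb_hr, capacity_buffer=0.20):
    """Return the smallest model that meets the throughput requirement and
    leaves at least the given capacity buffer, or None if nothing fits."""
    for name, (tput_tb_hr, cap_tb) in MODELS.items():  # smallest model first
        if (required_tb_hr <= tput_tb_hr
                and required_tb <= cap_tb * (1 - capacity_buffer)):
            return name
    return None

print(pick_model(625, 36))  # Scenario 3 without Cloud Tier or DD Boost
```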
Consider using tools such as Dell Data Protection Advisor (DPA) to gain
actionable insights about what the bottleneck may be in a backup solution.
System Utilization
1. The State column shows the state of the CPU. If the CPU is
performing only one type of operation, it reports only one state.
a. C indicates that the system is performing file system cleaning
operations.
b. D indicates that the system is reconstructing data onto a
replacement disk.
c. V indicates that the system is performing file verification.
2. CPU avg/max reports the average and maximum CPU utilization in
percent. The number in brackets is the CPU ID of the most-loaded
CPU.
3. Disk max reports the highest disk utilization over all disks. The number
in brackets is the disk ID of the most-loaded disk.
If the CPU utilization is 80% or greater, or if the disk utilization is 60%
or greater for an extended period, the PowerProtect DD appliance is likely
to run out of disk capacity or reach its CPU processing maximum.
Confirm that the system is not performing cleaning or disk reconstruction
operations. You can check for cleaning and disk reconstruction in the
State column of the system show performance output.
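The "extended period" check described above can be sketched in Python, assuming you have already collected per-interval utilization samples (for example, parsed from system show performance output). The thresholds, run length, and sample values here are illustrative:

```python
def sustained(samples, threshold, min_run=3):
    """True if `samples` stays at or above `threshold` for at least
    `min_run` consecutive intervals (an "extended period")."""
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold else 0
        if run >= min_run:
            return True
    return False

cpu = [85, 88, 91, 84]   # percent utilization per interval (example data)
disk = [40, 65, 62, 58]

# CPU at 80% or greater, or disk at 60% or greater, sustained over time:
# before acting, confirm cleaning (C) or reconstruction (D) is not running.
needs_review = sustained(cpu, 80) or sustained(disk, 60)
print(needs_review)
```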
Monitoring Throughput
In the example report on this page, you can see a high and steady amount
of data inbound on the network interface eth0a. The amount of inbound
data indicates that the backup host is writing data. The incoming data is
backup traffic, and not replication traffic as the Repl column indicates no
activity.
The likely cause of the low disk write rates is a high number of incoming
data segments that duplicate segments already stored on the system. The
PowerProtect DD appliance identifies the duplicates in real time as they
arrive and writes only the new segments it detects.
When you identify performance problems, document the time when you
observe poor performance to know where to look in the system show
performance output.
During normal data protection operations, you must maintain proper file
system space on your PowerProtect DD appliance. In DD System Manager
(DDSM), select Data Management > File System to review details about
the file system.
The File System window provides the following tabs for monitoring details:
• The SUMMARY tab shows space usage statistics for the active and
cloud tiers.
• The DD ENCRYPTION tab displays encryption types, status, and
progress. You must license the DD Encryption feature to view any
status for encryption.
• The CHARTS tab displays graphs for Space Usage, Consumption, and
Daily Written over time.
Select Data Management > MTree to view details about the MTrees
available for use on the system.
The MTree window provides the following tabs for monitoring details:
• The SUMMARY tab displays the selected MTree Details, configured
Quotas, Protocols in use, Snapshots, Physical Capacity Measurements
(PCM), and Retention Lock Status.
• The SPACE USAGE tab displays a space usage graph for the
selected MTree.
• The DAILY WRITTEN tab displays a graph of data written daily over a
selected time.
The following are the three levels of capacity and methods to reduce the
amount of space used on the system:
• Level 1 is where capacity reaches a point where the appliance cannot
write additional data to the file system. The system generates an out of
space alert.
− To remedy a level 1 capacity limit alert, delete unneeded datasets,
shorten the data retention period on the system, delete snapshots,
and then perform a file system cleaning operation to recover space.
• Level 2 is where capacity reaches a point where you cannot delete files
because deleting files requires free space.
− To remedy a level 2 capacity limit, expire snapshots and perform a
file system cleaning operation to recover space.
• Level 3 is where all attempts to expire snapshots, delete files, or write
new data fail.
The Data Management > File System > SUMMARY Page in DD System Manager
In the DD System Manager, the Data Management > File System >
SUMMARY page displays current space usage and availability. The
SUMMARY page also provides an up-to-the-minute indication of the
compression factor.
You can monitor CPU and disk utilization in the state and utilization
options of the command output.
• The Active Tier Space Usage section shows the amount of disk space
available based on the last cleaning.
− The internal index may expand as the appliance fills with data. The
index expansion takes space from the available amount.
− Cleanable displays the estimated amount of space that the system
could reclaim after running a cleaning operation.
• The Active Tier (Last 24-Hours) section displays compression information.
Data Management > File System > Chart > Space Usage Chart in DD System Manager
In the DD System Manager, the Data Management > File System >
CHARTS page displays graphs depicting how the PowerProtect DD writes
and stores data on the PowerProtect DD appliance.
The lines of the Space Usage chart denote measurements for the following:
• Pre-Comp Used is displayed as a blue line with blue shading. Pre-
Comp Used is the total amount of data that backup servers send to the
system before compression.
Capacity
Selecting the Capacity item changes the chart so that it displays the
amount of space used relative to the total capacity of the system, with a
blue line indicating the storage limit.
The Capacity view also displays cleaning start and stop data points. The
graph covers one week by default and displays one cleaning event. This
PowerProtect DD appliance has its cleaning schedule set to one day per
week.
Space Usage
The File System > CHARTS > Consumption Chart Capacity Graph in DD System
Manager
When you view the Post-Comp Used graph, you can see the space that
the system consumes over time.
The chart displays Pre-Comp Used data in blue, Post-Comp Used data
in red, and the Comp Factor in green.
Data Management > File System > CHARTS > Daily Written Chart
The Data Management > File System > CHARTS page displays graphs
depicting how data is written and stored on the PowerProtect DD
appliance.
The Daily Written graph displays a visual representation of data flow over
time.
The Daily Written graph allows you to see data ingestion and
compression factor results over a duration that you select. You may notice
trends in compression factor and ingestion rates. The graph shows data
amounts for both precompression and postcompression.
Local-Comp Factor displays the compression factor of the files when the
system writes them to disk. The default local compression method on
most PowerProtect DD appliances is GNU zip fast (gzfast). Other
supported compression types are Lempel-Ziv (lz), GNU zip (gz), and none.
Gz is a zip-style compression that uses the least amount of space for data
storage. Gz compression uses 10% to 20% less space than lz on
average. However, some datasets get higher compression. PowerProtect
DD systems often use the gz compression type for nearline storage
applications in which performance requirements are low.
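The speed-versus-space trade-off between gzfast and gz is analogous to zlib's compression levels, which you can observe directly in Python. This is only an analogy with invented sample data, not the appliance's actual codec settings:

```python
import zlib

# Repetitive sample payload; backup streams often compress similarly well.
data = b"backup-segment-" * 10_000

fast = zlib.compress(data, level=1)  # analogous to gzfast: quicker, larger
best = zlib.compress(data, level=9)  # analogous to gz: slower, smaller

# The higher level should never lose to the lower one on size here.
print(len(fast), len(best), len(best) <= len(fast))
```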
The File System Marks Expired Data For Deletion During File System Cleaning
Running file system cleaning can take from several hours to several days
to complete depending on the amount of space the file system must clean.
You can run file system cleaning by scheduling the operation or running it
manually.
You can run file system cleaning by setting a schedule for both the active
and the cloud tier.
On the scheduled date and time, the system checks whether the
estimated capacity usage can reach the specified percentage within the
specified number of days. If it can, the system initiates cleaning on the
active tier. If possible, set the days option to twice the number of days
set with the filesys clean set schedule command.
For example, if you schedule cleaning to run once per week, set the Days
option to 14. If you schedule cleaning biweekly, set the Days option to 28.
If the specified estimate-percent-used is achievable, the system
initiates cleaning on the active tier. If the percentage is not achievable, the
system skips the cleaning operation.
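The scheduling rule above can be restated as a small Python sketch. The function and its inputs are hypothetical names that only restate the documented behavior; the appliance's internal implementation is not exposed:

```python
def should_clean(current_pct, daily_growth_pct, days, estimate_percent_used):
    """At the scheduled time, project usage `days` ahead and clean only
    if the projection reaches estimate-percent-used."""
    projected = current_pct + daily_growth_pct * days
    return projected >= estimate_percent_used

# Weekly cleaning schedule -> days set to twice the cycle (14).
print(should_clean(60, 1.5, days=14, estimate_percent_used=80))  # reaches 81%
print(should_clean(60, 0.5, days=14, estimate_percent_used=80))  # only 67%
```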
Spatial locality
Spatial locality, also termed data locality, is the use of data elements
within relatively close storage locations.
Weekly cycle
The weekly cycle is the number of days that the system runs an
incremental backup. The number of days in a cycle is typically four to six
days in a week.
Dell PowerProtect DD Management Center Administration
Monitoring Systems
Overview
Performing Daily Monitoring
Monitoring Capacity
System Details
Monitoring Replication
Administration Menu
Multitenancy
Permissions
Groups
Properties
Simulation Activity: Adding Permissions to Systems
Infrastructure Menu
Systems
Data Centers
Configuration Templates
Updates
Simulation Activity: Adding a Data Center
Smart Scale
Overview
System and Hardware Requirements Activity
Question 1
Question 2
Smart Scale Services
Glossary
Overview
DDMC:
Instructions
Notes
Question 1
1. A customer wants to know the aggregated usage totals for all their
managed PowerProtect DD systems. Which of the following is the
best recommendation for the customer?
a. Deploy a DDMC to view the site-wide storage capacity.
b. Deploy a DDMC to view the estimate-projected capacity needs
based on historical trends.
c. Deploy a DDMC to view the processed alerts for all managed
PowerProtect DD systems.
d. Deploy a DDMC to use Smart Scale.
Question 2
DDMC DDSM
System Requirements
The VMware hardware and software required to host DDMC are:
Installing DDMC
Prerequisites
For a smooth and successful DDMC installation, ensure that the following
are available:
− Download the DDMC .zip file from the Dell Support website.
− Two DDMC packages are available for the ESXi platform: DDMC
without Smart Scale services and DDMC with Smart Scale services.
− DDMC for AWS, Azure, and GCP is available in the marketplace of
each of these public clouds.
2 The VMware vSphere client application is not required for AWS, Azure,
GCP, Hyper-V, or KVM.
3 If installing within a Hyper-V or cloud environment without role-based
1. Download the required DDMC software and extract the .zip file.
2. Log in to a vSphere client or VMware Host Client.
3. Launch the virtual machine deployment wizard to deploy the DDMC
instance using the OVA file.
4. Complete the initial configuration.
The initial login requires using the sysadmin user ID and the default
password.
Azure: sysadmin/changeme
DDMC Dashboard
Users can log in to DDMC with their existing public key infrastructure (PKI)
and common access card (CAC) and present the PowerProtect DD
system with a certificate for authentication or authorization.
1. Click the User icon on the DDMC banner and select Logout in the
dropdown.
Prerequisites
For the DDMC network and time zone configuration, ensure that the
following are available:
• Hostname
• IP address
• Netmask
• Default Gateway
• Domain name
• DNS servers
• NTP servers
Interface
The Interfaces page lets you manage and configure the Ethernet
interface, DHCP, and IP addresses and displays network information and
status.
1. Click the gear icon in the DDMC banner and select Settings.
2. Select Network > Interface.
3. Select the interface to modify and click Edit.
4. Complete the configuration and click SAVE.
Hosts
1. Click the gear icon in the DDMC banner and select Settings.
2. Select Network > Hosts.
3. Select the Mode to set the host and domain names:
a. Using DHCP
b. Manually
i. Enter a Host name.
ii. Enter a Domain name associated with DDMC.
4. Click APPLY to save the changes.
For manual configuration, use the Mapping area to add a host mapping.
DNS
1. Click the gear icon in the DDMC banner and select Settings.
2. Select Network > DNS.
3. Select the Mode to set the method for obtaining the DNS:
a. Using DHCP
b. Manual
i. Click ADD.
ii. Enter the DNS IP address.
4. Select APPLY to save changes.
Search domains are shown as an action table within the DNS page.
Routes
1. Click the gear icon in the DDMC banner and select Settings.
2. Select Network > Routes.
3. In the STATIC ROUTES page, set the default IPv4 or IPv6 gateway
address:
a. Using DHCP
b. Manual
i. Enter the gateway IP address.
4. Click APPLY to save the changes.
SNMP
DDMC supports SNMP V2C and SNMP V3. SNMP V3 provides a greater
degree of security than V2C by replacing clear text community strings with
user-based authentication using either MD5 or SHA1.
The default port that is open when SNMP is enabled is port 161. Traps are
sent out through port 162.
1. Click the gear icon in the DDMC banner and select Settings.
2. Select Network > SNMP.
3. In the Status area, select Enable to use SNMP.
4. In the Status area, select Disable to stop using SNMP.
5. Click APPLY to save the changes.
Use the V3 Configuration area to set up SNMP V3 users and trap hosts.
Use the V2C Configuration area to set up the community strings and
trap hosts.
Perform the following to set or change the time and date settings:
1. Click the gear icon in the DDMC banner, and then select Settings >
SYSTEM > Time and Date.
2. Under Settings set how the time synchronizes:
a. To manually set the time and date, from the Synchronization mode
select Manual, set the Time Zone from the drop-down lists, and then
set the Date and Time.
b. To use an NTP server to synchronize the time, from the
Synchronization mode select how to access the NTP server:
i. Using NTP server from DHCP, which automatically selects a server.
ii. NTP service manually, and then add the IP addresses of the servers
in the NTP Servers area.
3. Select APPLY.
The roles available in the DDMC are the same as the roles in the DD
System Manager.
Role Description
1. Click the gear icon in the DDMC banner, and then select Settings >
ACCESS > Administrator Access.
2. View the Passphrase. If required, set the passphrase.
3. View the available Protocols, and for the selected protocol, configure
the required options.
a. The following protocols are available for viewing or configuration:
i. FTP, FTPS, HTTP, HTTPS, SCP, SSH, or Telnet
o The status of the service is either enabled or disabled.
o The allowed hosts set the access permissions for the named
host.
4. Click APPLY to save changes.
To create a local user with the admin, limited-admin, or user role:
1. Click the gear icon in the DDMC banner, and then select Settings >
ACCESS > Local Users.
2. Click ADD.
3. In the Add Local User dialog box, fill out the requested information.
4. Select ADD.
5. Click APPLY to save changes.
Configuring Authentication
DDMC Authentication
1. Click the gear icon in the DDMC banner, and then select Settings >
Access > Authentication.
2. Select the authentication method NIS, WINDOWS, or LDAP.
3. Fill out the requested information.
4. Enable the required authentication method.
5. Click APPLY.
Step One
Step Two
Click ADD. Ensure that the box next to the system being added is checked
and enter the system details:
If another DDMC manages the system, select the Takeover managed system
checkbox.
Step Three
A progress bar displays on the page showing the progress of the initial
data synchronization for the newly added systems.
Notes
The three main areas of the DDMC main page are the banner, navigation
panel, and the work area.
Banner
1. Alerts
− A bell icon that when clicked shows the most recent alerts.
− A red badge notifies of unseen new alerts and the count.
2. Settings
− Provides various options including Support Bundles and Disaster
Recovery.
3. Refresh
4. User
− A circular icon that displays the first letter of the user ID.
− Displays the user and role information, provides access to the
classic view of DDMC, and the Logout option.
Navigation Panel
The navigation panel is on the left side of the DDMC and includes the
following categories:
• Dashboard
• Health
• Capacity
• Replication
• Reports
• Administration
• Infrastructure
Work Area
Within each category, you can select subcategories that appear in the
work area.
When you select a subcategory, the content in the work area changes.
The navigation elements on a DDMC page change the focus and scope
that the work area displays:
The dashboard:
By default, each user is assigned a dashboard with one tab containing a
group of widgets configured to cover all the systems that the user is
monitoring.
Dashboard Tabs
You can copy a dashboard tab with all its widgets to a new dashboard tab
and then edit the new dashboard with the Add tab control in the upper right
corner.
Dashboard tabs can be filtered using the filter icon in the upper right
corner to:
• Filter by group
• Filter by property
• Filter by system
• Filter by rule
• Clear filter
Widgets
To create a widget:
You can edit widgets using the Edit widget control or delete widgets using
the Remove widget control in the banner of each widget.
From some DDMC pages, you can launch a DD System Manager (DDSM)
session to perform configuration or troubleshooting operations.
The DDSM session that starts requires no login or logout and provides
complete management of the system.
Simulation Activity
Notes
Monitoring Systems
Overview
DDMC Reports
In addition to data provided on the interface, you can generate reports on
demand or on a schedule and email them to a list of interested parties.
The following table shows the type of data that DDMC retains for each
sample:
The data history is scanned to find the projection with the best fit: the
regression with the highest R2 value.
The R2 value is a measure of how close the regression fits the actual
measurements:
After the best fit is determined, the projection must pass the following
validation tests to ensure that the prediction is accurate:
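The R2 statistic that DDMC uses to rank candidate projections can be illustrated with a plain least-squares line in Python. This sketch demonstrates the statistic only; the sample capacity data is invented and DDMC's internal projection code is not exposed:

```python
def r_squared(xs, ys):
    """R^2 of the least-squares line through the points (xs, ys):
    1 means a perfect fit, values near 0 mean a poor fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Capacity samples trending steadily upward: a near-linear history
# yields an R^2 close to 1, so the projection passes the fit test.
days = [0, 1, 2, 3, 4, 5, 6]
used_tb = [10.0, 11.1, 11.9, 13.2, 14.0, 15.1, 15.9]
print(round(r_squared(days, used_tb), 3))
```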
Using DDMC to perform daily monitoring of your site lets you check for
unusual activity before it becomes a serious problem.
Dashboard
• Health Status
• Active Alerts
• Capacity Thresholds
• Capacity Used
• Replication Status
• Lag Thresholds
Alert Notifications
The Alerts notification area reports the severity, date, class, and system
name of the new alert.
To see the alert details, select the View All link to open the Health > Alerts
page.
Health Alerts
At the upper right corner, you can select the ACTIVE ALERTS or ALL ALERTS
tab.
The All active alerts or All alerts date range filter allows for narrowing or
expanding the focus of alert scoping or going back to a specific point in
time. The date range includes Last 12 hours, Last 24 hours, Last 7 days, Last
30 days, All active alerts, and Custom.
The SYSTEM and TENANT buttons at the upper right let you show all the
PowerProtect DD systems or systems by tenant assignment.
To see a summary of the history of the alert, click the EVENT OCCURRENCE
HISTORY button to see a list of every occurrence of the alert for the system.
Health Status
The toggle buttons at the upper right let you show all the PowerProtect DD
systems and systems organized by group or tenant assignment.
Health Jobs
The Health Jobs page displays information about jobs or tasks that are
initiated from DDMC.
This information includes jobs that are still in progress and jobs that have
completed, whether successfully or not.
Details of the task, including its subtask status, are shown for a selected
task in the Details panel.
Monitoring Capacity
Overview
You can monitor current and historical space consumption, and projected
near-term storage needs.
• Systems/MTrees
• Cloud
Systems Capacity
The Systems capacity threshold table shows the current used and
projected capacity.
3 Months is the default for the Projection Timeline (Capacity Used %).
The Export option downloads a .csv file to your workstation that contains
the current capacity used and the projected capacity utilization.
To view the capacity projection, select a system and click the Calculate
Projections button.
A projection is not made if the average usage in the last seven days is
less than 10%.
MTrees Capacity
The MTrees capacity table shows MTrees capacity statistics for the
PowerProtect DD systems.
A Details pane appears at the right of the page when you click the Show
Details icon at the left of each PowerProtect DD system and MTree in the
table.
The Export option downloads the MTrees table information to a .csv file to
your workstation.
Cloud Capacity
• Monitor the active tier and cloud tier capacity residing on different
cloud providers
System Details
The System Details contains the following tabs for non-HA systems:
| Tab | Description |
|---|---|
| NETWORK | Shows total bytes, backup and restore bytes, and replication inbound and outbound bytes. A Network Trend chart is available. |
| SYSTEM CHARTS | Includes all system charts and lets you produce charts for selected time intervals. |
Monitoring Replication
System Reports
Overview
Reports Templates
Creating a Report
The Add Report Template wizard creates a report template for use in running
reports about key data points.
To create a report:
The report template adds an entry in the reports table. Select the report
template to immediately run, edit, delete, or disable the report.
Simulation Activity
In this simulation, the learner creates a cloud report that is called Cloud
Status and views the report from the email.
Notes
Administration Menu
Multitenancy
You can edit, delete, or view tenant information from the Multitenancy
page.
a. Datacenter location
b. Size now (GB)
c. Size to grow (GB)
d. Time to grow
5. In the Select Host System page, select a system with enough logical
capacity to host the tenant unit and then click NEXT.
6. In the Administration page, set the following and then click NEXT:
a. Tenant Unit name
b. Optionally the Administrator name
c. Administrator email
d. Check Use strict security mode to allow incoming replications
only if they are from another tenant unit that the same tenant owns.
e. Add Management IP Addresses, which is optional, as needed.
When Create an Empty Tenant Unit is selected, the Use strict
security mode and Management IP Addresses options do not show.
7. The next page depends on the previous choice:
a. For manual provisioning, you can create MTrees/Storage Units,
and then click NEXT.
b. For automatic provisioning, you can configure users for data access
over the DD Boost protocol, and then click NEXT
c. For Create an empty Tenant Unit, go to the Summary page.
8. Review the Summary page and then click CREATE.
Permissions
c. Backup Operator
d. User
8. Click ASSIGN.
Groups
In a group:
Properties
To add properties to systems and replication pairs, follow the steps below:
▪ Let you provide a name and specific values for the property.
Selecting the option Allow multiple types lets you assign more
than one value.
4. Click ADD.
5. Assign values to the properties by editing the system from the
Infrastructure > Systems page.
Simulation Activity
Notes
Infrastructure Menu
Systems
From the Infrastructure > Systems page, you can perform the following:
Data Centers
• The ability to deploy Smart Scale data center services for storage
unit mobility.
• The ability to create, manage, and monitor health and alerts of groups
of systems at the data center level.
Configuration Templates
Audit Schedules allow you to create audits that generate an alert for
each system with a noncompliant configuration.
Updates
Overview
Download the DDOS update package7 from the Dell Support website.
7The DDOS update package name has the form x.x.rpm, for
example, 7.11.0.0-1035502.rpm.
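As a quick sanity check, the naming shown above can be matched with a pattern. This is a minimal sketch: the regular expression below is inferred only from the example file name and is not an official naming specification.

```python
import re

# Pattern inferred from the example "7.11.0.0-1035502.rpm":
# a dotted version string, a dash, a build number, and the .rpm extension.
PACKAGE_RE = re.compile(r"^\d+(\.\d+)+-\d+\.rpm$")

print(bool(PACKAGE_RE.match("7.11.0.0-1035502.rpm")))  # True
print(bool(PACKAGE_RE.match("notes.txt")))             # False
```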
After the update package has been uploaded to the DDMC inventory, you
can update one or more systems.
The DDMC allows you to update the DDOS on one or more PowerProtect
DD systems.
− Rebooting the system lets the update continue without any conflicts
with background processes and may be required for some updates.
Simulation Activity
In this simulation, the learner creates a data center that is called DellEdu
and adds the systems ddve01.delledu.lab and ddve02.delledu.lab.
Notes
Smart Scale
Overview
Instructions
Notes
Question 1
Question 2
Prerequisites
The DDMC Smart Scale feature is not available by default. Smart Scale
services must be deployed.
From the Data Centers page, you can deploy Smart Scale services.
Once complete, Smart Scale services indicate that they are Running on the
data center.
Deep Dive: For more details about Smart Scale, see the
Dell PowerProtect DD Management Center Installation and
Administration Guide on the Dell Support website.
OVA file
The Open Virtualization Application or Appliance (OVA) is a single file that
archives all the files that make up an Open Virtualization Format (OVF)
package. OVF is an open standard for packaging the multiple files that
describe a virtual appliance.
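An OVA is simply a tar archive of the OVF package files. The sketch below builds a tiny stand-in archive in memory and lists its contents, the way a tool would inspect a real OVA; the member file names are illustrative.

```python
import io
import tarfile

# Build a tiny stand-in "OVA": a tar archive bundling an OVF descriptor,
# a manifest, and a disk image (all placeholder contents here).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("appliance.ovf", "appliance.mf", "disk1.vmdk"):
        data = b"placeholder"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Reopen the archive and list its members, as you could for any OVA file.
buf.seek(0)
with tarfile.open(fileobj=buf) as ova:
    members = [m.name for m in ova.getmembers()]
print(members)  # ['appliance.ovf', 'appliance.mf', 'disk1.vmdk']
```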
Passphrase
The passphrase is a human-readable key, like a smart card, that is used to
generate a machine-usable AES-256 encryption key.
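As an illustration of the passphrase-to-key idea, a key-derivation function such as PBKDF2 can turn a human-readable passphrase into a 256-bit key. This is a generic sketch, not the actual DDOS implementation; the passphrase, salt, and iteration count are arbitrary examples.

```python
import hashlib

# Derive a 256-bit (32-byte) machine-usable key from a passphrase.
# Example values only; real systems use their own salts and parameters.
passphrase = "correct horse battery staple"
salt = b"example-salt"
key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000, dklen=32)

print(len(key) * 8)  # 256 -- suitable length for an AES-256 key
```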
Search domain
A search domain is a domain used as part of a domain search list. The
search list, as well as the local domain name, is used by a resolver to
create a fully qualified domain name (FQDN) from a relative name.
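The resolver behavior described above can be sketched as follows; the function and domain names are illustrative.

```python
# Expand a relative name into candidate FQDNs using a search list:
# each search domain is appended in turn until a lookup succeeds.
def candidate_fqdns(relative_name, search_list):
    # A name that already ends with a dot is treated as fully qualified.
    if relative_name.endswith("."):
        return [relative_name]
    return [f"{relative_name}.{domain}" for domain in search_list]

print(candidate_fqdns("ddve01", ["delledu.lab", "corp.example.com"]))
# ['ddve01.delledu.lab', 'ddve01.corp.example.com']
```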
Secure Multitenancy
Secure Multitenancy (SMT) is the simultaneous hosting of an IT
infrastructure by an internal IT department or an external provider for more
than one consumer or workload like a business unit, department, or
tenant.
ESDPS04250 ~ Smart Scale for PowerProtect Appliances Concepts
Protecting exponentially growing data is a big challenge for IT. Risks from
cyberattacks and ever-increasing service levels further complicate data
protection. Some of the challenges of protecting rapidly growing data are:
Managing multiple data centers and cloud environments
Accommodating new and evolving applications
Optimizing capacity and performance
Organizations must keep pace with data growth, optimize workloads, and
maintain ongoing capacity insight in a data protection environment.
The Smart Scale feature in PowerProtect DD Management Center aids in
managing these challenges.
Clients
The strong analytics capabilities in DDMC provide the capacity projection and
analytics service for Smart Scale.
Lastly, DDMC provides migration and placement service for data that is stored in
system pools.
[Figure: DD Boost clients connect through the DD Namespace VM, which hosts
the Smart Scale services and namespace redirection services, to mobile
storage units in a system pool.]
The DDNRS is used with Smart Scale to manage its credential and backup set
databases.
The DD Namespace VM is mostly stateless and is used in the data path for initial
connection redirection.
[Figure: Two network groups with pool access IP addresses IP11, IP12, and
IP13 in one group and IP51, IP52, and IP53 in the other. The system IPs map
DD01 to IP11 and IP51, DD02 to IP12 and IP52, and DD03 to IP13 and IP53.]
Network Discovery
DDMC builds network groups that are based on the network topology using the IP
configuration of managed systems.
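A simplified sketch of subnet-based grouping follows: systems whose interfaces share a subnet land in the same network group. The system names, addresses, and fixed /24 prefix are illustrative assumptions, not the actual DDMC discovery algorithm.

```python
import ipaddress
from collections import defaultdict

# Illustrative interface addresses for two managed systems.
system_ips = {
    "DD01": ["192.168.10.11", "192.168.50.11"],
    "DD02": ["192.168.10.12", "192.168.50.12"],
}

# Group systems by the subnet each interface belongs to
# (assuming /24 networks for simplicity).
groups = defaultdict(list)
for system, addresses in system_ips.items():
    for addr in addresses:
        subnet = ipaddress.ip_network(f"{addr}/24", strict=False)
        groups[str(subnet)].append(system)

print(dict(groups))
# {'192.168.10.0/24': ['DD01', 'DD02'], '192.168.50.0/24': ['DD01', 'DD02']}
```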
The administrator must specify the following when creating a mobile storage unit
(MSU):
A system pool to determine the PowerProtect DD systems that can host the
MSU
A network group or groups that file system clients can use to access the MSU
A mobile Boost user with credentials for accessing the mobile storage unit
1. The mobile storage unit MSU-1 is hosted on system DD03 and is configured for
system pool SP-1.
2. Two workloads access MSU-1. One workload uses Network Group 1, which
accesses IP pool address IP10. A second workload can use Network Group 2
and IP50 for pool access.
3. A mobile Boost user provides a username and password to connect to IP10, the
pool access IP address for Network Group 1.
[Figure: Applications in a data center access mobile storage units hosted in
a system pool.]
Become acquainted with the components used with Smart Scale to better
understand its function.
Pool access IP (not pictured), mobile storage unit, and mobile Boost user
(not pictured).
Smart Scale services enable you to create mobile storage units within a
system pool. Smart Scale services include namespace redirection, capacity
projections and recommendations, mobile storage unit placement, and
analytics.
A system pool in a data center is a defined set of PowerProtect DD systems.
Clients that access a mobile storage unit must provide the username and password
of its mobile DD Boost user.
Sequencing Activity:
In the table below, place the number found on the diagram across from its
component name.
ESDPSD04094 ~ Smart Scale Implementation
Data Centers
Data Center Summary Page
Simulation Activity: Create a Data Center for Smart Scale Deployment
Appendix
Data centers are created and managed within the Infrastructure menu in the
PowerProtect DD Management Center (DDMC) navigation panel. Only admin roles
can view and administer data centers.
Administrators can create custom dashboards for individual data centers and filter
PowerProtect DD systems at the data center level. Administrators can also deploy
Smart Scale services for DD Boost-based storage unit mobility.
Dell Technologies recommends creating no more than four system pools per data
center for Smart Scale services.
The Data Centers Summary page shows information about individual data centers.
Perform this simulation activity to experience creating Smart Scale data centers
with PowerProtect DD Management Center (DDMC).
To deploy and enable the Smart Scale service, a Smart Scale data center must first
exist.
Smart Scale services are not available by default. Smart Scale can only be
deployed using DDMC. For the smoothest deployment, ensure that all resources
are preconfigured.
When services are deployed, the administrator must provide the data center name.
The deployment wizard also requires vCenter credentials. The administrator
must supply an IP address with network details for the DD Namespace VM, and
the related port numbers.
DNS configuration is also required for Smart Scale deployment. Configure at least
one DNS server on the DDMC.
More information about DDMC deployment can be found in the Dell EMC
PowerProtect DD Management Center (DDMC) 7.8 Installation and Administration
Guide.
Perform this simulation activity to experience how Smart Scale is deployed through
PowerProtect DD Management Center (DDMC).
1.1 Introduction
A pool access IP is the network component that is used to create and access a
Mobile Storage Unit in a system pool.
In order to create a system pool, Smart Scale services must first be deployed.
Resource Requirements
Hardware Resource Requirements
Memory 8 GB 24 GB
System Requirements
Requirements and their uses:
• DDMC administrator role permissions: used to access the Smart Scale feature.
• vCenter 6.7 or later with shared datastore and HA configuration: used as
the host for the Data Domain Namespace VM.
• One or more valid DNS servers configured on the DDMC: used for
communications between DDNVM, DDMC, and protection storage.
DDNVM Ports
ESDPSD04249 ~ Smart Scale for PowerProtect Appliances Administration-SSP
Smart Scale for PowerProtect DD appliances can be used with PowerProtect Data
Manager and Dell NetWorker as the data protection management applications for
backups.
As administrators create protection policies, they can specify a system pool as the
targeted protection storage. Copy placement becomes transparent to the data
protection client. This feature helps administrators to better manage capacity
changes and storage unit placement without requiring modifications to the
protection infrastructure.
Notes
By default, the backup happens every 15 minutes and retains only the latest
readable copy. The backup is copied to a staging area (/resource/ddmc_dr/backup) in
DDMC and then moved to the MTree or storage unit.
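The stage-then-move pattern described above can be sketched generically. The directories and file name below are hypothetical local paths used only for illustration, not the DDMC paths or implementation.

```python
import shutil
from pathlib import Path

# Hypothetical stand-ins for the staging area and final destination.
staging = Path("/tmp/demo_staging")
target = Path("/tmp/demo_target")
staging.mkdir(parents=True, exist_ok=True)
target.mkdir(parents=True, exist_ok=True)

# Write the new backup to the staging area first...
backup = staging / "ddmc-backup.tar"
backup.write_bytes(b"backup contents")

# ...then drop any older copies so only the latest is retained,
# and move the staged backup to its destination.
for old in target.glob("ddmc-backup*.tar"):
    old.unlink()
moved = Path(shutil.move(str(backup), str(target / backup.name)))

print(moved.exists(), backup.exists())  # True False
```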
• Click the Name field and name the storage unit.
• Click the Storage Units tab to view the new SU.
• Click NFS to create an NFS mount point for the new SU.
Notes
To perform a DDMC recovery, deploy a new DDMC with the same version and
mount the MTree that has the backup on the new DDMC.
The following table explains which recovery operation to use when the DDMC
instance, the DD Namespace VM (DDNVM), or both are corrupt or destroyed:
Notes