Education Services
Commvault®
Professional
Advanced
Student Guide
Copyright
Information in this document, including URL and other website references, represents the current view of Commvault
Systems, Inc. as of the date of publication and is subject to change without notice to you.
Descriptions or references to third party products, services or websites are provided only as a convenience to you and
should not be considered an endorsement by Commvault. Commvault makes no representations or warranties, express
or implied, as to any third-party products, services or websites.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos,
people, places, and events depicted herein are fictitious.
Complying with all applicable copyright laws is the responsibility of the user. This document is intended for distribution to
and use only by Commvault customers. Use or distribution of this document by any other persons is prohibited without the
express written permission of Commvault. Without limiting the rights under copyright, no part of this document may be
reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic,
mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Commvault Systems, Inc.
Commvault may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering
subject matter in this document. Except as expressly provided in any written license agreement from Commvault, this
document does not give you any license to Commvault’s intellectual property.
©1999-2020 Commvault Systems, Inc. All rights reserved. Commvault, Commvault and logo, the "C hexagon" logo,
Commvault Systems, Solving Forward, SIM, Singular Information Management, Commvault HyperScale, ScaleProtect,
Commvault OnePass, Commvault Galaxy, Unified Data Management, QiNetix, Quick Recovery, QR, CommNet, GridStor,
Vault Tracker, InnerVault, Quick Snap, QSnap, IntelliSnap, Recovery Director, CommServe, CommCell, APSS,
Commvault Edge, Commvault GO, Commvault Advantage, Commvault Complete, Commvault Activate, Commvault
Orchestrate, and CommValue are trademarks or registered trademarks of Commvault Systems, Inc. All other third party
brands, products, service names, trademarks, or registered service marks are the property of and used to identify the
products or services of their respective owners. All specifications are subject to change without notice.
Confidentiality
The descriptive materials and related information in the document contain information that is confidential and proprietary
to Commvault. This information is submitted with the express understanding that it will be held in strict confidence and will
not be disclosed, duplicated or used, in whole or in part, for any purpose other than evaluation purposes. All right, title
and intellectual property rights in and to the document are owned by Commvault. No rights are granted to you other than a
license to use the document for your personal use and information. You may not make a copy or derivative work of this
document. You may not sell, resell, sublicense, rent, loan or lease the document to another party, transfer or assign your
rights to use the document or otherwise exploit or use the document for any purpose other than for your personal use and
reference. The document is provided "AS IS" without a warranty of any kind and the information provided herein is subject
to change without notice.
Contents
Commvault® Professional Advanced Configurations
   Introduction
      Commvault® Professional Advanced Course
      Course Overview
      Education Advantage
      Class Resources
      Commvault Education Career Path
      Commvault® On-Demand Learning
      Education Services V11 Certification
      Cloud Resources
   Module 1 – Deployment
      CommCell® Console
         CommCell® Console Overview
         Customize the CommCell® Console
         Job Controller
         Event Viewer
      Planning
         Phases of a Deployment
         Planning a Deployment
         Download Commvault® Software
      CommServe® Server Installation
         CommServe® Server Installation
         CommServe® Server Post Installation Tasks
         CommServe® Server Database Maintenance
      MediaAgent Installation
         MediaAgent Installation Overview
         MediaAgent Advanced Options
      Agent Installation
         Agent Installation Overview
         Push Installation
         Interactive Installation
         Custom Package
      Encryption
         Encryption Overview
         Software Encryption
         Hardware Encryption
         Encryption Considerations
   Module 2 – Storage Advanced Configurations
INTRODUCTION
Course Overview
Education Advantage
The Commvault® Education Advantage product training portal contains a set of powerful tools to enable Commvault
customers and partners to better educate themselves on the use of the Commvault software suite. The portal includes:
Class Resources
Course manuals and activity guides are available for download for Instructor-Led Training (ILT) and Virtual Instructor-Led
Training (vILT) courses. It is recommended to download these documents the day prior to attending class to ensure the
latest document versions are being used.
Self-paced eLearning courses can be launched directly from the EA page. If an eLearning course is part of an ILT or vILT
course, it is a required prerequisite and should be viewed prior to attending class.
If an ILT or vILT class will be using the Commvault® Virtual Lab environment, a button will be used to launch the lab on
the first day of class.
Commvault® certification exams can be launched directly from the EA page. If you are automatically registered for an
exam as part of an ILT or vILT course, it will be available on the final day of class. There is no time limit on when the
exams need to be taken, but it is recommended to take them as soon as you feel you are ready.
Commvault On-Demand Learning is a convenient, flexible, and cost-effective training solution that gives you the tools to
keep a step ahead of your company’s digital transformation initiatives. You and your company will benefit by:
Commvault's Certification Program offers Professional-level, Engineer-level, and Expert-level certifications. This Program
provides certification based on a career path, and enables advancement based on an individual’s previous experience
and area of focus. It also distinguishes higher-level certifications, such as Engineer and Expert, from lower-level
certifications as verified proof of expertise.
Key Points
Certification is integrated with and managed through Commvault's online registration in the Education Advantage
Customer Portal.
Cost of certification registration is included in the associated training course.
Practice assessments are available at ea.commvault.com.
The Commvault Certified Professional Exam Prep course is also available.
Students may take the online certification exam(s) any time after completing the course.
Although it is recommended to attend training prior to attempting an exam, it is not required.
CommCell Administration – user and group security, configuring administrative tasks, conducting data protection
and recovery operations, and CommCell monitoring.
Storage Administration – deduplication configuration, disk library settings, tape library settings, media
management handling, and snapshot administration.
CommCell Implementation – CommServe® server design, MediaAgent design and placement, indexing settings,
client and agent deployment, and CommCell maintenance.
Certification status as a Commvault Certified Professional requires passing the Commvault® Certified Professional Exam.
Commvault® Engineer Exam – this exam validates expertise in deploying medium and enterprise level
CommCell® environments with a focus on storage design, virtual environment protection, and application data
protection strategies.
Certification status as a Commvault Certified Engineer requires certification as a Commvault Certified Professional and
passing the Advanced Infrastructure Design exam.
Certification status as a Commvault Certified Expert requires certification as both a Commvault Certified Professional and
Certified Engineer, and successful completion of Expert certification requirements. These Expert certification requirements
include attending the Expert class and passing the Expert Certification exam.
Cloud Resources
The site http://cloud.commvault.com provides additional resources to help you manage and maintain a Commvault®
installation.
Download Center section – lets you download service packs, hot fixes, and binaries for new installations.
Documentation section – provides a link to the documentation, which is updated for each service pack.
Forums and Store sections – allow new features and components, such as workflows or custom reports, to be
downloaded and installed into an existing environment.
MODULE 1 – DEPLOYMENT
CommCell® Console
CommCell Toolbar – provides an easy to navigate 'ribbon' to manage and configure the CommCell environment
CommCell browser – is the main navigation window which contains a hierarchical structure of all categories and
components within the CommCell environment
Content / Summary window – provides details based on what component is selected in the CommCell browser
Job Controller – provides viewing and control capabilities for all active jobs in the CommCell environment
Event Viewer – provides information for all logged events within the CommCell environment
1. CommCell® Browser - Shows all CommCell components in an easy to navigate tree structure.
• Job Controller.
• Event Viewer.
To log on to the CommCell console, launch the application locally or through a web browser. The required logon
information is a username, a password, and the CommServe host name. When using Active Directory accounts, the
username format is domain\user.
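For administrators who prefer to script logons, for example to drive automation against the CommServe server, the same credentials can also be used against the CommServe REST API. The sketch below is a minimal Python example; the base URL, endpoint path, and response field names are assumptions that vary by Feature Release and Web Server configuration, so verify them against the API documentation for your environment.

# Sketch: scripted CommCell logon via the CommServe REST API.
# The base URL and the 'token' response field are assumptions; verify
# against your environment's API documentation before use.
import base64
import requests

BASE_URL = "http://commserve.example.com/webconsole/api"      # hypothetical host

def login(username: str, password: str) -> str:
    """Return an authentication token for subsequent API calls."""
    payload = {
        "username": username,                                  # domain\user format works for AD accounts
        "password": base64.b64encode(password.encode()).decode()
    }
    resp = requests.post(f"{BASE_URL}/Login", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]                                # assumed response field

if __name__ == "__main__":
    token = login(r"LAB\administrator", "MyPassword!")
    print("Logged on, token length:", len(token))

The returned token is then passed in the request headers of subsequent API calls, as shown in the later sketches.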
3. Web based – The web-based version of the console is supported on all major web browsers
CommCell® Toolbar
The CommCell® Console uses a ribbon-style toolbar to provide more efficient navigation and configuration. Configuration
options are organized within the toolbar to provide quick access to common tasks. Place the mouse pointer on the
toolbar and use the scroll wheel to quickly move through the available toolbars. You can hide the toolbar by clicking
the arrow in the upper right corner.
1. The CommCell® Browser view is a hierarchical structure showing all CommCell® components
2. Select the Agents view to display a categorized view of agents in the CommCell® environment
1. Content View - Displays detailed information based on what is selected in the CommCell browser window
1. Currently active jobs and job status are displayed in the Job Controller.
3. Pause & Play buttons are used to freeze and unfreeze the Job Controller window refresh. They do not
pause the actual jobs.
5. Pause & Play buttons to freeze and unfreeze Event Viewer window refresh.
2. Hover over the hidden browser title bar to view the browser. Click the pin to lock the browser back in the window.
Float Windows
When multiple monitors are used, you can float windows so they are not bound to one screen. For instance, when a
window is floated, it can be dragged outside of the console window and placed on another monitor for a multi-screen
view of the CommCell® Console.
1. Click and drag the tab to an empty space in the window, or right-click it.
Job Controller
The Job Controller is the most effective tool for managing and troubleshooting all active jobs within the
CommCell® environment. Regardless of which method is used to initiate a job (schedule, on demand or script), the job
appears in the Job Controller.
Move a field – Click and drag the field to the desired location in the Job Controller
Resize the field – Click the outer edge of the field and drag it to the desired size
Sort the field – Click the field and an arrow appears on the right side. You can sort the field in ascending or
descending order
Add/remove field – Fields can be added and removed by using the menu arrow in the upper-right corner
Add/Remove Fields
Select Chevron (double-down arrow in the upper-right corner) | choose columns | select the column
Number of Readers in use – displays the number of streams per job that are being used for the operation. Note
that the 'readers in use' field only displays a number during the backup phase.
Current throughput – displays the current throughput in gigabytes (GB) per hour.
Priority – displays the job priority for the operation.
Each job includes information about the job status, data path, and media usage or job errors.
2. Details for job can be viewed including general information, progress, streams and media usage.
A dialog box appears to confirm the job status change and lets you provide an explanation. You can also view
explanations for job status changes in the Job Summary report.
3. Type an explanation if needed in the Reason field and click Yes to confirm killing the job.
To add a new filter, select the New Filter button. To apply an existing filter, from the Filters drop-down box select the filter.
2. Define how long a job appears in the Job Controller after it is completed, failed, or killed.
To reduce communication traffic during normal operations, the job status in the Job Controller updates every five minutes.
This value can be increased or decreased in the Job Management applet in the Control Panel. Unless required for a specific
job type, it is recommended NOT to decrease these values, as it could have a negative impact on backup or restore
performance.
In certain situations where job status must be closely monitored, the update intervals can be modified. You can customize
the status updates for different agents within the CommCell® environment.
2. Update status time for protection and recovery operations can be modified for different agents from the Job
Updates tab.
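When job status must be watched more closely than the console refresh interval allows, an alternative to lowering the update intervals is to poll jobs from outside the console. The sketch below assumes a /Job REST endpoint, an Authtoken header, and the response field names shown in the comments; confirm all of these for your Feature Release before relying on it.

# Sketch: poll active jobs via the REST API instead of lowering the
# Job Controller refresh interval. Endpoint and response fields are
# assumptions; verify against the API documentation for your version.
import requests

BASE_URL = "http://commserve.example.com/webconsole/api"      # hypothetical host

def active_jobs(token: str) -> list:
    """Return the list of active jobs visible to the logged-on user."""
    headers = {"Authtoken": token, "Accept": "application/json"}
    resp = requests.get(f"{BASE_URL}/Job", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("jobs", [])                         # assumed response field

def print_summary(jobs: list) -> None:
    for job in jobs:
        summary = job.get("jobSummary", {})                    # assumed response structure
        print(summary.get("jobId"), summary.get("status"),
              summary.get("percentComplete"), sep="\t")

Polling from a script leaves the console refresh intervals at their defaults, so backup and restore performance is not affected.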
Event Viewer
The Event Viewer window displays events reported based on conditions within the CommCell® environment. By default,
the event viewer displays the most recent 200 events. This number can be increased up to 1,000. The event log maintains
up to 10,000 events or 7 days of events. These default settings can be modified.
The event viewer can be filtered based on the available fields. Although some filters, such as Date, do not have a
practical application, other fields, such as Computer, Program, or Event Code, can be used to quickly locate specific events.
3. Filter fields using the drop-down arrow and define filter criteria.
Although only 200 to 1,000 events are displayed in the event viewer, the entire event log can be searched from the event
viewer. The default total number of events retained is 10,000.
When right-clicking anywhere in the event viewer, select the option to search events. Events are searched by time range,
severity and job ID. If common searches are frequently conducted, the search criteria can be saved as a query and run at
any time.
2. Define search criteria to display all relevant events in the event log.
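Saved queries can also be complemented by pulling events programmatically, for example to archive them or feed them to a monitoring tool. The /Events endpoint, the Authtoken header, and the field names in this sketch are assumptions; check the REST API documentation for your version.

# Sketch: retrieve recent CommCell events for offline filtering or
# archiving. Endpoint and field names are assumptions.
import requests

BASE_URL = "http://commserve.example.com/webconsole/api"      # hypothetical host

def recent_events(token: str) -> list:
    """Return recent CommCell events."""
    headers = {"Authtoken": token, "Accept": "application/json"}
    resp = requests.get(f"{BASE_URL}/Events", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("commservEvents", [])               # assumed response field

def events_for_job(events: list, job_id: int) -> list:
    """Filter events client-side by job ID."""
    return [e for e in events if e.get("jobId") == job_id]     # assumed field name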
By default, the event log retains 10,000 events or 7 days of events. When the event logs reach their upper limit, the oldest
events are pruned from the event logs. These options are configured in the System Settings applet in the Control Panel.
3. Set the event retention criteria settings for number and age of events.
4. Configure the CommServe® database volume free space checks. Default values can be modified if needed.
Planning
Phases of a Deployment
Installing Commvault® software is a straightforward process. However, it is important to note that a full
CommCell® deployment must be well-planned and executed. Within a modern data center, components such as the
network, hardware, operating systems, and applications all play a significant role in creating a sound data protection
strategy. This section presents the high-level steps and best practices for deploying a CommCell environment.
It is highly recommended to consult with Commvault Professional Services to assist with any deployment. They can
provide help with the architecture, physical resources assessment, planning or deployment as required.
Deployment phases:
Planning
Downloading the Commvault® software
Installing the CommServe® server and executing post-install tasks
Installing the MediaAgents and executing post-install tasks
Configuring storage and deduplication
Configuring features and plans
Deploying agents and configuring subclients
Planning a Deployment
The planning phase encompasses all pre-deployment steps and is considered the most important phase in the process. A
poorly planned deployment can impact production systems and result in an undersized environment.
For instance, if the current backup environment has problems backing up virtual machines, then virtual machines should
be the first data to protect during the deployment of the new CommCell® environment. This ensures adequate
protection as soon as possible.
It is also important to decide what will become of the old backup environment. Is there a cutoff date where once the new
system is in place, the old system will be decommissioned? Does the old system need to be phased out by keeping the
old hardware for a period of time? Or will the tape library be partitioned between the old and the new systems to allow
restores of old data? Considering all of these questions ensures a seamless transition from your current backup environment.
Establish, Procure and Install the Hardware for the New Environment
Once the CommCell® environment is designed, the next step is to establish hardware requirements of every component
and to procure and install the required hardware. If hardware is missing, determine if the deployment phase should be
postponed. For instance, if a MediaAgent is missing the SSD drives for the Deduplication Database (DDB), it might be
preferable to wait instead of placing the DDB on the C:\ drive. Even though the database can be moved later, the C:\
drive could fill up before the SSD drives are received, and the DDB could significantly underperform, impacting backup
performance.
Coordinate with the Change Management Team for the Deployments Steps and Procedures
Many companies have change management procedures in place, so it is important to coordinate with the Change
Management Team. Completing any necessary paperwork and following procedures avoids major impacts, such as a
reboot or heavy network usage occurring during peak operation hours.
The Commvault Maintenance Advantage website or Commvault Cloud website provides the latest version of the
Commvault installation media. By clicking Downloads & Packages, you can access the most current software installations
and service packs or select a previous version.
Resumable Download Manager – Based on the Bootstrapper download manager, this option is activated by
selecting the required files from the list and then selecting Launch Download Manager at the bottom of the
screen.
Bootstrapper Direct Download – This option reduces deployment time by selecting and downloading only the
required Commvault software components of the installation media.
2. Click to download the media kits including the most recent service pack.
3. Click to download the hot fixes that were launched after the latest service pack.
1. Once the file is downloaded, extract the contents into a directory and run the Setup.exe as administrator.
2. This launches the screens to install the CommServe® server, MediaAgent, and optional agents to manage and protect
resources.
The following summarizes key points for a new CommServe server deployment:
Commvault software must be downloaded prior to installation. To avoid any deployment delays, arrangements for
the software download should be done in advance. Additionally, determine the ability to routinely download
updates and upload log files from the CommServe host. If the CommServe will not have internet access, alternate
methods should be discussed and documented.
Determine the location for a local and remote Software Cache. The Software Cache is a directory where
Commvault updates and software packages are stored. These can be configured during the deployment and
typically help position the software to be routinely accessible throughout the organization – or prepare for a
disaster.
Verify the Hardware and System Requirements.
Ensure the size of the environment has been assessed and there are adequate resources available for the
CommServe server:
Based on the sizing assessment, determine if the CommServe server will be physical or virtual.
Determine if the CommServe server needs to be deployed in a clustered configuration for high availability.
Ensure the operating system meets the Commvault specifications and is patched with updates prior to the
installation.
Determine if the method of deployment requires additional considerations for Disaster Recovery. For example,
configuring a 'Floating Host Name' for the CommServe server.
Determine if additional components such as Metrics Reporting or the Workflow engine will be installed on the
CommServe server.
Determine the methods for accessing the CommCell® console and/or Commvault Command Center™.
The consoles are installed by default along with the CommServe components.
IIS is required for the Web Server and Web Console, which are automatically installed when IIS is enabled on the
CommServe server.
Although not always required, reboots (powering off and on) may be required to complete an installation or
update. It is recommended to anticipate downtime and to accommodate the organization's change request or
maintenance window process in advance. In some cases, the organization may require the changes to be
implemented after hours.
Outline the firewall and network considerations prior to any installation. Unless performing a decoupled install, all
software components must communicate with the CommServe server during installation. Determine the
requirements for working with the organization's firewall configuration in advance.
Identify any monitoring, security, or anti-virus software that will be installed on the same systems as Commvault
software. The installation, and in many cases Commvault operations, may be blocked or performance may be severely
degraded by such software. This can be avoided by applying the appropriate exceptions or filters for the
Commvault software in advance.
Ensure any Service and Administrative accounts are preconfigured and known during the installation. The
account type and permissions required are determined by the components being deployed. A thorough review of
the deployment should help determine the needs.
For the CommServe server, an account with local Administrator privileges is required for the software installation.
A password for the CommCell 'admin' account is configured during the installation. This password should be a
complex password and the primary administrator should always use this account when managing the
environment.
A permanent license file must be applied after the CommServe software is installed. Ensure that any pending
purchase agreements are completed prior to the deployment of the Commvault software.
Complete post installation tasks.
When deploying Commvault® software, it is important to note that every environment is different relative to the available
infrastructure, technology, budget, culture and requirements of the organization. Whether performing a new installation, an
upgrade, or expanding an existing environment, a good amount of planning should take place prior to installing
Commvault software. The more emphasis put into planning, the more likely the deployment will go smoothly.
Gathering Information
Proper documentation of the CommCell components being installed is essential for a smooth deployment. The following
chart provides a sample of the information that must be obtained for the CommServe deployment. Having this information
in advance not only helps the deployment go more quickly, it can also bring any shortcomings to the surface, such as a
missing resource. Furthermore, it can aid in verifying site readiness and serve as a template for post-deployment
documentation.
Additional Packages: File system agent, MediaAgent, Workflow engine, Web Server
DR Share: \\DRCommserve\CSDR
5. If the MediaAgent package was also selected, provide the location for the Index Directory.
6. Provide the location where the Microsoft SQL software will be installed.
7. Select the folder in which the CommServe® Server database will be created. It is recommended to use a
dedicated volume.
8. Define the location to store the CommServeDR backup exports. It is recommended to use a network location,
preferably offsite.
9. If you plan on using email archiving, check to configure IIS for recalls.
11. Provide the host name to use to reach the CommServe® Server.
13. During a new install, select Create New Database, or select Use Existing Database during a disaster recovery or
staging scenario.
14. Provide an email address and a password for the built-in admin account.
During the installation, the software packages and updates are copied to local disk. This is called a Software Cache or
CommServe Cache and can be leveraged to "push" Client Agent software and updates to other servers in the
environment. These settings can later be changed.
If the CommServe server is being installed in a cluster, log into the active node with an Administrative account and then
run Setup.exe from the installation media. The Cluster Setup Install Option page is displayed during the installation. After
completing the selections on the active node, it may be necessary to log into the remaining cluster nodes and repeat the
installation process. The installation will apply the missing components to the cluster node.
Additional hotfixes may follow the release of a Feature Release to address critical issues. These hotfixes, included in
Maintenance Releases, should also be applied as soon as they are available. If a new Feature Release or
Maintenance Release hotfix conflicts with an older installed hotfix, the installation process removes the older hotfix automatically.
By default, the system creates automatic schedules that download and install updates on Commvault® servers, as well as
on clients. These schedules can be modified as needed.
2. In the CommServe Software Cache tab, specify the local path to store software and updates.
2. In the Remote Software Cache tab, add the machine used as a remote cache.
3. Select the client that will host the Remote Software Cache.
3. Click Advanced to open the advanced download and sync cache options.
System Created Download Software – Download the updates automatically in the software cache once a week
if new updates are available.
System Created Install Software – Automatically install updates on Commvault® servers and clients once a
week if required. For instance, many companies have change control procedures in place. Installing updates
automatically on servers might go against these procedures. In this case, the System Created Install Software
schedule can be modified or simply disabled.
The next step is to ensure that the CommServe® server is up to date. This provides all the latest configuration options
available. Updates can be deployed from the software cache using the CommCell console.
To install updates
1. From the Tools menu | Add/Remove Software | Install Service Pack and Hotfixes.
Metrics reporting is a tool that monitors and reports on the health of the CommCell® environment and stores the
information either in a private Metrics Reporting server, or on Commvault® Cloud metrics services. Once the information is
stored, a broad range of reports and dashboards are available through a portal to help an administrator monitor the
CommCell® environment.
To use the private server, the Metrics Reporting component must be selected when installing the CommServe® server.
Enabling the private or cloud Metrics Reporting is achieved from the Control Panel.
Email Settings
Commvault® software sends alert notifications and reports by email. Prior to using these features, the email server must be
configured.
If your corporate mail server is secured, it is important to understand the level of security. Commvault® software
uses a functionality called SMTP relay. This means that the email server relays emails generated and sent by the
CommServe® server. Therefore, SMTP relay must be allowed on the mail server for the CommServe® server IP
address. Refer to your software vendor documentation for more information about SMTP relay and the mail
server.
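A quick way to confirm that the mail server accepts relay from the CommServe server is to send a test message from the CommServe host before configuring the email settings. The sketch below uses only the Python standard library; the mail server name and addresses are placeholders for your environment.

# Sketch: verify that the mail server accepts SMTP relay from the
# CommServe host. Host names and addresses are placeholders; run this
# from the CommServe server itself so the relay source IP is correct.
import smtplib
from email.message import EmailMessage

MAIL_SERVER = "smtp.example.com"       # placeholder: your corporate mail server
SENDER = "commserve@example.com"       # placeholder: sender configured in Commvault
RECIPIENT = "backupadmin@example.com"  # placeholder

msg = EmailMessage()
msg["Subject"] = "Commvault SMTP relay test"
msg["From"] = SENDER
msg["To"] = RECIPIENT
msg.set_content("If you received this, SMTP relay from the CommServe host works.")

with smtplib.SMTP(MAIL_SERVER, 25, timeout=30) as smtp:
    smtp.ehlo()
    # smtp.starttls()  # uncomment if the mail server requires TLS
    smtp.send_message(msg)
print("Relay test message accepted by", MAIL_SERVER)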
Additional accounts can be created, and roles defined and applied to allow users to login and perform specific tasks in the
system. For instance, a role allowing only backup and restore operations can be created and assigned to the SQL servers
and DBA Active Directory account. The result is that a DBA can now connect to the console and only see SQL servers, on
which he or she could only run backups and restores.
In addition to the Admin account, create at least one more CommCell® account having all privileges.
Ensure primary administrators always use local accounts when managing Commvault® software.
Document the passwords of these two accounts with the DR plan.
Create an Active Directory® connection to allow the use of Active Directory integrated accounts.
When creating a role and security rule, use a test user first to validate if the rule is properly created and gives the
expected result. If so, then use it in production.
Each administrator should use their own account, and the Admin account should not be shared by everyone. This helps in
keeping track of any changes in the system using the Audit Trail report.
Maintenance Modes
There are several modes that can be selected when executing a DB maintenance:
Full - Performs a full maintenance on the database. It includes the CheckDB, ReindexAll, and ShrinkDB commands. It
is recommended to run it twice a year.
Recommended - Performs a recommended maintenance which includes ShrinkDB and ReindexRecommended
commands. It is recommended to run this maintenance mode every couple of weeks. By default, a system
created schedule will execute it on every other Sunday.
CheckDB - Validates the consistency of the CommServe database by running an integrity check.
ReindexRecommended - Re-indexes the largest and most frequently used tables of the database.
ReindexAll - Re-indexes all tables of the database.
ShrinkDB - If table re-indexing creates a significant amount of fragmentation, the ShrinkDB command will reclaim
that space by shrinking the database.
2. It provides the correct syntax and options for the tool, such as the maintenance mode.
V11 Service Pack 5 introduced a workflow and a schedule that maintains the CommServe server database automatically.
The schedule, which is called System Created DB Maintenance schedule, runs every other Sunday at 3 p.m. and
executes a Recommended maintenance. The Full maintenance is not scheduled. It is therefore recommended to either
run it manually, or schedule it twice a year. This schedule executes a workflow called DBMaintenance, which executes the
maintenance based on the mode that is selected in the schedule. The workflow also contains email components that can
be modified to send a result notification on failure or success.
4. By default, it runs in recommended mode, but a different mode can be manually selected.
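If the Full maintenance needs to be driven by an external scheduler rather than run manually, the command-line maintenance tool can be wrapped in a small script such as the sketch below. The executable name, install path, and flag format shown are assumptions; confirm the exact syntax from the tool's help output or the Commvault Online Documentation before relying on it.

# Sketch: run the CommServe database maintenance tool from a script.
# The executable name, install path, and flag format are assumptions;
# check the tool's own usage output for the exact syntax.
import subprocess
from pathlib import Path

BASE_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Base")  # assumed install path
TOOL = BASE_DIR / "DBMaintenance.exe"                             # assumed executable name

def run_maintenance(mode: str = "recommended") -> int:
    """Run the maintenance tool in the given mode (e.g. 'recommended' or 'full')."""
    result = subprocess.run([str(TOOL), f"-{mode}"],              # assumed flag format
                            capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print("Maintenance failed:", result.stderr)
    return result.returncode

if __name__ == "__main__":
    run_maintenance("full")   # schedule this twice a year per the guidance above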
Recovery Assistant
The CommServe Recovery Assistant tool is used to restore the CommServe database from the DR backup. The tool is
used to rebuild the CommServe server on the same or a different computer, change the name of the CommServe host,
and update the CommCell license.
MediaAgent Installation
When installing the MediaAgents, refer to the Commvault Online Documentation to ensure that all hardware requirements
are met.
6. Select MediaAgent.
10. Check this box if there is a firewall between the MediaAgent and the CommServe® server, and network routes are
configured.
Validate that the location of the Index directory is properly set for the MediaAgent. This is done in the Properties pages of
the MediaAgent in the CommCell® console.
2. Product version, Service Pack level and update status will be displayed in the Contents / Summary window.
The index directory can manage both V1 and V2 indexes. It must be located on a dedicated high-speed disk, preferably
solid-state drives. Index directory performance is critical when streaming a high number of jobs to a MediaAgent and
when conducting DASH full operations. When MediaAgent software is first deployed to a server, the location of the index
directory will be on the system drive. It is recommended to change the location to a dedicated drive prior to any jobs
running.
3. Set space thresholds – days and percentage apply to V1 indexing clients only.
The index directory location can be changed by changing the 'Index Directory' in the Catalog tab of the MediaAgent
properties. By default, the Index Directory is located in the Commvault® software installation folder. It is recommended to
move it to a dedicated set of SSD disks.
A MediaAgent can receive and handle a certain number of parallel data transfer operations, or streams, based on the
hardware resources of the MediaAgent and the guidelines in the Commvault Online Documentation. By default, a
MediaAgent is set to receive a maximum of 100 streams. After proper evaluation, this number can be increased or decreased.
In some cases, it can be useful to limit the bandwidth used by a MediaAgent when it communicates with clients during
backups and/or another MediaAgent to replicate data, especially when the network is shared with other systems and
users. Commvault® software provides the ability to configure Network Throttling Rules and apply such a rule to a
MediaAgent. A Network Throttling Rule provides granularity in how the bandwidth is controlled and the time range of the
day or the days of the week for which the rule is enforced.
If maintenance is required on the MediaAgent or the libraries hardware, it is possible to disable or mark a MediaAgent
offline for maintenance. It is important to understand the differences between disabling a MediaAgent and marking it
offline for maintenance.
Disable a MediaAgent – No client or administrative tasks can be executed. The MediaAgent is considered logically
unavailable and unusable by the CommCell® console.
Mark a MediaAgent offline for maintenance – Prevents data protection, data recovery, and auxiliary copy jobs from
using the MediaAgent. However, administrative tasks such as 'Clean Drives,' 'full scan,' or 'disk library pruning' may
still be conducted.
To enable/disable a MediaAgent
Agent Installation
Push install
Interactive install
Custom package
Agent deployment best practices:
If DNS names are used, ensure the DNS is properly resolving the name forward and reverse.
If deploying an agent on a remote site, consider using a remote software cache or transfer a custom package.
If the client is behind a firewall blocking ports, set network configurations to tunnel communication in a port.
Agent Requirements
When deploying agents, it is important to validate the requirements. Prerequisites differ from one agent type to another. Even for
components that you frequently deploy, always confirm the requirements, as they may change when a new service pack is released.
Push Installation
From the Tools menu | Click Add/Remove Software | Install Software
The CommCell® console is used to push the Commvault® software to clients. The following specific ports are used to
perform the installation (a simple reachability check is sketched after the steps below):
1. From the Tools menu | Click Add/Remove Software | Choose Install Software.
7. Provide a domain account that has administrative privileges on the target systems to install software.
9. If required, choose Client Group and Storage Policy to use for default subclients.
12. If there is a firewall, set network route to communicate with the CommServe® server.
14. View the summary and Click Finish to launch the job.
16. Choose the client status tab to see every client's status.
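Because a push installation requires specific ports to be open between the CommServe server and the target clients, it can save troubleshooting time to test reachability first. The sketch below is a generic TCP check; the port numbers shown are assumptions (commonly cited Commvault defaults) and should be replaced with the ports documented for your Feature Release and network configuration.

# Sketch: check TCP reachability from the CommServe server to a target
# client before a push install. Port numbers are assumptions; substitute
# the ports required in your environment.
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_client(host: str, ports=(8400, 8403)):   # assumed default ports
    for port in ports:
        state = "open" if port_open(host, port) else "blocked/closed"
        print(f"{host}:{port} -> {state}")

if __name__ == "__main__":
    check_client("fileserver01.example.com")        # placeholder client name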
Interactive Installation
It is possible to download the packages on a client machine by using the download bootstrapper. Once downloaded, start
the installation by executing Setup.exe.
1. Download the packages or remotely reach the Software Cache | Execute Setup as an administrator.
7. Use the default path or enter a different drive and installation path.
10. If there are multiple network interfaces, select the DNS name or type the IP address used to communicate from
this server to the CommServe®.
11. Define the CommServe® DNS name or IP address. Leave empty for a Decoupled Install.
14. Provide user account with CommCell® Security Role having installation permissions.
17. You may need to refresh the view. Click Client Computers and press the F5 key to redraw the window.
18. View the newly installed client in the CommCell Browser window.
Custom Package
A custom package is a lightweight agent package created by the Commvault administrator. A typical agent installation
requires many questions to be answered; the custom package has all of these questions pre-answered. It is also useful for
running silent installations. If an enterprise-level deployment solution, such as Microsoft® SCCM, is in place, it can be
leveraged to silently push the package to multiple machines, as sketched after the steps below.
1. Download the packages or remotely reach the Software Cache | Execute Setup as an administrator.
7. Select this option to upgrade installed agents to a new version (e.g., v10 to v11).
8. Select this option to update agent(s) to the latest service pack (e.g., SP11 to SP12).
9. Choose the Select Packages option to select the agents to include in the package.
14. Select to create a self-extracting custom package. Unselect to create a silent install package with response file.
16. Select to Create or Use existing instance or check Show to users to provide interaction for this section.
17. Use the default installation folder or type or browse to a different installation directory.
18. If there is a firewall in place, set rules to communicate with the CommServe® or check Show to users to provide
interaction for this section at time of install.
19. Provide CommServe® DNS name or IP address. Leave empty for a Decoupled Install or check Show to users to
provide interaction for this section at time of install.
20. Select to disable local server firewall if required or check Show to users to provide interaction for this section at
time of install.
21. If it is a locked down CommCell®, provide the certificate file or check Show to users to provide interaction for this
section at time of install.
23. Provide a user account with CommCell Permissions to add client to CommCell.
25. If the ‘Create a self-extracting executable’ option was selected, it will create a custom package.
26. If the ‘Create a self-extracting executable’ option was not selected, it will create a silent install folder structure.
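Once the silent install package and its response file exist, the installation can be launched unattended on each target, for example through Microsoft® SCCM or a simple wrapper script. The Setup.exe switches and response file name below are assumptions; the package creation process documents the exact command line to use, so verify it before deploying at scale.

# Sketch: launch a silent install from the custom package created above.
# The switches and answer-file name are assumptions; verify the command
# line generated with your custom package before wide deployment.
import subprocess
from pathlib import Path

PACKAGE_DIR = Path(r"\\fileserver\share\CVCustomPackage")   # placeholder share
SETUP = PACKAGE_DIR / "Setup.exe"
ANSWER_FILE = PACKAGE_DIR / "install.xml"                   # assumed response file name

def silent_install() -> int:
    cmd = [str(SETUP), "/silent", "/install", "/play", str(ANSWER_FILE)]  # assumed switches
    result = subprocess.run(cmd)
    print("Installer exit code:", result.returncode)
    return result.returncode

if __name__ == "__main__":
    silent_install()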
Encryption
Encryption Overview
Encrypting data is an essential part of data protection, especially when data is entrusted to a third party for storage. Equal
consideration must be paid to backup and archive data. If an unencrypted tape is stolen with sensitive information on it,
there is no way to prevent someone from accessing that data. Simple security measures such as password protection
may only delay access to the data. Preventing intrusion into your production environment requires a front-line defense that
is costly and time consuming.
Inline encryption – is an option that secures data during the data protection job. Inline encryption can be performed
on the client or the MediaAgent, or applied to network transmission only.
Offline encryption – is an option that secures data during auxiliary copy jobs. Offline encryption is performed on
the source MediaAgent.
Hardware encryption – encrypts data when writing to tape drives that support hardware encryption. Hardware
encryption is supported on LTO 4 and above tape drives which support and are licensed for encryption by the
vendor.
Commvault® software is certified for the US DoD/Canadian DND FIPS encryption accreditation for information security.
The following table describes the options, advantages and disadvantages of each method:
In-Line – Performed on the client or MediaAgent. Enabled on the client and configured on the subclient. Advantages:
allows encryption of data during primary movement and storage. Disadvantages: consumes CPU and memory of the
client or MediaAgent and can slow the primary data protection job.
Off-line – Performed on the MediaAgent. Configured in the storage policy secondary copy. Advantages: allows
encryption of data during secondary movement and storage and does not slow the primary data protection job.
Disadvantages: consumes CPU and memory of the MediaAgent.
Hardware – Performed on LTO4 and above drives with encryption support. Turned on/off at the storage policy copy.
Advantages: enables faster encryption for tape media with minimal CPU or memory consumption on the client or
MediaAgent. Disadvantages: requires dedicated hardware for backups and restores, and decryption takes place on the
drive, potentially exposing data during transmission.
Software Encryption
There are several advantages for software encryption:
Data can be encrypted on the client during initial data protection providing complete end-to-end security.
Different encryption ciphers are used based on security requirements.
In certain cases, software encryption can provide a performance benefit by distributing the load of data encryption
to multiple systems as opposed to hardware encryption, where all data encryption is handled on the tape drive.
Data can selectively be encrypted using inline encryption by configuring encryption settings at the subclient level.
This can further improve performance by only encrypting data that requires encryption.
Restore operations always decrypt data at the destination location.
3-DES (key length: 192) – Triple Data Encryption Algorithm, a symmetric-key block cipher that applies the cipher
algorithm three times to each block.
Blowfish (key length: 128 or 256) – Symmetric cipher that divides data into 64-bit blocks and encrypts the blocks
individually. This algorithm is available in the public domain, is fast, and is claimed to never have been compromised.
Serpent (key length: 128 or 256) – Symmetric cipher that encrypts data in 128-bit blocks and uses a key size between
128 and 256 bits. This algorithm is in the public domain.
TwoFish (key length: 128 or 256) – The successor to Blowfish, this symmetric encryption method uses keys up to 256
bits. It is fast and, like Blowfish, available in the public domain.
GOST (key length: 256) – Developed by the Soviet and Russian governments. A symmetric cipher that works in 64-bit
blocks using a key length of 256 bits.
AES (Rijndael) encryption is the industry standard used by hardware devices and most encryption software. The other
ciphers were AES candidates and meet all requirements. Some are faster and some are stronger. Rijndael was selected
as the most flexible.
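For readers unfamiliar with symmetric ciphers, the short example below shows AES with a 256-bit key using the third-party Python 'cryptography' package. It is purely illustrative of the industry-standard cipher mentioned above; it is not Commvault's implementation, and Commvault manages its own keys in the CommServe database.

# Illustration only: AES-256 symmetric encryption with the third-party
# 'cryptography' package (pip install cryptography). Not Commvault's
# implementation or key management.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
nonce = os.urandom(12)                      # unique per message
aes = AESGCM(key)

plaintext = b"backup chunk contents"
ciphertext = aes.encrypt(nonce, plaintext, None)
restored = aes.decrypt(nonce, ciphertext, None)

assert restored == plaintext
print("ciphertext length:", len(ciphertext))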
Inline Encryption
Right-click the storage policy primary copy | Click Properties | Advanced tab
Inline encryption is used to encrypt data during primary protection operations. The encryption can take place on the client
or the MediaAgent. Encryption is enabled for Commvault® software through the storage policy primary copy or at the client
level. Encryption can further be configured at the subclient level. Subclient level encryption provides the flexibility of
defining only that data which requires encryption. By default, when encryption is enabled on a client, encryption is enabled
on all subclients.
1. Expand the storage policy | Right-click the storage policy Primary copy | Properties.
When encryption is enabled on a client, the cipher and key length must be set. The default cipher is Blowfish with a 128-bit key.
The 'Direct Media Access' setting determines whether encryption keys are stored on the media. The 'Via Media Password'
option puts the keys on the media. The 'No Access' option only stores the keys in the CommServe® database. If the keys
are stored on the media, data can be recovered using Commvault® software's 'catalog' feature, or in the case of Disaster
Recovery data, the Media Explorer tool. Encryption keys are always stored in the CommServe database.
DR Data recovery using Media Explorer requires the user to provide the Media Password used when the data was written.
The default Media Password is blank. If the Media Password is not known, contact Commvault Support to assist in
recovering the password.
1. Client Advanced properties enable encryption and provide a choice of cipher, key length, and option to write a
copy of the keys on media.
2. Subclient properties provide options to encrypt on:
a. The client (Network and Media)
b. MediaAgent (Media Only)
c. The client and decrypt on MediaAgent (for transmission only), or
d. Disable encryption (None)
When encryption is enabled for a client, the default subclient encryption setting 'Client and MediaAgent' encrypts all data
on the client and the data remains encrypted when written to storage.
Offline Encryption
The 'Offline' or 'Copy-based' encryption uses Commvault® software encryption to secure data during auxiliary copy jobs.
From the Data Encryption section in the storage policy copy's Advanced tab, the 'encryption cipher,' 'key lengths,' and the
option to 'store keys on the media' are configured.
In some cases, encrypted source data will be decrypted first then re-encrypted when storing deduplicated data or
changing encryption ciphers. By default, encrypted data is preserved during an auxiliary copy operation. The 'Store Plain
Text' option is selected to decrypt data during the auxiliary copy job. If 'Store Plain Text' option is selected, you can still
encrypt data during data transmission by selecting the option 'Encrypt on network using selected cipher.'
1. Expand the storage policy | Right-click the storage policy copy | Properties.
2. In the Advanced tab select Re-encrypt Data option and configure the cipher and media access key.
Hardware Encryption
For tape drives that support hardware encryption, Commvault® software can enable or disable an encryption operation on
the drive and manage encryption keys. Keys are stored in the CommServe® database. The 'Direct Media Access' option
'Via Media Password' puts a copy of the keys on the media. The 'No Access' option only stores the keys in the
CommServe database.
Commvault software writes data in chunks. Tape media uses 8GB chunks for index-based backups and 16GB chunks
for database backups. When encryption is enabled for data protection jobs writing to tape media with 'hardware
encryption' enabled, each chunk has a separate encryption key seeded by a random number generator and other factors.
Generating keys at the chunk level provides an enhanced level of security and greatly reduces the potential of data
compromise.
Encryption Considerations
Media Password
The Media Password is used when recovering data through Media Explorer or when the 'Catalog Media' option for tape is
used. A media password is essential when using:
Hardware encryption
Commvault® software copy-based encryption (with the 'Direct Media Access' option set to 'Via Media Password')
By default, the password is set for the entire CommCell® environment in the System applet in Control Panel. The default
password is blank. Storage policy level media passwords can be set, which will override the CommCell password settings.
For higher level of security or if a department requires specific passwords, use the 'Policy level' password setting which is
configured in the Advanced tab of the storage policy properties.
Erase data is a CommCell-level license. Once it is applied you need to enable 'erase data' in the General tab of the
storage policy. Once this license is applied, the 'Catalog' option cannot be used to recover backup data managed by the
storage policy. This is because random binaries are written to the OML header in the media password location. The
password cannot be used, and you always get a decryption error when using these tools.
If the erase data operation is something that would be of value to your organization, then it may be worth the risks
previously described. If you are not sure, then it is recommended that you do not use it. If you have capacity-based
licensing arrangements with Commvault software, check to see if this license is installed. If it is, you may want to disable
'erase data' in all storage policies within the CommCell® environment.
When the erase data license is implemented it is only effective when writing jobs to new or recycled media. The license
cannot be retroactively applied to jobs already in storage. If the license is removed, it is only effective when writing to new
or recycled media. All jobs written to media for the storage policy when the license was being used cannot be recovered
through Media Explorer or the 'Catalog' option.
It is technically not possible to erase specific data from within a job. Erase data works by logically marking the data
unrecoverable. If a Browse and Restore or Find operation is conducted, the data does not appear. In order for this feature
to be effective, any media managed by a storage policy with erase data enabled will not be able to be recovered through
Media Explorer or the 'Catalog' option.
Erase Media
To ensure encryption keys are destroyed in the CommServe® database when tapes are aged, the 'Erase Media' option is
used. Erase Media is a physical operation that mounts the tape and overwrites the OML header. Once the header is
overwritten, data cannot be recovered using any method Commvault® software provides. This is considered a destructive
operation so it cannot be performed on any tapes where jobs are actively being retained. The option to erase media is
available in all logical media groups except the Assigned Media group.
From the spare media group | Right-click the desired tape | Options | Erase Spare Media
Erasing spare media loads the tape in a drive and overwrites the header, thus preventing data past the header from being
recovered through Commvault® software.
When tapes are recycled, they can automatically be marked to be erased. This is done by selecting the 'Erase Media'
check box in the Media tab of a storage policy copy. An erase media operation must be scheduled for the library, which
physically loads each marked tape and overwrites the OML header.
When the 'Mark Media to be Erased' option is enabled for storage policy copies, erase media operations must be
scheduled for any library where media is marked for erasing.
2. Set the erase operation to be quick (overwrite OML header) or full (overwrite entire tape).
4. …saved as a script.
Right-click the storage policy tape copy | Click Properties | Media tab
The option 'Mark Media to be Erased After Recycling' in the Media tab of the storage policy copy marks all tapes
managed by the policy to be erased once all jobs have aged. A schedule must then be set up at the tape library to erase
the media. The erase media operation mounts the tape and writes a new OML header to the tape. This makes the data
completely unrecoverable through the CommCell® console, Media Explorer, or through a tape catalog operation. It is
important to note that the data on the tape is not actually erased. As such, it is recommended to encrypt all tapes.
Disk Libraries
The 'Disk Library Low Watermark' is configured with a percentage threshold that reports an event to the Event Viewer
when total disk capacity for all mount paths reaches or falls below the specified percentage. Alerts can also be configured
to notify users when the threshold has been reached.
Instead of a low watermark, an estimated number of days before running out of space can be used. Based on the daily
usage history, the system estimates when it expects the library to be saturated. When the estimate is equal to or lower
than the defined value, an event is reported in the Event Viewer.
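To illustrate the estimate described above, the following is a minimal sketch, assuming hypothetical daily usage figures; it is not Commvault code, only the arithmetic behind the 'days before running out of space' threshold.

def days_until_full(free_space_gb, daily_net_growth_gb):
    """Estimate the days before the disk library is saturated (illustrative only)."""
    average_growth = sum(daily_net_growth_gb) / len(daily_net_growth_gb)
    if average_growth <= 0:
        return None                      # library is not filling up
    return free_space_gb / average_growth

history = [100, 80, 95, 110, 70, 85, 90]  # hypothetical GB consumed minus GB freed, per day
estimate = days_until_full(free_space_gb=2048, daily_net_growth_gb=history)
threshold_days = 30                       # hypothetical value defined in the library properties
if estimate is not None and estimate <= threshold_days:
    print(f"Event: library expected to be full in about {estimate:.0f} days")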
The 'Prevent accidental deletion of data from mount paths' option prevents disk mount path content from being deleted
on Windows MediaAgents. It is strongly recommended that this setting is selected. When it is enabled and a user connects
to the MediaAgent to delete data using Windows Explorer, the operating system displays an access denied error
message.
The 'Mount Path Usage' section determines the order in which incoming client streams are distributed across mount
paths, when multiple mount paths are configured for a disk library.
Fill & Spill - The default setting for mount path usage; it fills mount paths in the order in which they were created.
Spill & Fill (load balancing) - Load balances device streams across all mount paths in the library.
Which should you use for best performance? For disk libraries, part of the bottleneck is the disks themselves, and the
other part is the I/O path to the disks. The choice between Fill & Spill and Spill & Fill (load balancing) should be
determined by how the mount paths were created and the I/O path to each mount path.
Use Fill & Spill if the mount paths are part of the same disk array and there is a single I/O path to the mount
paths.
Use Spill & Fill if the mount paths are on separate disks and/or if there are separate I/O paths to the mount paths.
If there is only one mount path in the library, consider setting mount path usage to Spill & Fill. The reason is that
if an additional disk array is later added, the disk library automatically uses the mount paths in a load-balancing
manner. A common misconfiguration occurs when disks are added to a library and the administrator or
engineer forgets to set this option.
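As a conceptual illustration of the two mount path usage settings above, the following sketch (not Commvault code; the mount path names and writer counts are hypothetical) shows how six incoming streams would be distributed under each setting.

def fill_and_spill(streams, mount_paths):
    """Send every stream to the first mount path with free writers,
    in creation order (fill one path, then spill to the next)."""
    assignment = {}
    for s in streams:
        for mp in mount_paths:            # mount_paths listed in creation order
            if mp["used"] < mp["writers"]:
                mp["used"] += 1
                assignment[s] = mp["name"]
                break
    return assignment

def spill_and_fill(streams, mount_paths):
    """Round-robin streams across all mount paths (load balancing)."""
    assignment = {}
    for i, s in enumerate(streams):
        mp = mount_paths[i % len(mount_paths)]
        mp["used"] += 1
        assignment[s] = mp["name"]
    return assignment

paths = [{"name": "mp1", "writers": 5, "used": 0},
         {"name": "mp2", "writers": 5, "used": 0}]
print(fill_and_spill([f"job{i}" for i in range(6)], [dict(p) for p in paths]))
print(spill_and_fill([f"job{i}" for i in range(6)], [dict(p) for p in paths]))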
Managed disk space is a Commvault® software feature that allows a job stored on disk to be retained beyond its standard
retention settings if free space exists. Managed disk space thresholds do not apply to disk libraries using Commvault
deduplication.
Start Aging when data occupied on disk is (High Watermark) - determines the amount of disk space
consumed before aged jobs will be pruned.
Stop Aging when data occupied on disk is (Low Watermark) – stops purging jobs when amount of disk space
consumed falls to the specified value.
2. Set the high threshold at which the system starts aging jobs.
3. Set the low threshold at which the system stops aging jobs.
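The following is a minimal sketch of the managed disk space watermark logic described above; it is not Commvault code and the watermark percentages are hypothetical.

HIGH_WATERMARK = 85   # 'Start aging' threshold: percent of disk space consumed
LOW_WATERMARK = 70    # 'Stop aging' threshold: percent of disk space consumed

def should_prune_managed_jobs(percent_used, currently_pruning):
    """Decide whether aged (managed) jobs should be pruned from the disk library."""
    if percent_used >= HIGH_WATERMARK:
        return True                       # high watermark reached: start pruning aged jobs
    if currently_pruning and percent_used > LOW_WATERMARK:
        return True                       # keep pruning until usage falls to the low watermark
    return False                          # retain aged jobs on disk while free space exists

pruning = False
for usage in (60, 86, 78, 69):            # hypothetical daily utilization readings
    pruning = should_prune_managed_jobs(usage, pruning)
    print(f"{usage}% used -> pruning aged jobs: {pruning}")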
To un-hide libraries
Keeping an eye on this tab helps in avoiding filled library situations and helps in planning the acquisition or replacement of
hardware. Based on the average daily disk space consumption and the average space freed up by data aging, the system
can estimate a possible date on which the library is expected to be filled.
4. Shows daily average for new space consumption and data aging space freeing up.
Mount Path Allocation determines the number of concurrent write operations that are performed to a mount path. The
default is set to 'Maximum allowed.' If a defined number is set, increasing this number could potentially improve overall
backup performance within the CommCell® environment by allowing more jobs to run concurrently. However, setting this
number too high could have a negative impact on the MediaAgent, network, and disk I/O performance.
Increasing this setting will run more streams through the network, MediaAgent, and disk. For many environments,
marginally increasing this number may not have a negative impact on the overall environment. It is recommended
to incrementally adjust this number higher (in increments of three to five at a time) and carefully monitor the network,
MediaAgent, and disk performance.
If multiple mount paths exist in the library, consider how the mount paths have been carved out of the disks. If two
mount paths set to five writers each are allocated from the same physical disk array, then there would be ten
concurrent write operations to the disks, five from each mount path (see the sketch after these considerations).
Consider the I/O path from the MediaAgent to the disks as a potential bottleneck. Depending on the disk storage
being used (Fibre, iSCSI or NAS) the I/O paths can become a bottleneck.
Increasing the number of writers could have a positive impact on overall backup performance within the
CommCell® environment by running more jobs, but it may slow down individual jobs due to the extra network and
disk load.
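The sketch below (not Commvault code; the layout and writer counts are hypothetical) works through the concurrency arithmetic from the considerations above, totaling the write operations that land on each physical array.

mount_paths = [
    {"name": "mp1", "writers": 5, "array": "array-A"},
    {"name": "mp2", "writers": 5, "array": "array-A"},   # carved from the same array as mp1
    {"name": "mp3", "writers": 5, "array": "array-B"},
]

writes_per_array = {}
for mp in mount_paths:
    writes_per_array[mp["array"]] = writes_per_array.get(mp["array"], 0) + mp["writers"]

for array, writers in sorted(writes_per_array.items()):
    print(f"{array}: up to {writers} concurrent write operations")
# array-A receives up to 10 concurrent writes (5 from each of mp1 and mp2), array-B up to 5.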
2. Defines the number of writers for mount path or sets it to read only.
Space Allocation specifies hard cut-off limits for a mount path. By default, it is set to reserve 2GB of disk space. If more
reserve space is needed, increase this setting or use the 'Do not consume more than' setting to limit how much disk space
will be allocated to write operations. This can be useful if you require reserving more space for maintenance purposes,
such as a defrag operation.
MediaAgent
File size
Number of writers
Block size
Number of files
The new storage must first be presented to the MediaAgent operating system, and then the 'Move Data Path' operation
can be initiated.
1. Expand the library | Right-click the mount path | Move Mount path.
3. Displays the total capacity and free space left for each mount path.
This can lead to fragmentation over time, which could start degrading performance at some point. In this case, the mount
paths can be defragmented using any defragmentation tool supporting the sparse file attribute. The Windows built-in
defragmentation tool does not support the sparse file attribute and therefore cannot be used. Commvault does not
provide support for third-party defragmentation tools. However, Diskeeper has certified its 2011 (and later) releases for
online volume defragmentation.
Before running any defragmentation jobs on a mount path, a disk library maintenance job and a report can be generated
to provide information about fragmentation. Based on the result displayed in the report, the Commvault administrator can
easily establish if maintenance is required.
The Disk Library Maintenance job can be scheduled from the Job Initiation tab.
Commvault recommends running the Disk Library Maintenance Job and the Library and Drive Configuration report once
or twice a year.
4. A copy of the report can be saved on local disks, on a network share or an FTP location.
5. Report format settings such as the language, the title and the date and time format can be defined.
Setting a mount path as read-only has an advantage over disabling the entire library. Restores can still be achieved and if
there are enough additional mount paths, backup jobs can still run using the remaining mount paths.
Run the defragmentation tool on the mount path. Consult your software vendor's documentation to ensure you use a tool
that supports online defragmentation and the sparse file attribute. Once the defragmentation job completes, do not forget
to bring the library back online or to set the mount path back to read-write.
Deduplication
1. Signature is generated at the source - For primary data protection jobs using client-side deduplication, the source
location is the client. For auxiliary DASH copy jobs, the source MediaAgent generates signatures.
2. The generated signature is sent to its respective database. The database compares the signature to determine
whether the block is a duplicate or unique.
3. The defined storage policy data path is used to protect data – regardless of which database the signature is
compared in, the data path remains consistent throughout the job. If GridStor® Round-Robin has been enabled for
the storage policy primary copy, jobs will load balance across MediaAgents.
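The following is a conceptual sketch of the signature flow described above; it is not Commvault code. A dictionary stands in for the deduplication database, and only blocks whose signatures are not already present are written.

import hashlib

ddb = {}            # stands in for the deduplication database (signature -> block reference)
stored_blocks = []  # stands in for block data actually written to the disk library

def protect_block(block: bytes):
    signature = hashlib.sha512(block).hexdigest()   # 1. signature generated at the source
    if signature in ddb:                            # 2. signature compared in the database
        return ddb[signature]                       #    duplicate block: only a reference is recorded
    stored_blocks.append(block)                     # 3. unique block: sent along the data path
    ddb[signature] = len(stored_blocks) - 1
    return ddb[signature]

blocks = [b"block-A", b"block-B", b"block-A", b"block-A"]
references = [protect_block(b) for b in blocks]
print(f"{len(blocks)} blocks protected, {len(stored_blocks)} unique blocks stored, references={references}")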
1. Right-click Storage Resources | Storage Pools and select Add Storage Pool | Disk.
2. Provide a name for the Storage Pool.
3. You can select an existing disk library from the list, or…
4. …create a new local or network-based library.
5. Check the box to enable deduplication.
6. If the disk storage is shared amongst multiple MediaAgents, several deduplication database partitions can be
used. Define the number of partitions to use.
7. Click the first Choose Path link to define the location of the first database partition.
8. Select the MediaAgent hosting the first DDB partition.
9. Browse to the dedicated high-performance volume.
10. Repeat the same procedure to define the location of the additional partitions.
There are two options to consider for stream parameters with deduplication:
MediaAgent maximum number of parallel data transfer operations – This is the maximum number of streams
that the MediaAgent can receive. It is configured in the General tab of the MediaAgent's Properties page.
DDB maximum number of parallel data transfer operations – This is the maximum number of streams that
any DDB of the CommCell® environment can receive. This is a governor option that is applied to all DDBs. It is
configured in the Resource Manager of the Media Management applet.
If there are three MediaAgents, one extra-large and two medium, how can streams be adjusted? In this
scenario, the DDB governor value should be set to the number of streams that the largest MediaAgent can
receive, which in this case is 300 streams. This means that the extra-large MediaAgent's maximum number of
streams must also be set to 300 streams.
Lastly, the two medium MediaAgents can be set to 100, preventing them from receiving more streams than their
hardware can handle.
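The following sketch (not Commvault code) restates the stream arithmetic of this example; the MediaAgent names are hypothetical and the stream counts follow the scenario above.

media_agent_limits = {
    "MA-extra-large": 300,   # maximum parallel data transfer operations per MediaAgent
    "MA-medium-1": 100,
    "MA-medium-2": 100,
}

# The DDB governor applies to every DDB in the CommCell environment, so it is set to the
# stream count that the largest MediaAgent can receive.
ddb_governor = max(media_agent_limits.values())
print(f"DDB maximum parallel data transfer operations: {ddb_governor}")

for name, limit in media_agent_limits.items():
    # No MediaAgent should receive more streams than its own hardware can handle.
    print(f"{name}: receives at most {min(limit, ddb_governor)} streams")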
For more information on MediaAgent sizing and guidelines, refer to the Commvault Online Documentation.
Deduplication block size is configured in the global deduplication policy properties. All storage policy copies associated
with the global deduplication policy always use the block size defined in the global deduplication policy.
For databases, the recommended block size is from 128KB to 512KB depending on the size of the data, number of full
backups, and retention requirements. The logic for a larger block size is based on the fact that initial protection of the
database does not yield high deduplication ratios due to the uniqueness of block data, so block size does not have a
significant impact on deduplication ratios. However, protecting the database redundantly over time can yield very high
deduplication ratios, depending on the application and compression usage.
For large data repositories managing binary data types, such as static media repositories, larger deduplication block sizes
can provide more scalability. For these data types, the potential for duplicate data blocks is minimal. Using a higher
block size allows more data to be stored with a smaller Deduplication Database (DDB). In this scenario, the
advantage is not storage space savings; rather, client-side deduplication and DASH full backups ensure that the block
data is processed and backed up only once. This solution works very well for static data types, but if processing such as
video editing will be performed on the data, deduplication may not be the best method to protect the data.
Why does Commvault recommend a 128KB block size? Competitors who usually sell appliance-based deduplication
solutions with steep price tags use a lower block size, which results in a marginal gain in space savings considering most
data in modern datacenters is quite large. Unfortunately, there are some severe disadvantages to this. First, records for
those blocks must be maintained in a database. Smaller block size results in a much larger database which limits the size
of disks that the appliance can support. Commvault software can scale significantly higher.
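As a rough back-of-the-envelope sketch of this scaling argument (not Commvault sizing guidance; the figures are hypothetical), the number of signature records a deduplication database must hold grows as the unique data is divided into smaller blocks.

def ddb_record_count(unique_data_tb, block_size_kb):
    """Approximate number of signature records for a given amount of unique data."""
    unique_bytes = unique_data_tb * 1024**4
    return unique_bytes // (block_size_kb * 1024)

for block_kb in (32, 64, 128, 512):
    records = ddb_record_count(unique_data_tb=100, block_size_kb=block_kb)
    print(f"{block_kb:>3} KB blocks -> ~{records:,} records for 100 TB of unique data")
# Halving the block size roughly doubles the number of records the database must manage.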
Even more important is the aspect of fragmentation. The nature of deduplication and referencing blocks in different
areas of a disk leads to data fragmentation. This can significantly decrease restore and auxiliary copy performance. The
higher block size recommended by Commvault makes restores and copying data much faster.
The other main considerations are price and scalability. With relatively inexpensive enterprise-class disk arrays, you can
save significant costs over dedicated deduplication appliances. If you start running out of space, more disks can be added
to the existing library, so deduplication is preserved. Considering advanced deduplication features, such as DASH
Full, DASH Copy, and SILO tape storage, the Commvault® Deduplication solution is a powerful tool.
If the block size is changed on a deduplication store containing data, it results in a re-baseline of all data in the store since
new signatures being generated do not match existing signatures. If the block size needs to be changed, ensure no jobs
are currently running, seal the current Deduplication Database (DDB), and then change the block size.
Compression
By default, compression is enabled in a global deduplication policy. Data is compressed prior to signature generation,
which is optimal for files and virtual machines. For database backups, signatures are generated first and then
compression takes place providing better deduplication ratios for large databases. In some cases, where application data
is being compressed prior to backup, such as in certain Oracle RMAN (Recovery MANager) configurations, it is best not
to use Commvault compression.
The rule of thumb is to use one compression method or the other, but never both. For most data, Commvault
compression results in the best deduplication ratios.
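The following conceptual sketch (not Commvault code) simply contrasts the two orderings described above: compressing a block before generating its signature, as done for files and virtual machines, versus generating the signature on the raw block and compressing afterward, as done for databases.

import hashlib, zlib

def compress_then_sign(block: bytes):
    compressed = zlib.compress(block)                # files / virtual machines: compress first
    return hashlib.sha512(compressed).hexdigest(), compressed

def sign_then_compress(block: bytes):
    signature = hashlib.sha512(block).hexdigest()    # databases: signature on the raw block
    return signature, zlib.compress(block)

page = b"example database page contents " * 64
print("compress-then-sign signature:", compress_then_sign(page)[0][:16], "...")
print("sign-then-compress signature:", sign_then_compress(page)[0][:16], "...")
# Either way, only one compression pass is applied; the ordering affects which bytes the
# signature is calculated on, and therefore the deduplication ratio for a given data type.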
Compression is enabled by default for the tape data path. Most modern tape drives are advanced enough to
attempt compression on a block and determine whether it is beneficial. If the block compresses well, it remains
compressed. If not, the drive uncompresses the data on the fly.
1. Expand the Storage Resources | Deduplication Engine | Right-click the primary copy | Properties.
2. Check the option and set a maximum number of days for deduplication.
Data Verification
With all the benefits of Commvault® deduplication, it is critical to consider the integrity of deduplicated data. A corrupt
block in the deduplication store can result in data from multiple jobs not being recoverable. Commvault® V11 provides live
data verification operations that are conducted while data protection jobs are running. To use data verification, the
MediaAgent options 'Validation on Media' and 'Validation on Network' must be enabled, which they are by default.
Incremental Verification
This method verifies data integrity for new jobs added since the last verification job. This option is available when running
'Verification of Deduplication Database' or 'Verification of Existing Jobs on Disk and the Deduplication Database' options.
Commvault® introduced a DDB verification schedule that executes an incremental verification every day, at 11 a.m. Since
this method only verifies new jobs, full verification jobs should periodically be executed, such as once a month or once a
quarter.
The best way to protect against potential data corruption, whether using deduplication or not, is to always have multiple
copies of data.
1. Click Schedule Policies | Right-click the System Created DDB Verification schedule policy | Edit.
3. Click Edit.
Tape Libraries
Once completed, if the library is dedicated to a single MediaAgent, it is configured directly in the CommCell® browser. If it
is shared, it must be configured using the Expert Storage Configuration wizard.
8. Right-click the unconfigured library (red question mark) and choose Configure.
Even if another MediaAgent requires writing data, the library operation is always conducted by the active controller.
Failover candidates can be defined to replace the active controller, should the active controller become unavailable.
When defining virtual mail slots, a starting port number must be defined, as well as the order for additional media. The
order can go up or down. For instance, an administrator could define to start with port number one and to go up for
additional media. Every day, the exported media will be ordered starting with port number one and will go up for as
many slots as are required.
When using virtual mail slots, ensure to leave at least one empty slot in the library, as it is required by the system when
re-ordering tapes.
3. Specify if media in a virtual mail slot can be used for data protection jobs when space is available.
Appendable Media
By default, Commvault® software tries to fill a tape completely before exporting it. This default behavior is overridden by
either manually exporting the unfilled tapes or by scheduling an export media job using the 'Export Active Media' option. In
such cases, the exported media is assigned a status of appendable. If the media becomes appendable but gets reinserted
in the library within 14 days, it is used by the system for subsequent jobs. If it is reinserted after 14 days, the system does
not use it anymore. The 14-day threshold can be modified to better fit your needs.
3. Check to use appendable media for subsequent jobs when re-inserted in the tape library.
4. Define the threshold in days to re-insert a media and append data to it.
2. Check to reset container and export location when a media is reinserted in the tape library.
Auto-Cleaning
Over time, when using tape drives, dirt can accumulate on the drive read/write heads. When this happens, it is important
to clean the drives using cleaning media. This process is usually automated but can also be executed manually.
Hardware Controlled Cleaning – The cleaning of the drives is handled by the library itself. In this scenario, auto-
cleaning must be disabled in Commvault® software and must be enabled on the library by using either the
administration web portal or its control panel. The library configures dedicated cleaning slots where cleaning
media are stored. The dedicated cleaning slots and cleaning media are not visible in Commvault software and
cannot be used. Manual cleaning operations must be initiated from the library web page.
Software Controlled Cleaning – The cleaning of drives is handled by Commvault software. In this scenario,
auto-cleaning must be disabled on the library and must be enabled in Commvault software. The library does not
reserve any dedicated cleaning slots and Commvault software is aware of the cleaning media. Cleaning must be
initiated from Commvault® software.
Both cleaning methods are equally effective since both use hardware sense code and/or cleaning thresholds. The
preferred method can be determined based on the manufacturer's recommendations.
Even though both cleaning methods are equally effective, they cannot be used concurrently. A choice must be made, and
a single method used.
If the software receives a sense code and cleans the tape drive heads, but the drive still encounters errors, it is not a dirt
issue and is probably a hardware malfunction that should be investigated. In this situation, to avoid having the system
try to clean the drive again, a minimum number of days since the last cleaning can be set before a new cleaning attempt
is made. The default value is 3 days, which ensures that even over a long weekend, the administrator will notice that
there is a cleaning issue before additional unnecessary cleanings are attempted. Otherwise, it could result in using all
cleaning media in a single night.
When a drive status is set to dirty and the system cannot clean the drive, such as when there is no cleaning media
available in the library, Commvault® software stops using that drive completely for both backups and restores. This
prevents damaging the media or corrupting data when writing to the media using a dirty tape drive. If resources are limited
and a restore requires a tape drive, the 'Continue using drive even if it needs cleaning, during the restore' option is used. It
allows the drive to be used, but as the option name states, only during restores.
3. Check and define a minimum number of days between cleanings to avoid multiple cleanings caused by a hardware
malfunction.
4. When resources are limited, check to continue using a dirty drive only on restores.
If you use cleaning thresholds, some adjustments to the threshold values might be preferable. By default, the threshold to
retire a bad media is five read/write errors, but the threshold to clean a dirty drive is ten read/write errors, which means
that up to two tapes could be retired before the drive gets cleaned, and these media are probably good media. To avoid
this situation, you can slightly increase the tape threshold, decrease the drive threshold, or both, to ensure that the drive
is cleaned before the media is retired (e.g., you could increase the tape threshold to seven and lower the drive threshold
to six).
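The following simplified simulation (not Commvault code) illustrates the interaction described above, assuming every read/write error caused by a dirty drive is counted against both the tape currently mounted and the drive.

def tapes_retired_before_cleaning(tape_threshold, drive_threshold):
    """Count tapes retired before the drive counter triggers a cleaning (simplified model)."""
    drive_errors = 0
    retired = 0
    while drive_errors < drive_threshold:
        # Errors accumulate on the mounted tape until it hits its own retirement threshold.
        errors_this_tape = min(tape_threshold, drive_threshold - drive_errors)
        drive_errors += errors_this_tape
        if errors_this_tape >= tape_threshold:
            retired += 1
    return retired

print(tapes_retired_before_cleaning(tape_threshold=5, drive_threshold=10))  # 2 with the defaults
print(tapes_retired_before_cleaning(tape_threshold=7, drive_threshold=6))   # 0 with the adjusted values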
When content verification fails – If enabled, when the MediaAgent encounters a media that it cannot read, such
as a 'bad On Media Label (OML),' it automatically overwrites that tape. This option should be carefully planned as
it could result in overwriting valid data.
When it comes from a different CommCell environment – By default, when a tape was used in a different
CommCell environment, Commvault® software does not use the media and moves them into a media group called
'Foreign Media'. The media can automatically be used as scratch tapes by checking this option.
Prevent use of tapes from a different backup vendor – This option is useful when the library is shared with
another backup software. Since every vendor uses a different writing format, Commvault software can identify a
tape coming from a different vendor and prevent its use when this option is checked.
2. Check to overwrite media that cannot be read successfully. This setting could inadvertently write over valid tapes.
3. Use this option to overwrite tapes coming from a different CommCell® environment.
4. Use this option when the library is shared with another backup software to prevent overwriting tapes from the
other vendor.
Enable Auto-Recovery – When this option is enabled, the MediaAgent tries to recover the Media every 20
minutes.
Attempt to remove media from the drive when unload fails – When this option is enabled and the MediaAgent
'unload' command fails, it tries using a move command instead, which in some cases might successfully move the
media out of the drive.
2. If checked, the MediaAgent attempts to recover the stuck media every 20 minutes.
3. Use this option to try to recover the media using a move command when the unload command fails.
In recent versions of the software, auxiliary copy jobs are now scheduled to run automatically every 30 minutes. If
you conduct frequent backups during the day, such as hourly transaction log backups, these jobs will be copied to
tapes every 30 minutes. If the default value of 20 minutes is used to unmount idle media, tapes will constantly be
mounted and unmounted for the entire day, which is definitely not recommended for the robotic arm.
Solution: Increase the 'Unmount Media from the drive after' value to 35 minutes. The first auxiliary copy job of the
day then mounts the tape, which stays mounted all day long if no other jobs are requiring the drive.
2. Set the time value at which the idle media will be unmounted.
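A trivial sketch of the timing rule behind this tip (not Commvault code): the idle unmount value simply needs to exceed the interval at which the automatic auxiliary copy schedule recurs.

aux_copy_interval_min = 30    # automatic auxiliary copy schedule interval
unmount_after_min = 20        # default 'Unmount Media from the drive after' value

if unmount_after_min <= aux_copy_interval_min:
    # The tape is unmounted between runs, so every auxiliary copy cycle remounts it.
    unmount_after_min = aux_copy_interval_min + 5    # e.g., 35 minutes, as in the tip above
print(f"Unmount idle media after {unmount_after_min} minutes")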
Do Periodic mail slot check for any change in status – This option periodically polls the library every 30
seconds for any change in the I/E ports status. The frequency of the polling is configured using the 'Library Status
Check Interval' option.
Prevent Auto Import of Media from mail slot – This option, when checked, prevents the MediaAgent from
importing the media automatically when inserted in the mail slots. The import must be manually launched from the
CommCell® console.
2. Check this option to force the use of manual imports and prevent the MediaAgent from automatically importing tapes.
3. Check this option to periodically poll the library I/E ports for changes…
1. Expand the tape library and click the master drive pool.
If a library is offline, the library's Status tab, located within the Properties of the library, explains the reason why. It also
provides statistics and error counts. If counters start to increase, this could be an indication of connection errors or that a
drive is about to fail.
To monitor progress, right-click the library and choose 'Mark Library Fixed' to reset the counters and see if the problem
persists. A library can be disabled or taken offline for maintenance using the Status tab.
5. Shows the number of errors encountered since counters were last reset.
Drive Validation
A drive validation tests tape drive speed by writing and reading blocks of data to media. It is recommended to validate
drives when the library is initially configured and to document the results. In the future, if you suspect performance
issues, a new validation job can be run and compared to the initial values.
When the drive is replaced, right-click the old drive and choose 'Mark Drive Replaced.' After a few minutes, the system re-
maps the drive to the new serial number and the drive becomes available. It is also possible to configure the system to
attempt to remap replaced drives automatically. This is accomplished by enabling the automatic drive replacement in the
library properties.
1. Expand the master pool | Right-click the defective drive | Mark Drive Replaced.
2. Enable this option to attempt to remap replaced drives' serial numbers automatically.
Physical media management refers to any action performed that will physically cause actions within the library, such as:
Logical management of media focuses on the media group and the state of tape, which is represented by a media icon.
Logical actions include:
Media Icons
All tapes within a library are associated with a media status icon. These icons are used to quickly identify key attributes of
a tape.
Note that colors representing the different icons may not be accurately shown in print.
Icons: Tape Libraries
Media status icons include: Undiscovered Media, Media with duplicate barcodes, Media from a different library,
Appendable Media, Aged Media, and Aged Retired Media.
Media Lifecycle
The lifecycle of a tape is tracked from the time of initial discovery to the time of its logical destruction. The logical lifecycle
of a tape is different from its physical life. Tapes are logically managed whether they are inside or outside the library.
The logical management of tape media is organized into the following media groups:
1. Physical media is tracked by its location. A tape can either be inside the library or exported.
Scratch groups hold all new or recycled media. Multiple scratch groups can be used to define which tapes a job uses
when it executes. When a job requires a spare tape, the tape is pulled from a defined scratch group. The storage policy
copy data path is used to determine which scratch group the tape is selected from.
The terms Scratch Pool, Scratch Group, Spare Media Group, and Spare Media Pool are used interchangeably
throughout Commvault documentation and the CommCell® console.
All new and recycled tapes are placed in scratch groups
Once a job is written to a tape it is moved out of the scratch group and into the assigned media group
Multiple scratch groups can be created and assigned to storage policy copies. When a job for the policy copy
runs, it automatically picks a tape from the assigned scratch group
2. Scratch Media Icons. Two types of scratch media are available: Spare and Aged.
By default, a default scratch group is created when the library is initially detected. From this point additional scratch
groups can be created, and tapes can manually or automatically be assigned to the group. When jobs run, tapes are
pulled from scratch groups and used for the job. Once data is written to the tape, it is moved out of the scratch group and
into the assigned media pool.
When using multiple scratch groups, different storage policy copies define which scratch group tapes are pulled from.
Also, tapes can be manually moved to other scratch groups or automatically assigned to different groups. Assigning tapes
is based on barcode patterns or high watermark thresholds and scratch group priority. This allows different job types to be
placed on specific media, which simplifies tape management outside the library.
A storage policy copy data path includes the selection of a scratch group. By configuring multiple scratch groups and
assigning them to storage policy copies, you can determine which tapes will be used to write certain jobs. Once data
managed by the storage policy copy is written to the tape, the policy copy will own the tape until all jobs have aged. This
means no other storage policy copies can write data to the tape.
High and low watermarks are assigned to each scratch group in a tape library. Low watermarks are used to alert
administrators when the library is running out of spare tapes. The high watermark is used to limit the number of tapes
placed in a scratch group.
Low watermarks – Reports events in the Event Viewer when scratch tapes fall below the defined number. Alerts
can also be configured to alert administrators when low watermarks are reached.
High watermarks – Limits the number of tapes that will be assigned to a scratch group. This is useful when
multiple scratch groups and custom barcode pattern definitions are not being used. Scratch groups can be
assigned high, medium and low priorities. When tapes are discovered or recycled, they are placed in the high
priority scratch group until it reaches the upper watermark. The medium group is filled next followed by the low
priority group. If there are still additional tapes available, they are placed in the Default Scratch group designated
in the General tab of the library properties.
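The sketch below (not Commvault code; the group names and watermark values are hypothetical) illustrates the fill order just described: discovered or recycled tapes fill the highest priority scratch group up to its high watermark, then the medium and low priority groups, with any remainder landing in the designated default scratch group.

scratch_groups = [                                    # ordered high -> medium -> low priority
    {"name": "Default Scratch", "high_watermark": 20, "tapes": 0},
    {"name": "Auxiliary Copies", "high_watermark": 15, "tapes": 0},
    {"name": "Long Term", "high_watermark": 10, "tapes": 0},
]
default_group = scratch_groups[0]                     # group designated in the library properties

def assign_new_tapes(count):
    for _ in range(count):
        target = next((g for g in scratch_groups if g["tapes"] < g["high_watermark"]),
                      default_group)                  # overflow lands in the default scratch group
        target["tapes"] += 1

assign_new_tapes(50)
for group in scratch_groups:
    print(f"{group['name']}: {group['tapes']} tapes")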
Tip: Using multiple scratch groups to ensure available media for backup operations
You are managing a CommCell® environment and running backup jobs directly to tape each night. During the day,
you run auxiliary copy jobs to tape to be sent off site. You are concerned that auxiliary copy jobs may use too
many tapes and there will not be enough media for backup operations.
Solution: Create an additional scratch group and name it Auxiliary Copies. Set the priority to Medium. Configure
a High Watermark in the Default Scratch group to be a greater number than the number of tapes required to
perform nightly backups. Set the Default Scratch group priority to high.
In the secondary copy of each storage policy, use the Scratch Pool drop-down box to assign the Auxiliary
Copies scratch pool to the copy.
The Cleaning Media group manages all cleaning tapes for a library. Tape drives are cleaned based on drive counter
usage tracked by Commvault® software and/or sense codes reported from the library. Drive cleaning settings are
configured in the library properties under the Drive tab.
Best practice guidelines are to configure drive cleaning based on the library manufacturer's recommendations.
Commvault software should automatically detect and move cleaning tapes to the cleaning media group when the
tapes are discovered.
If cleaning tapes are incorrectly identified and moved to a scratch pool, you can manually move the tapes or use
custom barcode definitions to associate cleaning tapes with the cleaning media pool.
Low watermarks can be defined to trigger events and optional alerts when the number of spare cleaning media
reaches the low threshold.
The Retired Media group is a holding area for all tapes that have exceeded tape error thresholds or are manually marked
bad. Tapes in the Retired Media group will remain in the group until they are manually marked good or deleted. Any tapes
in the Retired Media group will NOT be written to. If a tape is in the Assigned Media group and is marked bad, it will NOT
be moved to the Retired Media group until all jobs have aged from the tape.
Only tapes that are not currently retaining job data are placed in the retired media group. If a tape is marked bad
but is currently retaining data, it will still appear in the Assigned Media group. Once all jobs have aged from the
tape it is moved to the Retired Media group.
Tape counters are tracked for the life of a tape from initial discovery to deletion.
By default, manufacturer-recommended thresholds are used for all tapes. These settings can be modified in the
Control Panel | Hardware Maintenance applet | Media Retirement tab. It is NOT recommended to increase the
threshold values.
For as long as a tape is in the Retired Media group it will NOT be written to.
Tapes can be moved out of the Retired Media group using the following methods:
Delete – Deletes the existence of the tape from the CommServe® server database. The tape can then be
rediscovered and reused. The tape is treated as a brand-new tape and all counters are reset. If there are any
existing aged jobs on the tape they will not be recoverable.
Mark Media Good – Is recommended if the tape has existing jobs that have aged but may still need to be
retained. If this is the case after marking the tape good, move it to the Overwrite Protect Media group.
Tapes should be left in the Retired Media group until they are physically disposed of. This prevents a bad tape from
accidentally being discovered and reused. If a bad tape is disposed of and is replaced with a new tape with the same
barcode, delete the tape from the Retired Media group before putting the new tape in the library.
Sometimes tapes can be incorrectly marked bad due to drive problems that result in tape errors. If there is a
sudden increase in bad tapes this may be an indication of drive problems. However, do NOT discount the
possibility that the tapes are bad. There have been situations where bulk orders of brand-new tapes are
legitimately bad. If you do not know what manufacturing, delivery, or storage methods are being used, then it is
critical to err on the side of caution.
The Foreign Media group manages all media from different CommCell® environments or tapes from a different backup
vendor.
Tapes from one CommCell® environment cannot be directly restored into another. When a tape is loaded and the
OML (On Media Label) is read, if the CommCell ID is different than the CommCell® environment reading the tape,
the tape is moved to the Foreign Media group.
Commvault software will not write to tapes when the OML header is not recognized as a Commvault header and
the tape is moved to the Foreign Media group.
The Overwrite Protect Media group logically locks down a tape, so it will NOT be written to or recycled. Tapes must be
manually moved to the Overwrite Protect Media group and remain there indefinitely until they are moved out of the group.
By default, an Overwrite Protect Media group is automatically created. Additional overwrite protect media groups can be
added.
Tapes are moved to the Overwrite Protect Media group using the following methods:
For active tapes in the Assigned Media group – Right-click on the tape and select Prevent Reuse. The tape appears in the
Assigned Media and the Overwrite Protect Media groups.
For tapes in scratch groups – Right-click on the tape and select Move. For Media Group Type select Overwrite Protect
Media group and then select the overwrite group.
Moving a tape to the Overwrite Protect Media group is just one way Commvault® software can prevent data from
being overwritten. Data can also be locked down at the job level.
Consider a job that spanned multiple tapes. Manually moving tapes to the Overwrite Protect Media group requires
you to know every tape the job was written to. The job and all tapes can also be locked down through the storage
policy copy. In the job history of the policy copy, right-click on the job and select Retain Job. You are then
prompted to select a date to hold the job, or infinitely retain the job.
The Catalog Media group is used to hold all tapes that are actively being cataloged or are marked for catalog. A catalog
operation is used to catalog job metadata from a tape and enter the metadata back into the CommServe® server
database. You can perform this operation if the CommServe server database had to be restored to a point-in-time prior to
the jobs on a tape finishing. This situation can arise in cases of disaster, database corruption, or if the CommServe server
metadata backups are not properly managed.
A tape can be individually picked for catalog or multiple tapes can be picked and marked for catalog. When tapes are
picked for catalog they are moved to the Catalog Media group.
All tapes that are actively retaining data are located in the Assigned Media group. Within a library, there is only one
assigned media group. Tapes remain in the group until ALL jobs on the tape have exceeded retention and are marked as
aged. During the data aging operation, the tape is then recycled back into a scratch pool.
Tapes in the Assigned Media group cannot be deleted. Delete is considered a non-destructive operation.
Delete Contents can be performed on a tape which is considered a destructive operation. To delete the contents
of multiple tapes, use the Shift or Ctrl keys to select multiple media. Note that this recycles the tape and the jobs
are marked aged.
2. Media icons reflect the status of the media. Several icons are available.
When a spare tape is picked for a backup job, the tape becomes associated with the storage policy copy that is managing
the job. This is important to understand because once the tape is associated with the policy copy, no jobs from other
policy copies can be written to the tape. This is done intentionally to avoid mixed retention on media. Since each storage
policy copy can have different retention configured, it is important to separate jobs based on policy copy ownership.
Since tapes are associated with storage policy copies it is important to properly configure and manage storage policies.
Tip: Consequences of having too many storage policies using tape media
Your environment has 25 storage policies. This results in at least 25 storage policy primary copies. If all primary
copies are defining tape library data paths, then at least 25 tapes must be in the library to accommodate all
potential jobs. If the storage policies also have secondary copies using a tape data path, then additional tapes
must also be present to meet the media needs of the secondary copies.
In some cases, backup configurations such as Start New Media and Mark Media Full can complicate things more.
It is important to understand your environment's needs and how Commvault software manages media.
Use global secondary copies to consolidate data from multiple storage policy secondary copies to the same tape
sets.
1. Media in Assigned Media group are associated with a specific Storage Policy Copy.
Action – Description
Verify Media – Physically verify the OML header information against CommCell® tape metadata and the barcode label.
(Except cleaning media pool)
Delete Tape – Logically delete the existence of a tape from the CommServe® server database. (Except assigned media
group)
Delete Contents – Logically delete contents by marking all jobs as aged and recycling the tape back into a scratch pool.
(Only in assigned media group)
Erase Media – Physically erase data by writing a new OML header to the tape.
Media Refresh – Refresh active jobs on existing tapes by writing the jobs to new tapes.
Exporting Media
Exporting tapes is a physical operation that sends commands to the library to eject tapes to the import/export mail slots.
To view the progress of export operations, use the Exports in Progress view in the library tree.
To view tapes in the import/export slots, use the I/E Ports view in the library tree.
Individual Export
Tapes can be individually exported from any location within the library tree.
3. Pop-up message instructing administrator to remove tape from library’s export location.
Bulk Export
Select the library | Export | Select media to export
Multiple tapes can be selected together to perform a bulk export. This is considered a library level operation, so the bulk
export is conducted by right-clicking on the library and selecting Export Media. Optionally, use the Shift or Ctrl keys to
select multiple tapes. A bulk export moves tapes until the import/export slots are full. Once tapes are removed from the
slots, the export operation continues until all tapes have been exported.
5. Click Finish.
At times, a tape must remain inside the library. To prevent a tape from being exported, either indefinitely or until a specific
date, use the Prevent Export option. The Prevent Export option is available within any media group.
3. Expiration dates can be defined for how long the tape cannot be exported.
Allowing Export
Right-click the tape | Options | Allow Export
A tape that has been previously marked to prevent export can be set by the administrator to Allow Export. This operation
overrides the previous setting specified during the Prevent Export operation.
When tapes are exported out of the library, they are logically associated with an export location. There are two types of
locations that can be defined: Stationary or Transit.
2. Provide a name for the location and the type (stationary or transit).
Tapes can be moved between certain media groups. Moving tapes between groups is a logical operation and no physical
media movement will be conducted in the library. Moving tapes provides the administrator with a great deal of flexibility
regarding media management.
The Move operation moves tapes between Scratch, Cleaning, Overwrite Protect, Foreign and Catalog media
groups.
Tapes can be moved to the Retired Media group by manually marking the tape bad. Tapes can be moved out of
the retired media pool by using the option Mark Media Good or by deleting and rediscovering the tape.
Tapes moved to the Overwrite Protect Media group will remain there indefinitely until they are manually moved
out of the group.
Tapes cannot be manually moved to the Assigned Media group. Tapes can logically be moved to the Overwrite
Protect Media group by selecting Prevent Reuse. The tape will exist in both the Assigned Media group and the
Overwrite Protect Media group until all jobs age from the tape, at which point it only exists in the Overwrite Protect
Media group.
When a tape is cataloged or marked for catalog, it logically appears in the Catalog Media group.
There are two methods for using the Move operation:
Specific tapes can be moved between media groups by selecting a tape or multiple tapes (using the Shift or Ctrl
keys). After selecting the tapes to move, right-click and select the Move option. Specify the media group type and
the specific destination pool to move the tapes to.
Groups of tapes can also be moved between scratch pools by right-clicking on the scratch pool and selecting
Move. This method allows you to specify how many tapes to move but it does not allow you to specify which
tapes will be moved.
2. Select the group type and group to move the tape and then click OK.
Deleting Tapes
Right-click the tape | All Tasks | Delete
A Delete operation is a logical operation that deletes the existence of the tape from the CommCell® environment and
CommServe® server database. The delete operation does not physically erase any data and the tape does not need to be
in the library when it is deleted.
This is also considered a non-destructive operation which means a tape can only be deleted if no active jobs are currently
retained on the tape. This means that the Delete option will not be available for tapes in the Assigned Media group.
When a tape is deleted, the tape records are removed from the CommServe server database. This means data
on the tape would not be recoverable. If data on a deleted tape needs to be recovered, the Media Explorer tool is
used to catalog and recover data.
Since deleting a tape is a logical operation, there will be no tape movement in the library and the tape does not
need to be available.
If a tape has been permanently lost or destroyed and a replacement tape with the same barcode label will replace
the original, delete the original tape before adding the replacement tape to the library. Note that within a
CommCell environment, duplicate barcode patterns cannot exist or it will result in an error.
1. Right-click tape | All Tasks | Delete. Note this cannot be performed on active tapes.
Delete Contents is a logical operation that automatically marks all jobs on a tape as aged. During this operation, a
confirmation message appears, which requires the administrator to type 'erase and reuse media.' The administrator is
then prompted to select a scratch group to which the tape will be moved. Data on the tape is logically marked aged, so
the data can still be recovered, up to the point where the tape is mounted and the OML header is overwritten.
The most common situation where the Delete Contents operation is used is when there is not enough spare media to run
scheduled jobs. This typically happens when storage policies are improperly configured, or retention expectations are
unrealistic compared to the capacity to store data. If an administrator frequently uses the Delete Contents option to free
up tapes for jobs, consider readdressing environment configurations or purchasing more tapes. Manual operations such
as Delete Contents can accidentally delete critical data.
1. In the Assigned Media group | Right-click a tape | All Tasks | Delete Contents.
Tip: Delete a tape's contents to allow a restore operation to complete when the tape is missing.
An interesting use for the Delete Contents option is when a tape containing an incremental backup job is bad or
missing at the time of a recovery operation. In the case of a restore, the full backup and all incremental backup
jobs will need to be available for the restore to complete.
The Commvault® software prompts the administrator to put the tape in the library. If it is known that the tape is
permanently unavailable, use the Delete Contents option to erase the records of the jobs. When the restore is
run, the jobs on the unavailable tape are skipped and the restore operation completes successfully.
Erasing Tapes
Erase Media is a physical operation that mounts the tape and overwrites the OML header. Once the header is overwritten
data cannot be recovered using any method Commvault® software provides. This is considered a destructive operation,
so it cannot be performed on any tapes where jobs are actively being retained. The option to erase media is available in
all logical groups except the Assigned Media group.
A tape can be erased on demand by right-clicking on the tape and selecting Erase Media. This action loads the tape
in a drive and overwrites the header, physically preventing data from being recovered through Commvault® software.
Right-click the storage policy copy | Media tab | enable Mark Media to be Erased After Recycling
When tapes are recycled, they can automatically be marked to be erased. This is done by selecting the Erase Media
checkbox in the Media tab of a storage policy copy. An erase media operation must then be scheduled for the library,
which physically loads each marked tape and overwrites the OML header.
The option, Mark Media to be Erased After Recycling in the Media tab of the storage policy copy, marks all tapes
managed by the policy to be erased once all jobs have aged. A schedule must then be set up at the tape library to erase
the media. The erase media operation mounts the tape and writes a new OML header to the tape. This makes the data
completely unrecoverable through the CommCell® Console, Media Explorer, or through a tape catalog operation. It is
important to note that the data on the tape is not actually erased.
Note: The only guarantee of preventing data from being recovered is to physically destroy the media. Even with the erase
media and data encryption, there is no 100% guarantee that data cannot be recovered.
When the Mark Media to be Erased option is enabled for storage policy copies, erase media operations must be
scheduled for any library where media is marked for erasing.
2. Set the erase operation to be quick (overwrite OML header) or full (overwrite entire tape).
4. …saved as a script.
Verifying Media
Right-click the tape | All Tasks | Verify Media
A Media verification operation is conducted to physically verify the barcode label, verify that the OML header matches,
and that the tape belongs to the CommCell® environment. Since this is a physical operation the tape is mounted in a drive
and a job appears in the job controller. When the job is initiated, the operation type is Media Inventory.
The Media verification operation is commonly used for the following tasks:
To ensure that the barcode label and the OML header properly match
To identify potentially damaged tapes
To identify tapes that may belong to a different CommCell environment
To identify tapes that may have an OML header written by different backup software
To verify a tape
2. Verification is a physical operation which will load the tape to conduct the verification operation.
A tape in the assigned media pool can manually be marked full to prevent additional data from writing to the tape.
Mark media full is most commonly used for the following situations:
The Mark Media Full action presents a manual method to mark a tape full. Tapes can also be set to be
automatically marked full. This is done through the Advanced backup or auxiliary copy options. This automatically
marks the tape full after the job completes. The mark media full option is best used when jobs need to be
isolated on tapes, such as when extended retention rules are applied.
A tape can be marked bad from any group within the Media by Groups tree. Once a tape is marked bad, it is no longer
used for future backup operations.
It is important to note that depending on the state of a tape when it is marked bad, it appears differently within the
CommCell® console:
Bad tape with jobs currently being retained – The tape is marked bad and no future jobs are written to the media. The
tape remains in the Assigned Media group, displaying a box icon with a red outline.
Bad tape with no jobs currently being retained (no jobs on media or all jobs exceeded retention) – The tape is marked
bad and no future jobs are written to the media. The tape is moved into the Retired Media group.
There may be certain situations where a tape is improperly marked bad. The most common problem is due to drive errors.
1. In the Retired Media group | Right-click the tape | Options | Mark Media Good.
Viewing Contents
Click the Assigned Media Pool | Right-click the tape| View | View Contents
Job contents and details can be viewed for individual tapes. To view the contents of a job, right-click on the tape and
select View Contents. The view contents option only appears if jobs are on the tape.
2. Jobs on the tape will be listed. Jobs in black font are active and jobs in gray font are aged.
Jobs on the tape that are actively being retained will appear in black print
Jobs on the tape that have exceeded retention will appear in gray print
The view contents option will appear for any tape that has active or aged jobs
One of the most common issues that may arise regarding media management is when tapes are not properly
recycling. Using the View Contents of a tape lets the administrator view which jobs are causing the tape not to
recycle. This information can then be used to track down problems that are causing the jobs to remain active
within the environment.
Another method to assist in predicting when tapes will recycle is using the Data Retention Forecast and
Compliance report. This report lists all tapes, their expected aging date, and specific reasons why the tapes have
not aged. Each of the reasons are hyperlinks to the Commvault documentation site which will provide more
details on the explanations.
The Media Refresh option is a powerful tool used to write job data from existing tapes to a new destination tape. Tapes
can be manually picked for refresh operations or automatic refresh operations can be triggered based on predefined
criteria.
When specific jobs are currently being retained on the tape, but all other jobs have aged. This situation results in
inefficient use of tape resources since a tape does not recycle until all jobs are marked aged. The media refresh
operation copies all active jobs to a new tape and the old tapes are recycled and moved back into a scratch pool.
When upgrading to new tape libraries, the media refresh can copy existing jobs from an old tape format to a new
format such as LTO4 to LTO5. For this method to be used, the tape library must be capable of reading the old
tapes.
When data is required to be periodically rewritten to new tapes to better ensure the integrity of the data. Data
integrity on tape is especially important when job data is being retained for long periods of time.
Media refresh is a physical operation and requires a source and destination drive. It is best to run media refresh
operations when backup and auxiliary copy jobs are not running.
The concept that a storage policy copy manages all jobs associated with it is still followed for refresh operations. If
multiple storage policy copies require refresh operations, separate schedules must be configured for each policy.
The scheduler is based on the auxiliary copy scheduler. This means that when refresh schedules are configured,
they can be configured to refresh ALL storage policy copies that have been enabled for refresh or just specific
copies.
To determine which tapes are required for a media refresh operation, use the Media Prediction report. In the
General tab, specify the Media Refreshing criteria.
For media refresh to work, the following must be done:
Ensure Media Refresh is enabled for the storage policy copy.
Media to be refreshed must be marked full.
Define criteria through a storage policy copy.
The following threshold can be set to trigger tapes to be selected for media refresh:
Number of months after the media was written – set this threshold when data integrity on tapes is critical. Once
the tape reaches a certain age from the last write, it will be picked for media refresh.
Number of months before the media is aged.
When tape capacity used is below a specified percentage – This is an optional check box and is used along
with the first two settings. By setting the capacity limit, this could potentially reduce the total amount of data that is
written to new media. This is best used to maximize tape efficiency by writing jobs on tapes not fully utilized to
new tapes, so old tapes can recycle.
When media retirement thresholds are exceeded – This is useful to write jobs from a bad tape to a good tape.
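The following sketch (not Commvault code) combines the refresh criteria above into a simple selection rule; the threshold values and the tape attributes are hypothetical, and the way the criteria are combined is a simplification.

REFRESH_THRESHOLDS = {
    "months_since_written": 24,     # refresh tapes last written more than 2 years ago
    "months_before_aged": 6,        # refresh tapes whose remaining jobs age within 6 months
    "max_capacity_used_pct": 40,    # optional: only consider poorly utilized tapes
    "use_retirement_thresholds": True,
}

def pick_for_refresh(tape):
    age_hit = (tape["months_since_written"] >= REFRESH_THRESHOLDS["months_since_written"]
               or tape["months_before_aged"] <= REFRESH_THRESHOLDS["months_before_aged"])
    capacity_hit = tape["capacity_used_pct"] <= REFRESH_THRESHOLDS["max_capacity_used_pct"]
    retirement_hit = (REFRESH_THRESHOLDS["use_retirement_thresholds"]
                      and tape["retirement_thresholds_exceeded"])
    return (age_hit and capacity_hit) or retirement_hit

tape = {"months_since_written": 30, "months_before_aged": 12,
        "capacity_used_pct": 25, "retirement_thresholds_exceeded": False}
print(pick_for_refresh(tape))       # True: an old, lightly used tape is picked for refresh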
Click the Assigned Media group | Right-click the tape | Refresh | Pick for Refresh
Tapes are manually picked for refresh through the assigned media pool. Multiple tapes can be selected using the Shift or
Ctrl keys. Manually picking tapes for refresh is useful in situations where tapes need to be recycled for new jobs (and only
a few jobs on tape have not aged).
1. Right-click on a tape in the Assigned Media group | Refresh | Pick for Refresh.
The media refresh feature follows the rule that a storage policy copy manages all data associated with it. This means that
the scheduling for refresh operations is conducted at the storage policy level. The media refresh schedule is based on the
auxiliary copy schedule, so the same options found in auxiliary copies are available for media refresh.
The media refresh feature is a capability introduced in the Commvault v9 product. Prior to this feature, auxiliary
copy jobs were used to write data from old tapes to new tapes. In situations where a tape library is being
upgraded and the new library is NOT capable of reading old tape formats, the auxiliary copy method can still be
used. This is implemented the same way as a normal auxiliary copy.
4. …saved as a script.
The Tape to Tape Copy option manually copies the content of a tape onto a new tape from the scratch media pool. This is
useful when a tape is suspected to be defective and is required to be copied on a new tape. The advantage of using the
Tape to Tape Copy option instead of traditional media refresh is that there is no need to enable any media refresh setting
on the storage policy copy.
The MediaAgent used to execute the Tape to Tape Copy job is required to be Windows-based.
1. Click Assigned Media | Right-click the media | Options | Tape to Tape Copy.
Vault Tracker®
Vault Tracker is used to track media that are removed from a library and stored in offsite locations. This technology
identifies media associated with storage policy copies. Vault Tracker also uses the retention and media management
settings to identify when tapes are required for export or due back from export location.
Identifies media that must be sent off-site for storage or brought back from off-site locations.
Automatically moves the media in sequence in the library and provides a pick-up list for the operators.
Can use Virtual Slots for easier removal of media from a library with a limited number of I/E ports.
Facility to identify and track media during transit.
Facility to record and track containers when the media is stored and moved using containers.
Facility to record and track the movement of non-CommCell or Foreign Media.
Exported media monitored by a Vault Tracker job is reported in one of several states, as described below:
Departure and arrival of media for a library are automatically recorded by the library's slot inventory mechanism. Therefore, when the 'Auto Acknowledge' option is selected, media removed from a library source is automatically assumed to be in transit (if a transit location is specified) or at the specified export location.
Media departing an export location can be automatically or manually flagged as removed from that location. It can be marked 'Picked Up' to place it 'In Transit' (if a transit location is specified) or, optimistically, at its destination (new source). 'In Transit' media, or exported media that has not been marked 'Picked Up' because you want to confirm its arrival at its intended export location (new source), can be marked as 'Reached Destination.'
Individual media in transit between two export locations that, for one reason or another, must be returned to the originating source can be marked 'Return to Source.' Alternatively, a Vault Tracker job that moved media between two export locations can be 'rolled back' to indicate the return of all of its media to the source location.
Exported media that can be re-used as spare media can be tracked/scheduled with a 'Due Back' tracking policy. A Due
Back tracking policy essentially creates a Vault Tracker job consisting of media at an export location that are in a 'Spare'
state.
If a data protection operation requests an exported media (depending on the options selected for the library in the Media
tab of the Library Properties dialog box), one of the following operations will be performed:
Automatically mark the exported media as 'full,' and use a new media for the data protection operation.
The job will remain in the 'waiting' state, with the 'Reason for job delay' stating that the media is outside the
library.
However, if a restore or auxiliary copy job requires data from an exported media, the system prompts you to import the
media to complete the operation. Information about exported media is retained in the system; they do not have to be
rediscovered if they are re-imported.
Define Locations
Locations are defined to track media throughout their lifecycle. A location is defined as either 'transit' or 'stationary.' Multiple transit and stationary locations provide greater flexibility and accuracy in tracking tape media. This is an important aspect of data lifecycle management.
Knowing where your data is and when it's time to recall the media for recycling is key for data retention and destruction
policies.
Containers
Containers are used to logically group exported media into a single addressable unit. This reduces the number of objects to track and facilitates moving a set of media into, or out of, third-party off-site storage when a cost is assigned to each unit. For example, ten media placed in a single container for storage incur a single unit cost in both directions.
Containers can be pre-defined by the user or dynamically defined by a Vault Tracker Job.
Each container reports the volume of data it contains.
Exported Media Properties reports which container the media is in.
The contents of each container can be viewed.
List Media identifies the required media, container, and location.
Automatically created from a Tracking Policy, using an automated mechanism to generate the container names. If this is established, a container is automatically created whenever the Tracking Policy is run, and the location for all media associated with the job is automatically set to the associated container.
Manually created from the CommCell® browser so you can define a name and specify details about the container.
Once the container is created, you can 'Move,' 'Remove,' 'Delete,' and 'Recall' all media associated with that container by
right-clicking on the container in the CommCell browser and selecting the option. You can also view/edit the container's
Properties.
For media that are not automatically associated with a container, there are several ways of associating the container information. When you record the 'Pending Action' as 'Reached Destination,' you are provided with the option to set a container if necessary. You can also set the container for any 'Pending Action' displayed in the CommCell® browser when you click the 'Actions' icon under the Vault Tracker entity.
Containers can also be associated with media displayed in the 'Exported Media' pool as follows:
Individually associate a container for a specific media by opening 'Media Properties' and selecting a container as
the location for the media.
Set the container for all media in the 'Exported Media' pool by right-clicking the 'Exported Media' icon displayed under 'Media By Location' and then clicking the 'Set Container' option.
Opening the library to remove multiple media manually may be quicker, but it requires a visual search of all the slots to
find the appropriate media. Virtual Mail Slots alleviate the need to visually search the library for each media by designating
a static block of slots within the library to hold the media awaiting export.
The Virtual Mail Slot feature is enabled on the Library's Media Properties page. The administrator enters the starting slot
number, number of slots, and the direction (Up or Down) to use.
3. Specify if a media in a virtual mail slot can be used for a data protection job when space is available.
There are two types of tracking policies that are created using the Vault Tracker Enterprise Policy Wizard:
An Export Media policy is created to track the movement of media between any source and any destination, including:
Library to location
Location to location
Location to library
Library to library
If the tracking policy is used to export media from a tape library, it also moves the media to the I/E ports or virtual mail slots. To create a new tracking policy, right-click Vault Tracker Policies and select New Tracking Policy. This starts the wizard to create your scheduled tracking policy.
The first screen requires a policy name and offers several options including Auto Acknowledge.
Auto acknowledge
When creating a Tracking Policy, you can enable the Auto Acknowledge option within the policy. When the policy runs, the system automatically acknowledges all pending actions associated with the media movement if both the source and destination are locations. If the source is a library, the pending actions are automatically acknowledged when the media is removed from the library.
It is also possible to delay the export of the media, which creates a time gap between when the policy is run and when the
media movement actions start. This provides time for you to create and review a Vault Tracking Report with all the policy's
actions detailed prior to any actions taking place. If any changes to the policy are required, you can perform these
changes without aborting or rerunning an action.
Temporarily stopping an action by placing the export operation in a pending state for a specified amount of time is yet
another option. This is useful if you need to perform library maintenance or urgent data protection and recovery
operations, but do not wish to abort or restart what has already been completed. You can manually restart the policy from
the point it left off when the library is available, or you can set the policy to resume automatically at a specific time. When
you use the 'Auto Acknowledge' option to export tape media from a library, it is recommended to use the 'Delay the export
by nn minutes' option to delay moving the media to the mail slots. With Auto Acknowledge, if somebody physically removes the tapes from the library after the Export Media tracking policy completes but before the tracking report is generated and sent to the tape operator, the tapes are already marked as 'in vault,' resulting in an empty report. For instance, the export can be delayed by ten minutes, which ensures that the report is generated before tapes are moved to the mail slots.
Scheduling example using the Auto acknowledge option with a delay export of 10 minutes
The Export Media Policy is scheduled to start at 9:00 a.m. with a 10-minute export delay. The Tracking report that lists media to send to the vault is scheduled to start at 9:05. This setup results in the following sequence (a small sketch validating the ordering follows the timeline):
9:00 – The Export Media tracking policy starts, which creates the list of all tapes required to be sent to the vault; because the export is delayed, tapes are not yet moved to the I/E ports.
9:05 – The Tracking report is generated, listing all media required to go to the vault, and is sent to the tape operators by email.
9:10 – The export delay is reached and the media are physically moved to the I/E ports.
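A minimal Python sketch of this ordering check, using hypothetical schedule times rather than anything read from the CommServe database:

# Minimal sketch (not a Commvault API): verifies that the tracking report is
# scheduled inside the window created by the 'Delay the export by nn minutes'
# option, so the report is generated before tapes reach the mail slots.
from datetime import datetime, timedelta

policy_start = datetime(2020, 1, 6, 9, 0)    # Export Media tracking policy runs
export_delay = timedelta(minutes=10)         # delay before the physical export
report_start = datetime(2020, 1, 6, 9, 5)    # Tracking report schedule

media_moved_at = policy_start + export_delay

if policy_start <= report_start < media_moved_at:
    print("OK: report runs at", report_start.time(),
          "before media move at", media_moved_at.time())
else:
    print("Warning: the report would run after media are already in the mail slots,")
    print("so an auto-acknowledged export could produce an empty report.")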
Additional screens include several other options, such as storage policy copies. Storage policy copies must be selected to be included and managed by the tracking policy. This can be useful for scheduling purposes; for example, one tracking policy could export tapes containing production data daily, and another could export tapes containing development data once a week.
The media status to export can also be configured. By default, only full media are exported. If tape media is your primary recovery method in the case of a disaster, leaving partially filled tapes in the library could lead to data loss if the library is destroyed. In this case, it is recommended to also include Active media, so that partially filled media are exported when the job runs.
By default, when exporting media, a limit of 10 media is defined for the export process. If you want to export all media
meeting your criteria and avoid any data loss, it is recommended to uncheck the limit box.
4. Check and set a number in minutes to delay the physical export of media.
8. Select the source location for the export, such as a tape library, a location or a repository. If it is a library, tapes
will be moved to the I/E ports.
10. Uncheck if you don’t want to limit the number of media and export all media meeting criteria.
A Due Back policy can be created to track the movement of spare media stored in an off-site location. When all jobs stored on an off-site media expire, the media becomes spare media that can be recalled and reused. The Due Back policy creates the list of anticipated media to recall. If the 'auto acknowledge' option is selected, as soon as the media is inserted in the I/E ports, it is imported into the library and the status is automatically changed from 'In Vault' to 'In Library.'
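The selection logic behind a Due Back policy can be pictured with a short Python sketch; the media records and field names here are hypothetical illustrations, not a Commvault API.

# Minimal sketch: an exported tape becomes 'spare' and due back for reuse only
# when every job it holds has aged (hypothetical records and field names).
exported_media = [
    {"barcode": "B00010", "location": "OffsiteVault", "jobs_retained": 0},
    {"barcode": "B00011", "location": "OffsiteVault", "jobs_retained": 3},
    {"barcode": "B00012", "location": "OffsiteVault", "jobs_retained": 0},
]

due_back = [m["barcode"] for m in exported_media if m["jobs_retained"] == 0]
print("Due back for reuse:", due_back)   # ['B00010', 'B00012']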
When creating a policy, you can schedule the policy to run later if you anticipate changes being made to the policy prior to
media movement taking place. Policies can be scheduled to run daily, weekly, etc.
6. Leave unchecked to recall all spare media, or check and set a number to limit the number of tapes to recall.
7. Define the destination where recalled tapes will be stored, such as a specific location, a tape library, a container
or a shelf.
The Export Media report is used to validate that all media required for vaulting have been moved to the I/E ports. It can also be used to prepare the paperwork for the vaulting company if required. Reports can be formatted in an 'Iron Mountain' ready format and sent to the vaulting company's FTP site if supported. The vaulting company therefore has a list of expected media and, in some cases, will pre-fill the paperwork, requiring just a signature on pick-up.
3. From the Source Location tab, select the source location to include in the report.
4. From the Destination Location tab, select the destination where media are sent.
5. From the Output tab, select the report format to use, such as HTML, Text delimited, PDF or ‘Iron Mountain’
format.
6. A copy of the report can be saved on local disks, network share or sent to an FTP site, such as the vaulting
company’s FTP site.
The Due Back Media report lists all media stationed in the vault for which all jobs have expired, so they can be recalled and used as spare media. Like the Export Media report, the Due Back Media report can be formatted in an 'Iron Mountain' ready format and sent to the vaulting company's FTP site if supported. The vaulting company therefore has a list of media to deliver and, in some cases, will pre-fill the paperwork, requiring just a signature on delivery.
Running a Due Back Media report requires a Vault Tracker® Enterprise license. If you are licensed for another Vault Tracker edition, this report is not included. In that case, to obtain a list of tapes to recall from the vault, simply click the location in the CommCell® browser and sort the media by 'return date.' All media with a return date listed as 'Now' can be recalled.
2. All media information is listed including the return date. Any tape listing ‘Now’ as a return date can be recalled.
2. Select Due back for reuse to generate a report listing all tapes that expired and that can be recalled.
6. Select the report format to use, such as HTML, Text delimited, PDF or ‘Iron Mountain’ format.
7. A copy of the report can be saved on local disks, network share or sent to an FTP site, such as the vaulting
company’s FTP site.
One of the more difficult concepts for backup administrators transitioning from legacy backup products to Commvault® software is that a server is not directed to a storage policy; the subclient data located on the server is.
This is achieved by defining what data a subclient manages. For most file systems and applications, a default subclient is
automatically generated. For these agents, the default subclient protects all data the agent is responsible for. Additional
subclients can be created to meet performance, management and special protection requirements.
The storage policy the subclient data is associated with determines the data path. The path is used to move data from the
source location to protected storage. All active subclients must be associated with a storage policy.
MediaAgent
Library
Drive pool (tape library)
Scratch pool (tape library)
MediaAgent
MediaAgents are the workhorses that move data from production servers to the backup environment. They supply the
processing power to receive data, arrange it in chunk format, and send it to the library. MediaAgents can also be
responsible for encryption, compression, or deduplication processing.
Library
Libraries are logically defined and are categorized as stationary or removable media libraries. Stationary libraries define a
path to a disk location such as a drive letter or UNC path. They are considered stationary since these paths do not change
once defined. Removable media libraries are generally thought of as tape libraries, but they can also be magnetic optical
or USB storage devices.
Drive pools are a MediaAgent's view of allocated drives within a tape library. Use of drive pools gives the MediaAgent the
flexibility of drive choice and usage within a library. Without drive pools, assigning and sending a data protection job to a
specific drive would fail if the drive was broken or offline. Having a pool of drives to choose from gives the job the best
chance of success. It also isolates resources of different technologies (i.e., LTO6 and LTO7 drives), which allows an
administrator to easily direct specific jobs to the set of drives, and with scratch pool definition, a different set of tapes.
Scratch pools allow new and re-usable media within the library to be logically grouped based on media type and intended
usage. At least one default scratch pool exists for every tape library. Master drive pools can be assigned their own default
scratch pools. Additional user-defined scratch pools can be created, with media assigned to them manually or automatically, and then assigned to a storage policy copy's data path.
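To summarize how these pieces fit together, the following Python sketch models the subclient-to-storage-policy-to-data-path relationship. All names (MA01, DiskLib01, and so on) are hypothetical illustrations, not Commvault objects or APIs.

# Illustrative data model only: a subclient points at a storage policy, and
# each storage policy copy defines a data path built from a MediaAgent, a
# library and, for tape, a drive pool and scratch pool.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataPath:
    media_agent: str
    library: str
    drive_pool: Optional[str] = None     # tape libraries only
    scratch_pool: Optional[str] = None   # tape libraries only

@dataclass
class StoragePolicyCopy:
    name: str
    retention_days: int
    retention_cycles: int
    data_path: DataPath

# One hypothetical policy, two copies: a disk primary and a tape secondary.
fs_policy = {
    "Primary":  StoragePolicyCopy("Primary", 30, 2,
                    DataPath("MA01", "DiskLib01")),
    "TapeCopy": StoragePolicyCopy("TapeCopy", 365, 4,
                    DataPath("MA02", "TapeLib01", "DrivePool_LTO8", "Default Scratch")),
}

# A subclient is associated with the policy, not the server as a whole.
subclient = {"client": "FS01", "subclient": "HomeFolders", "storage_policy": fs_policy}
print(subclient["subclient"], "->", subclient["storage_policy"]["Primary"].data_path.library)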
Retention is defined in the Retention tab of the storage policy copy. Each copy has its own retention configurations. This
allows subclient data to be managed in multiple locations, each with their own retention settings.
Storage policies also provide granularity to the point where a single server hosting different sets of data with different protection requirements can easily be broken into multiple subclients associated with different storage policies. Each storage policy has its own settings, such as retention and copies.
A global deduplication policy is not a storage policy, but it allows data from multiple storage policy copies to be combined
into a single deduplicated disk storage structure. The global deduplication policy defines the location of the Deduplication
Database (DDB) partitions and the disk library that contains the deduplication store.
A global secondary copy breaks the standard requirement that all data owned by a storage policy copy is placed in
isolation on tape media. With a global secondary copy, you can assign data from multiple tape policy copies. This
provides for more efficient tape media management, especially when consolidating data to tape from multiple remote
sites, which may be configured with individual storage policies.
Primary snap copy (used only with IntelliSnap® feature, block-level and VSA Application Aware backups)
Primary backup copy, also known as primary classic
Secondary synchronous copy
Secondary selective copy
Secondary snap copy
Primary Copy
A storage policy primary copy sets the primary rules for protected data. Each storage policy can have two primary copies:
Primary snap copy – manages protected data using the Commvault IntelliSnap® feature, any agents configured
to run block-level backups or the Virtual Server Agent (VSA) using Application Aware backups.
Primary classic copy – manages traditional agent-based data protection jobs. Most rules defined during the policy creation process can be modified after the policy has been created.
Secondary Copies
Secondary Synchronous
Secondary Selective
Secondary snap copy
Synchronous Copy
A synchronous copy defines a secondary copy to synchronize protected data with a source copy. All valid data (jobs that
completed successfully) written to the source copy are copied to the synchronous copy via an update process called an
auxiliary copy operation. This means that all full, incremental, differential, transaction log, or archive jobs from a source
copy are also managed by the synchronous copy. Synchronous copies are useful when you want all protected data within a cycle available for restore to a consistent point in time.
Provides consistent point-in-time copies of data required to restore data to a specific point-in-time within a cycle.
Provides copies that are required to be sent off-site daily.
Provides the ability to restore multiple versions of an object from a secondary copy within a cycle.
Selective Copy
A selective copy allows automatic selection of specific full backups or manual selection of any backup for additional
protection. Selective copy options allow the time-based automatic selection of 'all,' 'weekly,' 'monthly,' 'quarterly,' 'half-
year,' and/or 'yearly full.'
Advanced options allow you to generate selective copies based on a frequency of 'number of cycles,' 'days,' 'weeks,' or
'months.' You can also choose the 'Do Not Automatically Select Jobs' option which allows you to use auxiliary copy
schedules to determine when copies of full backups are made.
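The following Python sketch illustrates the idea of automatic selection, using a hypothetical job list and a 'monthly full' rule that keeps the first full backup of each month; it is a sketch of the concept, not Commvault's implementation.

# Illustrative sketch: select the first full backup of each month from a list
# of hypothetical source-copy jobs, mimicking a 'monthly full' selective copy.
from datetime import date

source_jobs = [
    {"job_id": 101, "type": "Full",        "date": date(2020, 1, 3)},
    {"job_id": 110, "type": "Incremental", "date": date(2020, 1, 10)},
    {"job_id": 118, "type": "Full",        "date": date(2020, 1, 17)},
    {"job_id": 130, "type": "Full",        "date": date(2020, 2, 7)},
]

selected = {}
for job in sorted(source_jobs, key=lambda j: j["date"]):
    if job["type"] != "Full":
        continue                      # automatic selection considers full backups only
    period = (job["date"].year, job["date"].month)
    selected.setdefault(period, job)  # keep the first full of each period

print([j["job_id"] for j in selected.values()])   # [101, 130]

In practice the selection is governed by the options configured on the selective copy itself, not by custom code.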
For certain array vendors, Commvault® software supports secondary snap copies for managing clone, mirror, and vault
copies. For NetApp arrays, multiple mirror and vault copies are created within a storage policy. For other vendors
including EMC and HDS, an additional secondary snap copy is created to manage clone copies.
Subclient Association
Subclient Properties
To protect a subclient, it must be associated with a storage policy. During an agent install, a storage policy is selected for the default subclient. The policy that manages the subclient is configured in the Storage Device tab – Data Storage Policy sub-tab. Use the storage policy drop-down box to associate the subclient with a policy.
All subclients for a specific storage policy can be associated with another policy in the Associated Subclients tab of the Storage Policy Properties. You can select 'Re-Associate All' to change them all, or select specific subclients and choose 'Re-Associate' to associate them with a new policy.
If subclient associations are made for more than one storage policy, you can use the Subclient Associations option by
expanding Policies, right-clicking on Storage Policies and selecting Subclient Associations. The window displays all
subclients for the CommCell® environment.
4. Select the storage policy from the drop-down list and click Apply.
Storage policies are used for CommServe® Disaster Recovery Backups or standard data protection. CommServe disaster
recovery storage policies are only used for protecting the CommServe® server metadata database, the CommServe
registry, configuration files, and specified log files. No standard data can be protected by a CommServe DR policy.
Standard data protection policies are used for protecting all production data within an environment.
The name of the storage policy is defined at the time of creation and can later be modified in the Storage Policy Properties. The name should be descriptive and reflect what is being protected.
Create a Storage Policy with Primary Copy using a Global Deduplication Policy
When the primary copy of the storage policy is using Commvault® deduplication, it must be defined as a deduplicated
storage policy during the creation process.
10. Define the retention for the data of the primary copy.
To create a non-deduplicated storage policy, such as one writing to tape, do not enable deduplication and simply select the library.
1. Expand Storage Policies | Right-click the storage policy | Choose Create New Copy.
2. Give a name to the storage policy copy.
3. Define the tape library and MediaAgent that will be used for the copy.
If the secondary copy is sent to the cloud or to a remote site disk library, it can be deduplicated. The global deduplication
policy must first be created. Then, create the secondary copy and select the global deduplication policy from the list.
1. Expand Storage Policies | Right-click the storage policy | Choose Create New Copy.
Synchronous = unchecked
Selective = checked
6. Define if all Primary Copy backups will be copied or give a starting point.
Device Streams
Device streams determine how many concurrent write operations are performed to the library. Increase device streams to
allow for more concurrent job streams to write if adequate resources are available.
Stream Randomization
When conducting auxiliary copy jobs, stream randomization improves performance by randomly selecting source streams
during the auxiliary copy job. This results in a more balanced stream of data when multi-streaming auxiliary copy jobs.
For example, a large Exchange database may generate a single 4TB stream, and running an auxiliary copy of that single stream can take a significant amount of time. Breaking it down into smaller streams as it is received helps the subsequent auxiliary copy process leverage multi-streaming, so more than one stream is copied concurrently.
It is recommended to use this feature when the client is writing to a dedicated library where all resources are exclusive to
the client, such as in client/MediaAgent LAN-Free backups.
Multiplexing
Primary Copy
When writing multiple streams to a tape library, multiplexing combines multiple job streams into a single device stream to improve write performance. Multiplexing improves backup performance but can have a negative effect on restore performance.
Consult with the Commvault Online Documentation for more information on the proper settings for multiplexing.
When writing the primary copy to a disk library, there are no advantages in enabling multiplexing. The disk library already
receives multiple streams concurrently from subclients, and if available, leverages multiple mount paths. Unless using a
tape library, multiplexing should not be used.
Secondary Copy
If the source location is a disk library with multiple mount paths, this option can be used to improve read performance from
the disks when using the 'Combine to Streams' option.
Combine to Streams
A storage policy is configured to allow the use of multiple streams for primary copy backup. Multi-streaming of backup
data is done to improve backup performance. Normally, each stream used for the primary copy requires a corresponding
stream on each secondary copy.
In the case of tape media for a secondary copy, multi-stream storage policies consume multiple media. The 'Combine to
streams' option is used to consolidate multiple streams from source data on to fewer media when secondary copies are
run. This allows for better media management and the grouping of like data onto media for storage.
You back up a home folders subclient to a disk library using three streams to maximize performance. The total size of protected data is 600GB. You want to consolidate those three streams onto a single 800GB-capacity tape for off-site storage.
Solution: By creating a secondary copy and setting the 'Combine to streams' setting to 1 you will serially place
each stream onto the media.
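A quick back-of-the-envelope sketch of this example in Python (the stream sizes and tape capacity are the hypothetical figures from the scenario above):

# Minimal sketch of the example above: three 200 GB source streams combined to
# a single device stream fit on one 800 GB tape, written serially.
stream_sizes_gb = [200, 200, 200]     # three primary-copy streams, 600 GB total
tape_capacity_gb = 800
combine_to_streams = 1                # one device stream on the secondary copy

total_gb = sum(stream_sizes_gb)
tapes_needed_combined = -(-total_gb // tape_capacity_gb)   # ceiling division -> 1 tape
tapes_needed_uncombined = len(stream_sizes_gb)             # one tape per stream -> 3 tapes

print(f"Combined to {combine_to_streams} stream: {tapes_needed_combined} tape(s)")
print(f"Without combining: up to {tapes_needed_uncombined} tape(s), but streams restore in parallel")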
In some cases, using the 'Combine to streams' option may not be the best method to manage data. Multi-streaming
backup data is done to improve performance. When those streams are consolidated to the same media set, they can only
be recovered in a single stream operation. Though combining to streams has a media consolidation benefit, it will have a
negative effect on the restore performance.
Another reason not to use the 'Combine to streams' option is for multi-streamed backups of SQL, DB2, and Sybase
subclients. When these agents use a single subclient with multi-streaming enabled, the streams must be restored in the
same sequence they were backed up in. If the streams are combined to the same tape, they must be pre-staged to disk
before they can be recovered. In this case, not enabling 'Combine to streams' and placing each stream on separate media
bypasses the pre-staging of the data and allows multiple streams to be restored concurrently, making the restore process
considerably faster. Note that this only applies to subclients that have been multi-streamed. If multiple subclients have
been single-streamed and combined to media, they will NOT have to be pre-staged prior to recovery.
1. Expand Storage Policies | Right-click the storage policy secondary tape copy | Properties.
2. Define the number of device streams, or tape drives, source streams are combined to.
An auxiliary copy operation can be scheduled, run on demand, saved as a script, or configured as an automatic copy.
There are several options to choose from when configuring Auxiliary copy operations:
2. Run the auxiliary copy for all copies or choose a specific copy.
4. Check the Start New Media and Mark Media Full On Success options to isolate the job on a set of media.
Inline Copy
Right-click the storage policy secondary copy | Click Properties | General tab
The Inline Copy option lets you create additional copies of data at the same time you are performing primary backups.
This feature is useful when two copies of data must be created quickly. Data is passed from the client to the MediaAgent as
job streams. The MediaAgent then creates two sets of device streams; each going to the appropriate library. Although this
is a quick method for creating multiple copies, there are a few caveats to consider:
Inline Copy is not supported if Client Side Deduplication has been enabled.
If the primary copy fails, the secondary copy also fails.
Since both copies are made at the same time, twice as many library resources are required, which may prevent
other jobs from running.
Since backup data is streamed, data is sent to both libraries simultaneously, which may cause overall
performance to degrade. Basically, your job runs as fast as the slowest resource.
The last point is important to understand. Consider a scenario where the primary library receiving the client streams is a disk library, and the two secondary libraries are a cloud library and a tape library. If Inline Copy is enabled on both secondary copies, all three copies perform at the speed of the slowest target; in this case, assume it is the WAN link to the cloud library. This can result in tape drive buffers not filling quickly enough, so the tape drives must constantly pause and reposition, a behavior known as "shoe shining." This reduces the lifespan of tapes and drives significantly.
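A minimal sketch of the 'slowest resource' effect, assuming hypothetical throughput figures for the three targets:

# Minimal sketch: with Inline Copy, all simultaneous targets run at the speed
# of the slowest one, which is why a slow WAN link to a cloud library can
# starve tape drive buffers (figures are hypothetical).
targets_mb_per_sec = {
    "disk_primary": 800,
    "cloud_copy":   60,     # WAN-limited
    "tape_copy":    300,
}

effective = min(targets_mb_per_sec.values())
print(f"Effective inline throughput: {effective} MB/s "
      f"(limited by {min(targets_mb_per_sec, key=targets_mb_per_sec.get)})")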
1. Expand the storage policy| Right-click the storage policy secondary copy | Properties.
3. Accept the pop-up message confirming that you understand both data paths share a common MediaAgent to continue.
Parallel Copy
Right-click the storage policy secondary copy | Click Properties | General tab
A parallel copy generates two secondary copy jobs concurrently when an auxiliary copy job runs. Both secondary copies
must have the 'Enable Parallel copy' option selected and the destination libraries must be accessible from the same
MediaAgent. Like the Inline copy option, the performance is based on the speed of the slowest target.
There is an advantage to using Parallel Copy over Inline Copy to create multiple secondary copies. A Parallel Copy is executed by an auxiliary copy schedule and is independent of the backup job, so it does not slow down backup performance the way an Inline Copy does.
1. Expand the storage policy| Right-click the storage policy secondary copy | Properties.
2. Check this box to enable parallel copy for multiple secondary copies.
Deferred Copy
Right-click the storage policy secondary copy | Click Properties | General tab
Deferring an auxiliary copy prevents a copy from running for a specified number of days. Setting this option results in data
not aging from the source location, regardless of the retention on the source, until the auxiliary copy is completed. This
option is traditionally used in Hierarchical Storage Management (HSM) strategies where data will remain in a storage
policy copy for a certain period. After that time, the data is copied to another storage policy copy and deleted from the
source during the next data aging job.
With Commvault® software it is recommended to copy data to multiple HSM copies to provide for disaster recovery, as
well as HSM archiving.
Consider a scenario where a MediaAgent has a costly, high-performance disk library that has reached full capacity. The storage unit is already fully expanded, and a larger, cheaper, but less performant unit is acquired. Instead of simply adding mount paths to the existing library (which would not guarantee that the most recent data lands on the performant disks), an HSM strategy leveraging the deferred copy option can be used: the storage policy keeps the primary copy in the performant library and a secondary copy in the larger library.
1. Expand the storage policy | Right-click the storage policy secondary copy | Properties.
Advanced options allow you to generate selective copies based on a frequency of number of 'cycles,' 'days,' 'weeks,' or
'months.' You can also choose the 'Do Not Automatically Select Jobs' option which allows you to use auxiliary copy
schedules to determine when copies of full backups will be made.
3. From the drop-down list, select which full backups will be eligible to be copied.
4. Select if the first or last full backup of the period will be copied.
To configure and use a Global Secondary Copy, the Global Secondary Copy Policy first needs to be created. Then, in
every storage policy for which you want to use it, a secondary copy associated to the Global Secondary Copy Policy must
be created.
6. Set the number of streams and the retention for the global copy.
8. Review settings.
2. Check ‘Use Global Secondary Copy Policy’ and select the copy from the drop-down box.
3. Choose to use global retention or select override and set retention in Retention tab.
5. All jobs matching the criteria are displayed in the summary view.
To delete a job
2. If the job has dependent jobs, choose if dependent jobs should be deleted.
If a storage policy is no longer needed, it is recommended to disable it instead of deleting it. Deleting a policy is a destructive operation that purges the policy's data, which could be required for restores.
1. In the storage policy properties view the Associations tab to ensure no subclients are associated with the policy. A
storage policy cannot be deleted if subclients are associated with the policy.
2. On the storage policy, right-click | select View | Jobs. De-select the option to Specify Time Range then click OK.
This step displays all jobs managed by all copies of the storage policy. Ensure that there are no jobs being
managed by the policy that require to be kept and then exit from the job history.
3. Right-click on the storage policy | Select All Tasks | Delete. Read the warning dialog box then click OK. Type
'erase and reuse media' then click OK.
MODULE 4 - MONITORING
Views
The disk library summary view is accessed in the Storage Resource section. When the library is clicked, the mount paths
are displayed in the summary window. Information including the status of mount paths, the total disk capacity, and the free
space is provided.
2. The display window shows the status of mount paths, such as ready or offline, including capacity and free space.
3. Export information to a report in PDF, XLS or MHTML format. The insert shows a sample view of the output.
8. To display additional details of the protected jobs, click on a section of the pie chart.
The Disk Usage tab of a disk library's properties provides information about space usage and deduplication performance.
Disk Space Utilization – Provides information on space consumed and space remaining.
Disk Space Savings – Provides the total amount of application data protected across all backups, compared to the amount of data physically written to the library, which illustrates how well deduplication is performing. It also provides the amount of data that is not deduplicated.
Average Daily Disk Consumption – Gives the daily average of space consumed and compares it to the daily average of space released by aging obsolete data. If more space is consumed daily than released, the library will eventually fill up. Based on these metrics, this section estimates and displays the expected date on which the library will be full (a small sketch of this estimate follows the list).
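A minimal Python sketch of that fill-date estimate, assuming hypothetical daily consumption and aging figures:

# Minimal sketch: estimate the date a disk library fills up from average daily
# consumption versus average daily space released by data aging.
from datetime import date, timedelta

free_space_gb        = 12_000
avg_daily_written_gb = 450
avg_daily_aged_gb    = 300

net_daily_growth_gb = avg_daily_written_gb - avg_daily_aged_gb
if net_daily_growth_gb <= 0:
    print("Consumption is balanced by aging; no fill date expected.")
else:
    days_until_full = free_space_gb // net_daily_growth_gb
    print("Estimated full on:", date.today() + timedelta(days=days_until_full))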
1. Expand Storage Resources | Libraries | Right-click the library | Select Properties from the menu.
2. On the Disk Usage tab, this section displays current space usage.
4. Shows daily average for new space consumption and data aging space freeing up.
Engine Views
The Deduplication Engine view, under Storage Resources in the CommCell® browser, provides information about Deduplication Engine performance. Information such as the amount of data protected versus the amount of data written, the number of unique blocks in the deduplication store, the number of records pending deletion, and the average query and insert (Q&I) time is important for properly monitoring deduplication health in a CommCell® environment.
A Deduplication Database (DDB) with an average Q&I time over 2000 microseconds should be investigated as it
may be indicative of performance issues which could impact backup jobs, auxiliary copy jobs and data aging.
A DDB or DDB partition getting close to 750,000,000 unique blocks may represent a partition reaching its capacity. Over time, the DDB partition's performance will degrade, impacting all deduplicated operations. If a DDB is near this limit, a call should be placed with support to ensure performance remains adequate.
A DDB with an excessive number of pending delete records that increases each day may be indicative of either an underperforming DDB, or an operation window blackout period that does not provide enough time for the DDB to purge obsolete records. A sketch applying these health heuristics follows this list.
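A minimal Python sketch applying these heuristics to hypothetical DDB partition statistics (the field names and figures are illustrative, not values returned by a Commvault API):

# Minimal sketch: evaluate hypothetical DDB partition metrics against the
# health heuristics described above.
ddb_partition = {
    "avg_qi_time_us":        2400,          # average query & insert time
    "unique_blocks":         705_000_000,
    "pending_deletes_trend": "increasing",  # day-over-day trend
}

warnings = []
if ddb_partition["avg_qi_time_us"] > 2000:
    warnings.append("Q&I time over 2000 microseconds: investigate performance.")
if ddb_partition["unique_blocks"] >= 750_000_000 * 0.9:
    warnings.append("Approaching 750 million unique blocks: engage support.")
if ddb_partition["pending_deletes_trend"] == "increasing":
    warnings.append("Pending delete records growing daily: check DDB performance "
                    "and operation window blackout periods.")

print("\n".join(warnings) or "DDB partition looks healthy.")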
1. Expand Storage Resources | Deduplication Engines | Select and expand a Deduplication policy | Expand the DDB display and click a DDB partition.
2. The Properties section displays information about the DDB partition including information on the last DDB backup
time.
3. Displays the response time of the database for queries and inserts.
4. Click to export DDB partition information. Details can be exported to PDF, XLS and MHTML formats.
5. Additional information of the deduplication partition including graphs and statistics can be viewed in the details
pane.
6. Details of the deduplication engine including graphs and statistics can be viewed in the details pane.
7. Graph displays unique and secondary blocks. Includes pending delete records.
3. Clicking a tape library MasterPool displays the status of all tape drives.
4. Clicking a disk library displays the status of all its mount paths.
Job Controller
The Job Controller in the CommCell® console is used to manage all active jobs within the CommCell® environment.
Regardless of which method is used to initiate a job (schedule, on demand or script), the job will appear in the Job
Controller. The Job Controller is the most effective tool within the CommCell® console for managing and troubleshooting
active jobs.
Job Details
Right-click job | Details or double-click job
Details for specific jobs are used to provide information on job status, data path, media usage or job errors.
2. Details for job can be viewed including general information, progress, streams and media usage.
Event Viewer
The Event Viewer window displays events reported based on conditions within the CommCell® environment. By default,
the event viewer displays the most recent 200 events. This number can be increased up to 1,000. The event log maintains
up to 10,000 events or 7 days of events. These default settings can be modified.
The Event Viewer can be filtered based on the available fields. Although some filters, such as 'Date,' do not have a practical application, other fields such as 'Computer,' 'Program' or 'Event code' can be used to quickly locate specific events.
3. Filter fields using the drop-down arrow and define filter criteria.
Although only 200 to 1,000 events are displayed in the event viewer, the entire event log can be searched from the event
viewer. The default total number of events retained is 10,000.
When right-clicking anywhere in the event viewer, select the option to search events. Events are searched by time range,
severity and job ID. If common searches are frequently conducted, the search criteria can be saved as a query and run at
any time.
2. Define search criteria to display all relevant events in the event log.
By default, the event log retains 10,000 events or 7 days of events. When the event logs reach their upper limit, the oldest
events are pruned from the event logs.
Metrics Reporting
Metrics Reporting is a tool that monitors and reports on the health of the CommCell® environment and stores the
information either in a private Metrics Reporting server, or on Commvault® Cloud metrics services. Once the information is
stored, a broad range of reports and dashboards are available through a portal to help an administrator monitor the
CommCell® environment.
8. Additional configuration settings to include time of data collection, upload frequency and Client computer groups
to monitor.
9. Advanced setting to upload Audit and other information for chargeback and billing.
1. Displays information on the health status of the CommCell®. To access, open a supported browser and use the URL http://cloud.commvault.com.
Alerts Overview
Alerts are configured to provide real-time feedback about conditions in the CommCell® environment as they occur.
Built-In Alerts
A wide range of alerts are preconfigured in the system on initial installation. Some are enabled, others can be enabled if
required. These alerts monitor several components and conditions. A summary view explains what the alert is for. For
more information on the preconfigured alerts, refer to the Commvault Online Documentation.
2. The Alerts window lists all the configured alerts. Disabled alerts are greyed out.
The Alert Wizard is used to configure the alert type, entities to be monitored, notification criteria and notification method.
Type of alert
Entities to be monitored
Notification criteria
Notification method
To add an alert
3. Type a display name and select the alert Category and Type.
10. Select the provider for the regular and the escalated alerts.
11. Optionally, additional rules can be set; the alert is triggered only if they are met.
12. Start typing an account name and select CommCell® or domain users as recipients for the alert. Non-registered
users and groups can be manually added by typing the SMTP email address.
13. Define or modify the sender display name and SMTP email address.
Notification Providers
Commvault® software offers many easy-to-configure notification providers. These providers ensure that an administrator
is notified at any time should an issue arise.
Email
SNMP
Event Viewer
Run Command
Save to
RSS Feeds
Console Alerts
SCOM
Workflow
Email
Email notifications are sent to the CommCell® console or domain users by selecting them from the list. The user must
have logged in at least once to the CommCell console. Email addresses or distribution lists can be defined. If the email
server is down, the system tries to resend the email for four hours. After that time limit, if the server is still down, the
notification is discarded and will not be sent. Email notification format can be HTML or text and be modified as needed.
To use email notification, an SMTP server must be configured using the email and Web Server applet from the
Configuration tab.
SNMP
Alerts can be sent by the CommServe® server as SNMP to any computer listening for SNMP traps. This notification
method is useful if an existing monitoring and/or ticketing system is in place. SNMP alerts support SNMP Version 1
(SNMPv1) and SNMP Version 3 (SNMPv3) and require the SNMP Enabler to be installed on the CommServe server.
Event Viewer
You can send alert notifications from the CommServe® server to the Windows Event Viewer of other computers, where each notification is generated as an event, including the details of alerts related to backup and restore operations.
Run Command
The Run Command notification is used to send alert notifications from the CommServe® server to other client computers
by executing a command script. The Run Command can be located on the CommServe server or on remote computers
but is executed only on the CommServe server. It also can be used to run a script to resolve the issue, such as restarting
some services or any tasks.
Save to
You can send an alert notification to a local directory, a network share, or the Cloud Services website. This is particularly
useful in obtaining a list of failed attempts in an operation. If you plan on using the Cloud Services, Cloud Metrics Reports
must first be activated for the CommCell® console.
The following is a sample of a saved alert notification:
Alert: Client_Properties
Type: Configuration - Clients
Detected Criteria: Properties Modified
Detected Time: Mon Feb 27 10:13:02 2017
CommCell: winter
User: Administrator
Property Modifications: Status: Modified
Client: winter
Agent Type: Not Applicable
Instance: Not Applicable
Backup Set: Not Applicable
Subclient: Not Applicable
Comments: Update Client properties
Client Description: Set to [text description of a client]
RSS Feed
It is possible to turn the CommServe® server into an RSS Feed server, which allows an alert notification to be sent as an RSS Feed. Your favorite RSS Feed client can be configured to receive notifications by subscribing to the CommServe server.
SCOM
The Commvault® software can send alert notifications from the CommServe® database to the Microsoft Systems Center
Operations Manager (SCOM). The Microsoft SCOM Server provides a monitoring service for critical applications within an
enterprise and sends alerts about events in these applications. An administrator can raise tickets against these alerts and
take any necessary action to resolve the problem. SCOM must first be installed and the CommServe® server defined as a
SCOM agent. For the agent to communicate with the SCOM server, firewall ports 5723 and 5724 must be open.
For more information on SCOM notification configuration and prerequisites, please refer to the Commvault Online
Documentation.
Workflow
It can be useful to try to resolve an issue using automation. When an alert is triggered, a workflow notification can launch any workflow. Note that when configuring the alert, the workflow must be created first, then selected.
Scenario: A backup job goes into a pending status, stating that it cannot communicate with the client.
Solution: A workflow alert could launch a script to restart the Commvault services on the client, then restart the
backup and send an email if the backup is still pending.
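A Python sketch of the remediation flow such a workflow might implement; restart_client_services, resume_job, and send_email are hypothetical placeholders, not Commvault workflow activities or APIs.

# Illustrative sketch of the remediation flow for the scenario above.
def restart_client_services(client):
    print(f"Restarting services on {client} (placeholder)")
    return True

def resume_job(job_id):
    print(f"Resuming job {job_id} (placeholder)")
    return False          # pretend the job is still pending

def send_email(subject, body):
    print(f"EMAIL: {subject} - {body}")

def on_backup_pending_alert(client, job_id):
    # Restart services, resume the backup, and email only if it is still pending.
    if restart_client_services(client):
        if not resume_job(job_id):
            send_email(f"Job {job_id} still pending",
                       f"Automatic remediation on {client} did not clear the condition.")

on_backup_pending_alert("FS01", 4521)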
Console Alerts
When configuring alerts, console alerts can be selected as a notification method. Once an alert is triggered, it appears in
the Console Alerts window within the CommCell® browser. Right-click on an alert to view details, delete, mark as read or
unread, or to insert a note. Console alerts can be pinned or deleted using the icons at the bottom of the window.
3. Selecting an alert provides information about the alert, the entity that triggered the alert and more.
4. Right-click an alert to look at the details, delete it, mark it read/unread or insert a note.
Common Alerts
Category: Job Management
Type: Data Protection, Data Recovery
Options: Job Failed, Phase or network errors
Reports Overview
CommCell® reports can be configured from the Reports tab in the CommCell toolbar. The most common report types are
listed in the toolbar, such as:
Job Summary
Job Schedule
CommCell Readiness
When the report type is selected, it is the default report in the report window. Note that any other report type can be
accessed from the window. Reports can be scheduled, saved to a specific location, or saved as report templates.
Depending on the report type selected, various report criteria are configured from the tabs in the Report Selection window.
Use the tabs to choose which resources (clients, MediaAgents, libraries, or storage policies) to include in the report. You can also select the information to be included in the report, such as failed items, storage usage, job information, or resource configuration.
Job Summary report – is used to view data protection, data recovery and administrative jobs.
CommCell® Readiness report – is used as a status report for CommCell components such as clients,
MediaAgents, library storage capacity and index directories.
CommCell® Configuration report – provides CommCell configuration, license usage, and update status of
CommCell components.
Job Schedule report – is used to view schedules for client computer groups, clients, and administrative jobs.
Data Retention Forecast and Compliance report – is used to view jobs in storage, the media it is located on,
and the estimated time the data will age.
Report Outputs
When running any report, it can be formatted as HTML, text delimited, PDF, or XML. A copy of the report can also be
saved on a local drive of any CommCell® client computer, on a network share by providing credentials that have access to
the share, or to an FTP site by providing login information. Language, date and time formats are selected from drop-down
lists.
1. From the Reports menu | Select a report to open Report Selection detail.
2. On the Output tab, select report output type of HTML, Text, PDF or XML format.
4. Select the output location including a local drive on the CommServe® server, a network share or FTP site.
Report Schedules
There are a few ways to view scheduled reports. The easiest method to use is the View Schedules tool from the Reports
tab. This view displays all configured reports and allows the schedules or the report settings, such as the recipients, to be
edited.
Report Types
Commvault® software provides several report types. Each report is built to provide specific information useful to the
management of a CommCell® environment. The following section describes all built-in reports.
Job Summary
The Job Summary report is considered the most important because it provides details about CommCell® jobs. This
includes job status, the amount of data protected, skipped files and folders, and much more. Some available job types are
backup, restore, administrative jobs, archiving, and Continuous Data Replication (CDR). To help decipher the information,
the report provides a multi-color display that includes a summary of all jobs followed by detailed information about each
one.
Although the Job Summary report is still available in the CommCell® console, it is recommended to execute the report
from the Web Console.
Job Summary Report (Web) – Provides the link to automatically execute the Web Console version of the report.
Job Summary Report (CommCell console) – Executes the traditional CommCell® console version of the report.
6. Select the Storage Policies tab to include details on Storage Policies and copy sets.
7. From the Selections tab, include additional information for the report including listing Subclient filters used in job
and Failed Objects.
8. On the Options page, select details about job types, throughput, data size and job status.
10. Select the output type to include HTML, CSV, PDF and with some reports XML.
11. If you do not want to display results on screen, select Output To to save the report to a folder or FTP site.
15. The next section provides detailed information for each job by Client and agent.
16. Colors in the report indicate detailed status, such as success, failure, completed with errors, and more. Color details are found at the end of the report.
Audit Trail
The Audit Trail report tracks information about any CommCell® configuration modifications along with the user who made
the change. Different levels can be selected ranging from high (high impacting modification changes) to low (low
impacting modification changes, such as logins).
CommCell® Readiness
The CommCell Readiness report is used to query every CommCell® component to ensure all resources are available.
When running this report, the CommServe® server queries MediaAgents to ensure they are reachable. It then queries the
libraries to confirm that there is enough free space in disk, tape and cloud libraries. It finally runs a Check Readiness on
all the clients to ensure they are ready for the next backup.
2. On the Report Selection page choose the components to include in the readiness check.
3. Select all or specific computers and agent types to include in the report.
CommCell® Growth
The CommCell® Growth report provides a growth overview of the CommCell® environment for a defined period. This
report is valuable for trending analysis and resource procurement.
3. This section provides information about the number of clients and agents from the beginning to the end date.
4. This section provides information about jobs from the beginning to the end date.
5. This section provides information about media usage from the beginning to the end date.
License Summary
The License Summary report displays license usage statistics for the CommCell® Environment. This report provides the
required information, based on the license model, to help a backup administrator plan license procurement when needed.
3. The first section provides information about license usage per types.
CommServe® Event
The CommServe® Event report is used to display events that might no longer be visible in the Event Viewer. The Event Viewer displays 200 events by default, but the system retains up to 10,000 events. This report has several options that are used to filter out unwanted events or to cover a specific time range.
2. Any severity levels can be filtered from the report by unselecting it.
User Capability
In a CommCell® environment where several users and administrators have access, it can be hard to keep track of everyone's responsibilities. The User Capability report helps establish the exact capabilities each user or user group has.
Configuration
The CommCell® Configuration report lists all configuration parameters that are set in the CommCell® environment. If all
options are selected, it provides a blueprint of the environment in its entirety. Having access to a CommCell®
Configuration report provides the opportunity to completely reconfigure the environment to a previous state, if necessary.
6. Include information for all or specific libraries, and optionally include drive details.
8. Additional details can be included such as media, Global storage policies and subclient list.
Schedule
The Schedule report lists all schedules that are configured for the requested time range. It provides schedule name, job
type, which agents are associated with it, and more. Optionally 'Job options' can be selected to display in the report.
5. All jobs for the selected time are listed with scheduling information.
Storage Policy
The Storage Policy report is used to display the configuration settings of storage policies and storage policy copies. If content indexing is enabled on a policy, a dedicated section lists content indexing related configuration options.
Storage Information
The storage information report provides valuable information about CommCell® media. This report includes pertinent
information, such as the media location, data size, expiration date, and last write time.
When using tape media as the primary recovery method for disaster recovery, send a Storage Information report offsite daily. If a disaster occurs, this report can be used to quickly identify the most recent tape belonging to the CommServeDR storage policy, which is required to restore the CommServe® server database.
3. Select to include all or specific libraries. Option to include media location details.
6. Select the storage policies for which the storage information is required.
Forecast
The Data Retention Forecast and Compliance report lists all media information as it is today or as it will be in the future. This report can be helpful when media you expected back from the vault did not return. It displays the expected return date and the retention criteria locking the tape in the vault, such as basic retention, extended retention, job retention, etc.
6. Click Run.
7. The report lists all jobs and provides information such as retention.
Discovery
The Discovery report is used in conjunction with Content Indexing. It displays information about items such as Legal
Holds, Review Sets, and Content Searches.
You can use the Review Set Posting report to review detailed information about search result items that are posted in a
review set. This report is useful if you need to know:
The name and type of files that are posted to the set and the time that files are posted.
The backup job ID and the user that posted the job.
Restore status, and Legal Hold status, if applicable.
You can use the Review Set Activity report to review detailed information about each operation that a user performs on a review set.
You can use the Review Set Summary report to review information about a review set. This report is useful if you need to
know:
The name of a review set and the users that post to it.
The sharing status of a review set.
The number of items contained in a review set.
You can use the Legal Hold Summary report to review detailed information about legal set items.
You can use the Search Activity report to review detailed information about the searches that users perform on a review
set.
Library and Drive Configuration
The Library and Drive Configuration report provides information about the storage libraries and drives within the
CommCell® environment, including the number of libraries, their capacity, and the amount of space remaining.
Scratch Pool
The Scratch Pool report displays the settings that are configured for scratch pools. A list of the spare media can
optionally be displayed.
4. A sample section of the Scratch Pool report lists the configuration settings of the spare media group.
5. If the report option is selected, the report displays the list of spare media within the scratch group.
Media Prediction
The Media Prediction report displays the media that is anticipated to be required by operations performed within the
entire CommCell® environment.
The report also displays information about the media required for an individual backup set, instance, or subclient. This
report is useful for identifying which media an upcoming operation requires.
4. Use the default setting of Latest data or select a specific time frame to recover.
5. A sample report section displays the list of required media for the selected operation.
Jobs in Storage Policy Copies
The Jobs in Storage Policy Copies report displays information about the data protection jobs that are associated with the
specified storage policy copies. You can use this report to review information for the operations that run in each storage
policy copy. This report is useful if you need to know:
Detailed information regarding the jobs that reside on each storage policy copy, including the client, backup set,
instance, subclient, and data size.
Whether a job has been picked for data verification.
Whether or not the job is enabled with 'Data Integrity Validation.'
The history and status of data verification for each job.
Whether a job is still to be copied to the copy from its source copy, or has already been copied.
Jobs with data encryption and deduplication.
The media that is associated with each job.
The jobs that are retained using extended retention rules.
The amount of data that is not copied per job.
Legal Hold data protection jobs associated with the storage policy copies.
Tracking
Built-in Vault Tracker reports can be sent to tape operators or backup administrators to provide a list of the tapes that
must be exported or recalled. The list is automatically populated by the software based on the tracking policy's media
movement and media status.
7. Select the report format to use, such as HTML, text delimited, PDF, or 'Iron Mountain' format.
8. A copy of the report can be saved to local disk or a network share, or sent to an FTP site, such as the vaulting
company's FTP site.
14. Type email addresses or distribution lists for accounts that are not listed.
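As a sketch of the FTP delivery option in step 8, the following Python example uploads an exported tracking report to a
vaulting company's FTP site using the standard ftplib module. The host name, credentials, and file path are hypothetical
placeholders for the values used in your environment.

from ftplib import FTP
from pathlib import Path

# Hypothetical path of the exported tracking report.
REPORT_FILE = Path(r"C:\Reports\VaultTracker\export_list.txt")

def upload_report(host: str, user: str, password: str) -> None:
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with REPORT_FILE.open("rb") as handle:
            # STOR places the file in the FTP user's current directory.
            ftp.storbinary(f"STOR {REPORT_FILE.name}", handle)

if __name__ == "__main__":
    # Hypothetical host and account for the vaulting company's FTP site.
    upload_report("ftp.vaulting-company.example", "media-ops", "change-me")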
File Level Analytics
The File Level Analytics report provides information on the files protected by Commvault® software. File attributes such
as file size and type, modification time, and last access time are used to build the report. Clients, backup sets, and
subclients can be granularly selected for inclusion in the report.
TIP: Use File Level Analytics Report to Implement Commvault OnePass® Archiving
When implementing Commvault OnePass® archiving without a good knowledge of the data being archived, two
situations might happen: either the archiving rules are too strict and almost no data gets archived, or the rules are
too loose and most of the data gets archived. To avoid this situation, use the File Level Analytics report to properly
establish the archiving rules.
3. Select the filter criteria; in this example, PDF files larger than 4 KB that have not been modified in the last 10 days.
4. Review the additional options and select them as needed for the report output.
7. This section of the report displays the list of entities included in the report.
8. Displays the total number and size of the files meeting the criteria.
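As a rough way to preview what the example rule in step 3 would match, the Python sketch below walks a file share and
counts PDF files larger than 4 KB that have not been modified in the last 10 days. It is a conceptual illustration only:
the scanned path is a hypothetical placeholder, and the product evaluates its own archiving rules during the job.

import time
from pathlib import Path

SCAN_ROOT = Path(r"D:\FileShares\Departments")   # hypothetical file server path
MIN_SIZE_BYTES = 4 * 1024                        # larger than 4 KB
MAX_AGE_SECONDS = 10 * 24 * 3600                 # not modified in the last 10 days

def matching_files(root: Path):
    # Yield PDF files that satisfy both the size and the age condition.
    cutoff = time.time() - MAX_AGE_SECONDS
    for path in root.rglob("*.pdf"):
        stats = path.stat()
        if stats.st_size > MIN_SIZE_BYTES and stats.st_mtime < cutoff:
            yield path, stats.st_size

if __name__ == "__main__":
    count = 0
    total_size = 0
    for _, size in matching_files(SCAN_ROOT):
        count += 1
        total_size += size
    print(f"{count} files ({total_size / 1024 ** 2:.1f} MB) match the example rule")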
THANK YOU