NetBackup IT Analytics DC Installation Guide for Backup Manager v11.4
Release 11.4
NetBackup IT Analytics Data Collector Installation
Guide for Backup Manager
Last updated: 2025-02-03
Legal Notice
Copyright © 2025 Veritas Technologies LLC. All rights reserved.
Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies
LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website.
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents
Introduction ................................................................................. 18
Architecture overview (Commvault Simpana) ..................................... 19
Prerequisites for adding Data Collectors (Commvault Simpana) ............. 19
Upgrade troubleshooting: Microsoft SQL Server and Java 11 ................ 20
Installation overview (Commvault Simpana) ....................................... 22
Open TCP/IP access to the Commvault database ............................... 22
Additional Static Port Configuration Steps .................................... 23
Set up a read-only user in the CommServe server ............................... 24
Option 1: Execute SQL commands in the CommServe database ..................... 24
Option 2: Use MSSQL Management Studio ................................. 25
Load historical data prior to initial data collection ................................. 28
Add Commvault Simpana servers .................................................... 28
Add a Commvault Simpana Data Collector policy ............................... 29
Introduction ................................................................................. 34
Prerequisites for adding Data Collectors (Cohesity DataProtect) ............ 34
Installation overview (Cohesity DataProtect) ...................................... 35
Introduction ................................................................................. 40
Architecture overview (EMC Avamar) ............................................... 41
Prerequisites for adding Data Collectors (EMC Avamar) ....................... 41
Installation overview (EMC Avamar) ................................................. 42
Add EMC Avamar servers .............................................................. 43
Adding an EMC Avamar Data Collector policy ................................... 43
Testing the collection ..................................................................... 46
Add/Configure an Avamar Server within the Data Collector policy window ......... 47
Import EMC Avamar server information ............................................. 49
CSV Format Specifications ....................................................... 49
Import Notes ......................................................................... 50
Export EMC Avamar server information ............................................ 50
Introduction ................................................................................. 98
Architecture overview (IBM Spectrum Protect - TSM) ............................ 99
IBM Spectrum Protect (TSM) - servers and instances defined .................... 99
Prerequisites for adding data collectors (IBM Spectrum Protect - TSM) ......... 99
Installation overview (IBM Spectrum Protect - TSM) ........................... 100
Add IBM Spectrum Protect (TSM) servers ........................................ 101
Adding an IBM Spectrum Protect (TSM) Data Collector policy ............. 101
Add/Configure an IBM Spectrum Protect (TSM) server within the Data Collector policy ......... 106
Import IBM Spectrum Protect (TSM) information ............................... 108
CSV format specifications ....................................................... 108
Export IBM Spectrum Protect (TSM) server information ...................... 109
■ Overview
Overview
The Data Collector is a centralized and remotely managed data collection
mechanism. This Java application is responsible for interfacing with enterprise
objects, such as backup servers and storage arrays, and for gathering information
related to storage resource management.
The Data Collector continuously collects data and sends this data, over an HTTP or
HTTPS connection, to another Java application, the Data Receiver. The Data Receiver
runs on the Portal Server and stores the data that it receives in the Reporting
Database. When you use the Portal to generate a report, the Portal requests this
information from the Reporting Database, then returns the results in one of the
many available reports.
The Data Collector obtains all of its monitoring rules from a Data Collector
configuration file. This file resides in the Reporting Database in XML format. When
the Data Collector first starts, it downloads this file from the Reporting Database.
The Data Collector uses this file to determine the list of enterprise objects that are
to be monitored and included in its data collection process.
■ Introduction
Introduction
The ExaGrid connector uses an ExaGrid appliance's SNMP server identification to gain
read-only access to the appliance's capacity information using the SNMP protocol.
This information is used to monitor and report capacity-related information
about the ExaGrid appliances in an ExaGrid system.
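Before adding the policy, you can optionally confirm that the Data Collector server has SNMP access to the appliance. The following check uses the net-snmp snmpwalk utility; the community string and host name are placeholders for your own values:
snmpwalk -v 2c -c <community-string> <exagrid-appliance> system
If the walk returns the appliance's system description and uptime, read-only SNMP access is working.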
Field Description
Collector Domain The domain of the collector to which the collector backup policy is
being added. This is a read-only field. By default, the domain for a
new policy will be the same as the domain for the collector. This
field is set when you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during the
data collector installation process. The Policy Domain is the domain
of the policy that is being configured for the Data Collector. The
Policy Domain must be set to the same value as the Collector
Domain.
To find your Domain name, click your login name and select My
Profile from the menu. Your Domain name is displayed in your profile
settings.
Appliance Details The probe is used to collect the ExaGrid Appliance's capacity
information using the SNMP protocol.
■ Every minute
■ Hourly
■ Daily
■ Weekly
■ Monthly
Note: Explicit schedules set for a collector policy are relative to the
time on the collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Notes Specify or edit the notes for your data collector policy. The maximum
number of characters allowed is 1024. Policy notes are retained
along with the policy information for the specific vendor and are
displayed on the Collector Administration page as a column, making
them searchable as well.
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
■ For most Backup Manager systems, install the Data Collector on a server that
is in the same time zone as the backup server from which you are collecting
data. For Veritas NetBackup and IBM Spectrum Protect(TSM) collection, the
Data Collector server and backup server can be in different time zones.
■ Open TCP/IP access to the Commvault database on a static port (1433
recommended).
See “Open TCP/IP access to the Commvault database” on page 22.
■ MS SQL Server needs to be accessible to the collector on a static TCP port
(1433 recommended), requiring a restart of the SQL database service if not
already configured. A quick connectivity check is shown after this list.
■ MS SQL Server must be set to Mixed-mode authentication, and the login used
for collection must use SQL authentication.
https://msdn.microsoft.com/en-us/library/ms188670.aspx
■ WMI Proxy access requires the following: port 135 for skipped files collection
and port 445 for CIFS over TCP. A fixed port can be configured for WMI as
specified at:
http://msdn.microsoft.com/en-us/library/bb219447%28VS.85%29.aspx
■ The Data Collector should be installed on the same server as the WMI proxy
server.
■ Read-only user configured on the CommServe server.
See “Set up a read-only user in the CommServe server” on page 24.
■ If you want to load historical data from a CommServe database, use the utility
described in the following section.
See “Load historical data prior to initial data collection” on page 28.
■ One Data Collector can include all of these backup products: Commvault
Simpana, EMC Avamar, EMC NetWorker, EMC Data Domain, Veritas Backup
Exec, Veritas NetBackup, HP Data Protector, IBM Spectrum Protect (TSM), and
Generic Backup products. You can also include other enterprise objects,
such as storage arrays, in this Data Collector.
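As a quick check of the static port and SQL authentication prerequisites listed above, you can attempt a connection from the Data Collector server with the sqlcmd utility. The host name, login, and password are placeholders for your environment:
sqlcmd -S <commserve-db-host>,1433 -U <sql-login> -P <password> -Q "SELECT @@VERSION"
A successful connection returns the SQL Server version string. A failure at this point usually means the static port is not open to the collector or the login is not a SQL-authentication login.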
If a TLS-related connection error appears in the collector logs, the version of MS
SQL Server may be incompatible and may not allow collection using the TLS
algorithms enabled by default with Java 11. Upgrade MS SQL Server to the latest
version to enable secure collection. If an upgrade is not possible, the following
workaround can be attempted to restore compatibility. If these steps do not resolve
the issue, your version of MS SQL Server is not supported.
Use the following steps to modify the enabled algorithms to attempt communication
with the data collector. Note that using this workaround will reduce the security of
your collection. The default list of disabled algorithms is taken from Java 11.0.6 and
may change in later versions.
1. Edit <collector install dir>/java/conf/security/java.security.
2. Search for jdk.tls.disabledAlgorithms.
3. Copy the existing lines and comment out the copy to keep a backup for easy restore, as in the example below.
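For example, after the edit the relevant section of java.security might look like the following sketch. The algorithm lists shown here are illustrative only; copy the exact default line from your own file, keep that copy commented out as a backup, and remove only the entries (such as older TLS protocol versions or cipher suites) that your MS SQL Server version requires:
# Backup copy of the original entry, commented out for easy restore
#jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
#    DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL
# Active entry with the algorithms needed by the older SQL Server removed
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, \
    DH keySize < 1024, EC keySize < 224, anon, NULL
The Data Collector services typically must be restarted for the change to take effect.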
Note: These steps apply only if you are performing an IN-HOUSE installation. If a
third-party service provider is hosting your Portal, that is, a HOSTED installation
(perhaps for a product evaluation), skip this section and contact your hosting
organization’s representative to configure the hosted Portal for your Data Collector.
Note: MS SQL Server must be set to Mixed-mode authentication, and the login to
be used for collection must use SQL authentication.
■ Click OK.
Note: MS SQL Server must be set to Mixed-mode authentication, and the login to
be used for collection must use SQL authentication.
1. Create a new read-only user with a non-expiring password and assign the
db_datareader role in the CommServe database. After completing the following
steps, verify the connectivity with the read-only user ID.
2. Grant EXECUTE permission for the following stored procedures to the new
read-only user:
dbo.GetDateTime
dbo.GetUnixTime
dbo.GetJobFailureReason
dbo.JMGetLocalizedMessageFunc
There are two methods for assigning EXECUTE permission; a SQL sketch of the first method is shown below.
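The following is a minimal sketch of the first method, assuming the CommServe database is named CommServ; the login name and password are placeholders and should follow your own naming and password policies:
CREATE LOGIN ita_readonly WITH PASSWORD = '<strong-password>', CHECK_EXPIRATION = OFF;
USE CommServ;
CREATE USER ita_readonly FOR LOGIN ita_readonly;
ALTER ROLE db_datareader ADD MEMBER ita_readonly;
GRANT EXECUTE ON dbo.GetDateTime TO ita_readonly;
GRANT EXECUTE ON dbo.GetUnixTime TO ita_readonly;
GRANT EXECUTE ON dbo.GetJobFailureReason TO ita_readonly;
GRANT EXECUTE ON dbo.JMGetLocalizedMessageFunc TO ita_readonly;
On SQL Server versions that do not support ALTER ROLE ... ADD MEMBER, use EXEC sp_addrolemember 'db_datareader', 'ita_readonly' instead.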
4. Click Search.
5. Add specific objects.
cd <HOME>\database\tools
sqlplus portal/portal@//localhost:1521/scdb
3. Run the utility that configures the Data Collector to look for historical data. This
utility only prompts you to enter hours and then configures data collection
accordingly.
SQL> @cmv_update_max_lookback_hours.sql
Enter value for hours: 12
old 1: UPDATE ptl_system_parameter SET param_value = &hours WHERE param_name='CMV_MAX_LOOK_BACK_HRS'
new 1: UPDATE ptl_system_parameter SET param_value = 12 WHERE param_name='CMV_MAX_LOOK_BACK_HRS'
Commit;
1 row updated.
Commit complete.
■ Internal Host Name - Must match the host name of the CommServe Server.
■ IP Address - IP address of the CommServe Server.
■ Type - Commvault Server.
4 Click Add Policy and then select the vendor-specific entry in the menu.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is
being added. This is a read-only field. By default, the domain for a
new policy will be the same as the domain for the collector. This field
is set when you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during the
Data Collector installation process. The Policy Domain is the domain
of the policy that is being configured for the Data Collector. The Policy
Domain must be set to the same value as the Collector Domain.
The domain identifies the top level of your host group hierarchy. All
newly discovered hosts are added to the root host group associated
with the Policy Domain.
CommServe Server Address* Specify the IP address or host name of the CommServe server. This
field is required.
CommServe DB Server Address Specify the IP address or host name of the CommServe database
system. This field may be empty. The CommServe database
hostname defaults to the CommServe server if the field is empty.
DB Server Port Specify the port used by the CommServe database. The default is
1433. This port is not enabled by default on the SQL server. Once
this port is configured on the SQL server, the server must be restarted
before data collection can occur.
DB Server User ID* Specify the read-only user ID (with a non-expiring password) for the
CommServe database. This is a SQL authentication login with at
least the following roles and permissions in the CommServe database:
db_datareader
- EXECUTE dbo.GetDateTime
- EXECUTE dbo.GetUnixTime
- EXECUTE dbo.GetJobFailureReason
- EXECUTE dbo.JMGetLocalizedMessageFunc
DB Server Password* The non-expiring password associated with the User ID.
Active Probes
Inventory Click the clock icon to create a schedule frequency for collecting data
relating to system details such as system, disk, tape, VTL and
filesystem compression. You can schedule the collection frequency
by minute, hour, day, week and month. Advanced use of native CRON
strings is also available. Optimize performance by scheduling less
frequent collection.
Note: Explicit schedules set for a Collector policy are relative to the
time on the Collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Jobs Click the clock icon to create a schedule frequency for collecting data
relating to backup jobs. The default collection is every 30 minutes.
You can schedule the collection frequency by minute, hour, day,
week and month. Advanced use of native CRON strings is also
available. The default maximum number of hours that will be collected
is 168 (7 days).
Note: Explicit schedules set for a Collector policy are relative to the
time on the Collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Drives in Use Click the clock icon to create a schedule frequency for collecting data
relating to the drives in use for backup. You can schedule the
collection frequency by minute, hour, day, week and month. Advanced
use of native CRON strings is also available.
Note: Explicit schedules set for a Collector policy are relative to the
time on the Collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Skipped File Details Activates skipped file collection details - this collects which files had
problems during backup/restore and needed to be skipped. It collects
client logs and may require WMI proxy information. Click the clock
icon to create a schedule frequency for collecting data relating to the
skipped files during backup. You can schedule the collection
frequency by minute, hour, day, week and month. Advanced use of
native CRON strings is also available.
Note: Explicit schedules set for a Collector policy are relative to the
time on the Collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Note: By default, this field is not selected. Activating this field will
cause another collection to run periodically that may take hours
(depending on the number of clients). You can also access the
CommCell GUI for additional information.
CommServe Server Domain Specify the domain associated with the User ID. This field must be
combined with the CommServe Server User ID. If this field is blank, a
local user account (.\username) will be used.
CommServe Server User ID Specify the user ID with administrative privileges on the CommServe
server. This field must be combined with the CommServe Server Domain.
If this field is blank, a local user account (.\username) will be used. The
User ID and Password fields are required for the Skipped File Details
collection.
CommServe Server Password The password associated with the CommServe Server User ID. The
User ID and Password fields are required for the Skipped File Details
collection.
Notes Enter or edit notes for your data collector policy. The maximum
number of characters is 1024. Policy notes are retained along with
the policy information for the specific vendor and displayed on the
Collector Administration page as a column making them searchable
as well.
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being added.
This is a read-only field. By default, the domain for a new policy will be the same
as the domain for the collector. This field is set when you add a collector.
Policy Domain The Policy Domain is the domain of the policy that is being configured for the Data
Collector. The Policy Domain must be set to the same value as the Collector
Domain. The domain identifies the top level of your host group hierarchy. All newly
discovered hosts are added to the root host group associated with the Policy
Domain.
Typically, only one Policy Domain will be available in the drop-down list. If you are
a Managed Services Provider, each of your customers will have a unique domain
with its own host group hierarchy.
To find your Domain name, click your login name and select My Profile from the
menu. Your Domain name is displayed in your profile settings.
Management Server Addresses One or more DataProtect Management server IP addresses or host names to
probe. Comma-separated addresses or IP ranges are supported, e.g.
192.168.0.1-250, 192.168.1.10, myhost
Note: To collect from a Cluster, enter the IP address of only one of the
management servers. To collect from multiple nodes, use the primary node IP.
User ID* This field is required. View-only User ID and password for the Cohesity DataProtect
storage system.
Password* This field is required. Password for the Cohesity DataProtect storage system.
Active Probes
Protection Details Probe captures data about Protection Jobs, including their policies, schedules,
sessions, and backups.
Schedule Click the clock icon to create a schedule. By default, it is collected at 4:04 am daily.
Every Minute, Hourly, Daily, Weekly, and Monthly schedules may be created.
Advanced use of native CRON strings is also available.
*/20 9-18 * * * means every 20 minutes between the hours of 9am and 6pm
Notes Enter or edit notes for your data collector policy. The maximum number of
characters is 1024. Policy notes are retained along with the policy information for
the specific vendor and displayed on the Collector Administration page as a column
making them searchable as well.
Test Connection Test Connection initiates a Data Collector process that attempts to connect to the
subsystem using the IP addresses and credentials supplied in the policy. This
validation process returns either a success message or a list of specific connection
errors. Test Connection requires that Agent Services are running.
Several factors affect the response time of the validation request, causing some
requests to take longer than others. For example, there could be a delay when
connecting to the subsystem. Likewise, there could be a delay when getting the
response, due to other processing threads running on the Data Collector.
You can also test the collection of data using the Run functionality available in
Admin > Data Collection > Collector Administration. This On-Demand data
collection run initiates a high-level check of the installation at the individual policy
level, including a check for the domain, host group, URL, Data Collector policy and
database connectivity. You can also select individual probes and servers to test
the collection run.
Chapter 5
Pre-Installation setup for
EMC Avamar
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
Architecture overview (EMC Avamar)
The Data Collector connects to the Avamar system and, via JDBC, extracts data
from the Management Console Server (MCS) database. The Avamar connection
information is retrieved from the Portal. This connection information includes
parameters such as the user ID, password, and server address. The Avamar version
is retrieved from the Avamar server. Open ports: 22 for SSH and 5555 for JDBC
access to the MCS database.
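Before configuring the policy, you can optionally verify that the MCS database port is reachable and that the reporting credentials work. This check assumes the MCS database is the PostgreSQL-based mcdb instance with the default view-only account described later in this chapter; the host name and credentials are placeholders and a PostgreSQL client must be available on the Data Collector server:
psql -h <avamar-utility-node> -p 5555 -U viewuser -d mcdb -c "SELECT version();"
Similarly, ssh <user>@<avamar-utility-node> confirms that port 22 is open to the Data Collector server.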
Note: These steps apply only if you are performing an IN-HOUSE installation. If a
third-party service provider is hosting your Portal—that is, a HOSTED installation
(perhaps for a product evaluation)—skip this section and contact your hosting
organization’s representative to configure the hosted Portal for your Data Collector.
Note: You can also add EMC Avamar servers directly from the Data Collector policy
screen.
See “Add/Configure an Avamar Server within the Data Collector policy window”
on page 47.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being
added. This is a read-only field. By default, the domain for a new policy will
be the same as the domain for the collector. This field is set when you add
a collector.
Policy Domain The Collector Domain is the domain that was supplied during the Data
Collector installation process. The Policy Domain is the domain of the policy
being configured for the Data Collector. The Policy Domain must be set to
the same value as the Collector Domain.
The domain identifies the top level of your host group hierarchy. All newly
discovered hosts are added to the root host group associated with the Policy
Domain.
Typically, only one Policy Domain will be available in the drop-down list. If
you are a Managed Services Provider, each of your customers will have a
unique domain with its own host group hierarchy.
To find your Domain name, click your login name and select My Profile from
the menu. Your Domain name is displayed in your profile settings.
Avamar Servers* You can add Avamar servers by clicking Add. You can also import multiple
servers using a .CSV file, or add them through the Inventory. The
Avamar Servers table is populated using any of these methods. Servers
added to this table using the EMC Avamar Data Collector Policy screen are
also displayed under Inventory. You must indicate which servers are active.
Active Click Active to indicate the Avamar server(s) to use in the data collection
policy. For Avamar, this is the Utility Node. If additional fields must be
configured, the Configure Server dialog is automatically displayed when
you make your selection.
Add Click Add to add an Avamar server type. The added servers are also
displayed under Inventory.
Note: Data Collector policies can be in place for multiple servers, but a
server cannot have multiple policies.
Configure Click Configure to revise or add information to the Avamar server you
selected.
Import Click Import to browse for the CSV file in which you entered the Avamar
server configuration details.
Export Click Export to create and download a comma-separated values (CSV) file
containing all the server information listed in the Avamar Servers table.
Active Probes Click the clock icon to set a schedule frequency. You can schedule the
collection frequency by minute, hour, day, week and month. Advanced use
of native CRON strings is also available. You can set a schedule to collect
on:
■ Activity Details
■ Configuration Changes
■ Operational Data
■ Static Data
Note: Explicit schedules set for a Collector policy are relative to the time on
the Collector server. Schedules with frequencies are relative to the time that
the Data Collector was restarted.
Configuration Changes Collects datasets, retention policies, schedules, Avamar clients, groups and
group members from the Avamar server.
Operational Data Collects node utilization, node space, events, DPN statistics, Garbage
Collection (GC) status, and current Global Storage Area Network (GSAN) status
from the Avamar server.
Static Data Collects the Axion systems, event catalog and plugin catalog entries from
the Avamar server.
Utility Node Details Collects chassis information for the Avamar Utility Node.
Notes Enter or edit notes for your data collector policy. The maximum number of
characters is 1024. Policy notes are retained along with the policy information
for the specific vendor and displayed on the Collector Administration page
as a column making them searchable as well.
high-level check of the installation, including a check for the domain, host group
and URL, plus Data Collector policy and database connectivity.
Note: Data Collector policies can be in place for multiple servers, but a server
cannot be assigned multiple policies within the same domain. If you try to add a server
that is already assigned to another Data Collector policy, you will be prompted to
remove it from its current policy and reassign it.
1. Click Add on the EMC Avamar Data Collector Policy screen. The Add EMC
Avamar Server screen displays.
Field Description
Avamar Server Name The Avamar Utility Node server name. This is displayed on the
Inventory page under the Host Name column and on the Host Administration
dialog under Internal Name. This is a required field.
Utility Node IP Address IP address of the Avamar Utility Node. This is the management server
with which the Data Collector will communicate. This is displayed
on the Host Management page under the Primary IP column. This
is a required field.
Software Location Directory under which all Avamar binaries and configuration files
are kept. This field is pre-populated with a default location:
/usr/local/avamar. You can edit the field, but it must contain a
value.
Utility Node User ID The login credentials must have admin or dpn access to the
Avamar utility node so that the Data Collector can invoke
command-line programs in the /usr/local/avamar/bin directory.
Password Password for the utility node username that has root-level access
to the Avamar utility node. This is a required field.
Database User ID User name to log into the EMC Avamar Management Console
database for reporting access. This field is pre-populated with a
default Avamar user ID: viewuser. You can edit the field, but it
must contain a value.
Password Password for credentials required to log into the EMC Avamar
management console database for reporting access. This field is
pre-populated with a default Avamar user password: viewuser1.
You can edit the field, but it must contain a value.
Import Notes
If the same Avamar Server already exists in the specified host group, the details
are updated when an import occurs.
To Import EMC Avamar Servers
1 Prepare the CSV according to CSV Format Specifications.
See “CSV Format Specifications” on page 49.
2 Select Admin > Data Collection > Collector Administration.
3 Click Add Policy.
4 Select EMC Avamar. The EMC Avamar Data Collector Policy screen is
displayed.
5 Click Import. The Import EMC Avamar Servers window is displayed. You
can browse for the CSV file you created.
■ Adding/Configuring an EMC data domain server within the Data Collector policy
window
The Data Collector connects to the Data Domain system via SSH to issue
data-gathering commands from the command-line interface (CLI).
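As a quick pre-check, you can confirm SSH access and CLI availability from the Data Collector server. The host name and account are placeholders; the account needs the 'user' management role described later in this chapter, and the DD OS commands shown are common examples whose exact output and syntax may vary by release:
ssh <dd-user>@<datadomain-host> "system show version"
ssh <dd-user>@<datadomain-host> "filesys show space"
If a suitable account does not yet exist, one can typically be created on the Data Domain system with a command such as: user add <dd-user> role user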
Data Domain systems straddle the backup and storage capacity worlds. When
addressing data protection challenges, Data Domain provides backup, archive, and
disaster recovery solutions. In support of these solutions, Data Domain appliances
supply deduplication and storage management systems. These systems provide
storage in the following ways:
■ Native storage device for backup systems
■ Virtual tape library (VTL) for backup systems
■ NFS mount or CIFS share folders for file storage
Collection related to backups retrieves Data Domain system details such as file
system and virtual tape library (VTL) usage. If NetBackup collection also is enabled,
a file-level compression probe can collect data that links NetBackup backup images
with Data Domain file-level compression ratios, enabling reports by NetBackup
client or policy. Data collection gathers information regarding the actual size of the
backup image that was sent to the Data Domain system, along with the size of the
backup image that is stored on disk after deduplication and compression. NetBackup
IT Analytics maps the backup image back to the backup system file catalog. This
data helps in identifying backup sets, clients, and policies that are best suited for
the deduplication/compression features offered by Data Domain storage. In addition,
chargeback reporting can use the actual disk space used (size of the backup image).
Note: These steps apply only if you are performing an IN-HOUSE installation. If a
third-party service provider is hosting your Portal, that is, a HOSTED installation
(perhaps for a product evaluation), skip this section and contact your hosting
organization’s representative to configure the hosted Portal for your Data Collector.
Note: When adding an EMC Data Domain Server, in the Inventory select Hosts,
not Backup Servers.
■ After adding the policy: For some policies, collections can be run on-demand
using the Run button on the Collector Administration page action bar. The Run
button is only displayed if the policy vendor is supported.
On-demand collection allows you to select which probes and devices to run collection
against. This action collects data the same as a scheduled run, plus logging
information for troubleshooting purposes. For probe descriptions, refer to the policy.
To add an EMC data domain backup Data Collector policy
1 Select Admin > Data Collection > Collector Administration. Currently
configured Portal Data Collectors are displayed.
2 Search for a Collector if required.
3 Select a Data Collector from the list.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
5 Optionally, add an EMC Data Domain server from the policy screen. This action
can also be completed in the Inventory.
See “Adding/Configuring an EMC data domain server within the Data Collector
policy window” on page 58.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being added. This is a read-only
field. By default, the domain for a new policy will be the same as the domain for the collector. This
field is set when you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during the Data Collector installation process.
The Policy Domain is the domain of the policy that is being configured for the Data Collector. The
Policy Domain must be set to the same value as the Collector Domain.
The domain identifies the top level of your host group hierarchy. All newly discovered hosts are
added to the root host group associated with the Policy Domain.
Typically, only one Policy Domain will be available in the drop-down list. If you are a Managed
Services Provider, each of your customers will have a unique domain with its own host group
hierarchy.
To find your Domain name, click your login name and select My Profile from the menu. Your
Domain name is displayed in your profile settings.
Example: yourdomain
Data Domain Servers* When you check Active for a server shown in the list, a dialog window prompts for the SSH
credentials. Alternatively, select a server and click Configure.
You must configure each server with a Backup Type of Data Domain Server.
Add Click Add to add a Data Domain server. The added servers are also displayed under Inventory.
See “Adding/Configuring an EMC data domain server within the Data Collector policy window”
on page 58.
Note: If the host already exists, NetBackup IT Analytics displays a confirmation dialog box to
update the Host Details (including the Host Type). Click OK to update the Host Details / Host Type.
Configure Select a Data Domain server and click Configure to enter the SSH credentials that will be used
to access the server. This provides access to setting up file-level compression. Refer to the
following for additional setup information.
See “Configure a data domain server for file-level compression collection” on page 59.
Export Click Export to retrieve a list of all the Data Domain servers in a comma-separated values file.
Inventory Probe Inventory details such as system, disk, tape, and VTL details are collected by default. Click the
clock icon to create a schedule. You can schedule the collection frequency by minute, hour, day,
week and month. Advanced use of native CRON strings is also available.
Note: Explicit schedules set for a Collector policy are relative to the time on the Collector server.
Schedules with frequencies are relative to the time that the Data Collector was restarted.
VTL Performance Data associated with the performance of the Data Domain system virtual tape libraries (VTL) is
collected by default. Click the clock icon to create a schedule. You can schedule the collection
frequency by minute, hour, day, week and month. Advanced use of native CRON strings is also
available.
File-Level Compression Probe When this probe is selected, MTrees can be entered into the File-Level Compression list configured
for a Data Domain Server. The Data Domain Servers table will then display an Include/Exclude column,
with negative numbers indicating MTrees are excluded and positive numbers indicating MTrees
are included. Hover your mouse over the Incl/Excl column to view the MTrees. If the column displays
+0, no MTrees have been included in the collection; -0 indicates no MTrees have
been excluded, so all MTrees will be collected from.
For additional setup information refer to the following section. This section also contains information
about RMAN and NetBackup jobs.
See “Configure a data domain server for file-level compression collection” on page 59.
Notes Enter or edit notes for your data collector policy. The maximum number of characters is 1024.
Policy notes are retained along with the policy information for the specific vendor and displayed
on the Collector Administration page as a column making them searchable as well.
Test Connection Test Connection initiates a Data Collector process that attempts to connect to the subsystem using
the IP addresses and credentials supplied in the policy. This validation process returns either a
success message or a list of specific connection errors. Test Connection requires that Agent
Services are running.
Several factors affect the response time of the validation request, causing some requests to take
longer than others. For example, there could be a delay when connecting to the subsystem.
Likewise, there could be a delay when getting the response, due to other processing threads
running on the Data Collector.
You can also test the collection of data using the Run functionality available in Admin>Data
Collection>Collector Administration. This On-Demand data collection run initiates a high-level
check of the installation at the individual policy level, including a check for the domain, host group,
URL, Data Collector policy and database connectivity. You can also select individual probes and
servers to test the collection run.
The Data Domain Servers table, shown in the policy, is populated using either of
these methods. Servers added to this table are also displayed under Inventory.
The Data Domain Servers table only displays available servers, that is, servers that are
not assigned to other policies within the domain.
Note: Data Collector policies can be in place for multiple servers, but a server
cannot be assigned multiple policies within the same domain. If you try to add a server
that is already assigned to another Data Collector policy, you will be prompted to
remove it from its current policy and reassign it.
1. Click Add on the EMC Data Domain Backup Data Collector Policy screen.
2. Select a Data Domain Server and click Configure.
The Add Backup Server screen displays.
4. Click Assign Host Group to select a host group membership. Host group
membership is mandatory when creating a backup server. A server can belong
to multiple groups.
archive storage expenses. This data can be used to identify clients with inefficient
de-duplication ratios, highlighting where de-duplication is not an effective approach
for certain backed-up files. For example, some hosts may be running database
applications that are constantly producing unique bits of data. These hosts can
consume much of the expensive Data Domain storage. Data Domain collection
now can identify the largest offenders, which can then be moved to less expensive
storage to avoid paying premium rates for de-duplication. Use the following report
templates to take advantage of this collected data: Data Domain NetBackup File
Compression Summary and the Data Domain NetBackup File Compression Detail.
Field Description
Data Domain Server Name* In order for Data Domain Servers to be listed in the policy window, they must be
configured with a Backup Type of Data Domain Server.
SSH User ID* The command-line interface (CLI) via SSH is used to gather Data Domain system
data. This requires the SSH Service to be enabled and a Data Domain user that has
a management role of 'user'. This User ID must be the same for all addresses listed
in the System Addresses entry field for the Data Domain systems.
File-Level Compression - MTrees Attached to Backup Systems* This selection is relevant only when the File-Level Compression probe is selected
in the EMC Data Domain Backup Policy.
Select the option to either include or exclude collection from the MTrees entered in
the list. If the exclude option is selected with an empty MTree list, data from all
MTrees will be collected. If the include option is selected with an empty MTree
list, no file-level compression data will be collected.
Note: Warning: Choosing to exclude file-level collection with an empty MTree list
may cause collection to take several hours to complete.
The MTree entries can be prefixed with a hint that indicates which backup product writes to the MTree:
RMAN:[MTree Name]
NBU:[MTree Name]
AVM:[MTree Name]
Example: AVM:/data/col1/avamar-1629802442
If you do not specify a hint, it is assumed that the Data Domain MTree contains files
created by Veritas NetBackup.
To run File-Level compression across all Mtrees on a Data Domain system, with a
'hint' to specify that the database attempt to link files with Backup jobs, enter:
RMAN:*
or
NBU:*
or
AVM:*
depending on the Backup system using the MTree. Mixed use of wildcards is not
supported.
Aggregated global and local compression rates for all NetBackup backup images
are collected for all MTrees that are connected to the Active Data Domain servers
via DDBOOST and where the NetBackup Data Collector is active and has
successfully completed an initial data collection.
Warning: Choosing to exclude collection with an empty MTree list may cause
collection to take several hours to complete.
■ Introduction
■ Prerequisites for adding data collectors (Dell EMC NetWorker Backup &
Recovery)
■ Adding a Dell EMC Networker Backup & Recovery Data Collector policy
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
See “Adding a Dell EMC Networker Backup & Recovery Data Collector policy
” on page 67.
■ 64-bit OS. See the Certified Configurations Guide for supported operating
systems.
■ When the NetBackup IT Analytics system collects data from any vendor
subsystem, the collection process expects name/value pairs to be in US English,
and requires the installation to be done by an Administrator with a US English
locale. The server’s language version can be non-US English.
■ Supports Amazon Corretto 17. Amazon Corretto is a no-cost, multi-platform,
production-ready distribution of the Open Java Development Kit (OpenJDK).
■ For performance reasons, do not install Data Collectors on the same server as
the Portal. However, if you must have both on the same server, verify that the
Portal and Data Collector software do not reside in the same directory.
■ Install only one Data Collector on a server (or OS instance).
■ For most Backup Manager systems, install the Data Collector on a server that
is in the same time zone as the backup server from which you are collecting
data. For Veritas NetBackup and IBM Spectrum Protect (TSM) collection, the
Data Collector server and backup server can be in different time zones.
■ The NetWorker REST API is installed with the NetWorker installation. As part
of the installation process, an Apache Tomcat instance is installed and a non-root
user, nsrtomcat, is created. If your system has special user security
requirements, ensure that proper operational permissions are granted to the
nsrtomcat user.
■ With the NetWorker REST API, all endpoints other than the initial landing endpoint
(https://your-server-name:9090/nwrestapi/) require authentication. Once a user
has been authenticated by the API, permissions to NetWorker resources will be
based on the NetWorker permissions for that user.
■ The default port number to connect to the NetWorker REST API is 9090. This value is
configurable in the policy. A quick connectivity check is shown after this list.
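The connectivity check mentioned above can be as simple as requesting the landing endpoint from the Data Collector server. The host name is a placeholder; the -k option skips certificate validation and should only be used for this test:
curl -k https://<networker-server>:9090/nwrestapi/
A successful response confirms that the REST API port (9090 by default) is open. All other endpoints require the View-only User ID and password that you supply in the policy.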
Table 7-1 Dell EMC Networker Backup and Recovery Data Collector policy
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being added.
This is a read-only field. By default, the domain for a new policy will be the same
as the domain for the collector. This field is set when you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during the Data Collector
installation process. The Policy Domain is the domain of the policy that is being
configured for the Data Collector. The Policy Domain must be set to the same value
as the Collector Domain. The domain identifies the top level of your host group
hierarchy. All newly discovered hosts are added to the root host group associated
with the Policy Domain. Typically, only one Policy Domain will be available in the
drop-down list. If you are a Managed Services Provider, each of your customers
will have a unique domain with its own host group hierarchy. To find your Domain
name, click your login name and select My Profile from the menu. Your Domain
name is displayed in your profile settings.
Backup Server Name* Enter one or more Dell EMC NetWorker Backup & Recovery server host names to
probe. Comma-separated host names are supported, for example: myhost1,
myhost2. This field is required.
User ID* View-only User ID for Dell EMC NetWorker REST API. This field is required.
Password* Password for Dell EMC NetWorker REST API. This field is required.
Port* Port used for Dell EMC NetWorker REST API connection. Default: 9090. This field
is required.
Authentication Server Authentication Server used for DELL EMC NetWorker REST API connection.
Note: This parameter is required only if there is a dedicated authentication server
configured for this NetWorker Server. If this parameter is not specified, collection
assumes this NetWorker server uses its own local authentication service.
Authentication Port Authentication Port used for the DELL EMC NetWorker REST API connection. The default
value is 9090.
Active Probes
Protection Sources Collects Storage Nodes, Devices, Protection Groups, Protection Policies, Workflows
and their Actions.
Schedule Click the clock icon to create a schedule. By default, it is collected at 3:33 am daily.
Every Minute, Hourly, Daily, Weekly, and Monthly schedules may be created.
Advanced use of native CRON strings is also available.
*/20 9-18 * * * means every 20 minutes between the hours of 9am and 6pm
Notes Enter or edit notes for your data collector policy. The maximum number of characters
is 1024. Policy notes are retained along with the policy information for the specific
vendor and displayed on the Collector Administration page as a column making
them searchable as well.
Test Connection Test Connection initiates a Data Collector process that attempts to connect to the
subsystem using the IP addresses and credentials supplied in the policy. This
validation process returns either a success message or a list of specific connection
errors. Test Connection requires that Agent Services are running.
Several factors affect the response time of the validation request, causing some
requests to take longer than others. For example, there could be a delay when
connecting to the subsystem. Likewise, there could be a delay when getting the
response, due to other processing threads running on the Data Collector.
You can also test the collection of data using the Run functionality available
in Admin > Data Collection > Collector Administration. This On-Demand data
collection run initiates a high-level check of the installation at the individual policy
level, including a check for the domain, host group, URL, Data Collector policy and
database connectivity. You can also select individual probes and servers to test the
collection run.
Chapter 8
Pre-Installation setup for
generic backup
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
Note: In addition to the regularly scheduled data collection, the CSV file also can
be imported manually.
See “Manually load the CSV file (generic backup)” on page 77.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
6 Optionally, add/edit a Generic Backup server from the policy screen. These
operations can also be completed in the Inventory.
7 Click OK to save the policy.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being
added. This is a read-only field. By default, the domain for a new policy
will be the same as the domain for the collector. This field is set when
you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during the Data
Collector installation process. The Policy Domain is the domain of the
policy that is being configured for the Data Collector. The Policy Domain
must be set to the same value as the Collector Domain.
The domain identifies the top level of your host group hierarchy. All
newly discovered hosts are added to the root host group associated
with the Policy Domain.
To find your Domain name, click your login name and select My Profile
from the menu. Your Domain name is displayed in your profile settings.
Example: yourdomain
Backup Management Server* Select the backup product management server, e.g., Generic Backup
Server, with which the Data Collector will communicate. The selected
management server is used to associate the data file with a server.
Add Click Add to add a Generic Backup server. Added servers are also
displayed in the Inventory.
Edit Select a server and click Edit to update the server values.
File Path* The absolute file path on the Data Collector Server where the CSV data
file is located. Typically, C:\\Program
Files\\Aptare\\mbs\\logs\\genericBackups.csv for
Windows, or /opt/aptare/mbs/logs/genericBackups.csv for
Linux.
Example:
/opt/aptare/mbs/logs/genericBackups.csv
Click the clock icon to create a schedule frequency. You can schedule
the collection frequency by minute, hour, day, week and month.
Advanced use of native CRON strings is also available.
Note: Explicit schedules set for a Collector policy are relative to the
time on the Collector server. Schedules with frequencies are relative
to the time that the Data Collector was restarted.
Notes Enter or edit notes for your data collector policy. The maximum number
of characters is 1024. Policy notes are retained along with the policy
information for the specific vendor and displayed on the Collector
Administration page as a column making them searchable as well.
Note: Data Collector policies can be in place for multiple servers, but a server
cannot be assigned multiple policies within the same domain. If you try to add a server
that is already assigned to another Data Collector policy, you will be prompted to
remove it from its current policy and reassign it.
Windows:
C:\opt\APTARE\mbs\bin\listcollectors.bat
Linux:
/opt/aptare/mbs/bin/listcollectors.sh
In the output, look for the Event Collectors section associated with the Software
Home—the location of the CSV file (the path that was specified when the Data
Collector Policy was created). Find the Event Collector ID and Host ID.
2. Use the following commands to load the data from the CSV file into the Portal
database.
Windows:
C:\opt\APTARE\mbs\bin\loadGenericBackupData.bat <EventCollectorID>
<HostID>
Linux:
/opt/aptare/mbs/bin/loadGenericBackupData.sh <EventCollectorID>
<HostID>
Note: If you run the command with no parameters, it will display the syntax.
The load script will check if the backup server and client already exist; if not, they
will be added to the database. The script then checks for a backup job with the
exact same backup server, client, start date and finish date. If no matches are found,
the job will be added; otherwise, it will be ignored. This prevents duplicate entries
and allows the import to be repeated if the file has not been updated. Once
the load is complete, these clients and jobs will be visible via the NetBackup IT
Analytics Portal and the data will be available for reporting.
CSV format specification
Note: The CSV file must be UTF-8 encoded; however, be sure to remove any UTF-8
BOM (Byte Order Mark). The CSV cannot be parsed properly with these additional
characters.
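On Linux, you can detect and strip a UTF-8 BOM before loading the file. This sketch uses GNU coreutils and GNU sed, and the file name matches the example path used earlier in this chapter:
head -c 3 /opt/aptare/mbs/logs/genericBackups.csv | od -c
sed -i '1s/^\xEF\xBB\xBF//' /opt/aptare/mbs/logs/genericBackups.csv
The od output begins with 357 273 277 if a BOM is present; the sed command removes it in place.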
EXAMPLE: genericBackupJobs.csv
'Mainframe Backup','mainframe_name','10.10.10.10','BACKUP','2008-03-24
10:25:00', '2008-03-24
11:50:00',3713,45221,'D',0,'413824','Retail_s01002030','Incremental',
'/I:/Shared/','Daily'
UNIX tar backup','host_xyz.anyco.com','null','BACKUP','2008-03-24
10:22:00','2008-03-24
12:50:00',1713,45221,'T',1,'5201','HQ_Finance','Full','/D:/Backups/','Daily'
'ArcServe','host_123.anyco.com','null','RESTORE','2008-03-24
8:22:00','2008-03-24
9:12:00',0,0,'T',0,'2300','Retail_s03442012','Incremental',
'/I:/Shared/','EOM'
Chapter 9
Pre-Installation setup for
HP Data Protector
This chapter includes the following topics:
■ Introduction
■ Configure the Data Collector server in Cell Manager (HP Data Protector)
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
Cell Manager
HP Data Protector bases its backup management on the Cell Manager, which
provides a central point for managing backup and restore sessions. Cell Manager
executes Data Protector Software and Session Managers and also contains the
Data Protector Internal Database (IDB). This database includes such details as
backup duration, session IDs, and media IDs. Multiple cells can be configured into
a group, enabling single-point management of cell managers.
Media agent
The media agent communicates with the disk agent and also reads/writes data on
the media device.
Client systems
The client systems are the systems that are being backed up. These systems have
the HP Data Protector Disk Agent installed on them.
Prerequisites for adding Data Collectors (HP Data Protector)
■ See “Configure the Data Collector server in Cell Manager (HP Data
Protector)” on page 88.
■ See “Configure an HP Data Protector admin user” on page 89.
■ See “Validate HP Data Protector setup” on page 90.
Note: These steps apply only if you are performing an IN-HOUSE installation. If a
third-party service provider is hosting your Portal--that is, a HOSTED installation
(perhaps for a product evaluation)--skip this section and contact your hosting
organization’s representative to configure the hosted Portal for your Data Collector.
omnicc
omnidbutil
omnicellinfo
omnidb
omnidownload
omnimm
omnirpt
Preparing for centralized data collection on a remote Cell Manager
Find the directory (default: C:\Program Files\omniback\bin) where the following
commands are located and verify that the following commands exist in that directory.
This validates that the User Interface was installed correctly.
omnicc
omnicellinfo
omnidb
omnidownload
omnimm
omnirpt
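As a quick sanity check, a minimal sketch for a Linux Data Collector follows. The /opt/omni/bin directory is an assumption (the Windows default noted above is C:\Program Files\omniback\bin), so adjust the path to match your installation.

# Confirm the Data Protector CLI commands are present on the Data Collector server.
OMNI_BIN=/opt/omni/bin
for cmd in omnicc omnicellinfo omnidb omnidownload omnimm omnirpt; do
    [ -x "$OMNI_BIN/$cmd" ] && echo "OK       $cmd" || echo "MISSING  $cmd"
done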
Then, in the Connect to Cell Manager dialog, select or type the name of
the Cell Manager and click Connect.
■ Data Protector Java UI (Linux):
/opt/omni/java/client/bin/javadpgui.sh
2. Make sure that you have a user that is a Local System Administrator account
from the Data Collector Server.
See “Configure an HP Data Protector admin user” on page 89.
3. Validate the conditions.
See “Validate HP Data Protector setup” on page 90.
If the Data Collector is installed on a different server from the Cell Manager server,
the following tasks must be performed.
On the HP Data Protector Cell Manager server:
1. Open the HP Data Protector UI.
2. Make sure you have a user that is a Local System Administrator account from
the Data Collector Server.
■ Switch to the User context and add the user under an “Admin” class.
Validate HP Data Protector setup
■ If the first command does not run correctly, it means that the User Interface
component has not been installed correctly.
■ If the first command runs, but the second command displays, “Insufficient
permissions. Access denied,” it means that the configuration of the Data
Collector Server in the HP Cell Manager server has failed.
4. Enter values for all required fields (denoted by an *) and click OK. The field,
Internal Host Name, needs to match the host name of the HP Cell Manager
Server. Ensure you select HP Cell Manager Server as the Type. The fields
External Name, Make and Model are not used by the application for anything
other than display purposes.
5. Select the Host Group while you are adding the backup server.
Note: If a server group hierarchy has already been established in the application,
you can select the host group to which you would like the HP Cell Manager Server
to belong, although it is recommended that you add the HP Cell Manager Server
to the top-level folder.
Add an HP Data Protector Data Collector policy
5 Optionally add an HP Data Protector Backup Server from the policy screen.
This action can also be completed in the Inventory.
See “Add/Edit a HP Data Protector server within the Data Collector policy”
on page 96.
Table 9-1
Field Description
Collector Domain The domain of the collector to which the collector backup
policy is being added. This is a read-only field. By default,
the domain for a new policy will be the same as the domain
for the collector. This field is set when you add a collector.
Policy Domain The Collector Domain is the domain that was supplied during
the Data Collector installation process. The Policy Domain
is the domain of the policy that is being configured for the
Data Collector. The Policy Domain must be set to the same
value as the Collector Domain.
To find your Domain name, click your login name and select
My Profile from the menu. Your Domain name is displayed
in your profile settings.
Example: yourdomain
Backup Management Server* Select the Cell Manager server. Verify that the IP address
and OS information are correct.
Add Click Add to add a Cell Manager server. Added servers are
also displayed in the Inventory.
Note: If the host already exists, NetBackup IT Analytics
displays a confirmation dialog box to update the Host Details
(including the Host Type). Click Ok to update Host details /
Host Type.
Edit Select a Cell Manager server and click Edit to update the
values.
Field Description
Cell Manager* The Cell Manager is the server that runs session managers
and core software to manage the backup details in the HP
Data Protector database. Enter the name or IP address of
the HPDP Cell Manager server.
Example: HPSERVER
Backup Software Location* On the Data Collector server, this is the home directory of
the HP Data Protector Admin Client software—that is, the
location of the omni commands, such as omnicellinfo and
omnirpt—to be used by the Data Collector server.
The following parameters are required only when the Data Collector server is different from
the Cell Manager server.
Remote Software Location The home directory on the HP Data Protector server where
Cell Manager is installed. Typically C:\Program
Files\Omniback for Windows, or /opt/omni for Linux; only
required for remote access to Cell Manager.
Operating System The operating system on which the HP Data Protector Data
Collector is running; only required for remote access to Cell
Manager.
Cell Manager User ID An Admin User that the Cell Manager server
recognizes--either a local user account or a Windows domain
user account--that has execute rights to the omnidbutil
command. This command is used to obtain drive status.
Password/Repeat Password Password for the Cell Manager User ID on the remote system.
Access Control Command For Linux hosts: If the user ID is not root, provide the full path
of the access control command. Example: /usr/bin/sudo
Field Description
WMI Proxy Server Enter the server name or IP address where the WMI Proxy
is installed. Only needed if collecting from Windows hosts
and when the Data Collector is on a server that is different
from the Cell Manager server. If the Data Collector is installed
on a Linux OS, a WMI Proxy Server must be installed on a
Windows system in order to collect data from a Cell Manager
that is installed on a Windows system.
Notes Enter or edit notes for your data collector policy. The
maximum number of characters is 1024. Policy notes are
retained along with the policy information for the specific
vendor and displayed on the Collector Administration page
as a column making them searchable as well.
Note: Data Collector policies can be in place for multiple servers, but a server
cannot be assigned multiple policies within the same domain. If you try to add a server
that is already assigned to another Data Collector policy, you will be prompted to
remove it from its current policy and reassign it.
4. Click Assign Host Group to select a host group membership. Host group
membership is mandatory when creating a backup server. A server can belong
to multiple groups.
■ Introduction
■ Add/Configure an IBM Spectrum Protect (TSM) server within the Data Collector
policy
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed. This enables you to determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
■ For performance reasons, do not install Data Collectors on the same server as
the NetBackup IT Analytics Portal. However, if you must have both on the same
server, verify that the Portal and Data Collector software do not reside in the
same directory.
■ Install only one Data Collector on a server (or OS instance).
■ For most Backup Manager systems, install the Data Collector on a server that
is in the same time zone as the backup server from which you are collecting
data. For Veritas NetBackup and IBM Spectrum Protect (TSM) collection, the
Data Collector server and backup server can be in different time zones.
■ An IBM Spectrum Protect (TSM) client must be installed on the Data Collector
server with the dsmadmc command available.
■ Ports to IBM Spectrum Protect (TSM) instances must be open. Each instance
will have a unique port. Make note of these instance/port combinations for data
collection. Typically, TCP 1500 is the default port.
■ The Data Collector server can be any server within your network that is Java
1.7 compatible and with dsmadmc installed. For Linux platforms, this server
must be added to dsm.sys so the Data Collector can use the dsmadmc
command.
■ For a data collector installed on Windows, the parameter variable DSM_CONFIG
MUST be configured. The value of the variable should be the full path and
filename for the dsm.opt file.
Note: For a default installation of the Windows TSM client, the location is:
C:\Program Files\Tivoli\TSM\baclient\dsm.opt
To add the parameter, navigate to Admin > Advanced > Parameters. Click Add
and the Add Advanced Parameter dialog box is displayed. Click Add to add the
parameter.
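The prerequisites above require the Data Collector server to be defined in dsm.sys on Linux platforms. The following is a minimal sketch for a Linux Data Collector; the stanza name, server address, port, and credentials are placeholders, and the client path assumes a default TSM client installation.

# Append a server stanza to dsm.sys (use your own instance/port combinations).
cat >> /opt/tivoli/tsm/client/ba/bin/dsm.sys <<'EOF'
SErvername  TSM_INSTANCE1
   COMMMethod         TCPip
   TCPPort            1500
   TCPServeraddress   tsmserver1.example.com
EOF

# Verify that dsmadmc can reach the instance with a user ID that has
# query and select privileges (credentials here are placeholders).
dsmadmc -se=TSM_INSTANCE1 -id=collector_user -password=secret "query status"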
Note: If a host group hierarchy has already been established in the application,
you can find the host group to which you would like the IBM Spectrum Protect (TSM)
server to belong, although it is recommended that you add the TSM server to the top-level
folder.
Note: You can also add IBM Spectrum Protect (TSM) servers directly from the Data
Collector policy screen.
See “Add/Configure an IBM Spectrum Protect (TSM) server within the Data Collector
policy” on page 106.
Field Description
Collector Domain The domain of the collector to which the collector backup
policy is being added. This is a read-only field. By default,
the domain for a new policy will be the same as the domain
for the collector. This field is set when you add a collector.
Field Description
Policy Domain The Policy Domain is the domain of the policy that is being
configured for the Data Collector. The Policy Domain must
be set to the same value as the Collector Domain. (The
Collector Domain is the domain that was supplied during the
Data Collector installation process.)
To find your Domain name, click your login name and select
My Profile from the menu. Your Domain name is displayed
in your profile settings.
Field Description
Import Click Import to browse for the CSV file in which you entered
the IBM Spectrum Protect (TSM) server/instance configuration
details. This enables you to quickly add a list of IBM Spectrum
Protect (TSM) instances or servers.
Backup Software Location* The home directory of the TSM Admin Client software--that
is, the dsmadmc command on the Data Collector server.
Backup Policy Details Collects backup policy details including policy name, policy
identifier, date and time as it relates to the specified IBM
Spectrum Protect (TSM) instances/servers.
Backup Event Details Collects backup event details including backup type, client
names, duration, dates, size, files, directories and so on.
Restore Event Details Collects restore event details such as job duration, number
of files backed up and also skipped, amount of data restored,
and restore job status.
Field Description
Storage Pool Details Collects IBM Spectrum Protect (TSM) storage pool
information, such as migration and reclamation details.
Job Summary Details Collects IBM Spectrum Protect (TSM) job summary details
including client name, node name, backup type, dates,
duration and so on, as it relates to the specified IBM Spectrum
Protect (TSM) instances/servers.
Client Node Details Collects client node details covered by the selected policy
domain as they relate to the specified IBM Spectrum Protect
(TSM) instances/servers.
Drive Status Monitor Continuously monitors the IBM Spectrum Protect (TSM)
console for drive status messages; there is no schedule.
Drive Status Details Collects drive status details including drive name, library
name, status, start and end times as it relates to the specified
IBM Spectrum Protect (TSM) instances/servers.
Inventory Details Collects IBM Spectrum Protect (TSM) host details including
host name, IP Address, Host ID, and port number. Also
collects drive and path details such as device type, device
name, library name, and ACS drive ID.
Tape Details Collects the tape details including media type name, media
status, storage pool name, and estimated capacity.
Volume Usage and Media Occupancy Details Collects client and node information, and then for each node
it retrieves the volume usage details from the IBM Spectrum
Protect (TSM) instance or server. For each volume, collects
details of media occupying the storage pool. Details include
the type and size of the media.
Storage Pool Backup Monitor Continuously monitors the IBM Spectrum Protect (TSM)
console for storage pool migration messages; there is no
schedule.
Field Description
Notes Enter or edit notes for your data collector policy. The
maximum number of characters is 1024. Policy notes are
retained along with the policy information for the specific
vendor and displayed on the Collector Administration page
as a column making them searchable as well.
Note: Data Collector policies can be in place for multiple instances but an instance
cannot be assigned multiple policies within the same domain. If you try to add an
instance that is already assigned to another Data Collector policy, you will be
prompted to remove it from its current policy and reassign it.
1. Click Add on the IBM Tivoli Storage Manager Data Collector Policy screen.
The Add IBM Tivoli Storage Manager Instance screen displays.
Field Description
Instance Name The name assigned to the IBM Spectrum Protect (TSM) instance. This
is a separate instance of the server software running on an IBM Spectrum
Protect (TSM) server. A single server can run multiple instances. An
instance is defined by instance name, host name, and port number.
Host Name The name of the IBM Spectrum Protect (TSM) server. This is the host
running the IBM Spectrum Protect (TSM) server software. This system
will be known by its host name. If you do not know the Host Name, you
can enter the IP Address in this field.
Host IP Address The IP address of the host running the IBM Spectrum Protect (TSM)
server software. By default it is set to 127.0.0.1. You can revise it as
required; this field is not mandatory.
Host Port Port number used by dsmadmc to communicate with the IBM Spectrum
Protect (TSM) Instance. Each instance will have its own specific port.
By default it is set to 1500. You can revise it, but the field is required.
User ID IBM Spectrum Protect (TSM) user ID with query and select privileges.
Import IBM Spectrum Protect (TSM) information
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
■ When the NetBackup IT Analytics system collects data from any vendor
subsystem, the collection process expects name/value pairs to be in US English,
and requires the installation to be done by an Administrator with a US English
locale. The server’s language version can be non-US English.
■ Supports Amazon Corretto 17. Amazon Corretto is a no-cost, multi-platform,
production-ready distribution of the Open Java Development Kit (OpenJDK).
■ For performance reasons, do not install Data Collectors on the same server as
the NetBackup IT Analytics Portal. However, if you must have both on the same
server, verify that the Portal and Data Collector software do not reside in the
same directory.
■ NAKIVO Backup & Replication Server must have the license that allows access
to NetBackup IT Analytics reporting REST APIs. User credentials to access
NetBackup IT Analytics reporting REST APIs are also required.
■ Install only one Data Collector on a server (or OS instance).
■ Default port 4443 - Director Web UI port (used during NAKIVO Backup &
Replication installation).
■ After adding the policy: For some policies, collections can be run on-demand
using the Run button on the Collector Administration page action bar. The Run
button is only displayed if the policy vendor is supported.
On-demand collection allows you to select which probes and devices to run collection
against. This action collects data the same as a scheduled run, plus logging
information for troubleshooting purposes. For probe descriptions, refer to the policy.
To add the policy
1 Select Admin > Data Collection > Collector Administration. Currently
configured Portal Data Collectors are displayed.
2 Search for a Collector if required.
3 Select a Data Collector from the list.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
5 Enter or select the parameters. Mandatory parameters are denoted by an
asterisk (*):
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being
added. This is a read-only field. By default, the domain for a new policy
will be the same as the domain for the collector. This field is set when
you add a collector.
Policy Domain The Policy Domain is the domain of the policy that is being configured
for the Data Collector. The Policy Domain must be set to the same
value as the Collector Domain. The domain identifies the top level of
your host group hierarchy. All newly discovered hosts are added to the
root host group associated with the Policy Domain.
To find your Domain name, click your login name and select My Profile
from the menu. Your Domain name is displayed in your profile settings.
Server Address Enter one or more NAKIVO Backup & Replication server IP addresses
to probe. Comma-separated addresses are supported.
Port Enter the value for NAKIVO server's Director Web UI port. Default port
is 4443.
Username Enter the user name used for the NAKIVO Backup & Replication system.
Password* Password for NAKIVO Backup & Replication Server associated with
the Username.
Active Probes
*/20 9-18 * * * means every 20 minutes between the hours of 9am and
6pm
Table 11-1 NAKIVO Backup & Replication Data Collector Policy (continued)
Field Description
Notes Enter or edit notes for your data collector policy. The maximum number
of characters is 1024. Policy notes are retained along with the policy
information for the specific vendor and displayed on the Collector
Administration page as a column making them searchable as well.
Test Connection Test Connection initiates a Data Collector process that attempts to
connect to the subsystem using the IP addresses and credentials
supplied in the policy. This validation process returns either a success
message or a list of specific connection errors. Test Connection requires
that Agent Services are running.
You can also test the collection of data using the Run functionality
available in Admin > Data Collection > Collector Administration. This
On-Demand data collection run initiates a high-level check of the
installation at the individual policy level, including a check for the domain,
host group, URL, Data Collector policy and database connectivity. You
can also select individual probes and servers to test the collection run.
See Working with On-Demand Data Collection for details.
Chapter 12
Pre-Installation setup for
Veritas Backup Exec
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
Upgrade MS SQL Server to the latest version to enable secure collection. Your MS
SQL Server version may not be supported for NetBackup IT Analytics version 10.3.
If an upgrade is not possible, a workaround can be attempted to restore compatibility.
If the following steps do not resolve the issue, your version of MS SQL Server is
not supported.
Use the following steps to modify the enabled algorithms to allow communication
with the data collector:
1. Edit <collector install dir>/jre/conf/security/java.security.
2. Search for jdk.tls.disabledAlgorithms.
3. Copy the existing lines and comment them out (to keep a backup for easy restore).
4. Remove 3DES_EDE_CBC from the list, leaving the remaining entries unchanged, for
example:
... keySize < 1024, \
EC keySize < 224, DES40_CBC, RC4_40
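A minimal scripted version of these steps is sketched below for a Linux Data Collector. The installation path is an assumption, and you should review the edited property manually afterwards because the value can span several continuation lines.

# Back up java.security, then remove 3DES_EDE_CBC from jdk.tls.disabledAlgorithms.
JSEC=/opt/aptare/jre/conf/security/java.security
cp "$JSEC" "$JSEC.$(date +%Y%m%d%H%M%S).bak"      # keep a copy for easy restore
sed -i 's/3DES_EDE_CBC,\? *//' "$JSEC"            # drop the entry and trailing comma
grep -A3 '^jdk.tls.disabledAlgorithms' "$JSEC"    # inspect the result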
Note: These steps apply only if you are performing an IN-HOUSE installation. If a
third-party service provider is hosting your Portal--that is, a HOSTED installation
(perhaps for a product evaluation)--skip this section and contact your hosting
organization’s representative to configure the hosted portal for your Data Collector.
3. The user can be restricted within the context of Backup Exec by configuring
the login account, as shown in the following example.
WindowsdomainName,hostname,ip_address,dbInstance,adminUserName,adminPassword
,server1,,,,
,server2,,,,
,server3,,,,
,server4,,,,
windowsdomainname,myserver,10.0.0.67,scdb,Administrator,password
In the previous example file, there are five Backup Exec servers to be loaded
into the Portal. The first four servers will use the default credentials. The last
server will use the credentials as specified in this file.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
Field Description
Collector Domain The domain of the collector to which the collector backup
policy is being added. This is a read-only field. By default,
the domain for a new policy will be the same as the domain
for the collector. This field is set when you add a collector.
Field Description
Policy Domain The Collector Domain is the domain that was supplied during
the Data Collector installation process. The Policy Domain
is the domain of the policy that is being configured for the
Data Collector. The Policy Domain must be set to the same
value as the Collector Domain.
To find your Domain name, click your login name and select
My Profile from the menu. Your Domain name is displayed
in your profile settings.
Example: yourdomain
Default Windows Domain Windows domain name. If the host is not a member of a
domain, or to specify a local user account, use a period (.)
to substitute the local host SSID for the domain.
Notes Enter or edit notes for your data collector policy. The
maximum number of characters is 1024. Policy notes are
retained along with the policy information for the specific
vendor and displayed on the Collector Administration page
as a column making them searchable as well.
Chapter 13
Pre-Installation setup for
Veritas NetBackup
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
General prerequisites for adding Data Collectors (Veritas NetBackup)
Watch this video of how to configure a Data Collector and configure a policy for
data collection.
http://video.symantec.com/services/player/bcpid292374537001?bckey=AQ~~,AAAABuIiy9k~,I8BhasVwr9zYL9V36WFi86fR_NoePscn&bctid=6301527837001
■ For performance reasons, do not install Data Collectors on the same server as
the Portal. However, if you must have both on the same server, verify that the
Portal and Data Collector software do not reside in the same directory.
■ Install only one Data Collector on a server (or OS instance).
■ For most Backup Manager systems, install the Data Collector on a server that
is in the same time zone as the backup server from which you are collecting
data. For Veritas NetBackup and IBM Spectrum Protect (TSM) collection, the
Data Collector server and backup server can be in different time zones.
■ Uses ports 443, 1556, and 13724; the WMI range of ports; and Linux SSH port 22.
■ The NetBackup Event Monitor probe, enabled on the data collector policy screen,
uses the nb_monitor_util executable. This executable is installed by default for
all NetBackup 8.2 installations. It can be found in the
/usr/openv/netbackup/bin/goodies directory on Linux and \Program
Prerequisites to use SSH and WMI (Veritas NetBackup)
export EDITOR=nano
2 Once the preferred editor is configured, execute the following commands. Use
visudo if available.
visudo -f /etc/sudoers.d/<username>
3 Add the following lines to the sudoers file, substituting the name of the user
you created for <username>:
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/* ,\
/usr/openv/volmgr/bin/* ,\
/usr/openv/netbackup/bin/*
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/bpgetconfig ,\
/usr/openv/netbackup/bin/admincmd/bpcoverage ,\
/usr/openv/netbackup/bin/admincmd/bpdbjobs ,\
/usr/openv/netbackup/bin/admincmd/bpimagelist ,\
/usr/openv/netbackup/bin/admincmd/bperror ,\
/usr/openv/netbackup/bin/admincmd/bppllist ,\
/usr/openv/netbackup/bin/admincmd/bpretlevel ,\
/usr/openv/netbackup/bin/admincmd/bpplclients ,\
/usr/openv/netbackup/bin/admincmd/bpmedialist ,\
/usr/openv/netbackup/bin/admincmd/bpstulist ,\
/usr/openv/netbackup/bin/admincmd/nbdevquery ,\
/usr/openv/netbackup/bin/admincmd/nbauditreport ,\
/usr/openv/netbackup/bin/admincmd/nbstl ,\
/usr/openv/netbackup/bin/admincmd/nbstlutil ,\
/usr/openv/netbackup/bin/admincmd/bpstsinfo ,\
/usr/openv/netbackup/bin/admincmd/bpminlicense ,\
/usr/openv/volmgr/bin/vmquery ,\
/usr/openv/volmgr/bin/vmpool ,\
/usr/openv/volmgr/bin/vmglob ,\
/usr/openv/volmgr/bin/vmcheckxxx ,\
/usr/openv/volmgr/bin/vmoprcmd ,\
/usr/openv/volmgr/bin/tpconfig ,\
/usr/openv/netbackup/bin/bplist ,\
/usr/openv/netbackup/bin/nbsqladm ,\
/usr/openv/netbackup/bin/nboraadm
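To confirm the sudoers entries are syntactically valid and active for the collection account, a minimal sketch follows; the user name appadmin is an assumption, so substitute the account you created.

# Syntax-check the new sudoers fragment and list the commands the user may run.
visudo -c -f /etc/sudoers.d/appadmin
sudo -l -U appadmin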
3. Assign read and write permissions to the folder for the CLI user account and
nbusers group.
Refer to Overriding the NetBackup appliance intrusion prevention system policy
in the Veritas NetBackup™ Appliance Security Guide.
TMPDIR=/log/aptare/tmp
export TMPDIR
fi
OR
Use the advanced parameter NBU_SSH_TMPDIR. For the available methods of
configuring the TMPDIR environment variable:
See “Veritas NetBackup SSH: Changing the Linux Temporary Directory for
Collection” on page 135.
in the procedure below. You must also obtain the REST API key from the NetBackup
UI.
1 Open an SSH session to the NetBackup instance as an admin or root user to
create an appadmin user.
2 Create a local user account:
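The account-creation command itself is not reproduced in this excerpt; a minimal sketch, assuming the local user is named appadmin, might look like this:

# Create the local collection user and set its password (run as root).
useradd -m appadmin
passwd appadmin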
3 Grant sudo access to the local user account created above in /etc/sudoers.d:
■ Create sudoers file in /etc/sudoers.d, substituting the name of the user
you created for <username>.
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/* ,\
/usr/openv/volmgr/bin/* ,\
/usr/openv/netbackup/bin/*
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/bpgetconfig ,\
/usr/openv/netbackup/bin/admincmd/bpcoverage ,\
/usr/openv/netbackup/bin/admincmd/bpdbjobs ,\
/usr/openv/netbackup/bin/admincmd/bpimagelist ,\
/usr/openv/netbackup/bin/admincmd/bperror ,\
/usr/openv/netbackup/bin/admincmd/bpminlicense ,\
/usr/openv/netbackup/bin/admincmd/bppllist ,\
/usr/openv/netbackup/bin/admincmd/bpretlevel ,\
/usr/openv/netbackup/bin/admincmd/bpplclients ,\
/usr/openv/netbackup/bin/admincmd/bpmedialist ,\
/usr/openv/netbackup/bin/admincmd/bpstulist ,\
/usr/openv/netbackup/bin/admincmd/nbdevquery ,\
/usr/openv/netbackup/bin/admincmd/nbauditreport ,\
/usr/openv/netbackup/bin/admincmd/nbstl ,\
/usr/openv/netbackup/bin/admincmd/nbstlutil ,\
/usr/openv/netbackup/bin/admincmd/bpstsinfo ,\
/usr/openv/volmgr/bin/vmquery ,\
/usr/openv/volmgr/bin/vmpool ,\
/usr/openv/volmgr/bin/vmglob ,\
/usr/openv/volmgr/bin/vmcheckxxx ,\
/usr/openv/volmgr/bin/vmoprcmd ,\
/usr/openv/volmgr/bin/tpconfig ,\
/usr/openv/netbackup/bin/bplist ,\
/usr/openv/netbackup/bin/nbsqladm ,\
/usr/openv/netbackup/bin/nboraadm
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/* ,\
/usr/openv/volmgr/bin/* ,\
/usr/openv/netbackup/bin/*
Defaults:<username> !requiretty
<username> ALL=(ALL) NOPASSWD: \
/usr/openv/netbackup/bin/admincmd/bpgetconfig ,\
/usr/openv/netbackup/bin/admincmd/bpcoverage ,\
/usr/openv/netbackup/bin/admincmd/bpdbjobs ,\
/usr/openv/netbackup/bin/admincmd/bpimagelist ,\
/usr/openv/netbackup/bin/admincmd/bperror ,\
/usr/openv/netbackup/bin/admincmd/bpminlicense ,\
/usr/openv/netbackup/bin/admincmd/bppllist ,\
/usr/openv/netbackup/bin/admincmd/bpretlevel ,\
/usr/openv/netbackup/bin/admincmd/bpplclients ,\
/usr/openv/netbackup/bin/admincmd/bpmedialist ,\
/usr/openv/netbackup/bin/admincmd/bpstulist ,\
/usr/openv/netbackup/bin/admincmd/nbdevquery ,\
/usr/openv/netbackup/bin/admincmd/nbauditreport ,\
/usr/openv/netbackup/bin/admincmd/nbstl ,\
/usr/openv/netbackup/bin/admincmd/nbstlutil ,\
/usr/openv/netbackup/bin/admincmd/bpstsinfo ,\
/usr/openv/volmgr/bin/vmquery ,\
/usr/openv/volmgr/bin/vmpool ,\
/usr/openv/volmgr/bin/vmglob ,\
/usr/openv/volmgr/bin/vmcheckxxx ,\
/usr/openv/volmgr/bin/vmoprcmd ,\
/usr/openv/volmgr/bin/tpconfig ,\
/usr/openv/netbackup/bin/bplist ,\
/usr/openv/netbackup/bin/nbsqladm ,\
/usr/openv/netbackup/bin/nboraadm
5 Obtain the REST API key from the NetBackup UI and copy it into the API key
field. The API key field appears on the Add Backup Server or Edit Backup Server
popup that is displayed when you click Add or Edit on the Veritas NetBackup
Data Collector Policy window.
Note: To know more about configuring a RBAC role in NetBackup, see NetBackup
Web UI Administrator's Guide > Managing Security > Managing role-based access
control > Add a custom RBAC role section.
Note: You need to select the Custom option when configuring the user.
IMAGES VIEW
LICENSING VIEW
MALWARE|SCAN-TOOL VIEW
MALWARE|SCAN-HOST VIEW
MALWARE|SCAN-HOST-POOL VIEW
MALWARE:VIEW-SCAN-RESULT VIEW
Note: The user needs to have custom role access with global access permissions
in NetBackup to execute data collection successfully.
TMPDIR=/path/to/tmp
export TMPDIR
For the NetBackup appliance: all CLI users share the /home/nbusers directory.
To only change the TMPDIR directory for the collection user, you must first check
the logged-in user. For example:
export TMPDIR
fi
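A minimal sketch of such a per-user override in the shared login profile follows; the user name, directory, and profile path are assumptions, so substitute your own values.

# Added to the shared profile (for example /home/nbusers/.profile):
# only set TMPDIR when the collection user logs in.
if [ "$(whoami)" = "apt_collector" ]; then
    TMPDIR=/log/aptare/tmp
    export TMPDIR
fi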
3. To test, run the following command and verify that it returns the
configured TMPDIR:
Note: There may be additional output before the TMPDIR path, for example
the NBU appliance displays a banner.
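The test command itself is not reproduced in this excerpt; one way to perform the check from the Data Collector server, assuming SSH access as the collection user, is sketched below with placeholder user and host names.

# Should print the configured TMPDIR (appliance banners may precede it).
ssh apt_collector@nbu-primary.example.com 'echo $TMPDIR'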
NBU_SSH_TMPDIR=/path/to/tm
Prerequisites
Ensure the following prerequisites are met:
■ The Data Collector and NetBackup systems are in a Kerberized domain.
■ Password-less SSH using Kerberos authentication from Data Collector system
to NetBackup server works without errors.
■ The data collector system must have the keytab/ticket cache file ready,
which contains the Kerberos user service keys for authentication.
■ The krb5.conf file must be present at the following location. It contains information
related to the default realm and the KDC server address.
■ Linux: /etc/krb5.conf
■ Solaris: /etc/krb5/krb5.conf
■ Verify the SSH configurations for Kerberos on the data collector and NetBackup
server as follows:
■ Ensure that the following lines are present in the /etc/ssh/ssh_config file.
If required, add them within the Hosts section. Make sure these lines are
uncommented.
GSSAPIAuthentication
GSSAPIDelegateCredentials
■ Set the values of the following configuration to yes in the
/etc/ssh/sshd_config file. Make sure these lines are uncommented.
KerberosAuthentication
KerberosOrLocalPasswd
KerberosTicketCleanup
GSSAPIAuthentication
GSSAPICleanupCredentials
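A minimal sketch of verifying and applying these settings on the NetBackup server follows; it assumes a systemd-based Linux distribution.

# Confirm the Kerberos/GSSAPI options are present and uncommented,
# then reload sshd so the edited configuration takes effect.
grep -E '^(Kerberos|GSSAPI)' /etc/ssh/sshd_config
systemctl reload sshd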
KERBEROS_USE_TICKET_CACHE (Mandatory) Y. Default: None.
KERBEROS_TICKET_CACHE Enter the path to the ticket cache on the data collector
server, only if the KERBEROS_USE_TICKET_CACHE value is set to Y. Default: None.
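Before enabling collection, you may want to confirm that the ticket cache works end to end. A minimal sketch follows; the keytab path, principal, and host name are placeholders only.

# Obtain (or refresh) a ticket from the keytab, verify it, then confirm
# password-less Kerberos SSH to the NetBackup primary server.
kinit -k -t /path/to/collector.keytab collector@EXAMPLE.COM
klist
ssh -o GSSAPIAuthentication=yes nbu-primary.example.com hostname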
Firewall consideration
If the firewall of the NetBackup primary server is turned on, follow these steps to
communicate through the firewall port:
1 Open and edit the file /etc/firewalld/zones/public.xml.
2 Add the following lines in the file:
<service name="https"/>
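On systems where firewalld is managed with firewall-cmd, an equivalent alternative to editing the zone file directly is sketched below; it also reloads the firewall so the rule takes effect.

# Permanently allow the https service in the public zone and confirm it.
firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-services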
The following list provides an overview of the steps to be taken. Details are provided
later in this section.
5. For both upgrades and new installations, install the Data Collector software on
each NetBackup Primary Server.
6. On each NetBackup Primary Server, run checkinstall.
See “Validation methods” on page 211.
7. Start the Data Collector.
Note: After modifications of the access rights, the RBAC user should run the bpnbat
-login -loginType WEB command as a non-root user.
3 Click Add Policy, and then select Veritas NetBackup from the policy list.
4 Configure the Veritas NetBackup Data Collector policy based on the field
descriptions under policy parameters below and then click OK to save the
policy. Mandatory parameters are denoted by an asterisk (*).
See the section called “Policy Parameters” on page 145.
Policy Parameters
The following are the fields and their descriptions:
■ Collector Domain: The domain of the collector to which the collector backup
policy is being added. This is a read-only field. By default, the domain for a new
policy will be the same as the domain for the collector. This field is set when
you add a collector.
■ Policy Domain: The Collector Domain is the domain that was supplied during
the Data Collector installation process. The Policy Domain is the domain of the
policy that is being configured for the Data Collector. The Policy Domain must
be set to the same value as the Collector Domain.
The domain identifies the top level of your host group hierarchy. All newly
discovered hosts are added to the root host group associated with the Policy
Domain.
Typically, only one Policy Domain will be available in the drop-down list. If you
are a Managed Services Provider, each of your customers will have a unique
domain with its own host group hierarchy.
Note: You can add multiple servers while creating the NetBackup policy, provided
the NetBackup servers have the same credentials and the Backup Software
Location is also the same for all the servers.
Note: If you are using the SSH/WMI remote collection method, this location is
where the NetBackup software is installed on all the remote NetBackup Primary
Servers that are configured.
■ Remote Probe Login Details: These details are required for either of the
following conditions.
■ The collector is centralized and the SLP Job Details, License Details, or
Backup Policies probe is selected.
■ The collector is distributed and the Backup Policies probe is selected.
■ The Collection Method is SSH or WMI protocol to the NetBackup Primary
Server.
■ Primary Server Domain: Specify the domain associated with the NetBackup
Primary Server User ID. For Windows Primary Servers, this domain is used, in
conjunction with the User ID, for the execution of the remote lifecycle policies
utility (nbstlutil) by the SLP Job Details probe, when the Data Collector is not
installed on the NetBackup Primary Server; unused for remote Linux Primary
Servers. In addition, for NetBackup 7.7.3 only, this domain is used by the License
Details probe to collect plugin information (bpstsinfo).
For NetBackup 8.3 and above, this domain is used by Backup Policies probe
(FETB and Protection Plan collection) for REST API based authentication.
This field is required when the Collection Method is SSH or WMI protocol to the
NetBackup Primary Server and that Primary Server is a Windows Server.
■ Primary Server User ID: This field is required when the Collection Method is
SSH or WMI protocol to the NetBackup Primary Server. Depending on NBAC
or RBAC-enabled NetBackup, enter the appropriate credentials of the user
created using the steps described in the prerequisites above.
Specify the user name with login rights on the selected NetBackup Primary
Server. The user name and password are used for the execution of the remote
lifecycle policies utility (nbstlutil) by the SLP Job Details probe, when the Data
Collector is not installed on the NetBackup Primary Server. In addition, for
NetBackup 7.7.3 only, the credentials are used by the License Details probe to
collect plugin information (bpstsinfo). A Windows user name requires
administrative privileges.
In case of NetBackup 8.3 and above, these credentials are also used by the
Backup Policies probe for REST API based authentication. These credentials
will be used for all Primary Servers.
If SSH/WMI collection is specified, the username must have superuser privileges
to run most NetBackup commands.
■ Primary Server Password: This field is required when the Collection Method
is SSH or WMI protocol to the NetBackup Primary Server.
The password associated with the NetBackup Primary Server User ID. The user
name and password are used for the execution of the remote lifecycle policies
utility (nbstlutil) by the SLP Job Details probe, when the Data Collector is not
installed on the NetBackup Primary Server. In addition, for NetBackup 7.7.3
only, the credentials are used by the License Details probe to collect plugin
information (bpstsinfo).
In case of NetBackup 8.3 and above these credentials are also used by the
Backup Policies probe for REST API based authentication. These credentials
will be used for all Primary Servers.
If password-based login to NetBackup primary server is not allowed, for example
in cloud deployment of NetBackup, then SSH private key can be specified here
in the following format:
privateKey=<path-of-private-key>|password=<passphrase> where
■ <path-of-private-key>| is the file path of the SSH private key.
■ <passphrase> is the password used while creating the SSH private key.
See “Prerequisites for collection from Veritas NetBackup deployed on Kubernetes
clusters” on page 139.
■ WMI Proxy Address: Specify the IP address or hostname of the WMI Proxy.
If this field is blank, 127.0.0.1 will be used. This is used for remote nbstlutil
execution of the SLP Job Details probe, when the Data Collector is not installed
on the NetBackup Primary Server. In addition, for NetBackup 7.7.3 only, this is
used by the License Details probe to collect plugin information (bpstsinfo).
For NetBackup 8.3 and above, this address is used by the Backup Policies probe
(FETB and Protection Plan collection) for REST API based authentication.
This field is required when the Collection Method is SSH or WMI protocol to the
NetBackup Primary Server and that Primary Server is a Windows Server.
Active Probes
Note: Explicit schedules set for a Collector policy are relative to the time on the
Collector server. Schedules with frequencies are relative to the time that the Data
Collector was restarted.
■ Tape Library & Drive Inventory: Select the check box to activate Tape Library
data collection from your NetBackup environment.
The default polling frequency is every 12 hours. Click the clock icon to create a
schedule frequency for collecting data. You can schedule the collection frequency
by minute, hour, day, week and month. Advanced use of native CRON strings
is also available. Optimize performance by scheduling less frequent collection.
■ Telemetry: This probe collects Telemetry data from NetBackup primary and
sends to SORT/Usage Insights in Alta View. This probe is active by default for
Alta View users only.
■ Probe will be active by default and not editable for a new Veritas NetBackup
Data Collector Policy when the Collection Method is NetBackup software
on Data Collector server.
■ Probe will be deactivated and disabled when the SSH / WMI protocol is
selected as the Collection Method to the NetBackup Primary Server.
■ The default schedule for the execution is: Runs every day at 12:00:00.
■ This probe is NOT visible or active for non-Alta View customers.
■ Tape Inventory: Select the check box to activate Tape data collection from your
NetBackup environment.
The default polling frequency is every 18 hours. Click the clock icon to create a
schedule frequency for collecting data. You can schedule the collection frequency
by minute, hour, day, week and month. Advanced use of native CRON strings
is also available. Optimize performance by scheduling less frequent collection.
■ Drive Status: Select the check box to activate Tape Drive status collection from
your NetBackup environment. The default polling frequency is every 20 minutes.
This probe is selected by default. Click the clock icon to create a schedule
frequency for collecting data. You can schedule the collection frequency by
minute, hour, day, week and month. Advanced use of native CRON strings is
also available.
■ Job Details: Select the check box to activate Job data collection from your
NetBackup environment. The polling frequency would depend on the value of
ENABLE_MINUS_T_OPTION advanced parameter.
Refer to Backup Manager advanced parameters section for more details on
ENABLE_MINUS_T_OPTION parameter.
This probe is selected by default. Click the clock icon to create a schedule
frequency for collecting data. You can schedule the collection frequency by
minute, hour, day, week and month. Advanced use of native CRON strings is
also available.
■ Duplication Jobs: Select the check box to activate Duplication Job data
collection from your NetBackup environment. The default polling frequency is
every 60 minutes. This probe is selected by default. Click the clock icon to create
a schedule frequency for collecting data. You can schedule the collection
frequency by minute, hour, day, week and month. Advanced use of native CRON
strings is also available.
■ Backup Message Logs:
Note: When selecting this SLP Job Details option, if you are using centralized
NetBackup data collection, you must also configure the settings in the Login
Details for Remote Probes section of this Data Collector policy.
■ Host Details: Select the check box to activate Host Details data collection from
your NetBackup environment. This probe calls NetBackup REST APIs to collect
and persist environmental details. The default polling frequency is once a week.
This probe is selected by default.
Also, ensure this probe is selected to enable access to NetBackup web interface
from the IT Analytics Portal. The steps to enable access to the web interface
are documented under Access NetBackup web interface from the IT Analytics
Portal section of the User Guide.
Click clock icon to modify the scheduled frequency for collecting data. You can
schedule the collection frequency by minute, hour, day, week, and month.
Advanced use of native CRON strings is also available.
■ Event Notifications: Select the check box to activate Event Notifications data
collection from your NetBackup environment. This probe calls NetBackup REST
APIs to collect and persist critical event notifications.
This probe supports NetBackup version 9.1 and above. For version lower than
9.1, the data collection fails and an error status is displayed on the collection
status page.
The default polling frequency is every minute. This probe is selected by default.
Click the clock icon to modify the schedule frequency for collecting data. You
can schedule the collection frequency by minute, hour, day, week and month.
Advanced use of native CRON strings is also available.
■ Audit Events: The Audit Events probe collects audit events such as user
login success or failure, policy modification, and so on, from the NetBackup Primary server.
Select the check box to activate Audit Events data collection from your
NetBackup environment. This probe connects directly to the NetBackup Primary
server to collect and persist the audit details.
Note: When selecting this Audit Events option, if you are using centralized
NetBackup data collection, you must also configure the settings in the Login
Details for Remote Probes section of this Data Collector policy.
■ License Details: Select the check box to activate License Details data collection
from your NetBackup environment. This probe collects and persists license
key information for NetBackup. The default polling frequency is monthly. This
probe is selected by default. Click the clock icon to create a schedule frequency
for collecting data. You can schedule the collection frequency by minute, hour,
day, week and month.
■ Client Exclude/Include List Details: Select the check box to activate Client
Exclude/Include List Details data collection from your NetBackup environment.
This probe collects from Linux/Unix and Windows NetBackup clients. This probe
connects directly to each NetBackup client to collect and persist the NetBackup
client exclude/include list of files and directories. The default polling frequency
is monthly. This probe is selected by default. Click the clock icon to create a
schedule frequency for collecting data. You can schedule the collection frequency
by minute, hour, day, week and month.
This probe skips exclude/include data collection from VMware, HyperV, RHEV,
Nutanix, AWS, Azure, GCP and Kubernetes workloads. Configure the
NBU_CLNT_EXC_INC_SKIP_POLICY_TYPE_IDS advanced parameter to add
comma separated policy id(s) of any additional workload(s) to skip
exclude/include data collection.
USE_ALT_NBU_INCL_EXCL - This advanced parameter can be configured
for collection of NetBackup include/exclude lists from Unix clients. By default,
the collector uses the NetBackup-recommended command syntax to retrieve
the lists. If the lists are not collected successfully, set the advanced parameter
to Y, which instructs the collector to use an alternative command syntax for list
data retrieval from Unix clients. Valid values for this parameter are Y or N
(Default= N). This parameter can be set at the Data Collector level.
Note: For information on NetBackup policy types, refer to NetBackup Self Service
Configuration Guide > Appendix A NetBackup policy types > List of NetBackup
policy types section.
at minute 15. This probe is not selected by default. It collects data using
NetBackup commands and REST APIs, provided the NetBackup version is 10.0
or later. You need to provide the REST API credentials under Remote Probe
Login Details to allow the APIs to collect data. If an API key is provided during
configuration of the NetBackup Primary servers, it is used to execute the REST API.
See Add/Edit Netbackup Primary Servers within the Data Collector policy for
details about the API key.
Click clock icon to create a schedule frequency for collecting data. You can
schedule the collection frequency by minute, hour, day, week, and month.
Advanced use of native CRON string is also available.
■ NetBackup Resources Monitor: Select the checkbox to activate NetBackup
Resources data collection from your NetBackup environment. The probe does
not have a default schedule. Once enabled, it collects data received from the
NetBackup IT Analytics Exporter installed on the NetBackup Primary Server.
When you enable this probe, the NetBackup Primary Server (Internal Name) is
added to Compute Resources Data Collection Policy. If there is no existing
policy, a new policy for Compute Resources is added.
Note that the Internal Name of the NetBackup Primary server must match the
instance (Hostname) of the NetBackup Primary Server.
See the NetBackup IT Analytics Exporter Installation and Configuration Guide
for details on exporter installation.
■ NetBackup Actions:
Note: The following three actions for the NetBackup probe are Alta-specific
■ Veritas.NetBackup.VTNB.AltaConnectorTaskProbeAPIKeyRenewal
In an Alta Connector deployment, there is an API key shared between Alta View
and NBU. This key needs to be renewed periodically. This action is
implemented to trigger the key renewal logic.
■ Veritas.NetBackup.VTNB.AltaConnectorTaskProbeNotificationMessageKey
In an Alta Connector deployment, notification keys need to be added to NBU so
that NBU can correctly interpret I18N text sent to it by the Alta Connector. This
action is implemented to add notification keys on the NBU Primary Server.
■ Veritas.NetBackup.VTNB.AltaConnectorTaskProbeUpgrade
In an Alta Connector deployment, NBU-specific scripts need to be invoked
when NBU upgrades to 10.1.1 or above. This action triggers execution of
the script once it detects that NBU is upgraded to 10.1.1 or above.
■ Notes: Enter or edit notes for your data collector policy. The maximum number
of characters is 1024. Policy notes are retained along with the policy information
for the specific vendor and displayed on the Collector Administration page as a
column making them searchable as well.
■ Download SSL Certificate: Downloads the SSL certificate required to set up
NetBackup IT Analytics Exporter on the NetBackup Primary Server.
See the NetBackup IT Analytics Data Exporter Installation and Configuration
Guide for details on exporter installation.
■ Test Connection: Test Connection initiates a Data Collector process that
attempts to connect to the subsystem using the IP addresses and credentials
supplied in the policy. This validation process returns either a success message
or a list of specific connection errors. Test Connection requires that Agent
Services are running.
Test Connection checks if the utility nb_monitor_util is installed. This is required
to use the probe NetBackup Event Monitor.
It also checks if the REST APIs were successfully executed against the
NetBackup Primary Server. For REST APIs to succeed, you must provide the
user credentials of the NetBackup Primary that has REST API access. The
FETB and Protection Plan collection fails in the absence of the user credentials.
Several factors affect the response time of the validation request, causing some
requests to take longer than others. For example, there could be a delay when
connecting to the subsystem. Likewise, there could be a delay when getting the
response, due to other processing threads running on the Data Collector.
You can also test the collection of data using the Run functionality available in
Admin>Data Collection>Collector Administration. This On-Demand data
collection run initiates a high-level check of the installation at the individual policy
level, including a check for the domain, host group, URL, Data Collector policy
and database connectivity. You can also select individual probes and servers
to test the collection run.
After adding the policy, for some policies, collections can be run on-demand using the
Run button on the Collector Administration page action bar. The Run button is only
displayed if the policy vendor is supported. On-demand collection allows
you to select which probes and devices to run collection against. This action collects
data the same as a scheduled run, plus logging information for troubleshooting
purposes. For probe descriptions, refer to the policy.
See “Veritas NetBackup SSH: Changing the Linux Temporary Directory for
Collection” on page 135.
Add/Edit NetBackup Primary Servers within the Data Collector policy
Note: Data Collector policies can be in place for multiple servers, but a server
cannot be assigned multiple policies within the same domain. If you try to add a server
that is already assigned to another Data Collector policy, you will be prompted to
remove it from its current policy and reassign it.
1. Click Add.
2. Select a Primary Server and click Edit.
3. The Add Backup Server window is displayed.
Data Collector: Minimum RAM 32 GB
Portal: Minimum RAM 32 GB
The above guidelines and recommended values are in line with the Recommended
Portal Configurations in the File Analytics Certified Configurations Guide. However,
you may have to manage your resource allocation based on the data collection
size in your environment.
■ Automatically collect data from all backed up hosts: Enables collecting data
from all the hosts probed by the NetBackup Policy but with respect to the policies
relevant to File Analytics.
■ Select backed up hosts for data collection: Enables data collection from
selective hosts probed by the NetBackup Policy.
■ Enable Collection: Marks the host for data collection. You must select a host
from the Host Name column before clicking this option. Once marked for data
collection, the Collection Status for the host is indicated as On.
■ Disable Collection: Removes the host from data collection. You must select a
host from the Host Name column before clicking this option. After removal, the
Collection Status for the host is indicated as Off.
■ Delete File Analytics Data: Deletes the data collected from the selected hosts
and stored on the portal server. The data once deleted is not recoverable.
Once File Analytics is configured within the NetBackup policy, the respective data
collection hosts are displayed in the Inventory as follows:
■ Primary Servers: Under Inventory > Backup Servers > Veritas NetBackup
■ Shares and volumes: Under Inventory > File Shares & Volumes > Volumes
■ Hosts: Under Inventory > Hosts > File Analytics
■ Owner
■ CreateTime
■ ModifiedTime
■ AccessTime
■ BackupPolicies
■ BackupTime
Note: The BackupPolicies and BackupTime headers are seen only when you export the
File Analytics data collected by the NetBackup policy.
See the Data Export section of the NetBackup IT Analytics Data Collector Installation
Guide for File Analytics for the export procedure.
Chapter 14
Pre-Installation setup for
Oracle Recovery Manager
(RMAN)
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
■ 64-bit OS. See the Certified Configurations Guide for supported operating
systems.
■ When the NetBackup IT Analytics system collects data from any vendor
subsystem, the collection process expects name/value pairs to be in US English,
and requires the installation to be done by an Administrator with a US English
locale. The server’s language version can be non-US English.
■ Supports Amazon Corretto 17. Amazon Corretto is a no-cost, multi-platform,
production-ready distribution of the Open Java Development Kit (OpenJDK).
■ For performance reasons, do not install Data Collectors on the same server as
the NetBackup IT Analytics Portal. However, if you must have both on the same
server, verify that the Portal and Data Collector software do not reside in the
same directory.
■ Install only one Data Collector on a server (or OS instance).
■ Database Host and Port: Identify the hostname of the server hosting the database
in which the RMAN data resides, as well as the port on which this database is
listening. The default port is 1521.
■ The configured user must exist in all configured instances with the same
password. The user must have the CREATE SESSION privilege and the
following permissions in the schemas to be collected from:
■ User must have the SELECT_CATALOG_ROLE to retrieve RMAN data from
the Dynamic Performance (V$) views, or have a virtual private catalog set
up for the user (see Oracle documentation).
■ User must have SELECT permission to V$INSTANCE and V$DATABASE
(which is included in SELECT_CATALOG_ROLE).
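For reference, these privileges could be granted from a SQL*Plus session; a minimal sketch, assuming an existing collection user named rman_collect (the user name is illustrative only):
sqlplus / as sysdba <<'EOF'
-- Illustrative grants; SELECT_CATALOG_ROLE covers V$INSTANCE and V$DATABASE
GRANT CREATE SESSION TO rman_collect;
GRANT SELECT_CATALOG_ROLE TO rman_collect;
EOF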
4 Click Add Policy, and then select the vendor-specific entry in the menu.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being
added. This is a read-only field. By default, the domain for a new policy
will be the same as the domain for the collector. This field is set when
you add a collector.
Policy Domain The Policy Domain is the domain of the policy that is being configured
for the Data Collector. The Policy Domain must be set to the same
value as the Collector Domain. The domain identifies the top level of
your host group hierarchy. All newly discovered hosts are added to the
root host group associated with the Policy Domain.
To find your Domain name, click your login name and select My Profile
from the menu. Your Domain name is displayed in your profile settings.
Database Instance and Schema RMAN database instance names and optional schemas, each
formatted as INSTANCE.SCHEMA and separated by commas or spaces.
User ID* This field is required. RMAN database user name. This must be a user
with SELECT permission for the Dynamic Performance (V$) views and
any Recovery Catalog schema views being collected from.
Active Probes
Field Description
*/20 9-18 * * * means every 20 minutes between the hours of 9am and
6pm
Notes Enter or edit notes for your data collector policy. The maximum number
of characters is 1024. Policy notes are retained along with the policy
information for the specific vendor and displayed on the Collector
Administration page as a column making them searchable as well.
Test Connection Test Connection initiates a Data Collector process that attempts to
connect to the subsystem using the IP addresses and credentials
supplied in the policy. This validation process returns either a success
message or a list of specific connection errors. Test Connection requires
that Agent Services are running.
You can also test the collection of data using the Run functionality
available in Admin>Data Collection>Collector Administration. This
On-Demand data collection run initiates a high-level check of the
installation at the individual policy level, including a check for the domain,
host group, URL, Data Collector policy and database connectivity. You
can also select individual probes and servers to test the collection run.
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
■ 64-bit OS. See the Certified Configurations Guide for supported operating
systems.
■ When the NetBackup IT Analytics system collects data from any vendor
subsystem, the collection process expects name/value pairs to be in US English,
and requires the installation to be done by an Administrator with a US English
locale. The server’s language version can be non-US English.
■ Supports Amazon Corretto 17. Amazon Corretto is a no-cost, multi-platform,
production-ready distribution of the Open Java Development Kit (OpenJDK).
■ For performance reasons, do not install Data Collectors on the same server as
the NetBackup IT Analytics Portal. However, if you must have both on the same
server, verify that the Portal and Data Collector software do not reside in the
same directory.
■ Install only one Data Collector on a server (or OS instance).
■ Read-only Rubrik user account
On-demand collection allows you to select which probes and devices to run
collection against. This action collects data the same as a scheduled run, plus
logging information for troubleshooting purposes. For probe descriptions, refer
to the policy.
To add the policy
1 Select Admin > Data Collection > Collector Administration. Currently
configured Portal Data Collectors are displayed.
2 Search for a Collector if required.
3 Select a Data Collector from the list.
4 Click Add Policy, and then select the vendor-specific entry in the menu.
Field Description
Collector Domain The domain of the collector to which the collector backup policy is being
added. This is a read-only field. By default, the domain for a new policy
will be the same as the domain for the collector. This field is set when
you add a collector.
Policy Domain The Policy Domain is the domain of the policy that is being configured
for the Data Collector. The Policy Domain must be set to the same
value as the Collector Domain. The domain identifies the top level of
your host group hierarchy. All newly discovered hosts are added to the
root host group associated with the Policy Domain.
To find your Domain name, click your login name and select My Profile
from the menu. Your Domain name is displayed in your profile settings.
User ID* Read-only userID for the Rubrik Cloud Data Management system.
Password* Password for the Rubrik Cloud Data Management system. The
password associated with the User ID.
Protection Sources Probe for Rubrik Cloud Data Management Protection Details.
*/20 9-18 * * * means every 20 minutes between the hours of 9am and
6pm
Test Connection Test Connection initiates a Data Collector process that attempts to
connect to the subsystem using the IP addresses and credentials
supplied in the policy. This validation process returns either a success
message or a list of specific connection errors. Test Connection requires
that Agent Services are running.
You can also test the collection of data using the Run functionality
available in Admin>Data Collection>Collector Administration. This
On-Demand data collection run initiates a high-level check of the
installation at the individual policy level, including a check for the domain,
host group, URL, Data Collector policy and database connectivity. You
can also select individual probes and servers to test the collection run.
Chapter 16
Pre-Installation setup for
Veeam Backup &
Replication
This chapter includes the following topics:
■ Introduction
Introduction
In most cases, a single instance of the Data Collector can support any number of
enterprise objects. However, each environment has its own unique deployment
configuration requirements, so it is important to understand where the Data Collector
software must be installed so that you can determine how many Data Collectors
must be installed and which servers are best suited for the deployment.
Note: Skip this command for Veeam version 11.0, as it supports the PowerShell
module.
Disconnect-VBRServer
Verification steps
In this section, we’ll use a scenario to illustrate the verification steps. The task is to
connect to Veeam Servers Server-A and Server-B from the same Veeam Data
Collector Server, where the Veeam Backup & Replication Console is installed.
1 From Microsoft PowerShell Console (in Administrator mode), add Veeam
SnapIn.
Note: Skip this step for Veeam version 11.0, as it supports the PowerShell
module.
2 Log into the Veeam Backup & Replication Console using Server-A credentials.
3 Open a PowerShell Console and connect to Server-A.
Disconnect-VBRServer
If this connection fails, it means that Server-A and Server-B are on different
software versions; the Veeam Backup & Replication Console is only in sync with Server-A.
■ Veeam Collection does not support collection of jobs or backup data which are
based on VMware Tags.
Table 16-1
Field Description
Collector Domain The domain of the collector to which the collector backup
policy is being added. This is a read-only field. By default,
the domain for a new policy will be the same as the domain
for the collector. This field is set when you add a collector.
Policy Domain The Policy Domain is the domain of the policy that is being
configured for the Data Collector. The Policy Domain must
be set to the same value as the Collector Domain. The
domain identifies the top level of your host group hierarchy.
All newly discovered hosts are added to the root host group
associated with the Policy Domain.
To find your Domain name, click your login name and select
My Profile from the menu. Your Domain name is displayed
in your profile settings.
Backup Server Host Name* One or more Veeam Backup Server host names to probe.
IP addresses are not supported. Comma-separated host names
are supported. For example: VeeamServer1, VeeamServer2.
Time Zone Select the time zone of Veeam Backup Server. By default,
the data collector server time zone is used.
Password* Password for Veeam Backup Server associated with the User
ID.
Active Probes
Job Details Probe for collecting jobs scheduled for Veeam Backup &
Replication.
Session and Backup Details Probe for collecting session details and backups created by
Veeam Backup & Replication.
You can also test the collection of data using the Run
functionality available in Admin>Data Collection>Collector
Administration. This On-Demand data collection run initiates
a high-level check of the installation at the individual policy
level, including a check for the domain, host group, URL,
Data Collector policy and database connectivity. You can
also select individual probes and servers to test the collection
run.
■ Introduction
Introduction
This section includes the instructions for installing the Data Collector software on
the Data Collector Server. Data Collector software is supported in various flavors
of Linux and Windows. On Windows, if you are collecting data from host resources,
you may need to install the WMI Proxy Service. The WMI Proxy Service is installed
by default, as part of the Data Collector installation on a Windows server.
A GUI-based version is available for Windows and a console-based (command line)
interface is available for Linux.
When the NetBackup IT Analytics system collects data from any vendor subsystem,
the collection process expects name/value pairs to be in US English, and requires
the installation to be done by an Administrator with a US English locale. The server’s
language version can be non-US English.
Note: Log in as a Local Administrator to have the necessary permissions for this
installation.
4. In the Connect window, preface the Namespace entry with the IP address or
hostname of the target remote server in the following format:
\\<IP Address>\root\cimv2
5. Complete the following fields in the Connect window and then click Connect.
■ User - Enter the credentials for accessing the remote computer. This may
require you to enable RPC (the remote procedure call protocol) on the
remote computer.
■ Password
■ Authority: Enter NTLMDOMAIN:<NameOfDomain>
where NameOfDomain is the domain of the user account specified in the
User field.
Non-English Linux OS
On a non-English Linux host:
■ The user locale can be one of the non-English supported locales if the Data
Collector will collect only from a Veritas product.
■ The user locale must be English if the Data Collector will be used to collect from
any non-Veritas product.
To install the Data Collector in one of the supported locales, verify whether the host
OS has multiple languages and then add the preferred locale for the installation.
The procedure below guides you to set one of the supported languages as the
system locale.
To set one of the supported languages as the system locale for Data Collector
installation, set the preferred language as described below:
1 Check the current language.
#locale
#locale -a
3 To change the System locale into one of the supported languages, run the
command #vi /etc/profile and add the following at the end of the file based
on your preferred language:
■ To add Simplified Chinese:
export LANG=zh_CN.utf8
export LC_ALL=zh_CN.utf8
■ To add French:
export LANG=fr_FR.utf8
export LC_ALL=fr_FR.utf8
■ To add Korean
export LANG=ko_KR.utf8
export LC_ALL=ko_KR.utf8
■ To add Japanese
export LANG=ja_JP.utf8
export LC_ALL=ja_JP.utf8
4 Reboot the host to set the desired system locale for the Data Collector
installation.
Having completed setting the system locale, proceed with the Data Collector
installation, with the appropriate user locale.
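As a quick check after the reboot and before launching the installer, you can confirm that the system-wide setting took effect; a minimal sketch, assuming Japanese was chosen above:
# Verify the active locale after reboot; expect ja_JP.utf8 for LANG and LC_ALL
locale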
See “Install Data Collector software on Linux” on page 197.
Non-English Windows OS
Veritas recommends that the user locale be set to English while installing the
Data Collector on a non-English Windows host, be it for a Veritas or a non-Veritas
product.
To verify the user locale and system locale respectively before the Data Collector
installation, run the get-culture and get-winsystemlocale commands from
Windows PowerShell. This way, you can decide which user locale to set for the
Data Collector installation.
If you must run the Data Collector installer in one of the supported locales, ensure
the Windows OS is installed in Simplified Chinese, French, Korean, or Japanese.
Avoid installing the Windows OS in English with a language pack and changing the
locale later. The Data Collector installer detects the locale from the Windows
Language Settings and launches the installer in the respective locale. If the
Windows Time & Language setting is set to a language other than Simplified
Chinese, French, Korean, or Japanese, the installer is launched in English.
See “Install Data Collector Software on Windows” on page 188.
5 Review the End User License Agreement (EULA), select I accept the terms
of the license agreement, and click Next.
6 Specify the directory where you would like to install the Data Collector software
and click Next. The default Windows path is
C:\Program Files\Veritas\AnalyticsCollector. Accepting the default path is
recommended.
If you specify a custom directory, the installer creates the AnalyticsCollector
folder within the specified directory.
7 Provide accurate details as described below, and then click Next.
Data Collector Registration File Enter the absolute path of the registration
file downloaded from the NetBackup IT
Analytics Portal.
8 Review the installation summary and the available disk space before you
proceed with the installation.
If you wish to run checkinstall.bat later, you can run the script from the
command prompt.
For SUSE:
■ Access the command prompt.
■ Install the rng-tools.
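On SUSE, the rng-tools package is typically installed with zypper; a minimal sketch, assuming the package is available in your configured repositories:
# Install rng-tools from the configured repositories
zypper install rng-tools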
mkdir /mnt/diska
mount -o loop <itanalytics_datacollector_linux_xxxxx.iso>
/mnt/diska
cd /
/mnt/diska/dc_installer.sh
7 Review the End User License Agreement (EULA) and enter accept to agree.
8 Provide the install location. The default location is
/usr/openv/analyticscollector. Accepting the default path is
recommended.
If you specify a custom location, the analyticscollector directory is created at
the specified location.
9 The installer requests the following details.
■ Data Collector Registration File Path: Enter the absolute file path of the
registration file generated and downloaded from the NetBackup IT Analytics
Portal.
■ Web Proxy (HTTP) settings can be configured. Enter y to configure a proxy.
The installer prompts for:
■ HTTP Proxy IP Address: Enter the hostname or IP address of the HTTP
proxy.
■ HTTP Proxy Port: Enter the port number for the HTTP proxy.
■ Proxy UserId and password: Enter the credentials for the proxy server.
■ No Proxy For: Enter the host names or IP addresses separated by
commas that will not be routed through the proxy.
■ Obtain the following Data Collector details. You are required to supply these
details to the installer during the installation process.
■ Registry: The name of the registry to which you want to push the installer
images.
■ Absolute path of Data Receiver Certificate file: Absolute path of the data
receiver certificate file downloaded from NetBackup IT Analytics Portal.
■ Absolute path of the Data Collector Registration File: Absolute path of the
Data Collector registration file downloaded from the NetBackup IT Analytics
Portal.
■ Proxy settings:
■ Portal IP address: IP address of the system hosting the NetBackup IT
Analytics Portal.
■ Portal HostName: aptareportal.<DOMAIN> or itanalyticsportal.<DOMAIN>
■ Agent HostName: aptareagent.<DOMAIN> or itanalyticsagent.<DOMAIN>
■ StorageClass Name: Name of the Kubernetes storage class to be used.
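If you are unsure which storage class name to supply, the classes defined in the cluster can usually be listed beforehand; a hedged aside, assuming kubectl is configured against the target cluster:
# List available storage classes; use one of the NAME values for StorageClass Name
kubectl get storageclass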
cd scripts/
sh itanalytics_dc_installer.sh
5 Provide the Data Collector configuration details when prompted by the installer,
in the following order.
■ Registry
After you provide the registry name, the installer asks for confirmation before
pushing the images. Enter y for a fresh installation. If you must re-run the
installation and this step was already completed successfully for the same
cluster node, you can enter n to skip this step and avoid a rewrite.
■ Absolute path of Data Receiver Certificate file (if you have set an https://
URL for the data receiver)
■ Absolute path of the Data Collector registration file
■ Proxy settings
■ Portal IP address
■ Portal HostName
■ Agent HostName
■ StorageClass Name
6 The installer asks you to confirm the configuration details before proceeding with
the installation. Enter y to proceed with the Data Collector installation.
After a successful installation, verify whether the Data Collector status appears
Online on the NetBackup IT Analytics Portal.
"C:\ProgramData\Veritas\NetBackup IT Analytics\DC\configure.cmd"
/RESPFILE:<response_file_path> /INSTALL_TYPE:CONFIG
Or
<ISO_MOUNT_DRIVE>:\silentinstall.cmd /INSTALL_TYPE:INSTALL
<ISO_MOUNT_DRIVE>:\silentinstall.cmd /INSTALL_PATH:<custom
location for dc installation> /INSTALL_TYPE:INSTALL
responsefile with the credentials of the Data Collector created on the portal and run
a configuration command as described in the procedure below.
To configure the Data Collector:
1 Obtain the following details from the NetBackup IT Analytics Portal:
■ Registration file downloaded from the Portal.
@ECHO OFF
REM -------------------------------------------------
SET DATACOLLECTOR_REGISTRATION_FILE_PATH=
REM -------------------------------------------------
REM Description: Enter the Data Collector's Key File path. The
file path must include the name of the file that was downloaded
from the Portal.
REM Valid input values: Absolute path of key file
REM Required: True
REM -------------------------------------------------
SET HTTP_PROXY_CONF=N
REM -------------------------------------------------
REM Description: Indicates whether a proxy should be configured or not
REM Valid input values: Y,N
REM Default value: N
REM -------------------------------------------------
SET PROXY_HTTP_URL=
REM -------------------------------------------------
REM Description: IP/hostname for HTTP Proxy
REM Valid input values: 10.20.30.40, localhost
REM -------------------------------------------------
SET PROXY_HTTP_PORT=
REM -------------------------------------------------
REM Description: Port for HTTP proxy
REM Valid input values: Any number between 0 and 65535
REM -------------------------------------------------
SET PROXY_HTTPS_URL=
REM -------------------------------------------------
REM Description: IP/hostname for HTTPS Proxy
REM Valid input values: 10.20.30.40, localhost
REM -------------------------------------------------
SET PROXY_HTTPS_PORT=
REM -------------------------------------------------
REM Description: Port for HTTPS proxy
REM Valid input values: Any number between 0 and 65535
REM -------------------------------------------------
SET PROXY_USERID=
REM -------------------------------------------------
REM Description: Proxy UserId
REM Default value:
REM -------------------------------------------------
SET PROXY_PASSWORD=
REM -------------------------------------------------
REM Description: Proxy user password
REM Default value:
REM -------------------------------------------------
SET PROXY_NOT_FOR=
REM -------------------------------------------------
REM Description: List of IPs/hostnames that should be excluded
from the proxy
REM Default value:
This completes the Data Collector installation without connecting it to the Portal.
<ISO_MOUNT_DRIVE>:\silentinstall.cmd /RESPFILE:<responsefile_path>
/INSTALL_PATH:<Data_Collector_installation_path> /INSTALL_TYPE:CONFIG
or
"<INSTALL_PATH>\DC\configure.cmd" /RESPFILE:<response_file_path>
/INSTALL_TYPE:CONFIG
configure it using a response file that contains the credentials of the Data Collector
created on the NetBackup IT Analytics Portal and the data receiver.
To install a Data Collector:
1 Download and mount the Data Collector installer
itanalytics_datacollector_linux_<version>.iso.
Example:
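A sketch of the mount-and-launch commands, mirroring the SUSE example earlier in this chapter (the mount point and exact ISO file name are illustrative):
mkdir /mnt/diska
mount -o loop itanalytics_datacollector_linux_<version>.iso /mnt/diska
cd /
/mnt/diska/dc_installer.sh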
or
<INSTALL_PATH>/UninstallerData/uninstall_dc.sh -r
Chapter 18
Validate data collection
This chapter includes the following topics:
■ Validation methods
Validation methods
Validation methods are initiated differently based on the subsystem vendor associated
with the Data Collector policy, but they perform essentially the same functions. Refer to
the following table for vendor-specific validation methods.
■ Test Connection - Initiates a connection attempt directly from a data collector
policy screen that attempts to connect to the subsystem using the IP addresses
and credentials supplied in the policy. This validation process returns either a
success message or a list of specific connection errors.
■ On-Demand data collection run - Initiates an immediate end-to-end run of the
collection process from the Portal without waiting for the scheduled launch. This
on-demand run also serves to validate the policy and its values (the same as
Test Connection), providing a high-level check of the installation at the individual
policy level, including a check for the domain, host group, URL, Data Collector
policy and database connectivity. This is initiated at the policy-level from
Admin>Data Collection>Collector Administration.
See “Working with on-demand Data Collection” on page 214.
■ CLI Checkinstall Utility - This legacy command-line utility performs both the Test
Connection function and the On-Demand data collection run from the Data Collector
server.
See “Using the CLI check install utility” on page 221.
Note: NetBackup IT Analytics does not recommend using the CLI Checkinstall
utility for any Data Collector subsystem vendor which supports On-Demand
runs.
Brocade Switch x
Cisco Switch x
Cohesity DataProtect x x
Commvault Simpana x
Compute Resources x
Dell Compellent x
EMC Avamar x
EMC Isilon x
EMC Symmetrix x x
EMC VNX x x
EMC VPLEX x
EMC XtremIO x x
HDS HCP x x
HDS HNAS x
HP 3PAR x
HP Data Protector x
HP EVA x
Hitachi Block x
Hitachi NAS x x
IBM Enterprise x
IBM SVC x
IBM VIO x x
IBM XIV x
Microsoft Azure x x
Microsoft Hyper-V x x
NetApp E Series x
Netapp x
OpenStack Ceilometer x x
OpenStack Swift x x
Test Connection is
included with the Get
Nodes function.
Pure FlashArray x x
VMWare x
Veritas NetBackup x x
On-Demand data collection serves multiple purposes. You can use it to:
■ Validate the collection process is working end-to-end when you create a data
collector policy
■ Launch an immediate run of the collection process without waiting for the
scheduled run
■ Populate your database with new/fresh data
■ Choose to view the collection logs on the portal while performing an on-demand
run.
To initiate an on-demand data collection
1 Select Admin > Data Collection > Collector Administration. All Data
Collectors are displayed.
2 Click Expand All to browse for a policy or use Search.
3 Select a data collector policy from the list. If the vendor is supported, the Run
button is displayed on the action bar.
4 Click Run. A dialog allowing you to select servers and individual probes to test
the collection run is displayed. The following example shows the Amazon Web
Services dialog. See the vendor specific content for details on probes and
servers.
6 The portal enables the user to log messages at various levels during the
collection process. The following options are available:
■ Enable Real-Time Logs: Logs generally useful information in real time while
the collection is in progress.
■ Enable Debug Logs: Logs information at a granular level.
7 Click Start. Data is collected just like a scheduled run plus additional logging
information for troubleshooting. Once started, you can monitor the status of
the run through to completion.
Note: If there is another data collection run currently in progress when you
click Start, the On-Demand run will wait to start until the in-progress run is
completed.
You can use the filter on the console to selectively view the logs of your choice.
The Collection Console icon is not visible if the data collection is not in
progress.
Note: The generated log file on the Data Collector server is located under:
<APTARE_HOME>/mbs/logs/validation/
5 Select Historic Collection and specify other details based on the field
descriptions below:
NetBackup Primary Servers (for NetBackup policies) Select the servers from which
the policy must collect the historic data.
Lookback Hours (for Cohesity policies) Enter the duration in hours. The moment
you initiate the on-demand collection,
historic data from the collection performed
within the Lookback Hours is retrieved,
based on the policy and the probe.
Start Date and Time Specify the start date and time of the
historic data collection.
End Date and Time Specify the end date and time of the
historic data collection.
Client Names (for NetBackup policies) Specify one or more client names for
NetBackup historic collection. Separate the
names by commas.
Note: The Start Date/Time and End Date/Time are calculated based on the
time zone of the server specified while adding the primary server on the policy
screen. Also, the Scheduled Collection interface remains disabled while the
on-demand collection is in progress.
6 Click Start. Data is collected just like a scheduled run plus additional logging
information for troubleshooting. You can monitor the collection progress of the
run through Collection Status page. The collection is queued if another data
collection run is in progress.
Note: NetBackup IT Analytics does not recommend using the CLI Checkinstall
utility for any Data Collector subsystem vendor which supports On-Demand runs.
The following directions assume that the Data Collector files have been installed
in their default location:
■ Windows:C:\Program Files\Veritas\AnalyticsCollector
■ Linux:/usr/openv/analyticscollector
If you have installed the files in a different directory, make the necessary path
translations in the following instructions.
Note: Some of the following commands can take up to several hours, depending
on the size of your enterprise.
To run Checkinstall
1 Open a session on the Data Collector server.
Windows: Open a command prompt window.
Linux: Open a terminal/SSH session logged in as root to the Data Collector
Server.
2 Change to the directory where you’ll run the validation script.
Windows: At the command prompt, type:
cd C:\Program Files\Veritas\AnalyticsCollector <enter>
Linux: At the command prompt, type:
cd /usr/openv/analyticscollector <enter>
■ Linux: /usr/openv/analyticscollector/mbs/logs/validation/
■ Introduction
Introduction
The installer configures the Data Collector to start automatically; however, it does
not actually start it upon completion of the installation because you must first validate
the installation. Follow these steps, for the relevant operating system, to manually
start the Data Collector service.
This also starts the Aptare Agent process, Zookeeper, and Kafka services on the
respective systems.
On Windows
The installer configures the Data Collector process as a Service.
On Linux
The installer automatically copies the Data Collector “start” and “stop” scripts to the
appropriate directory, based on the vendor operating system.
/opt/aptare/mbs/bin/aptare_agent start
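The same script is typically used to stop or check the agent; a hedged example (verify the arguments supported on your system):
/opt/aptare/mbs/bin/aptare_agent stop
/opt/aptare/mbs/bin/aptare_agent status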
Chapter 20
Uninstall the Data
Collector
This chapter includes the following topics:
For example:
/opt/aptare/UninstallerData/uninstall_dc.sh
■ Introduction
■ Cohesity
Introduction
After installing the backup Data Collectors, you may want to capture historical
backup events for inclusion in the NetBackup IT Analytics database.
Note: Use Historic Event Collection if the scheduled data collection process misses
data. For example, the server may have been unavailable for a period of time, or
you may want to capture data that was available before you actually installed the
Data Collector software.
Note: The maximum look-back period for historic data collection is 168 hours.
Historic collection is normally used for first-time collection. It retrieves the details
of backup jobs, restore jobs, and job failures from the database using SQL queries.
Windows:
C:\Program Files\Aptare\mbs\bin\commvault\cvsimpanadetails.bat
Linux:
<APTARE HOME>/mbs/bin/commvault/cvsimpanadetails.sh
Where:
■ This utility writes data to a set of files in the output directory specified in
output_dir.
■ v_datasets - extracts a record for each data set known to the MCS.
C:\Program Files\Aptare\mbs\bin\avamar\avamarhistoricdetails.bat
Linux:
<APTARE HOME>/mbs/bin/avamar/avamarhistoricdetails.sh
To capture the data from a specific period, use the following utility:
Where:
Note: If the Start and End Dates are not specified, the utility will capture events
that occurred in the last two weeks.
C:\Program Files\Aptare\mbs\bin\networker\nwhistoricevents.bat
Linux:
<APTARE HOME>/mbs/bin/networker/nwhistoricevents.sh
Where:
■ The EventCollectorID and the ServerID can be found by executing the following
utility:
For Windows: C:\opt\Aptare\mbs\bin\listcollectors.bat
For Linux: /opt/aptare/mbs/bin/listcollectors.sh
Note: If the Start and End Dates are not specified, the utility will capture events
that occurred in the last 24 hours.
C:\Program Files\Aptare\mbs\bin\dataprotector\hpdphistoricevents.bat
Linux:
<APTARE HOME>/mbs/bin/dataprotector/hpdphistoricevents.sh
Where:
■ The EventCollectorID and the ServerID can be found by executing the following
utility:
For Windows: C:\opt\Aptare\mbs\bin\listcollectors.bat
Note: If the Start and End Dates are not specified, the utility will capture events
that occurred in the previous 24 hours. HP Data Protector commands ignore the
time segment of the start and end date values. In addition, the end date value is
used as an “until” value.
Note: For example, a value of "2015-05-11 23:59:59" will only collect historic values
up to 2015-05-11 00:00:00. To collect values for the date of 2015-05-11, you should
enter an end date of "2015-05-12 00:00:00".
Windows:
C:\Program Files\Aptare\mbs\bin\tsm\tsmhistoricevents.bat
Linux:
<APTARE HOME>/mbs/bin/tsm/tsmhistoricevents.sh
Where:
■ The MetadataCollectorID and the ServerID can be found by executing the
following utility:
For Windows: C:\opt\Aptare\mbs\bin\listcollectors.bat
For Linux: /opt/aptare/mbs/bin/listcollectors.sh
■ yyyy-mm-dd hh:mm:ss date format.
■ Specifying verbose will log the IBM Spectrum Protect (TSM) commands called
to the metadata.log file.
Note: If the Start and End Dates are not specified, the utility will capture events
that occurred in the previous 24 hours.
C:\Program Files\Aptare\mbs\bin\veritas\load_nbu_backups.bat
<metaDataCollectorId> <primaryServerName> <client_name> "<Start_Date>"
"<End_Date>"
Linux:
<APTARE HOME>/mbs/bin/symantec/load_nbu_backups.sh
<metaDataCollectorId> <primaryServerName> <client_name> "<Start_Date>"
"<End_Date>"
Where:
■ The MetadataCollectorID can be found by executing the following utility:
For Windows: C:\opt\Aptare\mbs\bin\listcollectors.bat
For Linux: /opt/aptare/mbs/bin/listcollectors.sh
Note: This process will only load data for clients that are listed in standard policies.
It will not retrieve data for clients not explicitly listed in policies. For example, VMware
VMs that are part of a VMware Intelligent Policy will not be included.
Linux
1. Create a NetBackup client list and run load_nbu_backups.sh for each listed client,
passing the desired "<Start_Date>" and "<End_Date>" values (a sample loop follows
the list below).
Where:
■ The MetadataCollectorID can be found by executing the following utility:
For Windows: C:\opt\Aptare\mbs\bin\listcollectors.bat
For Linux: /opt/aptare/mbs/bin/listcollectors.sh
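A minimal sketch of such a loop, assuming the standard Linux bpplclients location and that the client name is the last field of each output line (paths, IDs, and dates are illustrative):
/usr/openv/netbackup/bin/admincmd/bpplclients -noheader -allunique > /tmp/client_list.txt
while read -r line; do
    # Client name is assumed to be the last field of each bpplclients line
    client=$(echo "$line" | awk '{print $NF}')
    <APTARE HOME>/mbs/bin/symantec/load_nbu_backups.sh <metaDataCollectorId> <primaryServerName> "$client" "<Start_Date>" "<End_Date>"
done < /tmp/client_list.txt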
Windows
1. Create a NetBackup client list:
C:\program files\Veritas\netbackup\bin\admincmd\bpplclients
-noheader -allunique > c:\client_list.txt
■ Jobs
■ TaskDefinition
■ Policy
■ Device
■ Schedule
■ ComplexTask
Windows:
C:\Program Files\Aptare\mbs\bin\backupexec\buehistoricevents.bat
Linux:
<APTARE HOME>/mbs/bin/backupexec/buehistoricevents.sh
Where:
■ yyyy-mm-dd hh:mm:ss date format.
■ Specifying verbose will log the Backup Exec commands called to the
metadata.log file.
Note: If the Start and End Dates are not specified, the utility will capture events
that occurred in the last 24 hours.
■ DNS Name
■ Instance UUID
■ BIOS UUID
As a result, the name of the same VM reported by NetBackup may differ depending
on how the policy is created, which may cause the same VM to be persisted in the
Aptare database under different names, resulting in duplicates.
To fix historic data containing duplicate clients, execute the following command-line
scripts on the Data Collector.
Warning: The command should be executed only after upgrading to version 10.6
P14 or 11.0.02.
Windows:
C:\Program Files\Aptare\mbs\bin\veritas\correct_vm_dup_clients.bat
<metaDataCollectorId> <primaryServerName>
Linux:
<APTARE HOME>/mbs/bin/symantec/correct_vm_dup_clients.sh
<metaDataCollectorId> <primaryServerName>
Cohesity
Cohesity Historic collection uses REST APIs to collect the session and backup
information.
To execute Cohesity Historic data collection from the portal, configure the
following advanced parameters to define the specific time frame.
■ COHESITY_BACKUP_LOOKBACK_HOURS
■ COHESITY_BACKUP_START_TIMESTAMP (Unix epoch time in microseconds)
Example: 1590488100000000 for (May 26, 2020 10:15:00 AM GMT)
■ COHESITY_BACKUP_END_TIMESTAMP (Unix epoch time in microseconds)
Example: 1590922800000000 (May 31, 2020 11:00:00 AM GMT)
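If needed, the microsecond values can be derived from a human-readable GMT time; a minimal sketch, assuming GNU date is available on the Data Collector host:
# Convert 2020-05-26 10:15:00 GMT to Unix epoch microseconds (matches the example above)
echo $(( $(date -u -d "2020-05-26 10:15:00" +%s) * 1000000 ))
# Prints 1590488100000000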
Note: If the above parameters are not set, the default look-back time is 36 hours.
REST APIs
■ GET /irisservices/api/v1/public/protectionRuns
All the parent and child data
■ GET /irisservices/api/v1/public/restore/tasks
All the recovery-related data
■ GET /irisservices/api/v1/public/protectionSources/objects/
Returns the Protection Source objects corresponding to the specified IDs.
■ GET /irisservices/api/v1/public/protectionJobs/
All Protection Jobs currently on the Cohesity Cluster are returned
■ GET /irisservices/api/v1/public/protectionPolicies/
Returns the created Protection Policy
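For troubleshooting, any of these endpoints can be exercised manually; a hedged sketch using curl (the cluster address, access token, and time-range query parameters are assumptions for illustration):
# Query protection runs for a time window; -k skips certificate validation (lab use only)
curl -sk -H "Authorization: Bearer <accessToken>" \
  "https://<cohesity_cluster>/irisservices/api/v1/public/protectionRuns?startTimeUsecs=1590488100000000&endTimeUsecs=1590922800000000"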
■ DELLEMC_NETWORKER_START_TIME
■ DELLEMC_NETWORKER_END_TIME
■ Date
Format - 'yyyy-MM-ddTHH:mm:ss'.
Example: 2018-05-10T16:00:03. This will be Dell EMC NetWorker server time.
■ Host
Add the Dell EMC NetWorker servers to which this advanced parameter should
be applied.
REST API
global/jobs - This call is used in REST API to collect job details.
Appendix B
Firewall configuration:
Default ports
This appendix includes the following topics:
https 443
Kafka 9092
MS Exchange 389
MS SQL 1433
Oracle 1521
WMI 135
ZooKeeper 2181
Note: NetBackup IT Analytics uses a standalone installation of a single-node
Apache ZooKeeper server. For secure communications, the ZooKeeper single-node
cluster must be protected from external traffic using network security such as a
firewall. This is remediated by ensuring that the ZooKeeper port (2181) is only
accessible on the local host where the NetBackup IT Analytics Portal/Data Collector
(which includes Apache ZooKeeper) is installed.
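One way to enforce this on a Linux host is to drop non-local traffic to the ZooKeeper port; a hedged example using iptables (rule order and persistence depend on your firewall setup):
# Allow loopback connections to ZooKeeper and drop all other traffic to port 2181
iptables -A INPUT -p tcp --dport 2181 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 2181 -j DROP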
EMC VNX (Celerra) XML API 443, 2163, 6389, 6390, 6391, 6392
HP EVA 2372
Hitachi Vantara All-Flash and Hybrid Flash Storage Hitachi Ops Center Configuration Manager REST API: 23450 for HTTP and 23451 for HTTPS.
DSCLI
WMI 135
80/443
DCOM >1023
Dell EMC Networker Backup & Recovery Port used for Dell EMC NetWorker
REST API connection. Default: 9090.
SSH 22
80/443
SSH 22