NetBackup IT Analytics Administrator Guide v11.5
Release 11.5
NetBackup IT Analytics System Administrator Guide
Last updated: 2025-02-03
Legal Notice
Copyright © 2025 Veritas Technologies LLC. All rights reserved.
Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies
LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website.
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents
■ Portal updates
Performance profiling
Enable the following URL for Performance Profiling:
https://cloud.aptare.com/remoting/CommunityService
Verify the connectivity by entering the URL into a browser window. The following
message should display:
# /opt/aptare/mbs/bin/agentversion.sh
Windows:
> C:\opt\aptare\mbs\bin\agentversion.bat
Sample Output:
datarcvr Version
Version: 9.0.0.01
aptare.jar Version
2. If you run the downloadlib utility and you get the following message, it indicates
that the restore failed or is in progress: Restore has been running for more
than 10 minutes, unable to proceed with collector upgrade.
To proceed, delete the restore.txt file and re-run downloadlib.
Windows: <Home>\upgrade\restore.txt
Linux: <Home>/upgrade/restore.txt
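For example, on a Linux Portal where <Home> is the default /opt/aptare, the stale
marker can be removed as follows before re-running downloadlib (a sketch; adjust
the path to match your <Home> directory):
# rm /opt/aptare/upgrade/restore.txt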
Portal updates
Portal updates contain feature enhancements and bug fixes. These updates are
packaged with an auto-installer.
Once the upgrade package has been installed, perform the upgrade using:
Linux:
# /opt/aptare/upgrade/upgrade.sh
Windows:
> C:\opt\aptare\upgrade\upgrade.bat
Chapter 3
Backing up and restoring
data
This chapter includes the following topics:
File Comments
/etc/init.d/*aptare*
/etc/rc3.d/*aptare*
/etc/rc5.d/*aptare*
Other Files
■ /usr/java
■ /etc/profile.d/java.sh
Note: When backing up the above directories, follow the symbolic links to back up
the source directory. For example: /usr/java is typically a symbolic link to
/usr/java_versions/jdk<version>
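As a sketch, assuming GNU tar is available, the -h option dereferences symbolic
links so the linked source directories are captured (the destination path /backup
is a placeholder):
# tar -chzf /backup/java_backup.tar.gz /usr/java /etc/profile.d/java.sh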
/opt/aptare/bin/oracle stop
2 Using your organization’s file system backup software, back up all the data
files from:
$ORACLE_HOME/dbs/initscdb.ora
/data0?/*
$ORACLE_HOME\dbs\initscdb.ora
C:\oradata
Note: During installation, you may choose a different drive for the oradata
install, so verify its location before backing up the data.
6. Copy the
c:\opt\aptare\datarcvrconf\aptare_external_password.properties file
to c:\opt\oracle\logs folder. This file will be required during the database
import.
./export_database_template.sh
7. Run the cp
/opt/aptare/datarcvrconf/aptare_external_password.properties /tmp
command. This file will be required during the database import.
su - aptare
crontab -e
04 15 * * * /opt/aptare/database/tools/export_database_template.sh
>>/tmp/database_export.log 2>&1
crontab -l
c:\opt\oracle\database\tools\expdp_database_template.bat
tables. To create a full export file of all database objects, refer to the following
section.
See “Oracle database: Export backups” on page 22.
For optimum performance, use this utility rather than your favorite backup solution’s
backup utility (for example, rman) because most backup solutions require archive
logging. This setting is not enabled or exposed because archive logging can have
a significant, negative impact on performance.
You will import this export in the event that you need to:
■ Restore the entire Reporting Database.
See “Restoring the NetBackup IT Analytics system” on page 26.
■ Retrieve a data table that’s been corrupted or accidentally deleted. Simply drop
the portal user, then import the export.
See “Import the Oracle database” on page 27.
Note: If you do not have a full system backup, it still may be possible to recover by
re-installing the NetBackup IT Analytics application, re-installing the Oracle binaries,
and then restoring the Oracle database. Contact Veritas Support if you need to
follow this recovery method.
If your data loss is isolated to the Oracle database, it may be possible to skip a full
restore and proceed with restoring the Oracle database.
Import the Oracle database
Note: Before restoring user objects, stop the Tomcat and Portal processes.
■ The Oracle user must have read and execute privileges on these files before
starting the database export.
1. Log into the Linux database server and switch to the aptare user.
2. Place the export file aptare_scdb.exp in the /tmp directory.
If you have a different preferred directory (for example, /new_directory_path),
then place aptare_scdb.exp in your preferred directory
(/new_directory_path). Subsequently, change the path for the creation of
directory from /tmp to the new directory (new_directory_path) in the
/opt/aptare/database/tools/drop_users_linux.sql file.
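For example, assuming your preferred directory is /new_directory_path, the
directory path can be updated in place with sed (a sketch; back up the file first):
# sed -i 's|/tmp|/new_directory_path|g' /opt/aptare/database/tools/drop_users_linux.sql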
/opt/aptare/bin/aptare stop
/opt/aptare/bin/oracle start
/opt/aptare/bin/aptare status
chmod +x /opt/aptare/database/tools/import_database_template.sh
/opt/aptare/database/tools/import_database_template.sh
10. After successful completion, the data pump export file aptare_scdb.exp is
saved on the Linux database server in the /tmp directory.
The import_database_template.sh script unlocks the Portal user, grants privileges,
and validates the packages after the import completes, so these steps do not need
to be run manually. The script also addresses the compilation warnings for the
packages that follow.
The import log, import_scdb.log, is located in the /tmp directory.
1. Check the log file for compilation warnings for the packages:
■ view apt_v_solution_history_log
■ cmv_adaptor_pkg
■ avm_common_pkg
■ sdk_common_pkg
■ load_package
■ common_package
■ util
These compilation warnings are addressed by the script itself and no action is
required from the user.
Note: If you are importing a database from version 10.4, upgrade the portal
after the import to a 10.5 build.
2. This step is required only if you are exporting the database from NetBackup
IT Analytics version 10.5 or above. Run the following commands to copy the
aptare.ks file to the datarcvrconf folder.
cp /tmp/aptare.ks /opt/aptare/datarcvrconf/
chown aptare:tomcat /opt/aptare/datarcvrconf/aptare.ks
chmod 660 /opt/aptare/datarcvrconf/aptare.ks
services are restarted. The following modifications are required for both files.
Modification required for portal.properties file
■ Edit /opt/aptare/portalconf/portal.properties file
■ Remove all the characters following the first "=" on the lines containing
db.password.encrypted and db.ro_user_password.encrypted
#Database connection
db.driver=oracle.jdbc.driver.OracleDriver
db.url=jdbc:oracle:thin:@//localhost:1521/scdb
db.user=portal
db.password=portal
db.password.encrypted=
db.connection.max=75
db.connection.min=25
db.connection.expiration=30
db.ro_user=aptare_ro
db.ro_user_password=aptaresoftware123
db.ro_user_password.encrypted=
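As a sketch, both encrypted entries can be blanked in one pass with sed, assuming
the default file location (back up the file before editing):
# sed -i -e 's/^db\.password\.encrypted=.*/db.password.encrypted=/' \
-e 's/^db\.ro_user_password\.encrypted=.*/db.ro_user_password.encrypted=/' \
/opt/aptare/portalconf/portal.properties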
Note: The UserId and ro_user information shown is for a default installation.
Clear the text entries to match your environment. These will be re-encrypted
when the portal services are restarted.
<dataSource>
<Driver>oracle.jdbc.driver.OracleDriver</Driver>
<URL>jdbc:oracle:thin:@//localhost:1521/scdb</URL>
<UserId>portal</UserId>
<Password>portal</Password>
<oracle_service_name>scdb</oracle_service_name>
<ro_user>aptare_ro</ro_user>
<ro_password>aptaresoftware123</ro_password>
<MaxConnections>150</MaxConnections>
<MinConnections>5</MinConnections>
<ConnExpirationTime>5</ConnExpirationTime>
</dataSource>
4. This step is required only if you are exporting the database from NetBackup
IT Analytics version 11.0 or above. Execute the following commands to copy
the aptare_external_password.properties file to the datarcvrconf directory.
cp /tmp/aptare_external_password.properties
/opt/aptare/datarcvrconf/
chown aptare:tomcat
/opt/aptare/datarcvrconf/aptare_external_password.properties
chmod 660
/opt/aptare/datarcvrconf/aptare_external_password.properties
6. Restart all Oracle and APTARE services by running the following command as the root user:
/opt/aptare/bin/aptare restart
7. If the Portal is deployed at a custom path (other than the default path
/opt/aptare), update the system parameter as follows:
■ Log in to the Portal host as the root user and run:
su - aptare
cd /customPath/aptare/database/tools
■ Log in to SQL*Plus as the Portal user, substituting pwd with your password:
sqlplus portal/pwd@<ServiceName>
@update_system_parameter.sql
■ Enter the custom installation path of the Portal when prompted.
#Database connection
db.driver=oracle.jdbc.driver.OracleDriver
db.url=jdbc:oracle:thin:@//localhost:1521/scdb
db.user=portal
db.password=portal
db.password.encrypted=
db.connection.max=75
db.connection.min=25
db.connection.expiration=30
db.ro_user=aptare_ro
db.ro_user_password=aptaresoftware123
db.ro_user_password.encrypted=
Note: The UserId and ro_user information shown is for a default installation.
Clear the text entries to match your environment. These will be re-encrypted
when the portal services are restarted.
<dataSource>
<Driver>oracle.jdbc.driver.OracleDriver</Driver>
<URL>jdbc:oracle:thin:@//localhost:1521/scdb</URL>
<UserId>portal</UserId>
<Password>portal</Password>
<oracle_service_name>scdb</oracle_service_name>
<ro_user>aptare_ro</ro_user>
<ro_password>aptaresoftware123</ro_password>
<MaxConnections>150</MaxConnections>
<MinConnections>5</MinConnections>
<ConnExpirationTime>5</ConnExpirationTime>
</dataSource>
6. Stop all Oracle and Aptare services using stopAllServices from the Windows
Services tab.
7. Verify the Oracle TNS Listener is running and start OracleServicescdb from
the Windows Services tab.
8. From the command prompt, run the script import_database_template.bat by
executing the command:
c:\opt\oracle\database\tools\import_database_template.bat
Note: If you are importing a database from version 10.4, upgrade the portal
after the import to a 10.5 build.
su - aptare
sqlplus / as sysdba
alter session set container=scdb;
/opt/aptare/oracle/bin/expdp
parfile=/opt/aptare/database/tools/expdp_scdb.par
6 You can also choose to omit the par file and include the parameters in the expdp
command directly. In other words, the above command can be replaced by
the following command, which can also be executed as the aptare user.
/opt/aptare/oracle/bin/expdp
system/aptaresoftware@//localhost:1521/scdb FULL=Y
directory=datapump_dir dumpfile=aptare_scdb.exp
logfile=export_scdb.log CONTENT=ALL flashback_time=systimestamp
6 Ensure the Oracle listener is running. As the aptare user, check the status of
the listener with the following command: lsnrctl status
su - aptare
sqlplus / as sysdba
alter session set container=scdb;
/opt/aptare/oracle/bin/impdp
parfile=/opt/aptare/database/tools/impdp_scdb.par
9 You can also choose to omit the par file and include the parameters in the impdp
command directly. In other words, the above command can be replaced by
the following command, which can also be executed as the aptare user.
/opt/aptare/oracle/bin/impdp
system/aptaresoftware@//localhost:1521/scdb
schemas=portal,aptare_ro directory=datapump_dir
dumpfile=aptare_scdb.exp logfile=import_scdb.log
sqlplus / as sysdba
@/opt/aptare/database/tools/unlock_portal_linux.sql
11 After exiting from sqlplus, execute the following command as the aptare user
sqlplus portal/portal@//localhost:1521/scdb
@/opt/aptare/database/tools/validate_sp.sql
Note: If you are importing a database from version 10.4, upgrade the portal to a
10.5 build after the import.
cp /tmp/aptare.ks /opt/aptare/datarcvrconf/
chown aptare:tomcat /opt/aptare/datarcvrconf/
chmod 664 /opt/aptare/datarcvrconf/aptare.ks
Exit
5 You can also choose to omit the par file and include the parameters in the expdp
command directly. In other words, the above command can be replaced by
the following command: c:\opt\oracle\bin\expdp
system/aptaresoftware@//localhost:1521/scdb FULL=Y
DIRECTORY=datapump_dir LOGFILE=export_scdb.log
DUMPFILE=aptare_scdb.exp CONTENT=ALL FLASHBACK_TIME=systimestamp
6 After successful completion, the data pump export file aptare_scdb.exp is saved
in C:\opt\oracle\logs directory of the Windows Database server.
Alter session set container = scdb; (this command is required only for a
container database; otherwise, switching to the container is not required)
DROP USER aptare_ro CASCADE;
EXIT;
8 You can also choose to omit the par file and include the parameters in the impdp
command directly. In other words, the above command can be replaced by
the following command: c:\opt\oracle\bin\impdp
"sys/*@//localhost:1521/scdb as sysdba" SCHEMAS=portal,aptare_ro
DIRECTORY=datapump_dir LOGFILE=import_scdb.log
DUMPFILE=aptare_scdb.exp
■ Check the log file for the compilation warnings for the packages: view
apt_v_solution_history_log, cmv_adaptor_pkg, avm_common_pkg,
sdk_common_pkg, server_group_package, load_package, common_package,
util. These compilation warnings are addressed by the script itself and no action
is required from the user.
Note: If you are importing a database from version 10.4, upgrade the portal to a
10.5 build after the import.
■ Monitoring tablespaces
C:\opt\aptare\utils\startportal.bat
C:\opt\aptare\utils\stopportal.bat
# cd /opt/aptare/bin
./tomcat-portal start|restart
# cd /opt/aptare/bin
./tomcat-portal stop
C:\opt\aptare\utils\
If all components (Portal Server and Reporting Database) are on the same
server and you want to start all components including the Reporting Database,
use:
startallservices.bat
If all components (Portal Server and Reporting Database) are on the same
server and you want to only start the Reporting Database use:
startoracle.bat
2 Verify the following services have started using the Windows Services Control
panel:
■ Oracle Service SCDB
■ OracleSCDBTNSListener
C:\opt\aptare\utils\
If all components (Portal Server and Reporting Database) are on the same
server and you want to stop all components including the Reporting Database,
use:
stopallservices.bat
If all components (Portal Server and Reporting Database) are on the same
server and you want to only stop the Reporting Database use:
stoporacle.bat
2 Verify the following services have stopped using the Windows Services Control
panel:
■ Oracle Service SCDB
■ OracleSCDBTNSListener
# cd /opt/aptare/bin
./aptare start|restart
2 If all components (Portal Server and Reporting Database) are on the same
server and you want to only start the Reporting Database, run the following
command:
# cd /opt/aptare/bin
oracle start
# cd /etc/init.d
# ./aptare_agent status
# ./aptare_agent start|restart
# cd /etc/init.d
# ./aptare_agent status
# ./aptare_agent stop
Monitoring tablespaces
The Reporting Database contains the user tablespaces outlined in the following
table.
See Table 4-1 on page 48.
During your initial installation, NetBackup IT Analytics created these user tablespaces
and corresponding data files. These tablespaces have AUTOEXTEND turned on,
so when a data file fills up, the tablespace increases the data file. You do not need
to add any data files. However, you must add disk space to the mount point as
needed; otherwise, NetBackup IT Analytics cannot extend the data files.
Table 4-1 User tablespaces and data files
Tablespace Data file(s)
aptare_tbs_data_1m aptare_tbs_data_1m_01.dbf
aptare_tbs_idx_1m aptare_tbs_idx_1m_01.dbf
aptare_tbs_data_20m aptare_tbs_data_20m_01-09.dbf
aptare_tbs_idx_10m aptare_tbs_idx_10m_01-09.dbf
aptare_tbs_data_200m aptare_tbs_data_200m_01-09.dbf
aptare_tbs_idx_100m aptare_tbs_idx_100m_01-09.dbf
aptare_tbs_data_200m_lob aptare_tbs_data_200m_lob_01-09.dbf
aptare_tbs_data_200m_col aptare_tbs_data_200m_col_01-09.dbf
aptare_undo_tbs aptare_undo_tbs_01.dbf
aptare_temp_tbs aptare_tbs_temp_01.dbf
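Because these tablespaces autoextend, the practical check is how close each data
file is to its maximum size and whether the mount point still has free space. A
minimal SQL*Plus sketch using the standard Oracle dictionary view dba_data_files
(run as a DBA user; assumes the default scdb container used elsewhere in this
guide):
sqlplus / as sysdba
alter session set container=scdb;
SELECT tablespace_name,
ROUND(SUM(bytes)/1024/1024) AS allocated_mb,
ROUND(SUM(maxbytes)/1024/1024) AS max_mb
FROM dba_data_files
WHERE tablespace_name LIKE 'APTARE%'
GROUP BY tablespace_name;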
Chapter 5
Accessing NetBackup IT
Analytics reports with the
REST API
This chapter includes the following topics:
■ Overview
■ Exporting reports
Overview
With the REST APIs, you can access report data as follows:
■ You can extract data from tabular reports using pagination in JSON and XML
formats.
■ You can export reports in HTML, PDF, and CSV formats.
■ You can export custom dashboards in HTML and PDF formats.
for the portal for user authentication. Once authenticated, your user information is
used for authorization on the portal.
If you have upgraded from version 10.5 or earlier of NetBackup IT Analytics, you
may see an authentication error while accessing the reports, because the
authentication method has changed from basic authentication to API key. You may
need to update your API code or scripts to access the reports.
The portal generates an API key and prompts you to copy it to your system.
This key is unique for each user and is displayed only once; a new key is
generated at every attempt.
2 Click Copy & Close. You need to use this key to execute the REST API.
3 Save the key securely, as you must provide the API key every time you
access APTARE reports using the REST APIs.
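For illustration only, a report endpoint can be called with the key supplied in a
request header. The header name (X-API-Key) and endpoint path below are
hypothetical placeholders, not confirmed values; consult the Portal's REST API
(Swagger) reference for the exact names:
curl -H 'X-API-Key: <your-api-key>' 'https://<portal-host>/api/reports/<report-id>'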
Extracting data from tabular reports (with pagination)
Note: If a user is inactive or removed from the LDAP, update the NetBackup IT
Analytics Portal manually to prevent the user from using the REST API. If not done,
the user automatically becomes inactive after the configured number of days.
3 Press Ctrl + Alt + T to view the Report Statistics and find the Report ID.
Exporting reports
With the REST API, you can export reports in HTML, PDF, and CSV formats.
As a prerequisite, ensure you have the API key required for user authentication on
Swagger. See “Authentication for REST APIs” on page 49 for steps to generate
the API key.
To extract data from tabular reports
1 In the NetBackup IT Analytics portal, generate a tabular report and save it.
3 Press Ctrl + Alt + T to view the Report Statistics and find the Report ID.
As a prerequisite, ensure you have the API key required for user authentication on
Swagger. See “Authentication for REST APIs” on page 49 for steps to generate
the API key.
To export custom dashboards
1 In the NetBackup IT Analytics portal, generate a custom dashboard and save
it.
sqlplus portal/portal@//localhost:1521/scdb
In this example, only the media_type will be used when the calculation searches
for an estimated capacity override.
5. To verify estimated capacities after updating the database table, execute the
following commands, supplying the NetBackup Primary Server ID:
sqlplus portal/portal@//localhost:1521/scdb
execute media_package.setupTapeMediaCapacity(<primary server ID>);
When you create this custom report via the Report Template Designer, configure
the Report Designer to include the selection of a host group, enabling users to
narrow the scope of the report when they generate the report.
Chapter 7
Automating host group
management
This chapter includes the following topics:
■ General utilities
To make host group changes in bulk, use the PL/SQL utilities that NetBackup IT
Analytics provides. Instead of manually creating and organizing host groups through
the Portal, you can run PL/SQL utilities to do the work for you.
These utilities provide the following capabilities:
■ Matching. You can base your host group management on specific criteria. For
example, if you want to organize backup servers by geographical location and
your backup servers have a naming convention that indicates the servers’
region, you need only instruct the SQL utilities to match on that naming
convention.
■ Automation. You can automate how you create and organize host groups. You
can automate how you do the following:
■ Move or copy clients.
■ Move and delete host groups.
■ Organize clients into groups by management server and IBM Tivoli Storage
Manager server.
■ Set up an inactive clients group.
■ Set up a host group for clients in inactive policies.
■ Set up clients by policy, policy type, policy domain, and IBM Tivoli Storage
Manager instance.
■ Load details of new hosts or update existing hosts.
■ Load relationships between hosts and host groups.
These utilities communicate directly with the Reporting Database to manage and
manipulate the host group membership for large quantities of servers. There are
two types of utilities:
■ General. These utilities apply to all backup solutions.
■ Product-specific. These utilities only apply to a specific backup solution.
sqlplus portal/<portal_password>@//localhost:1521/scdb
SET SERVEROUTPUT ON
General utilities
The utilities contained in this section apply to all host groups and hosts.
■ See “Categorize host operating systems by platform and version” on page 66.
■ See “Identifying a host group ID” on page 69.
■ See “Move or copy clients” on page 69.
■ See “Organize clients by attribute” on page 70.
■ See “Move host group” on page 71.
■ See “Delete host group” on page 72.
■ See “Move hosts and remove host groups” on page 72.
Categorize host operating systems by platform and version
(Default OS normalization table excerpt: os_version_regex values such as
10.04LTS|12.04LTS|14.04LTS|16.04LTS, \d+\.?\d?+, and \s?\d?+, and
os_platform_regex entries such as Linux (64-bit)|(32-bit) at priority 11 and
vmnix -x86|x86 at priority 13.)
Usage To insert a regular expression row into the database table, use this command:
execute server_group_package.insertCustomerOsNormData(null,
'os_platform_regex', 'os_platform', 'os_version_regex',
'ignore_string', priority, domain_id);
To update values in a regular expression row in the database table, use this command:
execute server_group_package.insertCustomerOsNormData
(os_normalization_id, 'os_platform_regex', 'os_platform',
'os_version_regex', 'ignore_string', priority, domain_id);
Where:
os_normalization_id: IDs less than 100000 are system defaults and cannot be removed, but their
values can be modified. When inserting a regular expression into the database table, this value
must be null because the process assigns this number.
os_platform_regex: These strings are used to match a substring in the collected text to identify
the platform. This field cannot be null.
os_platform: This is the value that is saved to the database when the regular expression is
encountered in the collected Host OS. The platform value can never be null; however, the version
derived from the version regex may be null.
os_version_regex: This is the regular expression used to match a substring in the collected text
to identify the version.
ignore_string: These strings are ignored and are treated as irrelevant details when determining
the platform or version.
priority: This value indicates precedence: the higher the value, the higher the priority. For example,
Red Hat has a higher priority than Linux, which means that a Host OS that contains a Red Hat
substring and a Linux substring will result in a Host OS of Red Hat. User-defined regular
expressions must have a priority higher than 1 to override system defaults. This field cannot be
null.
domain_id: The Domain ID is shipped with a null default value. In multi-tenancy environments,
such as Managed Services Providers, the Domain ID can be updated to change the processing
for a specific domain/customer.
Note that a Creation Date also is saved in the database table. This is the date and time that the
Regex record was created in the database.
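For example, a user-defined rule for a hypothetical Ubuntu LTS platform might be
inserted as follows (illustrative values only: the regex, platform string, and priority
are examples, not shipped defaults; the priority is set above 1 so the rule overrides
system defaults):
execute server_group_package.insertCustomerOsNormData(null,
'Ubuntu', 'Ubuntu', '\d+\.\d+LTS', null, 10, null);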
Identifying a host group ID
Where:
source_host_group is the full pathname to the source host group, for example
/ITAnalytics/Primary/GroupA
client_name_mask is a string that can contain wildcards (*). For example, abc* indicates all clients
that have an internal_name that starts with abc. To process all clients use the value NULL
(which should not be within quotes).
Create a host group named Geography. This will be the destination group that will be used to
organize the clients by location.
For a subset of a host group’s clients, set their Geography attribute value to London and for
another subset of clients, set their Geography attribute to New York.
Use the following groupClientsbyAttributes utility to organize the clients that have a Geography
attribute configured.
Where 300000 is the group ID of the root group, Global; 302398 is the ID of the Geography group
you just created.
Additional References:
Where:
source_Group_ID is the numeric identifier of the host group for which you want to group the
clients.
destination_group_ID is the numeric identifier of the group under which you want to group the
clients.
cascade_Source_Group is a numeric flag that indicates if you want this utility to process the
source host group’s sub-groups and organize those clients in the destination group.
attribute_List is a comma-separated list of attribute names, each enclosed in straight single
quotes. These names are used to create the sub-groups that organize the clients underneath the
source group.
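Bringing the Geography example together, the call might look like the following
sketch. The package prefix and the exact form of the attribute-list argument are
assumptions (they are not shown here), so check the utility's definition in your
installed stored procedures before running it:
execute common_package.groupClientsbyAttributes(300000, 302398, 1, 'Geography');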
Where:
source_host_group is the full pathname to the source host group, for example
/ITAnalytics/Primary/GroupA. Be sure to use the host group name, not the host group ID.
destination_host_group is the full pathname to the destination host group. Be sure to use the
host group name, not the host group ID.
Delete host group
Where:
parent_group_id is the group id of the parent group which contains the group to be deleted.
Description Prior versions of NetBackup IT Analytics automatically created several server/host groups during
backup data collection. In certain environments, these auto-generated groups may not be needed,
as other host groups are more relevant. This utility can be used to clean up a Portal’s host groups
by moving servers/hosts and child host groups from a host group and then deleting the source
host group. While this utility, by default, is intended for system-created host groups, it can be
used for any host group that you want to delete, but retain its contents.
Note: Once this process completes, log out of the Portal and log back in before accessing host
groups and hosts in the Inventory.
Usage server_mgmt_pkg.serverGroupCleanup(<processingMode>,
'<domain_name>', (<server_group_names_list>),
'<log_file_path_name>', '<log_file_name>');
Where:
processing_mode is either 1 = Validate or 2 = Execute. Run this command in Validate mode first
to understand what hosts will be moved and what host groups will be deleted.
domain_name, enclosed in single straight quotes, is the case-insensitive name of the NetBackup
IT Analytics domain for the group to be deleted. See the Best Practice listed above.
log_file_path_name, enclosed in single straight quotes, is the location of the log file for this
process.
log_file_name, enclosed in single straight quotes, is the name of the log file.
execute server_mgmt_pkg.serverGroupCleanup
(1, 'EMEAfinance', stringListType
('NetBackup Policy Types','NetBackup Policies',
'Inactive Policy Clients', 'Policy Domains'),
'/tmp', 'serverGrpCleanup.log');
execute server_mgmt_pkg.serverGroupCleanup
(2, 'EMEAfinance', stringListType
('NetBackup Policy Types','NetBackup Policies',
'Inactive Policy Clients', 'Policy Domains'),
'/tmp', 'serverGrpCleanup.log');
exec server_mgmt_pkg.serverGroupCleanup
(1, 'EMEAfinance', NULL, '/tmp',
'serverGrpCleanup.log');
Description This utility enables you to create a hierarchy of server groups and link all clients that are members
of a server into the respective host group.
For example, in an IBM Tivoli Storage Manager environment if you have two IBM Tivoli Storage
Manager servers called TSM1, TSM2, this utility creates two host groups, TSM1 and TSM2, and
links the IBM Tivoli Storage Manager server’s clients into the corresponding IBM Tivoli Storage
Manager host group.
Example:
exec common_package.moveClientsIntoServerGroups
(300000, 300010, 1, 1);
Where:
destination_group_id is the group ID in which host groups by management server will be created.
Create a host group under source_group_id called <vendor_name> Servers and use the group
ID of this new host group for the second parameter.
When you organize by server, if a host group exists anywhere under the source group hierarchy
with the name of that server, the routine associates the clients with that folder and does not create
a new folder under the destination folder. This association occurs whether you explicitly specify
the destination folder or if the destination is NULL. However, if you pass a source folder that is
at a lower level, the routine only checks for a folder under that hierarchy. If you specify NULL as
the destination folder, the routine creates a host group under the source_group_id called Servers.
move_or_copy_flag can be set to 0=Link (copy) clients or 1=Move clients. If set to 0, the utility
links the clients to their respective host groups and keeps the clients in their original group location.
If set to 1, the utility moves all clients from the source host group and to their respective host
groups.
The utility processes and organizes all clients of the source group hierarchy into the target server
grouping. However, if the move_or_copy flag is set to 1, the utility removes only clients in the
top level source_group_id group--and does not remove those already organized in lower-level
sub-groups.
latest_server_only, when set to 1, indicates the last server to back up the client; otherwise, set
this flag to 0.
Description Under certain circumstances, backup clients may have duplicate entries in the NetBackup IT
Analytics database. This utility enables you to merge the data of clients that appear more than
once in the database.
In most cases, it is not necessary to shut down the data receiver while the client records are
being merged. Although not required, it is recommended that you shut down the data receiver
before executing this utility so that data will not continue to be collected for the hosts that are
being merged.
Usage execute
duplicate_package.mergeDuplicateServers(<'host_grp'>,<host_name_type>);
Example:
exec duplicate_package.mergeDuplicateServers('/Corp',1);
Where:
host_name_type indicates whether to use only the host’s base name while finding duplicates, or
use the fully qualified name. 0 = fully qualified host name, 1 = host base name.
Recommendations:
Follow these recommendations before you merge duplicate hosts:
■ Carefully set the report scope and generate the Duplicate Host Analysis report,
as its CSV export copy serves as an input for the host merge script.
■ Use a copy of the original CSV export as an input for the merge duplicate hosts
script. The original CSV can serve as a reference in future.
■ Since the host merge process is irreversible, it must be executed by an
administrator with comprehensive knowledge of backup solutions.
■ Back up the database before performing the host merge since the process is
irreversible.
Host Type for the Duplicate Host
■ Clients Only: finds duplicates only for hosts that are identified as Clients
(hosts backed up by any backup system).
■ All: detects duplicates for all types of hosts.
Surviving Host Selection Criteria
Allows you to specify the criteria to select the surviving host among the duplicates
when performing a host merge.
■ Highest Job Count: Selects the host with the most associated jobs as the
surviving host. This is the default criterion of the legacy host merge option,
as a higher job count suggests that the host has more data associated with it.
■ Most Recently Updated: Selects the most recently updated host as the
surviving host. Use this option when the duplicate hosts found are no longer
actively collecting new data, as it helps to retain the most current host.
Filter by Common Attributes
Select this checkbox to have the report scope apply attributes using "AND" logic.
When selected, the report displays only the results that match the intersection of
the selected criteria.
Apply Attributes to Backup Servers
Select this checkbox to apply the attributes only to the backup servers, instead of hosts.
3 After generating the report, export the report in CSV format on your system.
4 Create a copy of the CSV report and prepare the copy for the host merge script
as described in the next step.
Update the values of the following columns in the CSV copy as suggested below:
1. Surviving Host: Default value of this report column is Main, which indicates
that the duplicates will be merged into the Main host. To change the surviving
host, change its value to Duplicate. This way, all hosts are merged into the
duplicate host. Main and Duplicate are the only acceptable values in this
column.
2. Is Duplicate Host's Merge supported: This column supports only Yes and
No as values. Delete all the rows containing the value No from the report CSV
that you plan to use as input for the host merge process.
Make no modifications other than the above to the report CSV that you plan to
use for the host merge process. Your report CSV is now ready to serve as an input
for the host merge script.
Step-3: Run the host merge script using the report CSV
The host merge script has a provision to perform a pre-assessment during which
it evaluates errors in the CSV and suggests corrections before proceeding further.
You must ensure a successful pre-assessment and only then proceed to merge
the hosts. Any error in the report CSV will result in the script aborting the process.
You must provide the report CSV path along with the file name, log file path, and
log file name when you run the script.
Caution: As the host merge process is irreversible, you must back up your database
and follow all the recommendations suggested above before you proceed.
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
Where file_name is the fully qualified path to the csv file that contains the aliases to be loaded.
The second field, hostname, is the external name, as defined in the Portal database.
Logic Conditions If the host alias already exists, no updates take place.
If the host alias does not already exist in the Reporting Database, the utility adds it.
The utility applies case differences in the input file as updates to preexisting rows.
Logging The utility logs all additions, updates, warnings and errors to the specified log file. Logging strings
are typically in the format: Date -> Time -> load_package:sub_routine -> Action
/opt/aptare/database/hosts.csv
Example:
/APTARE/Test,testhost01,testhost01,description,location,
172.20.16.1,Sun,E450,Solaris 10
/APTARE/Test,testhost02,testhost02,,location,
172.20.16.2,Sun,,Solaris
The first field, path_to_host_group, must be the full path to an existing host group; otherwise,
the host will not be inserted.
Logic Conditions If the host already exists in the specified host group, the utility updates its details.
If the host does not already exist in the Reporting Database, the utility adds the host to the
specified host group.
If a host attribute field has a NULL value in the input file, the corresponding field in the database
will not be updated for a pre-existing row.
The utility applies case differences in the input file as updates to preexisting rows.
Since the primary key to the record is the internal_name, the internal_name for a host cannot
be updated via this utility.
If the number of parameters passed in a row exceeds 9, the utility skips the row.
Logging The utility logs all additions, updates, warnings and errors to the file scon.log, which is located
under /tmp by default on Linux systems and C:\opt\oracle\logs on Windows systems.
Logging strings are typically in the following format:
Example:
Where:
file_name is the fully qualified path to the csv file. For example:
/opt/aptare/database/hosts.csv
recycle_group is the full path to the group into which deleted hosts will be moved (i.e., the 'recycle
bin').
remove_old_entries enables you to remove relationships in the Reporting Database that are not
in the file. If set to 1 and where there are hosts with a previous relationship to a host group and
where that relationship is no longer represented within the file, the utility moves those hosts to
the recycle group. If set to 0, the utility does not remove those hosts.
audit_pathname is the full path to the audit file, not including the filename.
audit_output_file is the name of the audit file where the audit results will be stored.
do_log enables you to turn on the auditing function so that all host movements are logged in the
audit_output_file. Enter a numeric: 0 or 1, where 0 = No, 1 = Yes.
Example command:
execute load_package.loadgroupmemberfile
('/opt/aptare/database/movehosts.csv','/Global1/Recycle',1,'/opt/aptare/database','movehosts.out',1);
Where path_to_host_group is the fully qualified path to the host group into which the hosts should
be added, and internal_name1 is the internal name of a host within the existing host group
hierarchy.
Example:
Data Constraints The first field, path_to_host_group, must be the full path to an existing host group. If any host
groups in the path_to_host_group field value do not exist, the utility creates them.
Each row must have at least one host specified, otherwise the row will not be processed.
Logic Conditions If you list hosts after the path_to_host_group field and those hosts are located in the existing
host group hierarchy, the utility adds those host groups to the specified host group.
If a host with the specified internal name does not exist in the hierarchy, the relationship will not
be added. The host must already be configured in the reporting database.
If any host groups in the path_to_host_group field value do not exist, the utility creates them.
If the removeOldEntries parameter is set to 1, the utility assumes that this file will contain all
the required relationships. In other words, for all the host groups that you specify in the file, only
those hosts will be in that group after you run this utility. If the host group previously contained
other host(s) that are now no longer listed in the file, the utility removes those host(s) from the
host group and moves them to the recycle folder.
The utility does not delete host groups from the Reporting Database; it only removes members
of a host group.
If a host group in the Reporting Database is not listed in the file, the utility does not take any
processing action against that host group.
Host groups with many hosts can be split into multiple lines for ease of file maintenance--for
example, the host group and some of the hosts appear on the first line, then the same host group
and other hosts appear on subsequent lines.
Logging The utility logs all additions, updates, warnings, and errors to the scon.log file, which is located
under /tmp by default on Linux systems and C:\opt\oracle\logs on Windows systems.
Logging strings are typically in the following format:
Where:
Example:
Linux:
/opt/aptare/database/stored_procedures/nbu/setup_nbu_jobs_manual.sql
All five are included in this file. To omit a particular utility from the scheduled job,
use the following syntax before and after the block of code.
See “Veritas NetBackup utilities” on page 88.
■ Before the block of code to be omitted, use: /*
■ After the block of code to be omitted, use: */
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
jobName := dba_package.getSchedulerJobName('setupInactivePolicyClients');
sqlplus portal/<portal_password>@//localhost:1521/scdb
@setup_nbu_jobs_manual.sql
destination_group_id is the group ID in which the host group for your primary servers groups will
be created. Create a host group under source_group_id called Primaries or Management Servers
and use the group ID of this new host group for the second parameter.
When you organize by primary server, if a host group exists anywhere under the source group
hierarchy with the name of the primary server, the routine associates the clients with that folder and
does not create a new folder under the destination folder. This association occurs whether you
explicitly specify the destination folder or if the destination is NULL. However, if you pass a source
folder that is at a lower level, the routine only checks for a folder under that hierarchy. If you specify
NULL as the destination, the routine will create (if it does not exist already) a group called
“NetBackup” under the Source group ID. It then creates a host group called “Primary Servers” under
the “NetBackup” group.
move_clients If set to 0, the clients link into the respective host group and remain in their original
host group location. If set to 1, all the clients move from the source host group and into the respective
host groups.
The utility processes and organizes all clients of the source group hierarchy into the target primary
server grouping. However, if the move_clients flag is set to 1, the utility removes only clients in the
top level source_group_id group--and those already organized in lower level sub-groups remain.
latest_primary_only defaults to 0, but can be set to 1, indicating organization by the latest primary
server. If a client is backed up by two primary servers, or if a client was backed up by primary server
A in the past, but is now backed up by primary server B, setting this flag to true will result in the
client being organized by the latest primary server.
destination_group_id is the group ID in which the new host group for your primary servers will be
created. Create a host group under source_group_id called Primaries or Management Servers and
use the group ID of this new host group for the second parameter.
When you organize by primary server, if a host group exists anywhere under the source group
hierarchy with the name of the primary server, the routine associates the clients with that folder and
does not create a new folder under the destination folder. This association occurs whether you
explicitly specify the destination folder or if the destination is NULL. However, if you pass a source
folder that is at a lower level, the routine only checks for a folder under that hierarchy. If you specify
NULL as the destination, the routine will create (if it does not exist already) a host group called
“NetBackup” under the Source group ID. It then creates a host group called “Primary Servers” under
the “NetBackup” host group.
move_clients If set to 0, the clients link into the respective host group and remain in their original
host group location. If set to 1, all the clients move from the source group and into the respective
management server host groups.
The utility processes and organizes all clients of the source group hierarchy into the target primary
server grouping. However, if the move_clients flag is set to 1, the utility removes only clients in the
top-level source_group_id group--and those already organized in lower-level sub-groups remain.
latest_primary_only defaults to 0, but can be set to 1, indicating organization by the latest primary
server. If a client is backed up by two primary servers, or if a client was backed up by primary server
A in the past, but is now backed up by primary server B, setting this flag to true will result in the
client being organized by the latest primary server.
exclude_policy_client defaults to 0, but can be set to 1, indicating that you want to organize the
clients based on backups and exclude policy-based clients. If this flag is set to 0, the utility finds
the clients that are backed up by the primary server and also clients that are in the policy that is
controlled by the primary server.
Set up an inactive clients group
Where:
host_group_to_traverse is the full pathname to the host group hierarchy to traverse looking for
inactive clients, for example /Aptare/hostgroup1.
inactive_clients_group is the full pathname to the host group into which the inactive clients will
be moved or linked. The default value for this parameter is NULL (which should not be within
quotes). If set to NULL, the utility automatically creates a host group called Clients Not In Policy
within host_group_to_traverse.
move_or_copy_flag can be set to 0=Link (copy) clients or 1=Move clients. If set to 0, the utility
links the clients to the inactive_clients_group and keeps the clients in their original host group
location. If set to 1, the utility moves all the inactive clients from their current host group location
and consolidates them into the inactive_clients_group.
Where:
host_group_to_traverse is the full pathname to the host group hierarchy to traverse looking for
inactive policies, for example /Aptare/hostgroup1.
inactive_clients_group is the full pathname to the host group into which the clients in an inactive
policy will be moved or linked. The default value for this parameter is NULL (which should not
be within quotes). If set to NULL, the utility automatically creates a host group called Inactive
Policy Clients within host_group_to_traverse.
move_or_copy_flag can be set to 0=Link (copy) clients or 1=Move clients. If set to 0, the utility
links the client to the inactive_clients_group and keeps the client in the original host group
location. If set to 1, the utility moves all the clients in inactive policies from their current host group
location and consolidates them into the inactive_clients_group.
Where:
source_host_group is the full pathname to the host group hierarchy to traverse for clients, for
example /Aptare/hostgroup1. The default value for this parameter is NULL (which should not
be within quotes). If set to NULL, the utility automatically locates the highest level host group to
traverse.
destination_host_group is the full pathname to the host group under which the new groups by
policy name will be automatically created. The default value for this parameter is NULL (which
should not be within quotes). If set to NULL, the utility automatically creates a host group called
NetBackup Policies within source_host_group.
If a client is removed from a Veritas NetBackup policy, added to a new policy and the utility is
subsequently run again, the client will appear in the new policy group but will not be deleted from
the old policy group. To remove the client from the old policy group and completely re-synchronize
the grouping structure, simply delete the Policy grouping hierarchy via the
deleteEntireGroupContents utility.
Where:
source_host_group is the full pathname to the server group hierarchy to traverse for clients, for
example /Aptare/hostgroup1. The default value for this parameter is NULL (which should not
be within quotes). If set to NULL, the utility automatically locates the highest level host group to
traverse.
destination_server_group is the full pathname to the server group under which the new groups
by Policy type will be automatically created. The default value for this parameter is NULL (which
should not be within quotes). If set to NULL , the utility automatically creates a host group called
NetBackup Policy Types within source_host_group.
If a client is removed from one Veritas NetBackup policy type, added to a new policy type and
the utility is subsequently run again, the client will appear in the new policy type host group but
will not be deleted from the old policy group. To remove the client from the old policy group and
completely re-synchronize the grouping structure, simply delete the Policy Type grouping hierarchy
via the deleteEntireGroupContents utility.
Where:
source_host_group is the full pathname to the server group hierarchy to traverse for clients, for
example /Aptare/hostgroup1. The default value for this parameter is NULL (which should not
be within quotes). If set to NULL, the utility automatically locates the highest level host group to
traverse.
destination_host_group is the full pathname to the host group under which the new groups by
policy domain name will be automatically created. The default value for this parameter is NULL
(which should not be within quotes). If set to NULL the utility automatically creates a host group
called Policy Domains within source_host_group .
If a client is removed from an IBM Tivoli Storage Manager policy domain, added to a new policy
domain and the utility is subsequently run again, the client will appear in the new policy group
but will not be deleted from the old policy group. To remove the client from the old policy group
and completely re-synchronize the grouping structure, simply delete the Policy domain grouping
hierarchy via the deleteEntireGroupContents utility.
Where:
source_host_group is the full pathname to the host group hierarchy to traverse for clients, for
example /Aptare/hostgroup1. The default value for this parameter is NULL (which should not
be within quotes). If set to NULL, the utility automatically locates the highest level host group to
traverse.
destination_host_group is the full pathname to the host group under which new groups by instance
name will be automatically created. The default value for this parameter is NULL (which should
not be within quotes). If set to NULL the utility automatically creates a host group called IBM
Tivoli Storage Manager instances within source_host_group.
move_or_copy_flag can be set to 0=Link (copy) clients or 1=Move clients. If set to 0, the utility
links the clients to their respective host groups and keeps the clients in their original host group
location. If set to 1, the utility moves all clients from the source host group and to their respective
host groups.
■ <database_home>/stored_procedures/nbu/setup_nbu_jobs.sql
■ <database_home>/stored_procedures/tsm/setup_leg_jobs.plb
sqlplus portal/<portal_password>@//localhost:1521/scdb @setup_ora_job.sql
by those policies added to the collector after the identifier was assigned. For existing
policies within the collector, you must manually assign the Host Matching Identifier.
Once a Host Matching Identifier is assigned to a policy that collects host data, it
becomes an integral part of the host matching process. Changing the Host Matching
Identifier after data collection results in the creation of new hosts with the new
host matching identifier.
Given the expertise required to use this feature and its potential to create duplicate
hosts if not used responsibly, the feature is disabled by default. Access to this
feature is restricted to the Super User or Administrator, who can grant access to
specific users or user groups on the Portal.
1 On the NetBackup IT Analytics Portal, go to Admin tab > Advanced > System
Configuration > Custom Parameters.
2 Click Add.
3 Enter Custom Parameter Name as portal.hostMatchingIdentifier.enabled
and Custom Parameter Value as True.
4 Click Save, and then click Save and Apply.
This introduces the Host Matching Identifier column and Edit Host Matching
Identifier button on the Collector Administration view. You may need to log
out and log in again on the Portal to view the changes.
Step-2: Add Host Matching Identifier to the Data Collector or the collector
policy
1 On the NetBackup IT Analytics Portal, go to Admin tab > Collector
Administration.
2 Select a Data Collector and click Edit Host Matching Identifier.
3 Enter Host Matching Identifier label of your choice and click OK.
The label you assign appears under the Host Matching Identifier column on
the Collector Administration view.
You can assign an identifier to a policy by repeating the above steps only after
you have assigned it to its Data Collector. While assigning to a policy, you need
to select the Host Matching Identifier from a drop-down list.
Chapter 8
Attribute management
This chapter includes the following topics:
Often, large enterprise environments need to configure attributes for many objects,
such as hosts, arrays, and switches. Bulk load utilities assign attributes to objects.
While the Portal has capabilities for assigning attributes to certain objects, the
utilities described in this section fulfill the large-scale requirement for assigning
attributes to a large number of objects.
To facilitate bulk loading and configuration of attributes, several utilities are provided.
These utilities load attributes and values into the NetBackup IT Analytics database
from comma-separated-values (CSV) files.
Note: Currently, there are no utilities available for the bulk load of Datastore
attributes.
Note: After you rename an attribute, any report templates that used these attributes
must be updated via the Portal SQL Template Designer.
To rename existing attributes so that their values do not get merged into a single
attribute, take the following steps.
1. Log in to the Portal server.
2. At the command line:
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
UPDATE apt_attribute
SET attribute_name = '<NewAttributeName>'
WHERE attribute_id = <ExistingAttributeID>;
COMMIT;
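As a quick check in the same SQL*Plus session, the renamed row can be re-queried
(same table and columns as the UPDATE above):
SELECT attribute_id, attribute_name
FROM apt_attribute
WHERE attribute_id = <ExistingAttributeID>;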
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadAttributeFile('pathname_and_filename',
'domain_name');
where:
'pathname_and_filename' is the full path and filename (enclosed in single straight
quotes) of the CSV file that you created.
'domain_name' is the name (enclosed in single straight quotes) of the NetBackup
IT Analytics domain in which the hosts reside.
Example:
Execute load_package.loadAttributeFile('c:\temp\attributes.csv',
'APTARE');
6 Restart the Portal services so that the newly added attributes become available
in the Dynamic Template Designer.
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
Take the following steps to load attributes and values and assign those attributes
to hosts:
1. Create a CSV File of Hosts, Attributes, and Values
2. Execute the Load Host Attribute Utility
3. Verify the Host Attributes Load
4. Create a report template using a Report Template Designer.
Once attribute values are assigned to hosts, a report can query the database
to report on hosts, filtered by the attributes that you’ve created to categorize
them.
Columns
■ One column lists the hosts, which must already exist in the NetBackup IT
Analytics database.
■ Each additional column lists attributes and values that will be applied to the host.
Rows
■ First (Header) Row - Enter the object type--in this case, Host Name--followed
by attribute names. Note that any column may be used for the list of host names.
When you run the utility, you’ll indicate which column contains the host names.
The header row is information only and is not processed as a data row.
■ Subsequent rows list host names, followed by the attribute values that you are
assigning to each host.
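A minimal sample CSV, using hypothetical host names and attribute columns (your
headers and values will differ):
Host Name,Location,Department
host01.example.com,London,Finance
host02.example.com,Mumbai,Engineering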
Note: This utility can be used to load new data as well as to update previously
loaded data. To revise existing data, simply run the utility with an updated CSV file.
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadServerAttributeFile('pathname_and_filename',
'domain_name',host_name_column_num,
'log_path_name','log_file_name','check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file that you
created.
'domain_name' Name (enclosed in single straight quotes) of the NetBackup IT Analytics domain in
which the host groups and hosts reside.
host_name_column_num Column number in the csv file where the host names are listed. These hosts must
already exist in the NetBackup IT Analytics database. Typically, this would be column
1.
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be
created/updated. Verify that you have write access to this directory.
Example: 'C:\tmp'
Optional: If you do not specify a path and log file name, only error messages will be
written to the scon.err file. To omit this parameter, enter: ''
'log_file_name' Filename of the log where execution status and error messages are written.
Example: 'HostAttributeLoad.log'
Optional: If you do not specify a path and log file name, only error messages will be
written to the scon.err file. To omit this parameter, enter: ''
'check_valid_value' 'Y' or 'N'. Indicates if you want the utility to check if the values provided in this file are
among the existing possible values for the attributes. Y or N must be enclosed in
single straight quotes.
Example:
Execute load_package.loadServerAttributeFile
('C:\myfiles\HostAttributes.csv','QA_Portal',
1,'C:\tmp','HostAttributeLoad.log','Y');
7 Restart the Portal services so that the newly added attributes become available
in the Dynamic Template Designer.
Columns
■ One column lists the arrays, which must already exist in the NetBackup IT
Analytics database.
■ Each additional column lists attributes and values.
Rows
■ First (Header) Row - Enter the object type--in this case, Array Name--followed
by attribute names. Note that any column may be used for the list of array names.
When you run the utility, you’ll indicate which column contains the array names.
The header row is information only and is not processed as a data row.
■ Subsequent rows list arrays, followed by the attribute values that you are
assigning to each array.
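A minimal sample CSV, using hypothetical array names and attribute columns:
Array Name,Tier,Cost Center
array01,Gold,CC-1001
array02,Silver,CC-2002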
Go to the stored procedures directory:
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
Note: This utility only assigns attributes to active arrays. If an array exists in the
system, but it is inactive, the log will indicate that no attribute was assigned.
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadArrayAttributeFile('pathname_and_filename',
'domain_name',array_name_column_num,
'log_path_name','log_file_name','check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file you created.
'domain_name' Name (enclosed in single straight quotes) of the NetBackup IT Analytics domain in
which the arrays reside.
array_name_column_num Column number in the csv file where the array names are listed. These arrays must
already exist in the NetBackup IT Analytics database. Typically, this would be column
1.
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be created/updated.
Verify that you have write access to this directory.
Example: 'C:\tmp'
Optional: If you do not specify a path and log file name, only error messages will be
written to the scon.err file. To omit this parameter, enter: ''
'log_file_name' Filename of the log where execution status and error messages are written.
Example: 'ArrayAttributeLoad.log'
Optional: If you do not specify a path and log file name, only error messages will be
written to the scon.err file. To omit this parameter, enter: ''
'check_valid_value' 'Y' or 'N' Indicates if you want the utility to check if the values provided in this file are
among the existing possible values for the attributes. Y or N must be enclosed in single
straight quotes.
Example:
Execute load_package.loadArrayAttributeFile
('C:\myfiles\ArrayAttributes.csv','QA_Portal',
1,'C:\tmp','ArrayAttributeLoad.log','Y');
7 Restart the Portal services so that the newly added attributes become available
in the Dynamic Template Designer.
Note: Currently, application attributes can be used only in reports created with the
SQL Template Designer.
Note: This CSV file becomes the primary document of record for Application
Database Attributes and therefore should be preserved in a working directory for
future updates.
1. Create a spreadsheet table, in the format shown in the following example, and
save it as a CSV file in a working directory. This file is specific to application
databases.
Columns
■ Columns list the objects that uniquely identify an application. For an Application
Database, the required columns are: Host Name, DB Name, DB Instance.
■ Each additional column lists attributes and values.
Rows
■ First (Header) Row - Contains the fields that uniquely identify an application,
followed by the attribute names. The header row is information only and is not
processed as a data row.
■ Subsequent rows list the objects that uniquely identify an application database,
followed by the attribute values that you are assigning to each application
database.
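A minimal sample CSV, using hypothetical hosts, database names, instances, and
attribute columns:
Host Name,DB Name,DB Instance,Criticality
host01.example.com,ORCL,orcl1,High
host02.example.com,SALESDB,sales1,Medium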
Go to the stored procedures directory:
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadDBAttributeFile('pathname_and_filename',
'domain_name',db_name_column_num,db_instance_column_num,
host_name_column_num,'log_path_name','log_file_name',
'check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file
'domain_name' Name (enclosed in single straight quotes) of the domain in which the host groups and
hosts reside; Example: 'DomainEMEA'
db_name_column_num Column number in the csv file where the DB Name is listed; Example: 2
db_instance_column_num Column number in the csv file where the DB Instance is listed; Example: 3
host_name_column_num Column number in the csv file where the Host Name is listed; Example: 1
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be created/updated;
verify that you have write access to this directory.
Optional: If a log path and filename are not specified, log records are written to scon.log
and scon.err. To omit this parameter, enter: ''
Example: 'c:\configs'
Optional: If a log path and filename are not specified, entries are written to scon.log
and scon.err. To omit this parameter, enter: ''
Example: 'DBAttributes.log'
Y - Checks if the attribute value exists. If the utility determines that the attribute value
is not valid, it skips this row and does not assign the attribute value to the application
database.
N - Updates without checking that the attribute value exists. This option is seldom
chosen, but it is available for certain customer environments where attributes may have
been created without values (with scripts that bypass the user interface).
Example:
SQL> Execute
load_package.loadDBAttributeFile('/config/DBAttributes.csv',
'DomainEMEA', 2, 3, 1,'/config/logs','DBAttributes.log','Y');
5. Enter the following query in the SQL Template Designer to verify Application
Database attributes:
Note: This CSV file becomes the primary document of record for MS Exchange
Organization Attributes and therefore should be preserved in a working directory
for future updates.
1. Create a spreadsheet table, in the format shown in the following example, and
save it as a CSV file in a working directory. This file is specific to MS Exchange
Organizations.
Columns
■ Columns list the objects that uniquely identify an application. For MS Exchange,
the required columns are: MS Exchange Organization and Host Name.
Rows
■ First (Header) Row - Names the fields that uniquely identify an application,
followed by the attribute names.
■ Subsequent rows list the objects that uniquely identify an MS Exchange
Organization--in this case, MS Exchange Organization and Host Name--followed
by the attribute values that you are assigning to each MS Exchange Organization.
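A minimal sample CSV, using a hypothetical Exchange organization, host, and
attribute column:
MS Exchange Organization,Host Name,Owner
ExchOrg01,mailhost01.example.com,Messaging-Team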
Go to the stored procedures directory:
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadExchOrgAttributeFile('pathname_and_filename',
'domain_name',exchange_org_column_num,host_name_column_num,
'log_path_name','log_file_name','check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file
'domain_name' Name (enclosed in single straight quotes) of the NetBackup IT Analytics Domain in
which the host groups and hosts reside; Example: 'DomainEMEA'
exchange_org_column_num Column number in the csv file where the MS Exchange Organization is listed; Example:
1
host_name_column_num Column number in the csv file where the Host Name is listed; Example: 2
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be created/updated;
verify that you have write access to this directory.
Optional: If a log path and filename are not specified, log records are written to scon.log
and scon.err.
Example: 'c:\configs'
Optional: If a log path and filename are not specified, entries are written to scon.log and
scon.err.
Example: 'MSExchangeAttributes.log'
Y - Checks if the attribute value exists. If the utility determines that the attribute value
is not valid, it skips this row and does not assign the attribute value to the Exchange
Organization.
N - Updates without checking that the attribute value exists. This option is seldom chosen,
but is available for certain customer environments where attributes may have been
created without values (with scripts that bypass the user interface).
Example:
SQL> Execute
load_package.loadExchOrgAttributeFile('/config/MSExchangeAttributes.csv',
'DomainEMEA',1,2,'/config/logs','MSExchangeAttributes.log','Y');
Note: This CSV file becomes the primary document of record for LUN Attributes
and therefore should be preserved in a working directory for future updates.
1. Create a spreadsheet table, in the format shown in the following example, and
save it as a CSV file in a working directory. This file is specific to loading LUN
attributes.
Columns
■ The first column lists the Array Name.
■ The second column lists the LUN Name.
■ Each additional column lists attributes and values that will be applied to the
LUN. Multiple attributes can be assigned to a single LUN object.
Rows
■ First (Header) Row - Contains the fields that uniquely identify the LUN (array
and LUN names), followed by Attribute names. The header row is information
only and is not processed as a data row.
■ Subsequent rows list the Array name and LUN name, followed by the attribute
values that you are assigning to each LUN.
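A minimal sample CSV, using hypothetical array and LUN names with a hypothetical
attribute column:
Array Name,LUN Name,Tier
array01,LUN_0001,Gold
array01,LUN_0002,Silver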
Go to the stored procedures directory:
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadLunAttributeFile('pathname_and_filename',
'domain_name',array_name_column_num, lun_name_column_num
,'log_path_name','log_file_name','check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file
'domain_name' Name (enclosed in single straight quotes) of the domain in which the host groups and
hosts reside; Example: 'DomainEMEA'
array_name_column_num Column number in the csv file where the Array Name is listed; Example: 1
Note that the Array Name and the LUN Name can be either column 1 or 2 of the CSV.
This parameter tells the utility in which column the Array Name will be found.
lun_name_column_num Column number in the csv file where the LUN Name is listed; Example: 2
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be created/updated;
verify that you have write access to this directory.
Optional: If a log path and filename are not specified, log records are written to scon.log
and scon.err. To omit this parameter, enter: ''
Example: 'c:\config'
Optional: If a log path and filename are not specified, entries are written to scon.log and
scon.err. To omit this parameter, enter: ''
Example: 'LUNAttributes.log'
Y - Checks if the attribute value exists. If the utility determines that the attribute value
is not valid, it skips this row and does not assign the attribute value to the LUN object.
N - Updates without checking that the attribute value exists. This option is seldom chosen,
but it is available for certain customer environments where attributes may have been
created without values (with scripts that bypass the user interface).
Example:
SQL> Execute
load_package.loadLunAttributeFile('/config/LUNAttributes.csv',
'DomainEMEA', 1, 2,'/config/logs','LUNAttributes.log','Y');
Note: This CSV file becomes the primary document of record for Switch Attributes
and therefore should be preserved in a working directory for future updates.
1. Create a spreadsheet table, in the format shown in the following example, and
save it as a CSV file in a working directory. This file is specific to loading switch
attributes.
Columns
■ The first column lists the SAN Name.
■ The second column lists the Switch Name.
■ Each additional column lists attributes and values that will be applied to the
switch. Multiple attributes can be assigned to a single switch object.
Rows
■ First (Header) Row - Contains the fields that uniquely identify the SAN and
Switch names, followed by Attribute names. The header row is information only
and is not processed as a data row.
■ Subsequent rows list the SAN Name and Switch Name, followed by the attribute
values that you are assigning to each switch.
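A minimal sample CSV, using hypothetical SAN and switch names with a
hypothetical attribute column:
SAN Name,Switch Name,Location
SAN_A,switch01,DC-East
SAN_A,switch02,DC-West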
Go to the stored procedures directory:
/opt/aptare/database/stored_procedures (Linux)
\opt\oracle\database\stored_procedures (Windows)
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
SQL> Execute
load_package.loadSwitchAttributeFile('pathname_and_filename',
'domain_name',san_name_column_num,switch_name_column_num,
'log_path_name','log_file_name','check_valid_value');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file
'domain_name' Name (enclosed in single straight quotes) of the domain in which the host groups and
hosts reside; Example: 'DomainEMEA'
san_name_column_num Column number in the csv file where the SAN Name is listed; Example: 1
Note that the SAN Name and the Switch Name can be either column 1 or 2 of the CSV.
This parameter tells the utility in which column the SAN Name will be found.
switch_name_column_num Column number in the csv file where the Switch Name is listed; Example: 2
'log_path_name' Full path (enclosed in single straight quotes) where the log file will be created/updated;
verify that you have write access to this directory.
Optional: If a log path and filename are not specified, log records are written to scon.log
and scon.err. To omit this parameter, enter: ''
Example: 'c:\config'
Optional: If a log path and filename are not specified, entries are written to scon.log and
scon.err. To omit this parameter, enter: ''
Example: 'SwitchAttributes.log'
Y - Checks if the attribute value exists. If the utility determines that the attribute value
is not valid, it skips this row and does not assign the attribute value to the switch object.
N - Updates without checking that the attribute value exists. This option is seldom
chosen, but it is available for certain customer environments where attributes may have
been created without values (with scripts that bypass the user interface).
Example:
SQL> Execute
load_package.loadSwitchAttributeFile('/config/SwitchAttributes.csv',
'DomainEMEA', 1, 2,'/config/logs','SwitchAttributes.log','Y');
Note: This CSV file becomes the master document of record for Port Attributes and
therefore must be preserved in a working directory for future updates.
■ Create a spreadsheet table, in the format shown in the following example, and
save it as a CSV file in a working directory. This file is specific to loading port
attributes.
Columns:
■ The first column lists the Fabric Identifier.
■ The second column lists the Switch Identifier.
■ The third column lists the Port Element Name.
■ Each additional column lists attributes and values that will be applied to the port.
Multiple attributes can be assigned to a single port object.
Rows:
■ First (Header) Row - Contains the fields that uniquely identify the Fabric Identifier
Name, Switch Identifier, Port element name followed by Attribute names. The
header row is information only and is not processed as a data row.
■ Subsequent rows list the Fabric Identifier, Switch Identifier, Port element name
followed by the attribute values that you assign to each port.
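A minimal sample CSV, using hypothetical fabric, switch, and port identifiers with
a hypothetical attribute column:
Fabric Identifier,Switch Identifier,Port Element Name,Port Role
fabric01,switch01,port_00,Storage
fabric01,switch01,port_01,Host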
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
Example:
sqlplus portal/portal@//localhost:1521/scdb
Example:
SQL> Execute
load_package.loadPortAttributeFile('/tmp/portAttributes.csv',
'DomainEMEA', 1, 2,3,'/tmp/logs','portAttributes.log','Y');
Where:
Fabric_identifier_col_num Column number in the CSV file where the Fabric Identifier is
listed; Example: 1
switch_identifier_col_num Column number in the CSV file where the Switch Identifier is
listed; Example: 2
port_ele_name_col_num Column number in the CSV file where the Port Element Name
is listed; Example: 3
log_path_name Full path (enclosed in single straight quotes) where the log
file will be created/updated; verify that you have write access
to this directory.
log_file_name Filename of the log where execution status and error
messages are written.
Example: 'PortAttributes.log'
Note: This CSV file becomes the master document of record for Subscription
Attributes and hence must be preserved in a working directory for future updates.
Create a spreadsheet table, in the format shown in the following example, and save
it as a CSV file in a working directory. This file is specific to loading Subscription
attributes.
Columns:
■ The first column lists the Subscription Identifier.
■ Each additional column lists attributes and values that will be applied to the
subscription.
Multiple attributes can be assigned to a single subscription object.
Rows:
■ First (Header) Row - Contains the fields that uniquely identify the Subscription
Identifier followed by Attribute names. The header row is information only and
is not processed as a data row.
■ Subsequent rows list the Subscription followed by the attribute values that you
are assigning to each subscription.
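A minimal sample CSV, using hypothetical subscription identifiers and a hypothetical
attribute column:
Subscription Identifier,Business Unit
sub-0001,Finance
sub-0002,Engineering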
su - aptare
sqlplus <pwd>/<pwd>@//localhost:1521/scdb
Example:
sqlplus portal/portal@//localhost:1521/scdb
Example:
Execute load_package.loadSubscriptionAttrFile
('/tmp/subscription.csv', 'DomainEMEA',1,'/tmp',
'subscription.log','Y');
Where:
'pathname_and_filename' Full path + filename (enclosed in single straight quotes) of the CSV file.
Windows example:
'c:\temp\subscription.csv'
Linux example:
'/tmp/subscription.csv'
'log_file_name' Filename of the log where execution status and error messages are written.
Example: 'subscription.log'
Note: In addition to the regularly scheduled data collection, the CSV file also can
be imported manually.
See “Manually loading the CSV file” on page 141.
Considerations
■ Files can be imported more than once. Importing will not result in duplicate
entries.
Note: The CSV file must be UTF-8 encoded; however, be sure to remove any UTF-8
BOMs (Byte Order Marks). The CSV cannot be properly parsed with these additional
characters.
VendorName STRING The name of the backup application used to perform the backup,
enclosed in single straight quotes.
ClientName STRING The host name of the machine being backed up, enclosed in
single straight quotes.
StartDateString DATE The start date and time of the backup job in the format:
YYYY-MM-DD HH:MI:SS (enclosed in single straight quotes).
Note: Adhere to the specific date format--number of digits and
special characters--as shown above.
FinishDateString DATE The end date and time of the backup job in the format:
YYYY-MM-DD HH:MI:SS (enclosed in single straight quotes).
Note: Adhere to the specific date format--number of digits and
special characters--as shown above.
BackupKilobytes NUMBER The numeric size of the backup in kilobytes (otherwise use 0).
Remember that NetBackup IT Analytics uses 1024 bytes per KiB.
NbrOfFiles NUMBER The number of files that were backed up (otherwise use 0).
MediaType STRING The type of media that was used: T for Tape or D for Disk,
enclosed within single straight quotes.
TargetName STRING File system backed up by the managed backup system (MBS),
enclosed in single straight quotes.
EXAMPLE: genericBackupJobs.csv
'Mainframe Backup','mainframe_name','10.10.10.10','BACKUP','2008-03-24
10:25:00', '2008-03-24
11:50:00',3713,45221,'D',0,'413824','Retail_s01002030','Incremental','/I:/Shared/','Daily'
'UNIX tar backup','host_xyz.anyco.com','null','BACKUP','2008-03-24
10:22:00','2008-03-24
12:50:00',1713,45221,'T',1,'5201','HQ_Finance','Full','/D:/Backups/','Daily'
'ArcServe','host_123.anyco.com','null','RESTORE','2008-03-24
8:22:00','2008-03-24
9:12:00',0,0,'T',0,'2300','Retail_s03442012','Incremental','/I:/Shared/','EOM'
Windows:
C:\opt\APTARE\mbs\bin\listcollectors.bat
Linux:
/opt/aptare/mbs/bin/listcollectors.sh
In the output, look for the Event Collectors section associated with the Software
Home--the location of the CSV file (the path that was specified when the Data
Collector Policy was created). Find the Event Collector ID and Server ID.
Active: true
Active: true
Software Home: C:\gkgenericBackup.csv
Server Address: 102961
Domain: gkdomain
Group Id: 102961
Sub-system/Server Instance/Device Manager Id: 102961
Schedule: */10 * * * *
2. Use the following commands to load the data from the CSV file into the Portal
database.
Windows:
C:\opt\APTARE\mbs\bin\loadGenericBackupData.bat <EventCollectorID>
<ServerID> [verbose]
Linux:
/opt/aptare/mbs/bin/loadGenericBackupData.sh <EventCollectorID>
<ServerID> [verbose]
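For example, on Linux, using the Event Collector ID and Server ID shown in the
sample output above (both 102961 in that sample; substitute your own IDs):
/opt/aptare/mbs/bin/loadGenericBackupData.sh 102961 102961 verbose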
Note: If you run the command with no parameters, it will display the syntax.
The load script checks whether the backup server and client already exist; if not,
they are added to the database. The script then checks for a backup job with the
exact same backup server, client, start date, and finish date. If no match is found,
the job is added; otherwise, it is ignored. This prevents duplicate entries and allows
the import to be repeated safely, even if the CSV file has not been updated.
Once the load is complete, these clients and jobs will be visible via the NetBackup
IT Analytics Portal and the data will be available for reporting.
Chapter 10
Backup job overrides
This chapter includes the following topics:
■ Overview
Overview
In some backup environments, it is desirable to treat backup warning status
messages as successful backups. A configuration modification can change the
default behavior of NetBackup IT Analytics reports. You may want to override other
backup statuses as well. NetBackup IT Analytics supports job overrides for all
supported backup products.
Use the following procedure to update the job override configuration.
Note: For simplicity, only NetWorker and NetBackup job override steps are shown.
Similar configuration changes can be made for other backup products.
/opt/aptare/database/stored_procedures/job_override.sql
/opt/aptare/database/stored_procedures/leg/leg_adaptor_pkg.plb
Windows:
C:\opt\aptare\database\stored_procedures\job_override.sql
C:\opt\aptare\database\stored_procedures\leg\leg_adaptor_pkg.plb
Note: The above example is for illustration purposes only. You may choose
to customize job overrides for other backup vendor job statuses.
6. Go to:
■ Linux: /opt/aptare/database/stored_procedures/
■ Windows: C:\opt\aptare\database\stored_procedures\
8. Go to:
■ Linux: /opt/aptare/database/stored_procedures/leg/
■ Windows: C:\opt\aptare\database\stored_procedures\leg
port WWN ensures unique hosts. With WWN matching, if different host names are
encountered, a host alias record is also created in anticipation of future host data
collection.
By default, WWN matching is turned off. A system parameter can be configured to
turn on WWN matching prior to data collection.
To turn on host WWN matching, type the following command at the command line.
1 5 205 CommVault
Simpana
4 51 301 VMware
2 22 403 EMC
2 23 409 NetApp
2 24 410 HP
2 25 412 IBM
2 28 415 HP EVA
4. Execute the following to customize the host source subsystem ranking for your
enterprise. This command can be repeated for as many vendor products
(subsystems) as needed in your environment. It updates a custom host source
ranking table, which is specific to your environment.
See “Determining host ranking” on page 152.
where:
1 = backup, 2 = capacity, 4 = virtualization, 8 = replication, 16 = fabric,
32 = File Analytics
Example:
5. Execute the following to view the host ranking that you customized for your
enterprise:
6. To update a rank that you have customized, use the following steps. Refer to
the preceding table for column names.
Example:
If a host has multiple HBAs, the CSV should contain a row for every HBA so that
all HBAs for the host will be loaded. For example:
Note: When running this script, pay attention to the value you supply for the
isIncremental parameter. When you specify 'N' your existing host data is deleted.
When you specify 'Y' your host data is added without removing existing records.
where:
isIncremental If 'Y', an HBA port record will be created if none exists. If 'N',
old HBA port records will be deleted first and then new records
created. Take care when choosing this option, as it will remove
existing host data from the database.
'CSVfile' CSV file path and name (enclosed in single straight quotes)
'logPathname' Log path name (enclosed in single straight quotes). The audit log
file is created only if errors occur. Other status is logged in
scon.log.
'logFilename' Log file name (enclosed in single straight quotes). This audit log
file is created only if errors occur. Other status is logged in
scon.log.
Example:
■ Navigation overview
■ Anomaly detection
■ Custom parameters
Functional Areas
Functional areas are divided into separate tabs as follows:
■ Data Collection - Set values for all collection, both product-based and
vendor-based.
See “Data collection: Capacity chargeback” on page 160.
■ Data Retention - Modify default retention periods for systems that are collected
by traditional Data Collectors to determine when data is purged from the
database. Purging is required to maintain reasonable table sizes. Data types
include historical and performance data. Fields are displayed based on what
has been installed and collected.
For systems collected by Data Collectors deployed via the SDK, use the
procedure described in:
See “Data retention periods for SDK database objects” on page 271.
■ Database Administration - Set values to configure the structure of the database.
See “Database administration: database” on page 160.
■ Host Discovery - Enable rules for host matching when the system is discovering
new hosts/clients.
See “Host discovery: EMC Avamar” on page 161.
See “Host discovery: Host” on page 162.
■ Inventory - Modify the database polling frequency for Inventory objects.
■ Portal - Modify default values for a variety of Portal properties including:
■ Host attribute import parameters
■ Maximum number of open tabs
■ Security settings such as time out values and allowed login attempts
■ Custom headers and footers for reports and dashboards.
■ Event audit logging. See “Events captured for audit” on page 163.
■ Custom Parameters - Add, edit and delete custom system parameters, Portal
properties and their associated values. This area allows free form entry for
name/value pairs.
See “Custom parameters” on page 165.
Function buttons Description
Save and Apply Before saving and applying changes, a dialog is displayed to show old
values and new values to verify the update. Some changes require a
Portal restart. If a restart is required, this is displayed in the confirmation
dialog.
Undo Changes Cancels changes and resets to the last value across all tabs within the
System Configuration area. Use the field level refresh icon to reset
values field by field.
Restore All Defaults Resets default values for all parameters across all tabs within the
System Configuration area. Roll over the icons to display a
parameter's default value.
Download Click to download a text file of all your system setting values. This
includes any custom parameter values.
Navigation overview
This self-service portal makes it easy to quickly determine what parameters you
are setting. The following graphic outlines some of the built-in features.
Anomaly detection
Anomaly detection helps to detect suspicious activity in backup operation through
analysis of various backup attributes such as:
■ Backup image size
■ Number of files backed up
■ Kilobytes transferred
■ Deduplication rate
■ Backup job completion time
Any significant change in the above parameters detected during a backup job is
reported as an anomaly. The General section lists all the data protection tools for
which you can enable anomaly detection.
You can configure anomaly detection from NetBackup Web UI (see Configure
anomaly detection settings section of the NetBackup Security and Encryption Guide).
The anomalies detected by NetBackup are captured by the Security Details probe
in the NetBackup Policy for the primary server.
longer than this time, the job will stop. In very large environments, it may be
necessary to increase this time to accommodate large indexes.
Note: Another parameter, Enable IP address matching for Host search, is also
used by the Avamar host-matching algorithm. If your environment has already
enabled this parameter, it will be honored by the host-matching algorithm.
■ Enable short name matching for Avamar Host Search: This parameter is used
to enable comparisons of a client’s base name. During data collection, the data
persistence logic compares the short name retrieved by data collection to what
exists in the Portal database. This parameter currently is
used only while searching for a host in Avamar data. For example, the host
name in the database might be xyz.aptare.com, but the collected host name
is xyz.apt.com. If this parameter is enabled, the host-matching algorithm will
find the host with the name, xyz.aptare.com, based on matching the short
name, xyz, thereby preventing the creation of a duplicate host.
■ Remove patterns in host matching: This parameter will enable the stripping of
unwanted suffixes while searching for hosts based on host name. This parameter
is currently used only while searching for a host in Avamar data.
■ Prerequisite: Any suffix that needs to be ignored must first be inserted into
the apt_host_name_excld_suffix database table, as described in the
following procedure. When the parameter is enabled, the host-matching
algorithm searches this table for suffixes that should be ignored.
Add suffixes to the database table:
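A minimal sketch of such an insert, assuming a single suffix column in
apt_host_name_excld_suffix (the column name suffix_name is hypothetical; verify
the actual column with DESCRIBE before inserting):
SQL> DESCRIBE apt_host_name_excld_suffix;
SQL> INSERT INTO apt_host_name_excld_suffix (suffix_name)
VALUES ('_UCMAAZWlR6kihhBHN5R8iA');
SQL> COMMIT;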
Examples
The data searching logic used when this system parameter is enabled is described
in the following examples.
■ Host name in the database is xyz and the collected host name is
xyz_UCMAAZWlR6kihhBHN5R8iA. The host-matching algorithm will find the
host with the name xyz and _UCMAAZWlR6kihhBHN5R8iA will be removed
while searching.
■ Host name in the database is xyz and the collected host name is
xyz_UA3rT06VdULrQyViIxEFuQ2011.07.22.16.05.49. The host-matching
algorithm will find the host with the name xyz and
_UA3rT06VdULrQyViIxEFuQ2011.07.22.16.05.49 will be removed while
searching. The time portion, 2011.07.22.16.05.49, is automatically removed if
the parameter is enabled.
■ Host name in the database is xyz and the collected host name is
xyz2011.07.22.16.05.49. The host-matching algorithm will find the host with the
name xyz and 2011.07.22.16.05.49 will be removed while searching. The time
portion, 2011.07.22.16.05.49, is automatically removed if the parameter is
enabled.
■ Host name in database is xyz and the collected host name is
xyz2011.07.22.16.05.49_UA3rT06VdULrQyViIxEFuQ. The host-matching
algorithm will find the host with the name xyz and
2011.07.22.16.05.49_UA3rT06VdULrQyViIxEFuQ will be removed while
searching. The time portion, 2011.07.22.16.05.49, is automatically removed if
the parameter is enabled.
IT Analytics. The short name refers to the shortest possible matching name when
the FQDN is parsed into its constituent tokens. The base name matches the left-most
token in the FQDN of a host, also known as the host name, when it is parsed into
its constituent tokens.
Custom parameters
Customizations to the Portal extend beyond what is available in the System
Configuration. When working with Services and Veritas Support, you may be required
to add or edit custom parameters to address a particular issue. The Custom
Parameters tab enables free-form key value pairs to further customize NetBackup
IT Analytics.
Note: Prior to version 10.3, customizations to the Portal were made using a file,
portal.properties. Not all of those settings are displayed in the System Configuration
feature. If you upgrade from a version prior to 10.3, those properties are displayed
and automatically populated in the Custom Parameters.
Portal customizations
This section covers customizations for the portal that are not available through the
user interface. Use Custom Parameters to add/edit and delete these properties.
■ See “Configuring global default inventory object selection” on page 167.
■ See “Restricting user IDs to single sessions” on page 167.
■ See “Customizing date format in the report scope selector” on page 167.
■ See “Customizing the maximum number of lines for exported reports”
on page 168.
■ See “Customizing the total label display in tabular reports” on page 168.
■ See “Customizing the host management page size” on page 168.
■ See “Customizing the path and directory for File Analytics database” on page 168.
■ See “Configuring badge expiration” on page 169.
■ See “Configuring the maximum cache size in memory” on page 169.
■ See “Configuring the cache time for reports” on page 170.
portal.ocn.defaultVisibleObjectType=
HOST,ARRAY,SWITCH,BACKUPSERVER,VM_SERVER,VM_GUEST,
DEDUPLICATION_APPLIANCE,DATASTORE,EC2_INSTANCE,
S3_BUCKET,AZURE_STORAGE_ACCOUNT,AZURE_VIRTUAL_MACHINE
portal.security.allowUserToLoginMultipleTimes=false
Where the <new limit value> is the number of rows greater than 20,000 that
your report export requires. For example, if your report has 36,000 rows enter
a number greater than 36000. Note that the new limit value cannot contain
commas or decimal points. Keep in mind that Portal server performance can
degrade considerably for very large reports. For very large reports, you may
want to segment the scope into multiple reports.
portal.hostManagementPageSize=xxxx
For example:
Linux:
fa.root=/opt/aptare/fa_db
Windows:
Note: The specified folder location must contain a folder named raw. If the raw
folder is not present, NetBackup IT Analytics displays an error message.
cloudTemplateNewBadgeExpireInDays = 14
portal.reports.cache.maxSizeInMemory
portal.reports.cache.timeOut
■ Overview
Overview
Array Performance Profiling enables you to monitor performance over time and to
compare your enterprise-specific performance with the performance found in a
broader community. You can customize the time of day when your environment’s
profiling job will run.
jobName := dba_package.getSchedulerJobName('recalIntPerformanceProfile');
IF jobName IS NOT NULL
THEN
DBMS_OUTPUT.PUT_LINE('recalIntPerformanceProfile exists with default name
'|| jobName ||' hence will be removed and recreated.');
DBMS_SCHEDULER.DROP_JOB(job_name => jobName);
jobName := NULL;
END IF;
■ Three hours after a Portal Installation or Upgrade, this job runs for the first
time. See the parameter: (SYSDATE + (3/24))
■ After the first run, this job will run at 10:00 a.m. every day. See the
parameter: (TRUNC(SYSDATE+1, 'DD') + (10/24))
■ This Performance Profiler will calculate the last two hours of statistics. See
the parameter: SYSDATE-2/24
su - aptare
sqlplus portal/portal@//localhost:1521/scdb @setup_srm_jobs.plb
Chapter 14
LDAP and SSO
authentication for Portal
access
This chapter includes the following topics:
■ Overview
■ Configure AD/LDAP
Overview
NetBackup IT Analytics supports user authentication and authorization using:
■ Active Directory (AD) or Lightweight Directory Access Protocol (LDAP):
NetBackup IT Analytics supports user authentication and optionally supports
authorization using Active Directory (AD) or Lightweight Directory Access Protocol
(LDAP).
■ Single Sign On (SSO) for a standard unified login: NetBackup IT Analytics
supports Single Sign On (SSO) for a standard unified login. User authentication
is performed through an external Identity Management Server allowing for an
increased level of security for user passwords and identity details.
NetBackup IT Analytics can support both authentication types simultaneously or
individually as required. A user's login experience changes based on the
authentication type set for the user or the user group from the Portal.
Note: If the Portal was upgraded from a lower version, you may have to clear the
browser cache for the authentication type and SSO options to appear on the login
screen.
2. Choose Connection -> Connect and enter the Server and Port number.
3. Choose Connection -> Bind and enter the Administrator for the User ID and
then the password to authenticate the user access.
4. Choose View -> Tree to browse the Active Directory tree.
5. The Tree View window expects a BaseDN entry. The Tree View displays a
tree hierarchy with the settings of the users under the Base DN. Most
environments have Exchange Objects located in:
(&(objectClass=msExchExchangeServer)(cn=<serverShortName>))
where <serverShortName> is the name before the dot (.) of a fully qualified
domain name
The filtered attributes of interest are: legacyExchangeDN and serialNumber
4. If the filter in the previous step does not result in what you need, try the following
Filter:
(&(objectClass=msExchExchangeServer)(cn=<serverName>))
(objectClass=msExchStorageGroup)
(objectClass=msExchMDB)
Configure AD/LDAP
NetBackup IT Analytics supports user authentication and optionally supports
authorization using Active Directory (AD) or Lightweight Directory Access Protocol
(LDAP). Configuration of AD/LDAP authentication and authorization is driven through
the configuration in Admin > Authentication > LDAP.
This section covers the configuration steps for the following scenarios:
■ AD/LDAP configuration for authentication - describes the procedure to configure
AD/LDAP for user authentication only.
■ AD/LDAP Configuration for authentication and authorization - describes the
procedure to configure AD/LDAP for user authentication and authorization.
■ Migrate portal users when AD/LDAP authentication is configured - describes
the configuration required to authenticate using AD/LDAP for users previously
using database for authentication.
■ Migrate portal users with LDAP authentication and authorization configured -
describes the configuration required for authentication and authorization using
AD/LDAP for users previously using database for authentication.
<AD_IP_Address> <AD_Domain_Name>
For example:
192.168.2.90 ad.gold
Authorisation You can skip enabling this as you are only enabling
authentication.
LDAP Domain Name This field is deprecated. If this field appears in your
Portal, enter LDAP as its value.
Example: LDAP
LDAP URL Set to the host and port of your AD. Note that this URL
value has a prefix ldap:. If using SSL, change the prefix
to ldaps:.
If you are using Active Directory for your external LDAP
configuration, you may want to use the global catalog
port of 3268 instead of port 389.
Example:
ldap://example.company.com:389
or
ldaps://example.company.com:636
Search Base Set the location from where the search will be performed
to locate users in the authentication directory.
Example:
dc=example,dc=company,dc=com
CN=Admin,CN=Users,DC=example,DC=company,DC=com
Login Attribute Enter the login attribute used for authentication. This is
the attribute name in Active Directory that specifies the
username, such as uid or sAMAccountName.
Example:
sAMAccountName
New User Domain Enter the domain name on which the user needs to be
authorized. Get the domain name from Admin >
Domains > Domain Name.
Example:
example.company.com
Disable User Attribute Name Enter the name of the AD attribute that indicates whether
the user is active or inactive. During Portal authentication
via AD, the REST API uses the AD attribute assigned to
this property to check whether the user is still an active
AD user.
Disable User Attribute Value Enter the same value as that of the AD attribute (specified
in Disable User Attribute Name), which indicates the
AD user is disabled.
7 Click Test Connection. Make the required changes if the test fails.
8 Click Save.
Enabling LDAP authentication is complete.
Note: If you are unable to save the configuration, check if the JDK truststore
password was changed before the last upgrade and ensure the updated
password is assigned to the portal.jdk.trustStore.password parameter
from Admin > System Configuration > Custom page of the Portal. The JDK
truststore locations for Windows and Linux are
<portal_installation_path>\jdk\lib\security\cacerts and
/usr/java/lib/security/cacerts respectively.
# sqlplus portal/<portal_password>@scdb
# UPDATE ptl_user SET ldap_id = 'Admin' WHERE user_id = 100000;
# commit;
Use this updated username to login to the external directory, instead of aptare.
Since the user account aptare (user_id=100), is an internal bootstrap user, it
is required to maintain referential integrity among database tables and therefore
you must avoid using aptare for external LDAP integration.
Note: The user_id = 100000 is always the default user_id for the super user
account.
10 Login to the portal using any user name common across AD/LDAP and the
NetBackup IT Analytics Portal.
If the Portal was upgraded from a lower version, you may have to clear the
browser cache for the authentication type and SSO options to appear on the
login screen.
LDAP Domain Name Enter the Portal domain name where the new user gets
created. It is used only when ldap.authorization is set to
true.
Example:
example.company.com
LDAP URL Set to the host and port of your AD. Note that this URL
value has a prefix ldap:. If using SSL, change the prefix
to ldaps:.
Example:
ldap://example.company.com:389
or
ldaps://example.company.com:636
Search Base Set the location from where the search will be performed
to locate users in the authentication directory.
Example:
dc=example,dc=company,dc=com
Example:
CN=Admin,CN=Users,DC=example,DC=company,DC=com
Login Attribute Enter the login attribute used for authentication. This is
the attribute name in Active Directory that specifies the
username, such as uid or sAMAccountName.
Example:
sAMAccountName
New User Domain Enter the Portal domain name where the new user gets
created. It is used only if Authorisation is enabled. To
find the domain name in the Portal, navigate to Admin >
Domains > Domain Name.
Example:
example.company.com
Disable User Attribute Name Enter the name of the AD attribute that indicates whether
the user is active or inactive. During Portal authentication
via AD, the REST API uses the AD attribute assigned to
this property to check whether the user is still an active
AD user.
Disable User Attribute Value Enter the same value as that of the AD attribute (specified
in Disable User Attribute Name), which indicates the
AD user is disabled.
7 Click Test Connection. Make the required changes if the test fails.
8 Click Save.
Enabling LDAP authentication and authorization is complete.
Note: If you are unable to save the configuration, check if the JDK truststore
password was changed before the last upgrade and ensure the updated
password is assigned to the portal.jdk.trustStore.password parameter
from Admin > System Configuration > Custom page of the Portal. The JDK
truststore locations for Windows and Linux are
<portal_installation_path>\jdk\lib\security\cacerts and
/usr/java/lib/security/cacerts respectively.
# sqlplus portal/<portal_password>@scdb
# UPDATE ptl_user SET ldap_id = 'Admin' WHERE user_id = 100000;
# commit;
Use this updated username to login to the external directory, instead of aptare.
Since the user account aptare (user_id=100), is an internal bootstrap user, it
is required to maintain referential integrity among database tables and therefore
you must avoid using aptare for external LDAP integration.
Note: The user_id = 100000 is always the default user_id for the super user
account.
10 Login to the portal using any user present in the Active Directory and part of
the group created in step 2.
If the Portal was upgraded from a lower version, you may have to clear the
browser cache for the authentication type and SSO options to appear on the
login screen.
Note that to automatically create a user in the portal, these attributes must be
set for each user in AD/LDAP:
■ givenName: Mandatory. It is used as the first name of the user.
■ telephoneNumber: Optional
■ mobile: Optional
■ mail: Mandatory
Note: If for any reason the LDAP configuration is disabled from the portal, the portal
administrator must set the password for all the AD/LDAP users in portal.
For example: Assume Joe has joe.smith as LDAP_ID in the portal database. If
ldap.loginAttribute is set to sAMAccountName on the LDAP screen, then on
AD/LDAP the value of sAMAccountName must be joe.smith for the user to log in
successfully. If the value of sAMAccountName is other than joe.smith, you must
change the LDAP_ID of the user in the PTL_USER table of the portal database to
joe.smith to match the user name present in AD/LDAP.
To update the LDAP_ID in the portal database:
1 Login to the Oracle database server of the NetBackup IT Analytics Portal.
■ On Linux: Login as the aptare user. If you have already logged in as root,
use su - aptare.
For example:
3 Update the LDAP_ID with the user ID obtained from the above step.
For example:
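A sketch of the update, reusing the joe.smith example above (substitute the actual
user_id obtained in the previous step; <user_id> is a placeholder):
# UPDATE ptl_user SET ldap_id = 'joe.smith' WHERE user_id = <user_id>;
# commit;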
4 Repeat steps 2 and 3 for all the users having a mismatch in their IDs.
Note: Use LDAP_ID mentioned in step 3 (above) to login to AD/LDAP. Avoid using
user name aptare as aptare (user_id=100) is an internal bootstrap user required
to maintain referential integrity amongst the database tables. Hence you must not
change aptare or use it for external LDAP integration.
you must update the user ID in the portal database. Also, for user authorization,
you must create user groups in the portal that match at least one AD group that
includes the user name.
For example: Assume Joe has joe.smith as LDAP_ID in the portal database. If
ldap.loginAttribute is set to sAMAccountName on the LDAP screen, then on AD/LDAP
the value of sAMAccountName must be joe.smith for the user to log in successfully.
If the value of sAMAccountName is other than joe.smith, you must change the
LDAP_ID of the user in the PTL_USER table of the portal database to joe.smith to
match the user name present in AD/LDAP.
To update the LDAP_ID in the portal database:
1 Login to the NetBackup IT Analytics Portal before configuring AD for
authentication and create the required user groups with appropriate privileges.
The user group name must match with that of the AD/LDAP group name. This
user group is used to authorize the user once AD/LDAP is configured.
2 Login to the Oracle database server of the NetBackup IT Analytics Portal.
■ On Linux: Login as the aptare user. If you have already logged in as root,
use su - aptare.
For example:
4 Update the LDAP_ID with the user ID obtained from the above step.
For example:
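As above, a sketch of the update statement (the <user_id> value is a placeholder
for the user ID obtained in the previous step):
# UPDATE ptl_user SET ldap_id = 'joe.smith' WHERE user_id = <user_id>;
# commit;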
5 Repeat steps 3 and 4 for all the users having mismatch in their user IDs.
Note: Use LDAP_ID mentioned in step 4 (above) to login to AD/LDAP. Avoid using
user name aptare as aptare (user_id=100) is an internal bootstrap user required
to maintain referential integrity amongst the database tables. Hence you must not
change aptare or use it for external LDAP integration.
populated. These attributes must be exposed by both the external LDAP directory
and the IDP server. The names of attributes are as follows:
■ displayName: <first_name> <last_name> For example Jane Smith
■ email: email address
■ mobile: cell phone or mobile number
■ telephoneNumber: work phone or home phone number
■ sAMAccountName: the unique user name that is used as a login
■ memberOf: List of group names to which the user belongs, supported with or
without a domain prefix for an Azure IDP. This attribute requires customization
for a Microsoft Azure IDP. It is recommended to set Groups Assigned to the
application, instead of All groups or Security groups, for the "memberOf"
attribute.
The memberOf attribute must be in the below supported formats:
■ DOMAIN_NAME\userGroupName
■ CN=userGroupName,CN=Users,DC=aptareadfs,DC=com (for non-AZURE
IDPs)
Before an external user can use SSO to log into the Portal, they must belong to
one external directory group that also exists as a User Group in the NetBackup IT
Analytics Portal. If the setup criteria are met, when the user logs into the Portal for
the first time, their user profile will be synchronized from the external directory. They
will also inherit all privileges assigned to the User Group.
You must have downloaded the SAML metadata XML file from the external Identity
Provider (IdP).
Note: If you are unable to save the configuration, check if the JDK truststore
password was changed before the last upgrade and ensure the updated password
is assigned to the portal.jdk.trustStore.password parameter from Admin >
System Configuration > Custom page of the Portal. The JDK truststore locations
for Windows and Linux are
<portal_installation_path>\jdk\lib\security\cacerts and
/usr/java/lib/security/cacerts respectively.
Note: After activating SSO, a user performing a local login will have to use this
attribute value as a credential to access the Portal.
5 Enter the URL for the portal application. This should be an https URL with a
trailing '/' at the end of the URL.
6 Browse to the metadata.xml file that was downloaded from the external Identity
Provider (IdP).
7 Enter the URL for the external IdP server. The entityId must match the value
of entityId as listed in the IdP metadata XML file.
8 Enter the domain to be assigned to the SSO user when the Portal creates it
automatically.
9 Restart the Portal Tomcat service.
Note: If there are issues with the configuration, your Portal may not restart.
There is a utility (resetSSoConfig) available to reset the parameters so the
Portal can be restarted.
Note: The XML file is only available once the configuration settings have been
saved and the Portal Tomcat service has been restarted.
11 After the registration process is complete, open the Portal login screen and try
to login with Single Sign On. If there are issues with the configuration, your
Portal may not restart. Use the resetSSoConfig utility to reset the parameters
so the Portal can be restarted.
Scenario Solution
SSL Certificate Revisions If there are changes to the SSL certificates, perform the following steps:
Identity Provider Server (IDP) Revisions If there are changes to the IDP server, the entire SSO configuration and
registration process must be redone.
Identity Provider (IDP) Login Screen Not ■ For this issue, check if both the IDP server and the NetBackup IT
Displaying Analytics Portal are able to resolve host names.
■ A second solution is to ping the IDP server by hostname from the
NetBackup IT Analytics Portal.
Message Security Error This error is displayed when SSL certificates are not as expected with
regard to the metadata XML files. Verify that the SSL certificates satisfy
all the requirements on both the NetBackup IT Analytics Portal and the
IDP server.
Security Provider Not Registered This error indicates the registration process between the IDP server
and the NetBackup IT Analytics Portal was not completed successfully.
Verify the exchange of both metadata XML files was done correctly.
IDPSSODescriptor Not Found This error indicates the Entity Base URL and the path to the IDP server
were set incorrectly. Verify there is no '/' at the end of the given path.
Stale Request ■ This error may be caused by using the browser window back button
from the IDP login screen. For this issue, clear your browser cache
and retry the SSO login.
■ This error may be caused by time zone discrepancies between the
IDP server and the NetBackup IT Analytics Portal.
Login Issues/Reset Utility If Single Sign On (SSO) is not properly set up in the
Admin>Advanced>System Configuration, after restarting you may not
be able to log into the Portal. This utility resets the Single Sign On (SSO)
parameters to provide Portal access. Run the following scripts from the
command prompt:
Linux
cd /opt/aptare/utils
./resetSSOConfig.sh
Windows
cd C:\opt\aptare\utils
resetSSOConfig.bat
Linux: ./localAuth.sh disable
Windows: localAuth.bat disable
Linux: ./localAuth.sh enable
Windows: localAuth.bat enable
■ Overview
Overview
These instructions are for modifying the Oracle database user passwords for access
to the NetBackup IT Analytics database. You can modify the user passwords, but
do not modify the user names without the assistance of Professional Services.
db.driver This value is customized by the Portal installer and should not be modified.
db.url This is the address where the NetBackup IT Analytics database resides.
Its value depends on what was entered during the installation. It may need
to be modified when there is a host name change.
db.user Use this property to change the DB User ID for logging in to access the
database. The default value is portal.
db.password
db.password.encrypted=
Enter a password to be used with the DB user. The default value is portal.
The password initially is stored in clear text, but after the restart of the
Tomcat Portal services, the password is saved in the encrypted format and
the clear text password is removed from portal.properties.
db.connection.max Use this property to specify the maximum database connections allowed.
The default value is 25.
db.connection.min Use this property to specify the minimum number of database connections
that the Portal can have. The default value is 25.
db.connection.expiration When a Portal report initiates a long-running database query, this value
(in minutes) establishes when the report will time out if the query takes too
long to complete. The default value is 5.
db.ro_user_password Enter a password to be used with the DB read-only user. The default value
is aptaresoftware123. The password initially is stored in clear text, but
db.ro_user_password.encrypted=
after the restart of the Tomcat Portal services, the password is saved in
the encrypted format and the clear text password is removed from
portal.properties.
db.ro_user_password The Oracle database read-only user password for the NetBackup IT
Analytics database tables. The preset value is aptaresoftware123.
db.ro_user_password.encrypted=
db.sysdba_user The Oracle database System DBA for the NetBackup IT Analytics database
tables. The preset value is system.
Complete these steps to modify passwords for the Oracle database user.
These instructions apply to aptare_ro and portal users.
1 Log in with root access on Linux or with admin access on Windows.
2 Stop the portal and agent Tomcat services.
3 Change the user password:
On Linux and Windows:
■ sqlplus / as sysdba
■ alter session set container=scdb;
■ commit
■ exit
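A minimal sketch of the full SQL sequence, assuming the portal user and standard Oracle syntax; the ALTER USER statement itself is not shown in this guide and is included here only as an illustration:

sqlplus / as sysdba
alter session set container=scdb;
alter user portal identified by <new_password>;
commit;
exit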
2. The utility asks if the passwords are updated on the Oracle server. Enter 'y' if the password is updated on the Oracle server, as mentioned in step 1; otherwise enter 'n' to exit.
Note: This updates the specified user's password in the properties files, such as portal.properties and datarcvrproperties.xml.
Run the following SQL statement to determine if the PORTAL user uses the default
password:
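A sketch of such a check, assuming Oracle's built-in DBA_USERS_WITH_DEFPWD view; the exact statement used by the product is not shown in this excerpt:

SELECT username FROM dba_users_with_defpwd WHERE username = 'PORTAL';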
If the row is returned, it means the PORTAL user uses the default password.
See “Modify the Oracle database user passwords” on page 199.
Chapter 16
Integrate with CyberArk
This chapter includes the following topics:
■ Introduction
Introduction
CyberArk, a password vault application, is designed to discover, secure, rotate, and control access to privileged account passwords used to access systems throughout the enterprise IT environment. This integration enables NetBackup IT Analytics to automate fetching the latest Oracle user account passwords from CyberArk, which changes passwords periodically according to the organizational policy.
To facilitate this integration, CyberArk exposes a REST API to fetch the passwords from the Agentless AAM (Central Credential Provider). NetBackup IT Analytics fetches the latest Oracle user account password via this REST API. The following instructions do not cover how to set up CyberArk, but they do call out the information required from the CyberArk setup.
Note: At least one account must be configured in CyberArk to enable this feature.
vault_vendor_name: CyberArk
Note: Use only CyberArk as a value in this field.
For each user account repeat the following. This sample describes the aptare_ro user.
vault_vendor_name=CyberArk
host=10.x.x.x
port=443
https=true
schedule_frequency=2
app_id=testappid
user_safe_id=safe1
user_password_object=portal_account
vault_vendor_name=CyberArk
host=10.x.x.x
port=443
https=true
schedule_frequency=2
app_id=testappid
user_safe_id=safe1
user_password_object=portal_account
ro_safe_id=safe2
ro_password_object=ro_account
Linux: <APTARE_HOME>/utils/configure-db-vault-connection-info.sh
Windows: <APTARE_HOME>\utils\configure-db-vault-connection-info.bat
Log file: <APTARE_HOME>\logs\passwordVaultValidator.log
Chapter 17
Tuning NetBackup IT Analytics
This chapter includes the following topics:
■ Performance recommendations
Note: If you encounter any issues following these directions, contact Veritas Support for further guidance.
1. Before modifying your configuration, make a copy of all files you plan to edit.
2. Consider tuning to be a process; that is, increase or decrease a value slightly, then monitor system performance. If your modification results in improvement, you may consider additional adjustments later.
3. Whenever you undertake this tuning process, consider the potential negative
impact of settings that are either too high or too low, within the resource
constraints of your environment.
Tuning the portal database
Memory Recommendation: If your database server has sufficient memory, you may
consider making the changes listed below.
■ Increase the values for the following fields in initscdb.ora:
■ pga_aggregate_target from 1000 MB to 1500 MB
■ sga_target from 1228 MB to 2048 MB
Windows: C:\opt\oracle\database\initscdb.ora
Linux: /opt/aptare/oracle/dbs/initscdb.ora
C:\opt\aptare\utils\setupservices.bat
2. Note that in this file, the following commands specify the number of connections:
To change the number of Oracle connections, you must remove the service
and then re-add it.
3. To remove the service:
4. To re-add the service, execute the following command, substituting the new
connection values:
Performance recommendations
To optimize database performance:
■ Use your fastest disk storage for the portal database. When choosing the device for your Oracle database, if you have a choice between RAID1 and RAID5, choose RAID1.
■ Minimize I/O wait time. Use the top command to determine the I/O wait time.
After about 24 to 48 hours or when all the collections have completed at least once,
disable the baselines capture using following steps:
1 Connect to the database:
sqlplus / as sysdba
alter session set container = scdb;
2 Disable SQL baseline capture:
alter system set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = FALSE;
Reclaiming free space from Oracle
You can run the following script at any time to reclaim space. It examines every
Oracle database file (DBF) for “white space” at the end of the file. If the script
discovers more than 256 MB of white space, it re-sizes the DBF file to remove the
trailing space. This white space is a result of many insertions and deletions; in
addition, white space can occur if you have truncated tables or purged a lot of data.
1. Log in to the database server as aptare.
2. Go to the tools directory:
■ Linux: cd /opt/aptare/database/tools
■ Windows: cd C:\opt\oracle\database\tools
sqlplus / as sysdba
@reclaim_aptare_tablespace
commit;
exit
Portal / Data receiver Java memory settings
Linux Locations
■ /opt/aptare/portalconf/tomcat/java-settings.sh
■ /opt/aptare/datarcvrconf/tomcat/java-settings.sh
Windows Locations
■ C:\opt\aptare\portalconf\tomcat\java-settings.bat
■ C:\opt\aptare\datarcvrconf\tomcat\java-settings.bat
Chapter 18
Working with log files
This chapter includes the following topics:
■ Turn on debugging
■ Database logging
Turn on debugging
When you turn on debugging, additional entries are logged to provide troubleshooting
details.
1. In the Portal, within a report window, enter the following key combination:
Ctrl+Alt+D
This turns on debugging for the current report and it logs messages to both of
the following log files:
Linux: /tmp/scon.log and /opt/tomcat/logs/portal.log
Windows: C:\tmp\scon.log and C:\opt\tomcat\logs\portal.log
2. See “Portal log files” on page 229.
See “Database log files” on page 232.
Database logging
The /tmp/scon.log file (on Linux systems) or C:\opt\oracle\logs\scon.log (on
Windows systems) contains a database audit trail and troubleshooting messages.
You can control database logging by editing the following file, which contains
instructions on what to modify in the file.
Linux: /opt/aptare/database/stored_procedures/config.sql
Windows: C:\opt\oracle\database\stored_procedures\config.sql
Portal and data collector log files - reduce logging
<rollingPolicy
class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>/opt/tomcat/logs/datarcvr_%i.log</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>10</maxIndex>
</rollingPolicy>
<triggeringPolicy
class="com.aptare.dc.util.LogbackSizeBasedTriggeringPolicy">
<maxFileSize>20MB</maxFileSize>
</triggeringPolicy>
<!--The Threshold param can either be debug/info/warn/error/fatal.-->
<param name="Threshold" value="debug"/>
Linux: /opt/aptare/mbs/conf/metadatalogger.xml
Windows: C:\Program Files\Aptare\mbs\conf\metadatalogger.xml
<rollingPolicy
class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>/opt/aptare/agent_version/
DemoDC/mbs/logs/metadata${mdc_key}.%i.log</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>20</maxIndex>
</rollingPolicy>
<triggeringPolicy
class="com.aptare.dc.util.LogbackSizeBasedTriggeringPolicy">
<maxFileSize>50MB</maxFileSize>
</triggeringPolicy>
Linux:
sqlplus portal/<portal_password>@//localhost:1521/scdb @/opt/aptare/database/stored_procedures/config.sql
sqlplus portal/<portal_password>@//localhost:1521/scdb @/opt/aptare/database/tools/validate_sp
Windows:
sqlplus portal/<portal_password>@//localhost:1521/scdb @C:\opt\oracle\database\stored_procedures\config.sql
sqlplus portal/<portal_password>@//localhost:1521/scdb @C:\opt\oracle\database\tools\validate_sp
'C:\opt\oracle\logs','C:\opt\aptare\oracle\logs','C:\opt\aptare\oracle\log','/tmp'
sqlplus portal/<portal_password>@//localhost:1521/scdb
exec logfile_cleanup_pkg.cleanupLog('Y','Y');
Note: This utility is intended to be run no more than once a month. If you
plan to run it more than once in a month, be aware of the naming convention
for the backup scon.log file, as shown with the parameters in the above
table.
■ User login
■ User impersonate
Modify the logging level of the systemlogger.xml file to provide additional information about a user's activity in the Portal. You can set the level to info to capture only what a user deletes, or set it to debug to capture all user activity (including all deletes).
<logger name="com.aptare.sc.gwt.shared.server.GwtSpringAdapter"
additivity="false">
<level value="info"/>
<appender-ref ref="SECURITY" />
</logger>
<logger name="com.aptare.sc.presentation.filter.AuthorizationFilter"
additivity="false">
<level value="info"/>
<appender-ref ref="SECURITY" />
</logger>
<logger name="com.aptare.sc.gwt.shared.server.GwtSpringAdapter"
additivity="false">
<level value="debug"/>
<appender-ref ref="SECURITY" />
</logger>
<logger name="com.aptare.sc.presentation.filter.AuthorizationFilter"
additivity="false">
<level value="debug"/>
<appender-ref ref="SECURITY" />
</logger>
■ C:\Program Files\Aptare\mbs\logs\validation\
■ C:\Program Files\Aptare\mbs\logs\scheduled\
<vendor.product>/<subsystem>#META_<ID>/Probe.log
For example, an EMC Isilon probe from checkinstall would result in a file name
similar to:
/opt/aptare/mbs/logs/validation/emc.isilon
/alphpeifr023#META_EA1BA380E95F73C72A72B3B0792111E5
/IsilonClusterDetailProbe.log
Some collectors may have a period of time when they are not processing a specific
subsystem. For those periods, logging will occur in an aggregate log file similar to:
/opt/aptare/mbs/logs/validation/emc.isilon
/#META_EA1BA380E95F73C72A72B3B0792111E5
/IsilonClusterDetailProbe.log
cisco.cisco
commvault.simpana
dell.compellent
emc.avamar
emc.clariion
generic.host (Valid for a host resources discovery policy)
hp.3par
veritas.bue
Additionally, each Java Virtual Machine (JVM) creates its own logging file(s) when
starting up. This is necessary because multiple processes logging to the same file
could overwrite each other’s log messages. These log files can be found in the
framework sub-directory.
See “Data collector log file organization” on page 220.
See “Checkinstall Log” on page 223.
Examples
■ /opt/aptare/mbs/logs/scheduled/dell.compellent
/#META_EA1BA380E95F73C72A72B3B0792111E5
/META_EA1BA380E95F73C72A72B3B0792111E5.log
■ /opt/aptare/mbs/logs/scheduled/emc.avamar
/#HQBackupCollector/HQBackupCollector.log
Checkinstall Log
The checkinstall process produces its own log file, but in most cases, there is very
little to report in this log.
For example, the checkinstall creates:
/opt/aptare/mbs/logs/validation/framework/#checkinstall/checkinstall.log
■ C:\Program
Files\Aptare\mbs\logs\validation\<vendor.product>\#TestConnection\TestConnection.log
■ #META_<policyID>
■ #<policyID>
Example:
scheduled\legato.nw\#META_D922ACBCCFFA2933A301A530A0E011E4
Table 18-1 Log file locations for NetBackup IT Analytics products (where to find the logs, Linux syntax)
HP 3PAR:
scheduled/hp.3par/#<policyID>
validation/hp.3par/#<policyID>
HP EVA:
scheduled/hp.eva/#<policyID>
validation/hp.eva/#<policyID>
Cisco:
scheduled/cisco.ciscoswitch/#<policyID>
validation/cisco.ciscoswitch/#<policyID>
VMware:
scheduled/vmware.esx/
validation/vmware.esx/
General data collector log files
C:\Program Files\APTARE\mbs\bin\listcollectors.bat
Linux:
/opt/aptare/mbs/bin/listcollectors.sh
In the output, look for the Event Collectors section associated with the Software Home, the path that was specified when the Data Collector Policy was created.
Portal log files
aptareagent-error*.log
Locations: C:\opt\apache\logs (Windows), /opt/apache/logs (Linux)
Standard Web Server error log file. Logs HTTP transaction errors between the Data Collector and the Web Server.
aptareportal-error*.log
Locations: C:\opt\apache\logs (Windows), /opt/apache/logs (Linux)
Standard Web Server error log file. Logs HTTP transaction errors between the browser-based Portal application and the Web Server.
Other portal log locations include /opt/tomcat/logs, /tmp, and /opt/aptare/installlogs.
sqlplus portal/<portal_password>@//localhost:1521/scdb
LONG_JOB_HOURS_DEFAULT: 12 hours
Although these values are typical, your SLA might require different values. You can change these metrics for specific host groups or for all host groups.
To change the job status:
1. Determine the host group’s ID.
2. Log on to the Portal Server as user aptare.
3. Type the following command:
sqlplus portal/<portal_password>@//localhost:1521/scdb
■ Overview
■ SNMP configurations
■ Standard OIDs
Overview
Tabular Reports can be configured to alert users via a number of options:
■ Email
■ Script
■ SNMP
■ Native log
Typically, notifications are expected when a report’s content indicates a threshold
crossing or an error state. For example, a Job Summary report can be configured
to trigger an alert when failed backup events are reported.
SNMP configurations
The SNMP traps can be issued from:
1. Any saved tabular report instance, including custom reports that have been
created via the Report Template Designer.
Traps have the following characteristics:
SNMP trap alerting 237
Standard OIDs
■ A trap is sent for each row in a report; therefore, if a table is empty, no traps
are sent.
■ The name of the report is included in the trap.
■ The trap includes data for each column in a row.
2. Policy Based rules, which are created by choosing from a list of predefined
policy rules in Alert Policy Administration.
Traps have the following characteristics:
■ A trap is sent each time the rules defined in a policy are triggered. Traps are triggered only after a rule is set to active and its condition is met.
Standard OIDs
Report based alerts
For report based alerts, no static MIB is provided. A standard set of Object IDs is used to create the MIB.
APTARE_ENTERPRISE_TRAP_OID = "1.3.6.1.4.1.15622.1.1.0.1"
SNMP_TRAP_OID = "1.3.6.1.6.3.1.1.4.1.0"
SYS_UP_TIME_OID = "1.3.6.1.2.1.1.3.0"
COLUMN_OID_PREFIX = "1.3.6.1.4.1.15622.1.2"
1.3.6.1.4.1.15622.1.2.4 Defines the "Object Type" for which the alert was set.
1.3.6.1.4.1.15622.1.2.5 Displays the "Object Name" on which the alert was issued.
1.3.6.1.4.1.15622.1.2.6 If a hierarchy structure exists, displays the "Parent Object Type" for
the alert object.
1.3.6.1.4.1.15622.1.2.11 Displays the date of the latest alert that occurred for the alert policy
and alert object combination. When this column has a value, the
Alert Date column indicates the date when this alert occurred for
the first time.
1.3.6.1.4.1.15622.1.2.12 Displays the number of times the alert was processed on the same
alert object in the defined number of Look Back hours.
For the above example, the data is delivered in the trap as follows:
ObjectId COLUMN_OID_PREFIX + .0 contains the saved report instance name: Failed Full Backups
ObjectId COLUMN_OID_PREFIX + .1 contains the Client
ObjectId COLUMN_OID_PREFIX + .2 contains the Server
ObjectId COLUMN_OID_PREFIX + .3 contains the Product
Note: While these steps include directions for generating a temporary, self-signed
certificate, you should obtain the certificate from a third-party provider rather than
using the self-signed certificate.
Note: The actual SSL certificates get installed and configured within the Apache Web Server. However, in cases where the issuing certificate authority (CA) is not automatically trusted (for example, self-signed or a one-off domain reseller), the certificates will need to be imported and configured to be trusted on the Data Collector Server. In this case, follow the process to import certificates into the keystore for both the Data Collector and the Upgrade Manager:
See “Configure the Data Collector to trust the certificate” on page 252.
Linux
/opt/apache/conf/ssl_cert
Windows
C:\opt\apache\conf\ssl_cert
2. Stop the Apache and Tomcat services. From a terminal console, enter the
following commands.
Linux
/opt/aptare/bin/tomcat-agent stop
/opt/aptare/bin/tomcat-portal stop
/opt/aptare/bin/apache stop
Windows
C:\opt\aptare\utils\stopagent.bat
C:\opt\aptare\utils\stopportal.bat
C:\opt\aptare\utils\stopapache.bat
Windows
■ To disable the http protocol, edit the httpd.conf file and remove the VirtualHost section of the portal configuration.
■ To redirect http to https, edit the httpd.conf file, remove all entries of the VirtualHost section of the portal configuration, and add the following lines in the same VirtualHost:
ServerName itanalyticsportal.<hostname>
Redirect permanent / https://itanalyticsportal.<hostname>/
Examples:
Linux: #SSLMutex "file:/opt/apache/logs/ssl_mutex"
Windows: #SSLMutex "file:c:\opt\apache\logs\ssl_mutex"
6. If any of the previous configurations are missing for either the Portal or Data
Collector, the host configuration information must be added to enable SSL.
Proceed with the following steps.
7. To ensure a secure web server, remove any port 80 VirtualHost sections from the /opt/apache/conf/httpd.conf file. This prevents HTTP message headers from being sent unencrypted when one end of the communication uses non-HTTPS protocols.
8. If a Virtual Host declaration is missing from the default Apache SSL
configuration file, add the missing virtual host declaration to the configuration
file. See the relevant section for instructions.
SSL certificate configuration 245
Update the web server configuration to enable SSL
■ See “Configure virtual hosts for portal and / or data collection SSL”
on page 246.
■ See “SSL Implementation for the Portal Only” on page 246.
■ See “SSL Implementation for Data Collection Only” on page 247.
■ See “SSL Implementation for Both the Portal and Data Collection”
on page 248.
9. For each active virtual host section in the Apache SSL configuration file
(httpd-ssl.conf), ensure that declaration lines beginning with the following are
un-commented (they do not have a # at the beginning of the line):
SSLEngine
SSLCertificateFile (update certificate file details)
SSLCertificateKeyFile (update certificate key file details)
10. Run the deployCert utility as the root user on the Portal server to save the SSL certificates configured with Apache in the Java keystore itanalytics.jks. This keystore is used when configuring Single Sign On and Syslog over SSL.
■ Linux: /opt/aptare/utils/deployCert.sh update
■ Windows: C:\opt\aptare\utils>deployCert.bat update
# export LD_LIBRARY_PATH=/opt/apache/ssl/lib:$LD_LIBRARY_PATH
(If https is enabled)
# /opt/apache/bin/apachectl -t
/opt/aptare/bin/apache start
/opt/aptare/bin/tomcat-portal start
/opt/aptare/bin/tomcat-agent start
Windows
C:\opt\aptare\utils\startapache.bat
C:\opt\aptare\utils\startagent.bat
C:\opt\aptare\utils\startportal.bat
SSL certificate configuration 246
Configure virtual hosts for portal and / or data collection SSL
<VirtualHost IP_ADDRESS_PORTAL:443>
ServerName aptareportal.domainname:443
DocumentRoot /opt/aptare/portal
#<VirtualHost aptareagent.domainname:443>
5. Set the DocumentRoot path to a valid path for the Web Server's OS.
Linux
/opt/aptare/portal
Windows
C:\opt\aptare\portal
SSL certificate configuration 247
Configure virtual hosts for portal and / or data collection SSL
<VirtualHost IP_ADDRESS_DATARCVR:443>
ServerName aptareagent.domainname:443
DocumentRoot /opt/aptare/datarcvr
#<VirtualHost aptareportal.domainname:443>
5. Set the DocumentRoot path to a valid path for the Web Server's OS.
Linux
/opt/aptare/datarcvr
Windows
C:\opt\aptare\datarcvr
/opt/aptare/datarcvrconf/
collectorConfig.global.properties
Windows
C:\opt\aptare\datarcvrconf\
collectorConfig.global.properties
SSL certificate configuration 248
Configure virtual hosts for portal and / or data collection SSL
<VirtualHost IP_ADDRESS_PORTAL:443>
ServerName aptareportal.domainname:443
DocumentRoot /opt/aptare/portal
<VirtualHost IP_ADDRESS_DATARCVR:443>
ServerName aptareagent.domainname:443
DocumentRoot /opt/aptare/datarcvr
/opt/aptare/portal
/opt/aptare/datarcvr
Windows
C:\opt\aptare\portal
C:\opt\aptare\datarcvr
SSL certificate configuration 249
Enable / Disable SSL for a Data Collector
where
-x509 is used to create a certificate as opposed to a certificate request that is sent
to a certificate authority
-days determines the number of days that the certificate is valid
-newkey rsa:2048 sets the key as 2048-bit RSA
-nodes specifies that no passkey will be used
-keyout specifies the name of the key file
-out specifies the name of the certificate file
Example:
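A representative command assembling these options; the validity period and file names are illustrative:

openssl req -x509 -days 365 -newkey rsa:2048 -nodes -keyout server.key -out server.crt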
Note: The use of the -nodes option in the previous example creates a certificate
that does not require a pass phrase. This makes it easier to install and use the
certificate, but weakens the security of the certificate. If the certificate is created
with a pass phrase, it must be entered when the certificate is installed and used.
The actual certificates get installed and configured on the Apache web server. However, in cases where the issuing certificate authority (CA) is not automatically trusted (such as self-signed certificates), the certificates need to be imported and trusted on the Data Collector server.
Once the self-signed certificates have been created, configure the Data Collector
to trust the certificate.
See “Import a certificate into the Data Collector Java keystore” on page 253.
Windows:
changeit
4. Once completed, run the following keytool command to view a list of certificates
from the keystore and confirm that the certificate was successfully added. The
certificate fingerprint line displays with the alias name used during the import.
Linux:
Windows:
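A sketch of the list command, assuming the default keystore password changeit noted above; the keystore path is environment specific:

keytool -list -keystore <keystore_path> -storepass changeit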
Linux
cd /opt/aptare/utils
./deployCert.sh add
Windows
cd c:\opt\aptare\utils
deployCert.bat add
cd /opt/aptare/utils
# ./deployCert.sh update
Windows
cd c:\opt\aptare\utils
deployCert.bat update
cd /opt/aptare/utils
./deployCert.sh download
Windows
cd c:\opt\aptare\utils
deployCert.bat download
SSL certificate configuration 257
Add a virtual interface to a Linux server
ifconfig -a
collisions:0 txqueuelen:1000
RX bytes:63235 (61.7 KiB) TX bytes:28143 (27.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8762 errors:0 dropped:0 overruns:0 frame:0
TX packets:8762 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5422509 (5.1 MiB) TX bytes:5422509 (5.1 MiB)
SSL certificate configuration 258
Add a virtual / secondary IP address on Windows
2. You must have two Ethernet connections, identified by the eth0 label. To add a virtual interface with a second IP address to the existing Ethernet interface on a Linux server, use the following command:
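A sketch of the command, using the example address explained below:

ifconfig eth0:0 111.222.333.444 up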
where
111.222.333.444 is the new IP address for the virtual interface.
3. You must add a file to the network scripts to recreate the virtual interface when
the server is rebooted. If the IP address assigned to the eth0 interface is static,
make a copy of the ifcfg-eth0 file in /etc/sysconfig/network-scripts and name
it ifcfg-eth0:0.
4. Update the IP address in ifcfg-eth0:0 to be the new IP address assigned to
the virtual interface.
5. If the IP address in the eth0 interface is dynamically assigned, as indicated by
the line BOOTPROTO=dhcp in the ifcfg-eth0 file, create a file named
ifcfg-eth0:0 with the following lines:
DEVICE=eth0:0
IPADDR=111.222.333.444
6. Finally, update your DNS server so that the new IP address is mapped to the
data collection URL (for example, itanalyticsportal.<domainname>).
2. Click Properties.
3. For the configured IP address, click Advanced.
■ Introduction
■ Configuring LDAP to use active directory (AD) for user group privileges
Introduction
This section covers customizations for the portal that are not available through the
user interface. Use Custom Parameters to add/edit and delete these properties.
Configuring global default inventory object selection
Note: Prior to version 10.3, customizations to the Portal were made using a file,
portal.properties. Not all of those settings are displayed in the System Configuration
feature. If you upgrade from a version prior to 10.3, those properties are displayed
and automatically populated in the Custom Parameters.
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
portal.ocn.defaultVisibleObjectType=HOST,ARRAY,SWITCH,BACKUPSERVER,
VM_SERVER,VM_GUEST,
DEDUPLICATION_APPLIANCE,DATASTORE,EC2_INSTANCE,S3_BUCKET,
AZURE_STORAGE_ACCOUNT,AZURE_VIRTUAL_MACHINE
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
portal.security.allowUserToLoginMultipleTimes=false
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
Where the <new limit value> is the number of rows greater than 20,000 that
your report export requires. For example, if your report has 36,000 rows enter
a number greater than 36000. Note that the new limit value cannot contain
commas or decimal points. Keep in mind that Portal server performance can
degrade considerably for very large reports. For very large reports, you may
want to segment the scope into multiple reports.
2. Restart the Tomcat Portal services after making your modification.
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
Customizing the host management page size
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
2. Add the following line and set the page size value:
portal.hostManagementPageSize=xxxx
For example, to only display 50 rows on the page, it would appear as follows:
portal.hostManagementPageSize=50
/opt/aptare/fa
Windows:
C:\opt\aptare\fa
1. To customize the path for the location of the File Analytics database access
the portal.properties file:
Linux:
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
fa.root=/opt/aptare/fa
Example
Linux:
fa.root=/opt/aptare/fa_db
Windows:
fa.root=D:\opt\aptare\fa
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
portal.reports.cache.maxSizeInMemory
portal.reports.cache.maxSizeInMemory=536870912
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
portal.reports.cache.timeOut
Configuring LDAP to use active directory (AD) for user group privileges
Linux:
/opt/aptare/portalconf/portal.properties
Windows:
C:\opt\aptare\portalconf\portal.properties
ldap.authorization=true
ldap.newUserDomain=<string>
■ Data aggregation
■ Capacity: Default retention for Dell EMC Elastic Cloud Storage (ECS)
sqlplus <portal_user>/<portal_password>@//localhost:1521/scdb
where <portal_user> and <portal_password> are portal credentials
to connect to the database.
3. At the command line, execute the following SQL statement, substituting relevant values for the variables shown in < > in the syntax.
See “Find the domain ID and database table names” on page 276.
See the section called “Retention Period Update for Multi-Tenancy
Environments Example” on page 284.
See “Retention period update for SDK user-defined objects example”
on page 276.
Data aggregation
Overview
Storage, Fabric, and Virtualization are just a few of the subsystems for which NetBackup IT Analytics gathers performance metrics.
To regulate the growth of performance metrics data, NetBackup IT Analytics periodically purges old data using customizable retention parameters. But if the retention periods need to be longer than the defaults, this data can eventually take up a lot of space on the disk where the database is stored.
Additionally, as the data in the underlying tables grows, data retrieval becomes slower.
Beginning with release 11.2.02, NetBackup IT Analytics provides data aggregation of performance metrics data for the Capacity and Switches subsystems to manage these scenarios. In subsequent releases, it will also be expanded to include the Virtualization and Cloud subsystems.
About
Data aggregation enables longer retention periods by aggregating performance metric data without increasing the disk space the database requires.
The existing data in the table is aggregated according to preselected aggregation metrics (average, max, min, count, sum, standard deviation, and so on) over a specific time period, resulting in a reduced number of records.
For instance, NetBackup IT Analytics can collect performance statistics for logical units at intervals as short as 30 seconds. This can lead to 28 million records each day for 10,000 LUNs. Without aggregation, a one-year retention period for this table would produce about 10 billion records. For the same retention period, aggregation can reduce this data roughly tenfold.
Advantages
The following are the advantages of data aggregation.
■ Disk space: Depending on the frequency of data collection, data aggregation
can potentially shrink the quantity of raw data by up to 10 times, thus lowering
the impact of the enormous performance metrics data.
■ Quick report response: Report queries against the database process more quickly since aggregated values have already been persisted.
■ Data retrieval: Data retrieval is made simpler by lowering the size of the data.
Pre-requisites
The following are the pre-requisites to enable data aggregation in NetBackup IT
Analytics:
Table partitioning
The database tables must be partitioned in order to aggregate millions of records (composite Interval-List partitioning). The existing data in the specified non-partitioned tables will be transferred to the partitioned schema.
■ Level 4 retention: The Level 4 retention depends on the retention period of the
table.
For example, for the LUN performance metrics table, if the retention is set to 15 months, then records at level 4 aggregation will be maintained from the 361st day to the 450th day.
If the retention is kept at 365 days, then these records will be retained for 5 days only and then purged.
If the table's final retention period is set lower than the level 4 retention period, no data will be purged until level 4 is reached, making both retention periods effectively equal.
To list the currently configured Domain IDs, use the following SQL
SELECT statement:
<tableName> Find the relevant database table name for the retention period you
want to change
■ See “Capacity: Default retention for Dell EMC Elastic Cloud Storage (ECS)”
on page 279.
■ See “Capacity: Default retention for Windows file server” on page 280.
■ See “Capacity: Default retention for Pure Storage FlashArray” on page 281.
■ See “Cloud: Default retention for Amazon Web Services (AWS)” on page 281.
■ See “Cloud: Default retention for Microsoft Azure” on page 282.
■ See “Cloud: Default retention for OpenStack Ceilometer” on page 282.
Billing Record Tag (table SDK_aws_billing_rec_tag): 366 days. To modify the retention period, see "Data retention periods for SDK database objects" on page 271.
Mapping from any resource ID or name to one of many different entities, or none at all (table sdk_aws_resource_map): 999999 days. To modify the retention period, see "Data retention periods for SDK database objects" on page 271.
sqlplus portal/<portal_password>@//localhost:1521/scdb
3. At the command line, execute the following SQL statement, substituting relevant values for the variables shown in < > in the syntax.
See “Find the domain ID and database table names” on page 276.
See the section called “Retention Period Update for Multi-Tenancy
Environments Example” on page 284.
■ Login issues
■ Connectivity issues
# cd /opt/aptare/utils/
# ./findUser.sh
Windows:
C:\opt\aptare\utils\finduser.bat
cd /opt/aptare/utils
./updateUser.sh <currentUserId> <modLastName> <modPassword>
<modRestoreWizPassword>
Windows:
cd C:\opt\aptare\utils
updateuser.bat <currentUserId> <modLastName> <modPassword> <modRestoreWizPassword>
For example:
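A hypothetical invocation; all argument values are illustrative:

./updateUser.sh jsmith Smith NewPortalPass1 NewRestoreWizPass1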
Login issues
The following sections highlight common login issues and their possible solutions.
cd /opt/aptare/utils
./resetSSOConfig.sh
Windows
cd C:\opt\aptare\utils
resetSSOConfig.bat
Connectivity issues
No Connectivity
A number of conditions can disrupt connectivity, including:
■ Firewall issues can prevent connectivity.
■ A network change occurred. Typically a DNS, system domain, or hostname
change is the culprit.
Action Recommendations
The following list suggests actions you can take to determine what is causing the
issue.
■ Ping itanalyticsagent.mydomain.com from the primary server.
■ Check if you can connect to itanalyticsagent.mydomain.com on port 80 using telnet.
# /opt/aptare/bin/oracle status
# getenforce
6 Connect to database using sqlplus on portal server and run basic sql
commands like select * from ptl_users.
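For example, a minimal check along these lines, using the portal connect string shown elsewhere in this guide:

sqlplus portal/<portal_password>@//localhost:1521/scdb
select count(*) from ptl_users;
exit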
$APTARE_HOME/mbs/conf/wrapper.conf
wrapper.app.parameter.2="$COLLECTOR_NAME$"
wrapper.app.parameter.3="$COLLECTOR_PASSWORD$"
Edit $APTARE_HOME/mbs/bin/updateconfig.bat to modify the name and passcode. They are the two parameters immediately following "com.storage.mbs.watchdog.ConfigFileMonitorThread".
Changing the Name and Passcode on a Linux Data Collector Server
◆ Edit the following files:
$APTARE_HOME/mbs/bin/updateconfig.sh
$APTARE_HOME/mbs/bin/startup.sh
■ The name and the passcode will be passed as program arguments to the
Java program in the above two scripts.
■ In updateconfig.sh, the name and passcode are the two parameters
immediately following
"com.storage.mbs.watchdog.ConfigFileMonitorThread".
Data Collector and database issues
Insufficient Privileges
When creating the database, you may get an insufficient privileges error if the Windows user is not local or is not a member of the ORA_DBA group.
3. Use the storage_array_id from the above query to execute this code:
Begin
srm_common_pkg.deleteStorageArray(<STORAGEARRAYID>);
End;
/
4. Verify that the array was deleted successfully using this query (should return
0).
select count(*)
FROM aps_storage_array
WHERE storage_array_id = <STORAGEARRAYID>;
Email SMTP host IP address: Make sure this references a running mail server.
Email debug mode: Used by product developers to debug mail transmission issues.
Email enable Transport Layer Security (TLS): The SMTP Transport Layer Security extension, an encryption and authentication protocol. When a client and server that support TLS talk to each other, they can encrypt the data channel to guard against eavesdroppers.
Email from name: The name associated with the reply-back email address.
SMTP User: User name used for authentication on the email server.
Performance Issues
Performance can be impacted by a number of issues. Use the following checklist
to help you isolate problems.
■ Check the number of backup jobs that have been processed in the last 24-hour
period.
■ Determine the level of database logging that has been enabled in scon.log. If
DBG level messages have been enabled, this can negatively impact
performance. INFO and WARNING messages have negligible impact.
■ Check if anything else is running on the server.
■ Note if performance suffers at specific times of the day and determine which
processes are running during those times.
Portal upgrade performance issues
■ The Aptare agent service takes a long time at startup to get collectorconfig.xml from the data receiver.
For RHEL/OEL, execute the following steps to install the rng-tools and start the
services:
1. Access command prompt.
2. Type yum install rng-tools to install the rng-tools.
3. Type systemctl start rngd to start the services.
4. Type systemctl enable rngd to enable the services.
For Suse, execute the following steps to install the rng-tools and start the services:
1. Access command prompt.
2. Type zypper install rng-tools to install the rng-tools.
3. Type systemctl start rng-tools to start the services.
4. Type systemctl enable rng-tools to enable the services.
Appendix A
Kerberos based proxy user's authentication in Oracle
This appendix includes the following topics:
■ Overview
Overview
Kerberos is a computer network security protocol that authenticates the communication of nodes over a non-secure network to prove their identity to one another in a secure manner. It uses secret-key cryptography and a trusted third party for authenticating client-server applications and verifying users' identities.
This section helps you to configure Kerberos-based authentication for proxy users in Oracle. Kerberos authentication allows you to connect to Oracle without specifying username / password credentials; the authentication is done externally. Proxy authentication in Oracle allows connection to a target database username via another database user (the proxy user).
For example, you can authorize a user with a development account to connect to the application owner account using his/her own credentials (with no need to expose the application user's password). This section helps to configure Kerberos and proxy authentication: it provides a means to connect to any given DB user via a Kerberos-authenticated proxy user.
Exporting service and user principals to keytab file on KDC
Note: k1portal is the Kerberos username referred to in these examples. It can vary from environment to environment.
Pre-requisites
The following packages must be installed on the NetBackup IT Analytics Portal, along with a pre-stashed Kerberos ticket:
■ krb5-libs
■ krb5-workstation
# kadmin.local
addprinc -randkey <oracle SID>/<oracle server host name>@<domain
realm name>
# kadmin.local
ktadd -k <keytab file path> <oracle SID>/<oracle server host
name>@<domain realm name>
# kadmin.local
addprinc <kerberos user name>
# kadmin.local
ktadd -k <keytab file path> <kerberos user name>
Note: The exported keytab file can be removed from KDC once it has been
copied to oracle server.
Note: For more information, see "Exporting service and user principals to keytab file on KDC" on page 296.
■ SQLNET.KERBEROS5_REALMS=/etc/krb5.conf
■ SQLNET.KERBEROS5_KEYTAB=/etc/v5srvtab
■ SQLNET.FALLBACK_AUTHENTICATION=TRUE
■ SQLNET.KERBEROS5_CC_NAME=/tmp/kcache
■ SQLNET.KERBEROS5_CLOCKSKEW=300
Note: The Oracle server and KDC should have the same time and time zone settings. If there is a slight time mismatch, add the entry below to sqlnet.ora to tolerate it; for example, SQLNET.KERBEROS5_CLOCKSKEW=1200 allows a skew of up to 20 minutes. The default value is 300 (seconds). Veritas recommends configuring both servers to sync time from time servers.
Execute the following commands to verify and fetch the initial TGT for the k1portal user; log in as the Oracle service user.
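A sketch of those commands, modeled on the cache-refresh script shown later in this appendix (oklist is Oracle's Kerberos ticket-listing utility):

okinit -k -t /etc/v5srvtab k1portal
oklist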
11 Create a trigger for the Kerberos user corresponding to portal that alters the session to set the current schema to PORTAL.
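A minimal sketch of such a trigger, assuming the Kerberos user [email protected] shown later in this appendix; the trigger name is illustrative:

CREATE OR REPLACE TRIGGER portal_schema_logon_trg
AFTER LOGON ON "[email protected]".SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA=PORTAL';
END;
/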
Portal Modifications
1 Copy /etc/krb5.conf from the KDC to the Portal server at /etc/krb5.conf.
2 Copy the keytab file from KDC to Portal at /etc/v5srvtab.
Note: The exported keytab file can be removed from KDC once it has been
copied to portal server.
Note: For more information, see "Exporting service and user principals to keytab file on KDC" on page 296.
3 Modify the owner and permissions of the two copied files using the following commands:
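A sketch of the commands, modeled on the cache-file permissions shown later in this appendix; the owner and mode are assumptions:

chown <portal user>:<portal group> /etc/krb5.conf /etc/v5srvtab
chmod 444 /etc/krb5.conf /etc/v5srvtab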
5 Modifications to the property file are required because when JDBC tries to make multiple connections to the Oracle DB, Oracle treats this as a replay attack and errors out.
To avoid the error, ensure that the [libdefaults] section in the Kerberos configuration file /etc/krb5.conf on the KDC and client machines is configured with forwardable = false.
To apply the update, restart the kdc and admin services on the KDC server using the following commands:
systemctl restart krb5kdc.service
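The companion restart for the admin service, assuming the standard RHEL service name:

systemctl restart kadmin.service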
7 The Tomcat user must have read privileges on the cache file. To ensure that the Tomcat OS user can make a JDBC connection to the Oracle DB, use the following commands:
# chown <portal user>:<portal group> /tmp/portal_kcache;
# chmod 444 /tmp/portal_kcache;
■ db.url=jdbc:oracle:thin:@(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)
(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=scdb)))
Host and Service name could be different here.
■ db.user=<kerberos user name>@<domain realm name>
For example: [email protected] Combination of kerberos
portal user name and domain realm name
■ db.auth.scheme=kerberos
■ db.connection.expiration=5
<dataSource>
<Driver>oracle.jdbc.driver.OracleDriver</Driver>
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=scdb)))</URL>
<UserId><kerberos user name>@<domain realm name></UserId>
<Password>Z0Q5W+lQD2jreQaLBoYsviYO21WGOq5iTEo0Ad2uUj/e0GtqPkOtXFblKxCse
KXO4VhpIQwwfrSfe59nGy156DV8lYoa7HWmL0hF+kAZXOoXfIN5YRAGfqDbCwrKQdtPY7pQh
uTkZMPLl0d9Kzy6sLGMb/33L4hKuEl0ZZN2FG5US26JZ/uSOBF7T69ppqxGqXMleZ19QBcv
UElLwJTn52SurL+K3RjCY7Xi0VJb4wLkax07xCkpSK9dJ6NMFJS3ybWP4jNs3rC3roudZak8
wGqLNhAacyXgW4pMpgigVjGwNr0N8rJIgcGmXgAxSNs0qmQItuXPIyqGf+nWWEfScQ==
</Password>
<oracle_service_name>scdb</oracle_service_name>
<ro_user>aptare_ro</ro_user>
<ro_password>U9a7a+af94q0CUaIfzaVmYl1P1DhdQW96CQiYWgxUGSV5sfVVsxoWF5Riy
V85MD8V0Ogy7UJo1sFmAL36KjDy8LA61pKeO4X39hRK/g8vvl/xNnG5bBYIF04/1LwD2FTz
0lJERWopKVZ6pd6TkT0mGeKrnu2oYi97GtlW4J73tPGTFRhHyVw7yZKMmaxbs/FBwrz5aIf
je3rT0w85m7Obtrjf2nJ2HjsaHnmToh0Ua96xlshjrE75UbaLMu0QEcF3PYF3qufYVIegn
4VGSHcpsU/AFzurKpr0JTsU/6VqvdE4veBLv4FH5D05bRetaOA0SGKCazWA50
xiirwocvgyw==
</ro_password>
<MaxConnections>125</MaxConnections>
<MinConnections>5</MinConnections>
<ConnExpirationTime>5</ConnExpirationTime>
<authScheme>kerberos</authScheme>
<portalKcacheFile>/tmp/portal_kcache</portalKcacheFile>
<kKeyTabFile>/etc/v5srvtab</kKeyTabFile>
</dataSource>
# sqlplus / as sysdba
2. Ensure the portal cache file is valid and that the Tomcat user has read permission.
Post upgrade
The following steps must be performed after the upgrade.
1. Revoke the DBA role and grant a specific list of privileges to Kerberos users after a successful upgrade. k1portal is the Kerberos username here; it can vary from environment to environment.
Under the sys user, perform the revoke tasks below:
2. Again under the sys user, run the individual PL/SQL scripts to grant the list of required privileges to Kerberos-enabled users for the normal functioning of the ITA application.
3. Ensure that the correct Kerberos username is given as arguments to the script.
su - aptare
sqlplus "/ as sysdba"
/opt/aptare/bin/tomcat-portal restart
/opt/aptare/bin/tomcat-agent restart
Note: The Kerberos cache file must not expire, and the Tomcat and Aptare users must have access to the cache file. To ensure this, add a script to crontab that regenerates the cache file, as below:
# cat krb_cache_refresh.sh
su - aptare (log in as the oracle user)
okinit -k -t /etc/v5srvtab k1portal
kinit -k -t /etc/v5srvtab [email protected] -c /tmp/portal_kcache
chmod 444 /tmp/portal_kcache
chown <portal user>:<portal group> /tmp/portal_kcache
Authentication process
The authentication process includes the following:
■ The user initiates an Oracle Net connection from the client to the server using TLS.
■ TLS performs the handshake between the client and the server.
■ After a successful handshake, the server verifies whether the user has
appropriate authorization to access the database.
Configure TLS in Oracle with NetBackup IT Analytics on Linux in split architecture
su - aptare
mkdir /opt/aptare/oracle/network/server_wallet
3 Create an empty wallet for the Oracle server with auto login enabled.
5 Check the contents of the wallet. Verify that the self-signed certificate is a trusted certificate.
6 Export the certificate so that it can be loaded into the client wallet later.
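A sketch of the orapki commands these steps correspond to; the DN, key size, validity, and wallet password are illustrative, and the export file name matches the certificate file referenced later in this chapter:

orapki wallet create -wallet /opt/aptare/oracle/network/server_wallet -auto_login -pwd <wallet_password>
orapki wallet add -wallet /opt/aptare/oracle/network/server_wallet -dn "CN=<oracle_host>" -keysize 2048 -self_signed -validity 365 -pwd <wallet_password>
orapki wallet display -wallet /opt/aptare/oracle/network/server_wallet
orapki wallet export -wallet /opt/aptare/oracle/network/server_wallet -dn "CN=<oracle_host>" -cert /opt/aptare/oracle/network/server_wallet/server-cert-db.crt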
su - aptare
2 Create a directory client_wallet on the client system to store the client wallet.
mkdir /opt/aptare/oracle/network/client_wallet
3 Create a wallet for the Oracle client. Create an empty wallet with auto login
enabled.
4 Add a self-signed certificate in the wallet. A new pair of private/public keys are
created at this stage.
5 Check the contents of the wallet. Verify that the self-signed certificate is both
a user and a trusted certificate.
6 Export the certificate so that it can be loaded into the server wallet later.
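The client-side steps follow the same pattern against the client wallet; the client certificate file name is illustrative:

orapki wallet create -wallet /opt/aptare/oracle/network/client_wallet -auto_login -pwd <wallet_password>
orapki wallet add -wallet /opt/aptare/oracle/network/client_wallet -dn "CN=<client_host>" -keysize 2048 -self_signed -validity 365 -pwd <wallet_password>
orapki wallet export -wallet /opt/aptare/oracle/network/client_wallet -dn "CN=<client_host>" -cert /opt/aptare/oracle/network/client_wallet/client-cert-db.crt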
3 Check the contents of the client wallet. Note that the server certificate is now
included in the list of trusted certificates.
5 Check the contents of the server wallet. Note that the client certificate is now
included in the list of trusted certificates.
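A sketch of the corresponding exchange, importing each exported certificate as a trusted certificate into the opposite wallet (file names as above):

orapki wallet add -wallet /opt/aptare/oracle/network/client_wallet -trusted_cert -cert /opt/aptare/oracle/network/server_wallet/server-cert-db.crt -pwd <wallet_password>
orapki wallet add -wallet /opt/aptare/oracle/network/server_wallet -trusted_cert -cert /opt/aptare/oracle/network/client_wallet/client-cert-db.crt -pwd <wallet_password>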
lsnrctl stop
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /opt/aptare/oracle/network/server_wallet)
)
)
/opt/aptare/oracle/network/server_wallet
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /opt/aptare/oracle/network/server_wallet)
)
)
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SQLNET.WALLET_OVERRIDE = TRUE
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
lsnrctl start
lsnrctl status
Step 5: Configure the Oracle client to listen for TCPS connections on the client system. Configure the listener.ora and sqlnet.ora files using the following steps. In the procedure below, host is the IP address used in the configuration and the wallet location is /opt/aptare/oracle/network/client_wallet.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /opt/aptare/oracle/network/client_wallet)
)
)
/opt/aptare/oracle/network/client_wallet
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /opt/aptare/oracle/network/client_wallet)
)
)
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SQLNET.WALLET_OVERRIDE = TRUE
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
sqlplus username/password@dbService
Step 6: Load Oracle server wallet certificate to the portal and upgrader Java
KeyStore.
1 Login as a root user.
2 Add server certificate in portal Java.
cd /usr/java/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/client_wallet/server-cert-db.crt
-keystore /usr/java/lib/security/cacerts
password: changeit
cd /opt/aptare/upgrade/jre/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/client_wallet/server-cert-db.crt
-keystore /opt/aptare/upgrade/jre/lib/security/cacerts
password: changeit
Step 7: Modify connection URL in the portal and receiver property file.
1 Stop portal and agent services.
/opt/aptare/bin/tomcat-portal stop
/opt/aptare/bin/tomcat-agent stop
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
/opt/aptare/bin/tomcat-portal start
/opt/aptare/bin/tomcat-agent start
Configure TLS in Oracle with NetBackup IT Analytics on Linux in non-split architecture
Step 1: Configure Oracle wallet for the server.
1 Login as oracle user.
su - aptare
mkdir /opt/aptare/oracle/network/server_wallet
3 Create an empty wallet for the Oracle server with auto login enabled.
5 Check the contents of the wallet. Notice the self-signed certificate is both a
user and trusted certificate.
6 Export the certificate so it can be loaded into the client wallet later.
7 Check whether the certificate has been exported to the above directory.
Step 2: Configure Oracle wallet for client application.
1 Login as oracle user.
su - aptare
2 Create a directory on the client system to store the client wallet. Call it
client_wallet. Create it under the /opt/aptare/oracle/network folder.
mkdir /opt/aptare/oracle/network/client_wallet
3 Create a wallet of the oracle client. Create an empty wallet with auto login
enabled.
2 Check the contents of the client wallet. Note that the server certificate is now
included in the list of trusted certificates.
Step 4: Configure the Oracle database to listen for TCPS connection: Configure
the listener.ora, tnsnames.ora, and sqlnet.ora files on the database server
using the following steps. In these steps, host is the oracle server IP address
and the server wallet location is /opt/aptare/oracle/network/server_wallet.
1 Stop the Oracle listener before updating the files.
lsnrctl stop
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /opt/aptare/oracle/network/server_wallet)
)
)
/opt/aptare/oracle/network/server_wallet
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
lsnrctl start
lsnrctl status
sqlplus username/password@dbService
Step 5: Load oracle server wallet certificate to the portal and upgrader Java
KeyStore.
1 Add server certificate in portal Java.
cd /usr/java/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/server_wallet/server-cert-db.crt
-keystore /usr/java/lib/security/cacerts
password: changeit
cd /opt/aptare/upgrade/jre/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/server_wallet/server-cert-db.crt
-keystore /opt/aptare/upgrade/jre/lib/security/cacerts
password: changeit
Step 6: Modify connection URL in the portal and receiver property file.
1 Login as a root user.
2 Stop portal and agent services.
/opt/aptare/bin/tomcat-portal stop
/opt/aptare/bin/tomcat-agent stop
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))</URL>
/opt/aptare/bin/tomcat-portal start
/opt/aptare/bin/tomcat-agent start
Configure TLS in Oracle with NetBackup IT Analytics on Windows in split architecture
Step 1: Configure Oracle wallet for the server.
mkdir C:\opt\oracle\network\server_wallet
2 Create an empty wallet for the Oracle server with auto login enabled.
4 Check the contents of the wallet. Notice the self-signed certificate is both a
user and trusted certificate.
5 Check whether the certificate has been exported to the above directory.
6 Make sure the oracle service user can access the wallet file cwallet.sso
(READ permission).
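On Windows, the same orapki commands shown earlier apply with Windows paths; for example, creating the server wallet (values illustrative):

orapki wallet create -wallet C:\opt\oracle\network\server_wallet -auto_login -pwd <wallet_password>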
Step 2: Configure Oracle wallet for client application.
1 Create a directory on the client machine to store the client wallet. Call it
client_wallet. Create it under the C:\opt\oracle\network folder.
mkdir C:\opt\oracle\network\client_wallet
2 Create a wallet for the Oracle client. Create an empty wallet with auto login
enabled.
4 Check the contents of the wallet. Note that the self-signed certificate is both a
user and a trusted certificate.
5 Export the certificate, so it can be loaded into the server wallet later.
3 Check the contents of the client wallet. Note that the server certificate is now
included in the list of trusted certificates.
5 Check the contents of the server wallet. Note that the client certificate is now
included in the list of trusted certificates.
lsnrctl stop
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\opt\oracle\network\server_wallet)
)
)
C:\opt\oracle\network\server_wallet
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\opt\oracle\network\server_wallet)
)
)
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SQLNET.WALLET_OVERRIDE = TRUE
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
lsnrctl start
lsnrctl status
sqlplus username/password@service_name
Step 5: Configure the Oracle client to listen for TCPS connections on the client system. Configure the listener.ora and sqlnet.ora files using the following steps. In the procedure below, host is the IP address used in the configuration and the wallet location is C:\opt\oracle\network\client_wallet.
Configure TLS-enabled Oracle database on NetBackup IT Analytics Portal and data receiver 325
Configure TLS in Oracle with NetBackup IT Analytics on Windows in split architecture
Update listener.ora with the TCPS endpoint and the client wallet location:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\opt\oracle\network\client_wallet)
)
)
Update sqlnet.ora with the client wallet location and TLS parameters:
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\opt\oracle\network\client_wallet)
)
)
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SQLNET.WALLET_OVERRIDE = TRUE
Update tnsnames.ora with a TCPS entry for the database service:
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
Step 6: Load the Oracle server wallet certificate into the portal and upgrader
Java KeyStore.
1 Log in as an administrator.
2 Add the server certificate to the portal Java keystore.
cd C:\opt\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\jre\lib\security\cacerts
password: changeit
cd C:\opt\jdk\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\jdk\lib\security\cacerts
password: changeit
cd C:\opt\aptare\upgrade\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\aptare\upgrade\jre\lib\security\cacerts
password: changeit
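To verify the imports, you can list the alias in each keystore; shown here for
the portal JRE keystore (repeat for the JDK and upgrade JRE paths):
keytool -list -alias ora_server_cert -keystore C:\opt\jre\lib\security\cacerts -storepass changeit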
Step 7: Modify the connection URL in the portal and receiver property files.
1 Stop the portal and agent services.
2 Modify the database URL in C:\opt\aptare\portalconf\portal.properties.
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
3 Modify the database URL in the data receiver configuration file.
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))</URL>
4 Start the portal and agent services.
Configure TLS in Oracle with NetBackup IT Analytics on Windows in non-split architecture
Step 1: Configure the Oracle wallet for the server.
1 Create a directory on the database server to store the server wallet. Call it
server_wallet. Create it under the C:\opt\oracle\network folder.
mkdir C:\opt\oracle\network\server_wallet
2 Create an empty wallet for the Oracle server with auto login enabled.
3 Add a self-signed certificate to the wallet.
4 Check the contents of the wallet. Note that the self-signed certificate is both a
user and a trusted certificate.
5 Export the certificate so it can be loaded into the client wallet later.
6 Check whether the certificate has been exported to the above directory.
7 Make sure the Oracle service user can access the wallet file cwallet.sso
(READ permission).
Step 2: Configure the Oracle wallet for the client application.
1 Log in as the Oracle service user.
2 Create a directory on the client system to store the client wallet. Call it
client_wallet. Create it under the C:\opt\oracle\network folder.
mkdir C:\opt\oracle\network\client_wallet
3 Create an empty wallet for the Oracle client with auto login enabled.
1 Load the server certificate into the client wallet as a trusted certificate.
2 Check the contents of the client wallet. Note that the server certificate is now
included in the list of trusted certificates.
Step 4: Configure the Oracle database to listen for TCPS connections. Configure
the listener.ora, tnsnames.ora, and sqlnet.ora files on the database server
using the following steps. In these steps, host is the Oracle server IP address
and the server wallet location is C:\opt\oracle\network\server_wallet.
1 Stop the Oracle listener before updating the files.
lsnrctl stop
2 Update listener.ora with the TCPS endpoint and the server wallet location:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCPS)(HOST = xx.xx.xx.xx)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = C:\opt\oracle\network\server_wallet)
)
)
3 Update tnsnames.ora with a TCPS entry for the database service:
SCDB =
(DESCRIPTION =
(ADDRESS=
(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)
(PORT=2484)
)
(CONNECT_DATA=(SERVICE_NAME=scdb)(SID=SCDB))
)
4 Restart the listener and verify that a TCPS connection succeeds.
lsnrctl start
lsnrctl status
sqlplus username/password@service_name
Step 5: Load the Oracle server wallet certificate into the portal and upgrader
Java KeyStore.
1 Add the server certificate to the portal Java keystore.
cd C:\opt\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\server_wallet\server-cert-db.crt -keystore
C:\opt\jre\lib\security\cacerts
password: changeit
cd C:\opt\jdk\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\server_wallet\server-cert-db.crt -keystore
C:\opt\jdk\lib\security\cacerts
password: changeit
cd C:\opt\aptare\upgrade\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\server_wallet\server-cert-db.crt -keystore
C:\opt\aptare\upgrade\jre\lib\security\cacerts
password: changeit
Step 6: Modify the connection URL in the portal and receiver property files.
1 Stop the portal and agent services.
2 Modify the database URL in C:\opt\aptare\portalconf\portal.properties.
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
3 Modify the database URL in the data receiver configuration file.
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))</URL>
On Windows
Step 1: Load the Oracle server wallet certificate into the portal and upgrader
Java KeyStore. This step is required only if the wallet certificate is self-signed.
cd C:\opt\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\jre\lib\security\cacerts
password: changeit
cd C:\opt\jdk\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\jdk\lib\security\cacerts
password: changeit
cd C:\opt\aptare\upgrade\jre\bin
keytool -import -trustcacerts -alias ora_server_cert -file
C:\opt\oracle\network\client_wallet\server-cert-db.crt -keystore
C:\opt\aptare\upgrade\jre\lib\security\cacerts
password: changeit
Step 2: Ensure the Oracle service user has READ access to the cwallet.sso file
of the server wallet. To provide the permission (a command-line alternative
follows this list):
1 Right-click the cwallet.sso file of the server wallet and select Properties.
2 Go to the Security tab and click Edit under the group or user names.
3 Click Add, search for the Oracle service user, and click OK.
4 Select the READ permission and click OK.
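Equivalently, from an elevated command prompt, icacls can grant the read
permission; <oracle_service_user> is a placeholder for the account that runs
the Oracle service:
icacls C:\opt\oracle\network\server_wallet\cwallet.sso /grant <oracle_service_user>:R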
Step 3: Modify the connection URL in the portal and receiver property files.
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))</URL>
On Linux
Step 1: Load the Oracle server wallet certificate into the portal and upgrader
Java KeyStore. This step is required only if the wallet certificate is self-signed.
1 Log in as the root user.
2 Add the server certificate to the portal Java keystore.
cd /usr/java/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/client_wallet/server-cert-db.crt
-keystore /usr/java/lib/security/cacerts
password: changeit
cd /opt/aptare/upgrade/jre/bin
keytool -import -trustcacerts -alias ora_server_cert -file
/opt/aptare/oracle/network/client_wallet/server-cert-db.crt
-keystore /opt/aptare/upgrade/jre/lib/security/cacerts
password: changeit
Step 2: Modify the connection URL in the portal and receiver property files.
1 Stop the portal and agent services.
/opt/aptare/bin/tomcat-portal stop
/opt/aptare/bin/tomcat-agent stop
2 Modify the database URL in /opt/aptare/portalconf/portal.properties.
db.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))
3 Modify the database URL in the data receiver configuration file.
<URL>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=xx.xx.xx.xx)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=SCDB)))</URL>
4 Start the portal and agent services.
/opt/aptare/bin/tomcat-portal start
/opt/aptare/bin/tomcat-agent start
Appendix C
NetBackup IT Analytics for
NetBackup on Kubernetes
and appliances
This appendix includes the following topics:
Note: Starting with the NetBackup 10.3 Cloud Scale release, the Data Collector
is supported on the primary server pod.
On a Kubernetes deployment, access the shell of the primary server pod first
and then switch to the root user using sudo. On a NetBackup Appliance, access
the shell by creating a NetBackup CLI user.
To configure NetBackup IT Analytics for NetBackup deployment
1 Create a DNS server entry such that the IP of the NetBackup IT Analytics
Portal resolves to a single FQDN. The IP of the NetBackup IT Analytics
Portal must resolve to:
itanalyticsagent.<yourdomain>
aptareagent.<yourdomain>
cd "/mnt/nbdata/"
mkdir analyticscollector
If the Portal version is older than 11.3, create the response file with the
following contents.
COLLECTOR_NAME=<your-collector-name>
COLLECTOR_PASSCODE=<your-password>
DR_URL=<http>/<https>://itanalyticsagent.<yourdomain>
COLLECTOR_KEY_PATH=<path to your-collector-name.key>
HTTP_PROXY_CONF=N
HTTP_PROXY_ADDRESS=
HTTP_PROXY_PORT=
HTTPS_PROXY_ADDRESS=
HTTPS_PROXY_PORT=
PROXY_USERNAME=
PROXY_PASSWORD=
PROXY_EXCLUDE=
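For illustration, a hypothetical pre-11.3 response file might look like this;
the collector name, passcode, domain, and key path are invented placeholders:
COLLECTOR_NAME=nbu-flex-collector
COLLECTOR_PASSCODE=MyS3cretPasscode
DR_URL=https://itanalyticsagent.example.com
COLLECTOR_KEY_PATH=/mnt/nbdata/analyticscollector/nbu-flex-collector.key
HTTP_PROXY_CONF=N
HTTP_PROXY_ADDRESS=
HTTP_PROXY_PORT=
HTTPS_PROXY_ADDRESS=
HTTPS_PROXY_PORT=
PROXY_USERNAME=
PROXY_PASSWORD=
PROXY_EXCLUDE=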
If the Portal version is 11.3 or later, create the response file with the following
contents.
COLLECTOR_REGISTRATION_PATH=<keyfile path>
HTTP_PROXY_CONF=N
HTTP_PROXY_ADDRESS=
HTTP_PROXY_PORT=
HTTPS_PROXY_ADDRESS=
HTTPS_PROXY_PORT=
PROXY_USERNAME=
PROXY_PASSWORD=
PROXY_EXCLUDE=
10 Configure the Data Collector with the NetBackup IT Analytics Portal as follows.
Note: If the Data Collector installed is of a lower version than the NetBackup
IT Analytics Portal, wait for the Data Collector auto-upgrade to finish before
you proceed.
/usr/openv/analyticscollector/installer/dc_installer.sh -c
/usr/openv/analyticscollector/installer/responsefile.sample
/usr/openv/analyticscollector/mbs/bin/checkinstall.sh
If you are not logged in as the root user (for example, as a NetBackup CLI
user), run the same commands with sudo:
sudo /usr/openv/analyticscollector/installer/dc_installer.sh
-c /usr/openv/analyticscollector/installer/responsefile.sample
sudo /usr/openv/analyticscollector/mbs/bin/checkinstall.sh
11 Check the Data Collector services status by running the following command
and ensure that the services are up and running:
/usr/openv/analyticscollector/mbs/bin/aptare_agent status
For more information about the NetBackup IT Analytics Data Collector policy,
see the NetBackup IT Analytics User Guide.
Note: Remote data collection from the NetBackup primary server cannot be
configured when multi-factor authentication is configured on the NetBackup
Appliance.
Configuring the primary server with NetBackup IT Analytics tools is supported
only once from the primary server custom resource.
For more information about the NetBackup IT Analytics Data Collector policy,
see the Add a Veritas NetBackup Data Collector policy section.
For more information about adding NetBackup Primary Servers within the Data
Collector policy, see the Add/Edit NetBackup Primary Servers within the Data
Collector policy section in the NetBackup IT Analytics Data Collector
Installation Guide for Backup Manager.
To change the already configured public key
1 Connect to the NetBackup Primary host or container.
2 Copy the new public keys to the
/home/nbitanalyticsadmin/.ssh/authorized_keys and
/mnt/nbdata/.ssh/nbitanalyticsadmin_keys files.
3 Restart the sshd service using the systemctl restart sshd command.
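To confirm the new key is accepted, you can attempt an SSH login as the
nbitanalyticsadmin user; the private key path and host name below are
placeholders:
ssh -i /path/to/new_private_key nbitanalyticsadmin@<primary-server-host>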
Note: Starting with the NetBackup 10.3 Cloud Scale release, the Data Collector
is supported on the primary server.
itanalyticsagent.<yourdomain>
aptareagent.<yourdomain>
COLLECTOR_NAME=name_of_the_data_collector
COLLECTOR_PASSCODE=passcode_for_the_data_collector
DR_URL=data_receiver_URL
COLLECTOR_KEY_PATH=path_to_the_key_file
HTTP_PROXY_CONF=N
HTTP_PROXY_ADDRESS=
HTTP_PROXY_PORT=
HTTPS_PROXY_ADDRESS=
HTTPS_PROXY_PORT=
PROXY_USERNAME=
PROXY_PASSWORD=
PROXY_EXCLUDE=
■ Run the /usr/openv/analyticscollector/installer/dc_installer.sh
-c /usr/openv/analyticscollector/installer/responsefile.sample
command to configure the Data Collector with the NetBackup IT Analytics Portal.
8 Check the Data Collector services status by running the following command
and ensure that the services are up and running:
/usr/openv/analyticscollector/mbs/bin/aptare_agent status
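As in the appliance procedure earlier in this appendix, checkinstall.sh can
also be used to validate the collector's connection to the Portal:
/usr/openv/analyticscollector/mbs/bin/checkinstall.sh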
For more information about the NetBackup IT Analytics Data Collector policy,
see the NetBackup IT Analytics User Guide.
3 Create and copy a NetBackup API key from the NetBackup web UI.
Configuring the primary server with NetBackup IT Analytics tools is supported
only once from the primary server custom resource.
For more information about the NetBackup IT Analytics Data Collector policy,
see the Add a Veritas NetBackup Data Collector policy section.
For more information about adding NetBackup Primary Servers within the Data
Collector policy, see the Add/Edit NetBackup Primary Servers within the Data
Collector policy section in the NetBackup IT Analytics Data Collector
Installation Guide for Backup Manager.
To change the already configured public key
1 Execute the following command in the primary server pod:
kubectl exec -it -n <namespace> <primaryServer-pod-name> --
/bin/bash
2 Copy the new public keys to the
/home/nbitanalyticsadmin/.ssh/authorized_keys and
/mnt/nbdata/.ssh/nbitanalyticsadmin_keys files.
3 Restart the sshd service using the systemctl restart sshd command.