IBM Tivoli Netcool Performance Manager
Wireline Component
Document Revision R2E1
Installation Guide
Note
Before using this information and the product it supports, read the information in Notices on page 251.
Copyright IBM Corporation 2006, 2012.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Preface . . . . . . . . . . . . . . vii
Audience . . . . . . . . . . . . . . . vii
Tivoli Netcool Performance Manager - Wireline
Component . . . . . . . . . . . . . . vii
The Default UNIX Shell . . . . . . . . . . ix
Chapter 1. Introduction . . . . . . . . 1
Tivoli Netcool Performance Manager architecture . . 1
Co-location rules . . . . . . . . . . . . 2
Inheritance . . . . . . . . . . . . . . 4
Notable subcomponents and features . . . . . 5
Typical installation topology . . . . . . . . . 8
Basic topology scenario . . . . . . . . . . 8
Intermediate topology scenario . . . . . . . 9
Advanced topology scenario. . . . . . . . 10
Tivoli Netcool Performance Manager distribution. . 11
Chapter 2. Requirements . . . . . . . 13
Minimum requirements for installation . . . . . 13
Solaris hardware requirements . . . . . . . 13
AIX hardware requirements . . . . . . . . 14
Linux hardware requirements . . . . . . . 14
Oracle deployment space requirements . . . . 14
Tivoli Integrated Portal deployment space
requirements . . . . . . . . . . . . . 15
Screen resolution . . . . . . . . . . . 15
Minimum requirements for a proof of concept
installation . . . . . . . . . . . . . . 15
Solaris hardware requirements (POC). . . . . 15
AIX hardware requirements (POC) . . . . . 16
Linux hardware requirements (POC) . . . . . 16
Screen resolution . . . . . . . . . . . 16
Supported operating systems and modules . . . . 17
Solaris 10 for SPARC platforms . . . . . . . 17
AIX platforms . . . . . . . . . . . . 20
Linux platforms . . . . . . . . . . . . 24
Required user names . . . . . . . . . . . 27
pvuser . . . . . . . . . . . . . . . 27
oracle . . . . . . . . . . . . . . . 27
Ancillary software requirements . . . . . . . 28
FTP support . . . . . . . . . . . . . 28
OpenSSH and SFTP . . . . . . . . . . 28
File compression. . . . . . . . . . . . 29
DataView load balancing . . . . . . . . . 29
Oracle Database . . . . . . . . . . . . 30
Oracle server . . . . . . . . . . . . . 30
Tivoli Common Reporting client . . . . . . 30
Java Runtime Environment (JRE) . . . . . . 31
Web browsers and settings . . . . . . . . 31
X Emulation . . . . . . . . . . . . . 32
WebGUI integration . . . . . . . . . . 33
Microsoft Office Version . . . . . . . . . 33
Chapter 3. Installing and configuring
the prerequisite software . . . . . . . 35
Overview . . . . . . . . . . . . . . . 35
Supported platforms . . . . . . . . . . 36
Pre-Installation setup tasks . . . . . . . . . 36
Setting up a remote X Window display . . . . 36
Changing the ethernet characteristics . . . . . 37
Adding the pvuser login name . . . . . . . 40
Setting the resource limits (AIX only) . . . . . 42
Set the system parameters (Solaris only) . . . . 43
Enable FTP on Linux systems (Linux only) . . . 45
Disable SELinux (Linux only) . . . . . . . 45
Set the kernel parameters (Linux only) . . . . 45
Replace the native taring utility with gnu tar
(AIX 7.1 only) . . . . . . . . . . . . 46
Install a libcrypto.so . . . . . . . . . . 47
Deployer pre-requisites . . . . . . . . . . 47
Operating system check . . . . . . . . . 47
Mount points check . . . . . . . . . . 48
Authentication between distributed servers . . . 48
Downloading the Tivoli Netcool Performance
Manager distribution to disk . . . . . . . . 48
Downloading Tivoli Common Reporting to disk 49
General Oracle setup tasks . . . . . . . . . 49
Specifying a basename for DB_USER_ROOT . . 50
Specifying Oracle login passwords. . . . . . 51
Assumed values . . . . . . . . . . . . 52
Install Oracle 11.2.0.2 server (64-bit) . . . . . . 53
Download the Oracle distribution to disk . . . 53
Verify the required operating system packages. . 54
Run the Oracle server configuration script . . . 54
Set a password for the Oracle login name . . . 57
Run the preinstallation script . . . . . . . 57
Run the rootpre.sh script (AIX only) . . . . . 58
Verify PATH and Environment for the Oracle
login name . . . . . . . . . . . . . 58
Install Oracle using the menu-based script . . . 59
Run the root.sh script . . . . . . . . . . 61
Set the ORACLE_SID variable . . . . . . . 62
Set automatic startup of the database instance . . 63
Configure the Oracle listener . . . . . . . 63
Configure the Oracle net client . . . . . . . 65
Install Oracle 11.2.0.2 client (32-bit) . . . . . . 67
Download the Oracle distribution to disk . . . 67
Run the Oracle client configuration script . . . 68
Set a password for the Oracle login name . . . 70
Run the preinstallation script . . . . . . . 70
Verify PATH and Environment for the Oracle
login name . . . . . . . . . . . . . 71
Install the Oracle client (32-bit) . . . . . . . 71
Run the root.sh script . . . . . . . . . . 73
Configure the Oracle Net Client . . . . . . 74
Update the oracle user's .profile . . . . . . 74
Configure the Oracle Net client . . . . . . . 75
Next steps . . . . . . . . . . . . . . . 77
Chapter 4. Installing in a distributed
environment . . . . . . . . . . . . 79
Distributed installation process . . . . . . . . 79
Starting the Launchpad . . . . . . . . . . 81
Installing the Topology Editor . . . . . . . . 82
Starting the Topology Editor. . . . . . . . . 83
Creating a new topology . . . . . . . . . . 84
Adding and configuring the Tivoli Netcool
Performance Manager components . . . . . . 84
Add the hosts . . . . . . . . . . . . 84
Add a database configurations component . . . 86
Add a DataMart . . . . . . . . . . . . 87
Add a Discovery Server . . . . . . . . . 89
Add a Tivoli Integrated Portal . . . . . . . 90
Add a DataView. . . . . . . . . . . . 91
Add the DataChannel administrative components 92
Add a DataChannel . . . . . . . . . . 93
Add a Collector . . . . . . . . . . . . 95
Add a Cross Collector CME . . . . . . . . 98
Saving the topology . . . . . . . . . . . 99
Opening an existing topology file . . . . . 100
Starting the Deployer. . . . . . . . . . . 100
Primary Deployer . . . . . . . . . . . 100
Secondary Deployers . . . . . . . . . . 101
Pre-deployment check . . . . . . . . . 101
Deploying the topology . . . . . . . . . . 102
Reuse an existing Tivoli Integrated Portal and
Install DataView using a non root user on a
local host . . . . . . . . . . . . . . 104
Reuse an existing Tivoli Integrated Portal and
Install DataView using a non root user on a
remote host . . . . . . . . . . . . . 106
Next steps . . . . . . . . . . . . . . 108
Resuming a partially successful first-time
installation . . . . . . . . . . . . . . 109
Chapter 5. Installing as a minimal
deployment . . . . . . . . . . . . 111
Overview. . . . . . . . . . . . . . . 111
Before you begin . . . . . . . . . . . . 111
Special consideration . . . . . . . . . . 112
Overriding default values . . . . . . . . 112
Installing a minimal deployment . . . . . . . 113
Download the MIB-II files . . . . . . . . 113
Starting the Launchpad . . . . . . . . . 113
Start the installation . . . . . . . . . . 114
The post-installation script . . . . . . . . . 116
Next steps . . . . . . . . . . . . . . 116
Chapter 6. Modifying the current
deployment . . . . . . . . . . . . 117
Opening a deployed topology . . . . . . . . 117
Adding a new component . . . . . . . . . 118
Changing configuration parameters of existing
Tivoli Netcool Performance Manager components . 120
Moving components to a different host . . . . . 120
Moving a deployed collector to a different host . . 121
Moving a deployed SNMP collector . . . . . 121
Moving a deployed UBA bulk collector. . . . 123
Changing the port for a collector . . . . . . . 126
Modifying Tivoli Integrated Portal and Tivoli
Common Reporting ports . . . . . . . . . 126
Changing ports for the Tivoli Common
Reporting console . . . . . . . . . . . 127
Port assignments . . . . . . . . . . . 128
Viewing the application server profile . . . . 128
Chapter 7. Using the High Availability
Manager . . . . . . . . . . . . . 131
Overview. . . . . . . . . . . . . . . 131
HAM basics . . . . . . . . . . . . . . 131
The parts of a collector . . . . . . . . . 132
Clusters . . . . . . . . . . . . . . 133
HAM cluster configuration . . . . . . . . . 133
Types of spare hosts . . . . . . . . . . 133
Types of HAM clusters . . . . . . . . . 134
Example HAM clusters . . . . . . . . . 134
Resource pools . . . . . . . . . . . . . 140
How the SNMP collector works . . . . . . . 140
How failover works with the HAM and the
SNMP collector. . . . . . . . . . . . 141
Obtaining collector status . . . . . . . . 142
Creating a HAM environment . . . . . . . . 143
Topology prerequisites . . . . . . . . . 144
Procedures . . . . . . . . . . . . . 144
Create the HAM and a HAM cluster . . . . 144
Add the designated spare . . . . . . . . 145
Add the managed definitions . . . . . . . 146
Define the resource pools . . . . . . . . 147
Save and start the HAM. . . . . . . . . 149
Creating an additional HAM environment. . . 150
Modifying a HAM environment . . . . . . . 150
Removing HAM components . . . . . . . 150
Stopping and restarting modified components 151
Viewing the current configuration . . . . . . 151
Show Collector Process... dialog . . . . . . 152
Show Managed Definition... dialog . . . . . 152
Chapter 8. Enabling Common
Reporting on Tivoli Netcool
Performance Manager. . . . . . . . 155
Model Maker . . . . . . . . . . . . . 155
The Base Common Pack Suite . . . . . . . . 156
Installing the BCP package from the distribution 157
Chapter 9. Uninstalling components 159
Removing a component from the topology . . . 159
Restrictions and behavior . . . . . . . . 159
Removing a component . . . . . . . . . 160
Uninstalling the entire Tivoli Netcool Performance
Manager system . . . . . . . . . . . . 161
Order of uninstall . . . . . . . . . . . 161
Restrictions and behavior . . . . . . . . 162
Performing the uninstall. . . . . . . . . 162
Uninstalling the topology editor . . . . . . . 163
Residual files . . . . . . . . . . . . . 164
Appendix A. Remote installation
issues . . . . . . . . . . . . . . 167
When remote install is not possible . . . . . . 167
FTP is possible, but REXEC or RSH are not . . 167
Neither FTP nor REXEC/RSH are possible . . 168
Installing on a remote host using a secondary
deployer . . . . . . . . . . . . . . . 168
Appendix B. DataChannel architecture 171
Data collection . . . . . . . . . . . . . 171
Data aggregation . . . . . . . . . . . 171
Management programs and watchdog scripts 172
DataChannel application programs . . . . . 173
Starting the DataLoad SNMP collector . . . . . 175
DataChannel management components in a
distributed configuration . . . . . . . . . 176
Manually starting the Channel Manager
programs . . . . . . . . . . . . . . 176
Adding DataChannels to an existing system . . . 177
DataChannel terminology . . . . . . . . . 178
Appendix C. Aggregation sets . . . . 181
Overview. . . . . . . . . . . . . . . 181
Configuring aggregation sets . . . . . . . . 181
Installing aggregation sets . . . . . . . . . 185
Start the Tivoli Netcool Performance Manager
setup program . . . . . . . . . . . . 185
Set aggregation set installation parameters . . 185
Edit aggregation set parameters file . . . . . 188
Linking DataView groups to timezones. . . . . 189
Appendix D. Deployer CLI options 191
Using the -DTarget option . . . . . . . . . 192
Appendix E. Secure file transfer
installation . . . . . . . . . . . . 195
Overview. . . . . . . . . . . . . . . 195
Enabling SFTP . . . . . . . . . . . . . 195
Installing OpenSSH . . . . . . . . . . . 196
AIX systems. . . . . . . . . . . . . 196
Solaris systems . . . . . . . . . . . . 198
Linux systems . . . . . . . . . . . . 200
Configuring OpenSSH . . . . . . . . . . 200
Configuring the OpenSSH server . . . . . . 200
Configuring OpenSSH client . . . . . . . 201
Generating public and private keys . . . . . 201
Testing OpenSSH and SFTP . . . . . . . . 204
Troubleshooting . . . . . . . . . . . . 204
Netcool/Proviso SFTP errors . . . . . . . . 205
Appendix F. LDAP integration . . . . 207
Supported LDAP servers . . . . . . . . . 207
LDAP configuration . . . . . . . . . . . 207
Enable LDAP configuration . . . . . . . 207
Verifying the DataView installation . . . . . 208
Assigning Tivoli Netcool Performance Manager
roles to LDAP users . . . . . . . . . . 208
Appendix G. Using silent mode. . . . 211
Sample properties files . . . . . . . . . . 211
The Deployer . . . . . . . . . . . . . 211
Running the Deployer in silent mode . . . . 211
Confirming the status of a silent install . . . . 212
Restrictions . . . . . . . . . . . . . 213
The Topology Editor . . . . . . . . . . . 213
Appendix H. Installing an interim fix 215
Overview. . . . . . . . . . . . . . . 215
Installation rules . . . . . . . . . . . 215
Behavior and restrictions . . . . . . . . 215
Before you begin . . . . . . . . . . . . 216
Installing a patch . . . . . . . . . . . . 216
Appendix I. Error codes and log files 219
Error codes . . . . . . . . . . . . . . 219
Deployer messages . . . . . . . . . . 219
Topology Editor messages . . . . . . . . 232
InstallAnywhere messages . . . . . . . . 236
Log files . . . . . . . . . . . . . . . 237
COI log files. . . . . . . . . . . . . 237
Deployer log file . . . . . . . . . . . 238
Eclipse log file . . . . . . . . . . . . 238
Trace log file . . . . . . . . . . . . 238
Appendix J. Troubleshooting. . . . . 239
Deployment problems . . . . . . . . . . 239
Saving installation configuration files . . . . 241
Tivoli Netcool Performance Manager component
problems . . . . . . . . . . . . . . . 241
Topology Editor problems . . . . . . . . . 242
Telnet problems . . . . . . . . . . . . 242
Java problems . . . . . . . . . . . . . 243
Testing connectivity to the database . . . . . . 243
Testing external procedure call access . . . . . 244
Appendix K. Migrating DataView
content and users . . . . . . . . . 245
Moving DataView content between Tivoli
Integrated Portal servers. . . . . . . . . . 245
The synchronize command. . . . . . . . 245
Migrating SilverStream content to the Tivoli
Integrated Portal . . . . . . . . . . . . 246
SilverStream page conversion . . . . . . . 246
The migrate command . . . . . . . . . 248
Notices . . . . . . . . . . . . . . 251
Trademarks . . . . . . . . . . . . 255
Preface
The purpose of this manual.
This manual describes how to install the IBM Tivoli Netcool Performance Manager Wireline component, including its client-side requirements.
Tivoli Netcool Performance Manager supports the use of:
v Tivoli Integrated Portal 2.1 and
v Tivoli Integrated Portal 2.2
You should install Tivoli Integrated Portal 2.1 if your system hosts software that is
incompatible with Tivoli Integrated Portal 2.2.
To use Cognos, you must download and install a Windows version of Tivoli
Common Reporting 2.1 from xTreme Leverage.
There are two prerequisites that must be in place to use TCR/Cognos in a
Microsoft Windows environment:
v Framework Manager
v Oracle Client
Java Runtime Environment (JRE)
Required Java support.
Java Runtime Environment (JRE) 1.6 (32-bit) is required for all servers hosting
Tivoli Netcool Performance Manager components. The exception is DataMart,
which requires IBM Java 1.5 R6.
The IBM JDK is not supplied and installed automatically with the DataMart,
DataChannel, and DataLoad components. When you install those components on
servers that are remote from the server hosting the primary deployer (Topology
Editor and Deployer) or Tivoli Integrated Portal, you must deploy the required
JRE, as stated above, to those servers separately.
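One way to confirm the JRE on each remote server is to parse the version banner. This is a sketch only: the banner string below is a hypothetical sample, and on a real server you would feed in the output of java -version 2>&1 instead.

```shell
# Sketch: extract the major.minor version from a "java -version" banner
# and compare it against the 1.6 requirement. "sample" is a made-up
# banner; on a real server use: sample=$(java -version 2>&1 | head -1)
required="1.6"
sample='java version "1.6.0_29"'
ver=$(echo "$sample" | sed -n 's/.*"\([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
if [ "$ver" = "$required" ]; then
  result="JRE OK"
else
  result="JRE mismatch: found $ver, need $required"
fi
echo "$result"
```

Run this on each server that hosts DataMart, DataChannel, or DataLoad components before deploying.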
Web browsers and settings
Supported browsers.
The following browsers are required to support the Web client and provide access
to DataView reports:
Important: If you are using Tivoli Netcool Performance Manager with WebGUI, see
WebGUI integration on page 33 for the browsers supported with both.
Important: No other browser types are supported.
Table 5. Windows Clients

Windows Vista:
v Microsoft Internet Explorer 7.0, 8.0
v Mozilla Firefox 3.6

Windows XP:
v Microsoft Internet Explorer 7.0, 8.0
v Mozilla Firefox 3.6
Note: When using Windows Internet Explorer, IBM recommends that you have at
least 1 GB of memory available.
Table 6. UNIX Clients

AIX: Firefox 3.6
Red Hat 5: Firefox 3.6
Solaris 10: Firefox 3.6
The following are required browser settings:
v Enable JavaScript
v Enable cookies
Browser requirements for the Launchpad
Web browser requirements for the Launchpad.
The new Launchpad has been tested on the following browsers:
On Solaris:
v Firefox 3.6
On AIX:
v Firefox 3.6
On Linux:
v Firefox 3.6
For information about downloading and installing these browsers, see the
following web sites:
v http://www.mozilla.org/
v http://www-03.ibm.com/systems/p/os/aix/browsers/index.html
Note: You must be a registered user to use this site.
Screen resolution
Recommended screen resolution details.
A screen resolution of 1152 x 864 pixels or higher is recommended for the display
of DataView reports. Some reports may experience rendering issues at lower
resolutions.
Report Studio - Cognos
Report Studio support.
Report Studio is supported only in Microsoft Internet Explorer.
X Emulation
Remote desktop support.
For DataMart GUI access, Tivoli Netcool Performance Manager supports the
following:
v Native X Terminals
v Exceed V 6.2
The following libraries are required in order for Exceed to work with Eclipse:
v libgtk 2.10.1
v libglib 2.12.1
v libfreetype 2.1.10
v libatk 1.12.1
v libcairo 1.2.6
v libxft 2.1.6
v libpango 1.14.0
v Real VNC server 4.0
WebGUI integration
WebGUI version support.
The IBM Tivoli Netcool/OMNIbus Web GUI Integration Guide for Wireline describes
how to integrate IBM Tivoli Netcool Performance Manager with the Tivoli
Netcool/OMNIbus Web GUI.
On AIX, you may be asked if the script named rootpre.sh has been run. To
run this script:
a. Open another window.
b. Login as root.
c. Change directory to /var/tmp/oracle11202/database/rootpre.
d. Run rootpre.sh.
e. Answer Y to the installation script prompt.
10. When the installation process is finished, the installation displays a success
message. Write down the log file location to aid in troubleshooting if there is
an installation error.
11. Type C and press Enter to return to the installation menu.
12. Type 0 and press Enter to exit the installation menu.
13. Perform the steps in Run the root.sh script.
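The rootpre.sh substeps (a through e) can be consolidated into a small script. This is a sketch to run as root on the AIX host; the guard simply reports when the Oracle distribution has not been unpacked to the expected path.

```shell
# Consolidated sketch of substeps a-e: run rootpre.sh as root from the
# unpacked Oracle distribution, then answer Y in the installer window.
ROOTPRE_DIR=/var/tmp/oracle11202/database/rootpre
if [ -d "$ROOTPRE_DIR" ]; then
  cd "$ROOTPRE_DIR" && ./rootpre.sh
  status="rootpre.sh started"
else
  status="rootpre directory not found: $ROOTPRE_DIR"
fi
echo "$status"
```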
Run the root.sh script
After successfully running an Oracle client or server installation, you must run the
root.sh script.
About this task
This step is also required after an Oracle patch installation. See Install Oracle
patches.
To run the root.sh script:
Procedure
1. Log in as root or become superuser. Set the DISPLAY environment variable.
2. Change to the directory where Oracle server files were installed. (This is the
directory as set in the ORACLE_HOME environment variable.) For example:
# cd /opt/oracle/product/11.2.0-client32/
3. Run the following command:
./root.sh
Messages like the following are displayed:
Running Oracle11 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/oracle/product/11.2.0-client32/
Enter the full pathname of the local bin directory: [/usr/local/bin]:
4. If the default entry, /usr/local/bin, is writable by root, press Enter to accept
the default value. The default entry might be NFS-mounted at your site so it
can be shared among several workstations and therefore might be
write-protected. If so, enter the location of a machine-specific alternative bin
directory. (You might need to create this alternative directory at a shell prompt
first.) For example, enter /usr/delphi/bin.
5. The script continues as follows:
...
Adding entry to /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#
The script runs to completion with no further prompts.
Configure the Oracle Net Client
After the Oracle 11g Client is installed on the client machine, copy the sqlnet.ora
and tnsnames.ora files.
Procedure
1. Copy the sqlnet.ora and tnsnames.ora files from the Oracle 10g Net Client
directory, $ORACLE10g_HOME/network/admin.
2. Place the copied files in the Oracle 11g Net Client directory,
$ORACLE11g_HOME/network/admin.
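The two-step copy above can be sketched as a loop. The demonstration below runs against throwaway temporary directories so it is safe to try anywhere; on a real system, point OLD_ADMIN and NEW_ADMIN at $ORACLE10g_HOME/network/admin and $ORACLE11g_HOME/network/admin instead (the variable names are assumptions for this sketch).

```shell
# Demonstration of copying the Oracle Net client files from the old home
# to the new one, using temp directories as stand-ins for the real homes.
OLD_ADMIN=$(mktemp -d)/network/admin   # stand-in for $ORACLE10g_HOME/network/admin
NEW_ADMIN=$(mktemp -d)/network/admin   # stand-in for $ORACLE11g_HOME/network/admin
mkdir -p "$OLD_ADMIN" "$NEW_ADMIN"
echo "NAMES.DEFAULT_DOMAIN=WORLD" > "$OLD_ADMIN/sqlnet.ora"
: > "$OLD_ADMIN/tnsnames.ora"
for f in sqlnet.ora tnsnames.ora; do
  cp "$OLD_ADMIN/$f" "$NEW_ADMIN/$f"
done
```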
Update the oracle user's .profile
Modify the oracle user's .profile file.
Procedure
1. Make sure that ORACLE_HOME points to $ORACLE_BASE/product/11.2.0.
2. If there is not already an entry for TNS_ADMIN, add one.
TNS_ADMIN=$ORACLE_HOME/network/admin
When complete, the .profile should look similar to:
# -- Begin Oracle Settings --
umask 022
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/local/bin
EXTPROC_DLLS=ONLY:${LD_LIBRARY_PATH}/libpvmextc.so
export PATH ORACLE_BASE ORACLE_HOME NLS_LANG
export ORA_NLS33 LD_LIBRARY_PATH TNS_ADMIN EXTPROC_DLLS
ORACLE_SID=PV
export ORACLE_SID
# -- End Oracle Settings --
Configure the Oracle Net client
You must configure the Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance.
About this task
Next, you configure the Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance. You must perform this step for each instance of the Oracle client
software that you installed on the system.
Procedure
v You must configure sqlnet.ora and tnsnames.ora files for both Oracle server
and Oracle client installations. However, the tnsnames.ora file for client
installations should not have the EXTPROC_CONNECTION_DATA section.
v If you are installing DataView and one or more other Tivoli Netcool
Performance Manager components on the same system, you must make sure
that the tnsnames.ora and sqlnet.ora files for each set of client software are
identical. The easiest way to do this is to create these files when you configure
the first client instance for Net, and then copy them to the corresponding
directory when you configure the second instance.
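One way to confirm that the two copies stayed identical is a byte-for-byte comparison with cmp. In this sketch, temp directories stand in for the two client instances' network/admin directories.

```shell
# Sketch: verify two client instances carry identical Net files.
INST_A=$(mktemp -d)   # stand-in for the first instance's network/admin
INST_B=$(mktemp -d)   # stand-in for the second instance's network/admin
printf 'NAMES.DIRECTORY_PATH=(TNSNAMES)\nNAMES.DEFAULT_DOMAIN=WORLD\n' > "$INST_A/sqlnet.ora"
cp "$INST_A/sqlnet.ora" "$INST_B/sqlnet.ora"
if cmp -s "$INST_A/sqlnet.ora" "$INST_B/sqlnet.ora"; then
  verdict="identical"
else
  verdict="differ"
fi
echo "$verdict"
```

Repeat the same comparison for tnsnames.ora.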
Create the sqlnet.ora file
The sqlnet.ora file manages Oracle network operations. You can create a new
sqlnet.ora file, or FTP the file from your Oracle server.
About this task
To set up the TNS service names:
Procedure
1. Log in as oracle.
2. Change to the following directory:
$ cd $ORACLE_HOME/network/admin
3. To create the sqlnet.ora file, FTP the following file from your Oracle server:
/opt/oracle/admin/skeleton/bin/template.example_tnpm.sqlnet.ora
4. Add the following lines to it:
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
For example:
# sqlnet.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
Note: If you do not use WORLD as the DEFAULT_DOMAIN value, make sure you
enter the same value for DEFAULT_DOMAIN in both sqlnet.ora and tnsnames.ora.
5. Write and quit the sqlnet.ora file.
Create the tnsnames.ora file
The tnsnames.ora file maintains the relationships between logical node names and
physical locations of Oracle servers in the network.
About this task
You can create a new tnsnames.ora file, or FTP the file from your Oracle server.
To create the tnsnames.ora file:
Procedure
1. To create the tnsnames.ora file, FTP the following file from your Oracle server:
/opt/oracle/admin/skeleton/bin/template.example_tnpm.tnsnames.ora
2. Add the following lines:
# tnsnames.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
#
# For Oracle client installations, tnsnames.ora
# only needs the PV.WORLD entry.
PV.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = yourhost)
(PORT = 1521)
)
)
(CONNECT_DATA =
(SERVICE_NAME = PV.WORLD)
(INSTANCE_NAME = PV)
)
)
Note: Indents in this file must be preserved.
3. Replace the string yourhost in the line (HOST = yourhost) with the name of
your Oracle server.
Note the following:
v You will use the value in the INSTANCE_NAME field as the TNS entry
when installing DataMart.
v If you reconfigure the Oracle client to connect to a different Oracle database
in another Tivoli Netcool Performance Manager installation, be sure you
update the HOST entry in the tnsnames.ora file, then restart the Oracle
client.
v Specify the host by using the hostname only; do not use the IP address.
4. (optional) Replace the default port number 1521 in the line (PORT = 1521) with
your required port number.
5. Write and quit the file.
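The yourhost substitution in step 3 can also be done with sed. This sketch works on a trimmed copy of the template in a temp directory; "dbhost01" is a made-up hostname standing in for your Oracle server's name.

```shell
# Sketch: fill in the HOST placeholder of a tnsnames.ora copy with sed.
# The template here is a trimmed version of the file shown above.
WORK=$(mktemp -d)
cat > "$WORK/tnsnames.ora.template" <<'EOF'
PV.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS =
        (PROTOCOL = TCP)
        (HOST = yourhost)
        (PORT = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = PV.WORLD)))
EOF
# Portable form: no "sed -i", which Solaris and AIX sed do not support.
sed 's/yourhost/dbhost01/' "$WORK/tnsnames.ora.template" > "$WORK/tnsnames.ora"
grep 'HOST' "$WORK/tnsnames.ora"
```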
Test the Oracle net configuration
The steps required to test the Oracle Net configuration.
About this task
To test the Oracle Net configuration:
Procedure
1. Log in as oracle.
2. Enter a command with the following syntax:
tnsping Net_service_name 10
For example: tnsping PV.WORLD 10
3. Test again, using the same Net instance name without the domain suffix:
tnsping PV 10
Look for successful completion messages (OK).
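The tnsping check above can be wrapped in a function so a script can act on the result. This is a sketch: tnsping exists only where the Oracle client is installed, so its absence is handled explicitly rather than assumed.

```shell
# Sketch: run tnsping for a Net service name and grep its output for the
# OK marker; report when tnsping is not installed on this host.
check_tns() {
  if ! command -v tnsping >/dev/null 2>&1; then
    echo "tnsping not found in PATH"
    return 1
  fi
  if tnsping "$1" 10 | grep -q "OK"; then
    echo "Net configuration OK for $1"
  else
    echo "tnsping failed for $1"
    return 1
  fi
}
check_tns PV.WORLD || true
```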
Next steps
The steps that follow installation of the prerequisite software.
Once you have installed the prerequisite software, you are ready to begin the
actual installation of Tivoli Netcool Performance Manager. Depending on the type
of installation you require, follow the directions in the appropriate chapter:
v Chapter 4, Installing in a distributed environment, on page 79 - Describes how to
install Tivoli Netcool Performance Manager in a distributed production
environment.
v Chapter 5, Installing as a minimal deployment, on page 111 - Describes how to
install Tivoli Netcool Performance Manager as a minimal deployment, which is
used primarily for demonstration or evaluation purposes.
v If you plan to install Tivoli Netcool Performance Manager in a distributed
environment that uses clustering for high availability, review the Tivoli Netcool
Performance Manager HA (High Availability) documentation, which is available
for download at http://www-01.ibm.com/software/brandcatalog/opal/ by
searching for "Netcool Proviso HA Documentation".
Chapter 4. Installing in a distributed environment
This section describes how to install Tivoli Netcool Performance Manager for the
first time in a fresh, distributed environment.
For information about installing the Tivoli Netcool Performance Manager
components using a minimal deployment, see Chapter 5, Installing as a minimal
deployment, on page 111.
Distributed installation process
The main steps involved in a distributed installation.
A production Tivoli Netcool Performance Manager system that generates and
produces management reports for a real-world network is likely to be installed on
several servers. Tivoli Netcool Performance Manager components can be installed
to run on as few as two or three servers, up to dozens of servers.
Before installing Tivoli Netcool Performance Manager, you must have installed the
prerequisite software. For detailed information, see Chapter 3, Installing and
configuring the prerequisite software, on page 35.
In addition, you must have decided how you want to configure your system. Refer
to the following sections:
v Co-location rules on page 2
v Typical installation topology on page 8
v Appendix A, Remote installation issues, on page 167
The general steps used to install Tivoli Netcool Performance Manager are as
follows:
v Start the launchpad.
v Install the Topology Editor.
v Start the Topology Editor.
v Create the topology.
v Add the Tivoli Netcool Performance Manager components.
v Save the topology to an XML file.
v Start the deployer.
v Install Tivoli Netcool Performance Manager using the deployer.
The following sections describe each of these steps in detail.
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, Installing and
configuring the prerequisite software, on page 35 for information about tnsping.
Starting the Launchpad
The steps required to start the launchpad.
About this task
To start the launchpad:
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See Setting up a remote X Window display on page 36.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/opt/mozilla/mozilla
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER assignment cannot include spaces around the equals
sign.
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
5. Enter the following command to start the launchpad:
# ./launchpad.sh
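Steps 3 through 5 can be sketched as one script that picks the example browser path and platform directory by operating system. The default branch and the DIST_DIR value are assumptions for this sketch; substitute your actual distribution directory.

```shell
# Sketch of steps 3-5: choose the BROWSER path and launchpad directory per
# platform (paths from the text; the "*" default is an assumption).
DIST_DIR=${DIST_DIR:-/tmp/tnpm-dist}     # placeholder for your download dir
case "$(uname -s)" in
  SunOS) BROWSER=/opt/mozilla/mozilla;         PLATDIR=SOLARIS ;;
  AIX)   BROWSER=/usr/mozilla/firefox/firefox; PLATDIR=AIX ;;
  *)     BROWSER=/usr/bin/firefox;             PLATDIR=RHEL ;;
esac
export BROWSER
LAUNCHPAD_DIR=$DIST_DIR/proviso/$PLATDIR
echo "BROWSER=$BROWSER"
echo "launchpad dir: $LAUNCHPAD_DIR"
# cd "$LAUNCHPAD_DIR" && ./launchpad.sh   # run as root with DISPLAY set
```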
Installing the Topology Editor
The steps required to install the Topology Editor.
About this task
Only one instance of the Topology Editor can exist in the Tivoli Netcool Performance
Manager environment. Install the Topology Editor on the same system that will host
the database server.
You can install the Topology Editor from the launchpad or from the command line.
To install the Topology Editor:
Procedure
1. You can begin the Topology Editor installation procedure from the command
line or from the Launchpad. From the launchpad:
a. On the launchpad, click the Install Topology Editor option in the list of
tasks.
b. On the Install Topology Editor page, click the Install Topology Editor link.
From the command line:
a. Log in as root.
b. Change directory to the directory that contains the Topology Editor
installation script:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/topologyEditor/Disk1/InstData/VM
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/topologyEditor/Disk1/InstData/VM
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
c. Enter the following command:
# ./installer.bin
2. The installation wizard opens in a separate window, displaying a welcome
page. Click Next.
3. Review and accept the license agreement, then click Next.
4. Confirm the wizard is pointing to the correct directory. The default is
/opt/IBM/proviso. If you have previously installed the Topology Editor on this
system, the installer does not prompt you for an installation directory and
instead uses the directory where you last installed the application.
5. Click Next to continue.
6. Confirm the wizard is pointing to the correct base installation directory of the
Oracle JDBC driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click
Choose to navigate to another directory.
7. Click Next to continue.
8. Review the installation information, then click Run.
9. When the installation is complete, click Done to close the wizard.
The installation wizard installs the Topology Editor and an instance of the
deployer in the following directories:
Interface Directory
Topology Editor
install_dir/topologyEditor
For example:
/opt/IBM/proviso/topologyEditor
Deployer
install_dir/deployer
For example:
/opt/IBM/proviso/deployer
Results
The combination of the Topology Editor and the deployer is referred to as the
primary deployer.
For more information, see Resuming a partially successful first-time installation
on page 109.
Note: To uninstall the Topology Editor, follow the instructions in Uninstalling the
topology editor on page 163. Do not delete the /opt/IBM directory. Doing so will
cause problems when you try to reinstall the Topology Editor.
If the /opt/IBM directory is accidentally deleted, perform the following steps:
1. Change to the /var directory.
2. Rename the hidden file .com.zerog.registry.xml (for example, rename it to
.com.zerog.registry.xml.backup).
3. Reinstall the Topology Editor.
4. Rename the backup file to the original name (.com.zerog.registry.xml).
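The recovery steps above can be sketched as small shell helpers; a minimal sketch in which the function names are illustrative and only the registry file name comes from this guide:

```shell
# Hypothetical helpers for steps 1-4 above. backup_registry and
# restore_registry are illustrative names; the argument is the directory
# holding the registry (step 1 uses /var on a real system).
backup_registry() {
    # step 2: hide the installer registry so the reinstall starts fresh
    mv "$1/.com.zerog.registry.xml" "$1/.com.zerog.registry.xml.backup"
}
restore_registry() {
    # step 4: put the registry back after the reinstall
    mv "$1/.com.zerog.registry.xml.backup" "$1/.com.zerog.registry.xml"
}
# Usage on a real system:
#   backup_registry /var
#   ...reinstall the Topology Editor (step 3)...
#   restore_registry /var
```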
Starting the Topology Editor
After you have installed the Topology Editor, you can invoke it from either the
launchpad or from the command line.
Procedure
v To start the Topology Editor from the launchpad:
1. If the Install Topology Editor page is not already open, click the Install
Topology Editor option in the list of tasks to open it.
2. On the Install Topology Editor page, click the Start Topology Editor link.
v To start the Topology Editor from the command line:
1. Log in as root.
2. Change directory to the directory in which you installed the Topology Editor.
For example:
# cd /opt/IBM/proviso/topologyEditor
3. Enter the following command:
# ./topologyEditor
Note: If your DISPLAY environment variable is not set, the Topology Editor
will fail with a Java assertion message (core dump).
If you are running the Topology Editor for an AIX 6.1 or AIX 7.1
environment, use the command:
# ./topologyEditor -vm /opt/IBM/proviso/topologyEditor/jre/bin/java
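Because an unset DISPLAY causes the core dump noted above, a small guard can be placed in front of the launch command; a sketch with an illustrative function name:

```shell
# Sketch: refuse to launch when DISPLAY is unset, instead of letting the
# Topology Editor fail with a Java assertion. check_display is illustrative.
check_display() {
    if [ -z "$DISPLAY" ]; then
        echo "DISPLAY is not set; export it before starting the Topology Editor" >&2
        return 1
    fi
    return 0
}
# Usage:
#   check_display && cd /opt/IBM/proviso/topologyEditor && ./topologyEditor
```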
Creating a new topology
The steps required to create a new topology.
Procedure
1. In the Topology Editor, select Topology > Create new topology.
The New Topology window is displayed.
2. Enter the Number of resources to be managed by Tivoli Netcool Performance
Manager.
The default value is 10000. The size of your deployment affects the database
sizing.
3. Click Finish.
The Topology Editor creates the following entities:
v In the Logical view, five items are listed: Tivoli Netcool Performance
Manager Topology, Cross Collector CMEs, DataChannels, DataMarts and
Tivoli Integrated Portals.
v In the Physical view, there is a new Hosts folder.
Adding and configuring the Tivoli Netcool Performance Manager
components
Your next step is to add and configure the individual Tivoli Netcool Performance
Manager components.
Note: When performing an installation that uses non-default values, that is,
non-default usernames, passwords and locations, it is recommended that you
check both the Logical view and Physical view to ensure that they both contain the
correct values before proceeding with the installation.
Note: The value defined in the configure_client script for ORACLE_HOME is the
value needed in the Topology Editor for Oracle Home on the host level.
Add the hosts
The first step is to specify all the servers that will host Tivoli Netcool Performance
Manager components.
About this task
Each host that you define has an associated property named PV User. PV User is
the default operating system user for all Tivoli Netcool Performance Manager
components.
You can override this setting in the Advanced Properties tab when you set the
deployment properties for individual components (for example, DataMart and
DataView). This allows you to install and run different components on the same
system as different users.
Note: DataChannel components always use the default user associated with the
host.
The user account used to transfer files using FTP or SCP/SFTP during installation
is always the PV User defined at the host level, rather than component level.
To add a single host to the topology:
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. The Add Host window opens.
2. Specify the details for the host machine.
The fields are as follows:
v Host name - Enter the name of the host (for example, delphi).
v Operating system - Specifies the operating system (for example, SOLARIS).
This field is filled in for you.
v Oracle home - Specifies the default ORACLE_HOME directory for all Tivoli
Netcool Performance Manager components installed on the system (by
default /opt/oracle/product/11.2.0-client32/).
v PV User - Specifies the default Tivoli Netcool Performance Manager Unix
user (for example, pvuser) for all Tivoli Netcool Performance Manager
components installed on the system.
v PV user password - Specifies the password for the default Tivoli Netcool
Performance Manager user (for example, PV).
v Create Disk Usage Server for this Host? - Selecting this check box creates a
DataChannel subcomponent to handle disk quota and flow control.
If you have not chosen to create a Disk Usage Server, click Finish to create the
host. The Topology Editor adds the host under the Hosts folder in the Physical
view. If you have chosen to create a Disk Usage Server, click Next; the Add
Host window allows you to add details for your Disk Usage Server.
3. Specify the details for the Disk Usage Server.
The fields are as follows:
v Local Root Directory - The local DataChannel root directory. This property
allows you to differentiate between a local directory and a remote directory
mounted to allow for FTP access.
v Remote Root Directory - Remote directory mounted for FTP access. This
property allows you to differentiate between a local directory and a remote
directory mounted to allow for FTP access.
v FC FSLL - This is the Flow Control Free Space Low Limit property. When
this limit is reached, the Disk Usage Server contacts all components that
reside in this root directory and tells them to free up as much space as possible.
v FC QUOTA - This is the Flow Control Quota property. This property allows
you to set the amount of disk space in bytes available to Tivoli Netcool
Performance Manager components on this file system.
v Remote User - User account used when attempting to access this Disk Usage
Server remotely.
v Remote User Password - User account password used when attempting to
access this Disk Usage Server remotely.
v Secure file transfer to be used - Boolean indicator identifying if ssh should
be used when attempting to access this directory remotely.
v Port Number - Port number to use for remote access (sftp) if it is a
non-default port.
Click Finish to create the host. The Topology Editor adds the host under the
Hosts folder in the Physical view.
Note: The DataChannel properties will be filled in automatically at a later
stage.
Adding multiple hosts
You may wish to add multiple hosts at one time.
About this task
To add multiple hosts to the topology:
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Multiple
Hosts from the menu. The Add Multiple Hosts window opens.
2. Add new hosts by typing their names into the Host Name field as a comma
separated list.
3. Click Next.
4. Configure all added hosts.
The Configure hosts dialog allows you to enter configuration settings and
apply them to one or more of the specified hosts.
To apply configuration settings to one or more hosts:
a. Enter the appropriate host configuration values. All configuration options
are described in Steps 2, and 3 of the previous process, Add the hosts on
page 84.
b. Select the check box opposite each of the hosts to which you want to apply
the entered values.
c. Click Next. The hosts for which all configuration settings have been
specified disappear from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
5. Click Finish.
Add a database configurations component
The Database Configurations component hosts all the database-specific parameters.
About this task
You define the parameters once, and their values are propagated as needed to the
underlying installation scripts.
To add a Database Configurations component:
Procedure
1. In the Logical view, right-click the Tivoli Netcool Performance Manager
Topology component and select Add Database Configurations from the menu.
The host selection window opens.
2. You must add the Database Configuration component to the same server that
hosts the Oracle server (for example, delphi). Select the appropriate host using
the drop-down list.
3. Click Next to configure the mount points for the database.
4. Add the correct number of mount points.
To add a new mount point, click Add Mount Point. A new, blank row is added
to the window. Fill in the fields as appropriate for the new mount point.
5. Enter the required configuration information for each mount point.
a. Enter the mount point location:
v Mount Point Directory Name (for example, /raid_2/oradata)
Note: The mount point directories can be named using any string as
required by your organization's naming standards.
v Used for Metadata Tablespaces? (A check mark indicates True.)
v Used for Temporary Tablespaces? (A check mark indicates True.)
v Used for Metric Tablespaces? (A check mark indicates True.)
v Used for System Tablespaces and Redo? (A check mark indicates True.)
b. Click Back to return to the original page.
c. Click Finish to create the component.
The Topology Editor adds the new Database Configurations component to the
Logical view.
6. Highlight the Database Configurations component to display its properties.
Review the property values to make sure they are valid. For the complete list
of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide.
The Database Configurations component has the following subelements:
v Channel tablespace configurations
v Database Channels
v Database Clients configurations
v Tablespace configurations
v Temporary tablespace configurations
Note: Before you actually install Tivoli Netcool Performance Manager, verify
that both the /raid_2/oradata and /raid_3/oradata directory structures have been
created, and that the oradata subdirectories are owned by oracle:dba.
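A quick ownership check before installing can be sketched as follows; `stat -c` is the GNU/Linux form, so an `ls -ld` fallback covers Solaris and AIX, and the function name is illustrative:

```shell
# Sketch: report the owner:group of a directory. check_owner is an
# illustrative helper, not part of the product.
check_owner() {
    # GNU/Linux stat form; the ls -ld fallback covers Solaris and AIX
    stat -c '%U:%G' "$1" 2>/dev/null || ls -ld "$1" | awk '{print $3 ":" $4}'
}
# Expect "oracle:dba" for each mount point, for example:
#   check_owner /raid_2/oradata
#   check_owner /raid_3/oradata
```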
Add a DataMart
The steps required to add a DataMart component to your topology.
About this task
Tivoli Netcool Performance Manager DataMart is normally installed on the same
server on which you installed Oracle server and the Tivoli Netcool Performance
Manager database configuration. However, there is no requirement that forces
DataMart to reside on the database server.
Note the following:
v If you are installing DataMart on an AIX system or any remote AIX, Linux or
Solaris system, you must add the IBM JRE to the PATH environment variable for
the Tivoli Netcool Performance Manager Unix user, pvuser.
v You must ensure you are using the IBM JRE and not the RHEL JRE. The IBM
JRE is supplied with the Topology Editor or with Tivoli Integrated Portal. To
ensure you are using the right JRE you can either:
Set the JRE path to conform to that used by the Topology Editor. Do this
using the following commands (using the default location for the primary
deployer):
PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
export PATH
For a remote server, that is one that does not host the primary deployer, you
must download and install the required JRE, and set the correct JRE path. See
the IBM Tivoli Netcool Performance Manager: Configuration
Recommendations Guide document for JRE download details.
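The PATH change can be wrapped in a small helper; a sketch assuming the default primary-deployer location, with `prepend_path` as an illustrative name:

```shell
# Sketch: build the new PATH with the Topology Editor's bundled JRE first.
# prepend_path is an illustrative helper, not part of the product.
prepend_path() {
    # print a PATH-style string with $1 placed before $2
    printf '%s:%s\n' "$1" "$2"
}
PATH=$(prepend_path /opt/IBM/proviso/topologyEditor/jre/bin "$PATH")
export PATH
# On a correctly configured host, "java -version" should now report the IBM JRE.
```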
To add a DataMart component:
Procedure
1. In the Logical view, right-click the DataMarts folder and select Add DataMart
from the menu. The host selection host window is displayed.
2. Using the drop-down list of available hosts, select the machine on which
DataMart should be installed (for example, delphi).
3. Click Finish.
The Topology Editor adds the new DataMart x component (for example,
DataMart 1) under the DataMarts folder in the Logical view.
4. Highlight the DataMart x component to display its properties. Review the
property values to make sure they are valid. You can specify an alternate
installation user for the DataMart component by changing the values of the
USER_LOGIN and USER_PASSWORD properties in the Advanced Properties
tab. For the complete list of properties for this component, see the IBM Tivoli
Netcool Performance Manager: Property Reference Guide.
Event notification scripts
When you install the DataMart component, two event notification scripts are
installed.
The scripts are called as needed by tablespace size checking routines in Oracle and
in Tivoli Netcool Performance Manager, if either routine detects low disk space
conditions on a disk partition hosting a portion of the Tivoli Netcool Performance
Manager database. Both scripts by default send their notifications by e-mail to a
local login name.
The two files and their installation locations are as follows:
v The script installed as $ORACLE_BASE/admin/$ORACLE_SID/bin/notifyDBSpace
notifies the login name oracle by e-mail of impending database space problems.
This script is called as needed by an Oracle routine that periodically checks for
available disk space.
v The script installed as /opt/datamart/bin/notifyDBSpace notifies the login name
pvuser of the same condition. This script is called as needed by the Hourly
Loader component of Tivoli Netcool Performance Manager DataChannel. The
loader checks for available disk space before attempting its hourly upload of
data to the database.
Either file can be customized to send its warnings to a different e-mail address on
the local machine, to an SMTP server for transmission to a remote machine, or to
send the notices to the local network's SNMP fault management system (that is, to
an SNMP trap manager). You can modify either script to send notifications to an
SNMP trap, instead of, or in addition to its default e-mail notification.
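Since the guide does not show the script's contents, the following is a hypothetical sketch of such a customization; the function names, recipient address, trap destination, and enterprise OID are all placeholders:

```shell
# Hypothetical customization sketch of a notifyDBSpace-style script.
# build_subject and notify_space are illustrative names; admin@example.com,
# nms.example.com, and the OID 1.3.6.1.4.1.2.999 are placeholders.
build_subject() {
    printf 'TNPM database space warning on %s\n' "$1"
}
notify_space() {
    subject=$(build_subject "$(hostname)")
    # default behavior: e-mail, here redirected to a remote administrator
    echo "$1" | mailx -s "$subject" admin@example.com
    # optional addition: raise an SNMP trap to the fault management system
    snmptrap -v 2c -c public nms.example.com '' 1.3.6.1.4.1.2.999 \
        sysDescr.0 s "$subject"
}
```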
Add a Discovery Server
The Discovery Server is the Tivoli Netcool Performance Manager component
responsible for SNMP discovery.
About this task
You can add a discovery server for each DataMart defined in the topology.
To add a Discovery Server:
Procedure
In the Logical view, right-click the DataMart x folder and select Add Discovery
server from the menu.
The Topology Editor displays the new Discovery Server under the DataMart x
folder in the Logical view.
Adding multiple Discovery Servers
The steps required to add multiple Discovery servers.
About this task
If you want to run multiple Discovery servers on multiple hosts in your
environment, you must perform additional steps at deployment to make sure that
each host system contains identical inventory files and identical copies of the
inventory hook script. IBM recommends that you only use identically-configured
instances of the Discovery Server.
The inventory files used by the Discovery Server are configuration files named
inventory_elements.txt and inventory_subelements.txt. These files are located in
the $PVMHOME/conf directory of the system where you install the DataMart
component. Some technology packs provide custom sub-elements inventory files
with names different from inventory_subelements.txt that are also used by the
Discovery Server.
To add multiple Discovery Servers, do the following:
Procedure
v Install the primary instance of DataMart and the Discovery Server on one target
host system.
v Install and configure any required technology packs on the primary host. You
modify the contents of the inventory files during this step.
v Install secondary instances of DataMart and the Discovery Server on
corresponding target host systems.
v Replicate the inventory files from the system where the primary instance of
DataMart is running to the $PVMHOME/conf directory on the secondary hosts. You
must also replicate the InventoryHook.sh script that is located in the
$PVMHOME/bin directory and any other files that this script requires.
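The replication step can be sketched as follows, assuming SSH access as pvuser, a DataMart home of /opt/datamart, and a placeholder secondary host name:

```shell
# Sketch: list the files this guide says must be identical on every
# Discovery Server host. inventory_files is an illustrative helper; custom
# technology-pack sub-element inventory files, if any, must be copied too.
inventory_files() {
    printf '%s/conf/inventory_elements.txt\n' "$1"
    printf '%s/conf/inventory_subelements.txt\n' "$1"
    printf '%s/bin/InventoryHook.sh\n' "$1"
}
# Usage on the primary host (secondary-host is a placeholder):
#   for f in $(inventory_files /opt/datamart); do
#       scp "$f" pvuser@secondary-host:"$f"
#   done
```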
Add a Tivoli Integrated Portal
The Tivoli Integrated Portal (TIP) provides an integrated console for users to log
on and view information contained on the DataView server.
About this task
To add a Tivoli Integrated Portal:
Procedure
1. In the Logical view, right-click on the Tivoli Integrated Portals folder and select
Add TIP from the menu.
The Configure TIP Wizard is displayed.
2. The Topology Editor gives you the choice of adding an already existing Tivoli
Integrated Portal to the topology or to create a new Tivoli Integrated Portal. To
create a new TIP, select the Create a new TIP radio button. To import an
already existing Tivoli Integrated Portal into the topology, select the Import
existing TIPs from host radio button.
3. Using the drop-down list of available hosts, select the host on which Tivoli
Integrated Portal should be installed (for example, delphi).
Note: The hostname of the host selected for the TCR install must not contain
underscores. Underscores in the hostname will cause the installation of TCR to
fail.
4. Click Finish.
The Topology Editor adds the new Tivoli Integrated Portal component to the
Logical view.
5. Highlight the Tivoli Integrated Portal component to display its properties.
6. Review the other property values to make sure they are valid. For the complete
list of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide.
Discovering existing Tivoli Integrated Portals
How to update your topology so it sees existing Tivoli Integrated Portal (TIP)
instances on your system.
About this task
To discover existing Tivoli Integrated Portals:
This step runs an asynchronous check for existing Tivoli Integrated Portals on each
selected DataView host. If a Tivoli Integrated Portal is discovered on a host,
its details are added to the topology.
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. Add the host that has an existing Tivoli Integrated Portal you wish to
discover.
2. Go to the Logical view, right-click on the Tivoli Integrated Portals folder and
select Import existing TIPs from host from the menu. The Run TIP Discovery
Wizard Page is displayed.
3. Select the check box for each host on which you would like to perform Tivoli
Integrated Portal discovery.
4. Click Import TIP.
If the discovered Tivoli Integrated Portal is an old version, it is flagged within
the topology for upgrade.
Any DataView without a Tivoli Integrated Portal is flagged within the topology
for Tivoli Integrated Portal installation on that host.
The deployer will take the appropriate action when run. The discovered TIP
status is displayed as "TCR Found: <TIP Location>".
5. Click Next.
6. Configure Tivoli Integrated Portal properties.
a. Enter the appropriate host configuration values.
v TCR_INSTALLATION_DIRECTORY: This is the directory in which Tivoli
Common Reporting is installed.
v TIP_INSTALLATION_DIRECTORY: This is the directory in which Tivoli
Integrated Portal is installed.
v WAS_USER_NAME: This is the WAS user name.
v WAS_PASSWORD: This is the WAS password.
If you would like to configure LDAP for Tivoli Integrated Portal, please see
Appendix F, LDAP integration, on page 207.
b. Select the check box opposite each of the Tivoli Integrated Portal hosts to
which you want to apply the entered values.
c. Click Next.
The hosts for which all configuration settings have been specified disappear
from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
7. Click Next to add the discovered Tivoli Integrated Portals to the topology.
Note: If you discover a Tivoli Common Reporting/Tivoli Integrated Portal of
version 2.1 that was installed using the Tivoli Common Reporting installer and
not the Tivoli Netcool Performance Manager installer, the port will not align
with a Technology Pack automatically. To align the port numbers you must
specify the Tivoli Integrated Portal port when performing the Technology Pack
installation.
Add a DataView
How to add a DataView.
About this task
Note: To display DataView real-time charts, you must have the Java runtime
environment (JRE) installed on the browser where the charts are to be displayed.
You can download the JRE from the Sun download page at http://www.sun.com.
Note: If you are reusing an existing Tivoli Integrated Portal that was installed by a
user other than root, the default deployment of DataView will encounter problems.
To avoid these problems, you must remove the offending Tivoli Integrated Portal
from your topology and add both the Tivoli Integrated Portal and DataView as
a separate post-deployment step. The steps you must follow to install DataView
reusing an existing Tivoli Integrated Portal are outlined in the sections:
v Reuse an existing Tivoli Integrated Portal and Install DataView using a non
root user on a local host on page 104
v Reuse an existing Tivoli Integrated Portal and Install DataView using a non
root user on a remote host on page 106
To add a DataView component:
Procedure
In the Logical view, right-click on a Tivoli Integrated Portal and select Add
DataView from the menu. The DataView is automatically added inheriting its
properties from the Tivoli Integrated Portal instance.
Migration and synchronization of DataView
Tivoli Netcool Performance Manager supplies various options to migrate and
synchronize DataView component data.
Migrate
Migration of DataView data, that is, content and users, is solely for the purpose of
moving DataView content and users from SilverStream to Tivoli Integrated Portal.
You can run the migrate option from the Topology Editor by right-clicking a
DataView listed in the topology. You can also use the migrate command-line
option to move DataView content and users from SilverStream to Tivoli
Integrated Portal.
Note: The DataView migrate option available in the Topology Editor and through
the command line, can only be used to move DataView content and users from
SilverStream to Tivoli Integrated Portal.
Note: The migrate command is discussed in detail in The migrate command.
Synchronize
Synchronization of DataView data, that is, content and users, is solely for the
purpose of moving DataView content and users from one Tivoli Integrated Portal
server to another Tivoli Integrated Portal server. Typically this is performed to
move DataView to a new platform, such as moving from Solaris to AIX.
Note: The synchronize command is discussed in detail in The synchronize
command.
Add the DataChannel administrative components
The steps required to add DataChannel Administrative components.
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
Administrative Components from the menu. The host selection window opens.
2. Using the drop-down list of available hosts, select the machine that you want
to be the Channel Manager host for your DataChannel configuration (for
example, corinth).
3. Click Finish.
The Topology Editor adds a set of new components to the Logical view:
v Channel Manager - Enables you to start and stop individual DataChannels
and monitor the state of various DataChannel programs. There is one
Channel Manager for the entire DataChannel configuration. The Channel
Manager components are installed on the first host you specify.
v Corba Naming Server - Provides near real-time data to DataView.
v High Availability Managers - This is mainly used for large installations that
want to use redundant SNMP collection paths. The HAM constantly
monitors the availability of one or more SNMP collection hosts, and switches
collection to a backup host (called a spare) if a primary host becomes
unavailable.
v Log Server - Used to store user, debug, and error information.
v Plan Builder - Creates the metric data routing and processing plan for the
other components in the DataChannel.
v Custom DataChannel properties - These are the custom property values that
apply to all DataChannel components.
v Global DataChannel properties - These are the global property values that
apply to all DataChannel components.
Add a DataChannel
A DataChannel is a software module that receives and processes network statistical
information from both SNMP and non-SNMP (BULK) sources.
About this task
This statistical information is then loaded into a database where it can be queried
by SQL applications and captured as raw data or displayed on a portal in a variety
of reports.
Typically, collectors are associated with technology packs, a suite of Tivoli Netcool
Performance Manager programs specific to a particular network device or
technology. A technology pack tells the collector what kind of data to collect on
target devices and how to process that data. See the Technology Pack Installation
Guide for detailed information about technology packs.
To add a DataChannel:
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
2. Using the drop-down list of available hosts, select the machine that will host
the DataChannel (for example, corinth).
3. Accept the default channel number (for example, 1).
4. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel 1)
to the Logical view.
5. Highlight the DataChannel to display its properties. Note that the DataChannel
always installs and runs as the default user for the host (the Tivoli Netcool
Performance Manager Unix username, pvuser). Review the other property
values to make sure they are valid. For the complete list of properties for this
component, see the IBM Tivoli Netcool Performance Manager: Property
Reference Guide.
The DataChannel has the following subelements:
v Daily Loader x - Processes 24 hours of raw data every day, merges it
together, then loads it into the database. The loader process provides
statistics on metric channel tables and metric tablespaces.
v Hourly Loader x - Reads files output by the Complex Metric Engine (CME)
and loads the data into the database every hour. The loader process provides
statistics on metric channel tables and metric tablespaces.
The Topology Editor includes the channel number in the element names. For
example, DataChannel 1 would have Daily Loader 1 and File Transfer Engine 1.
Note: When you add DataChannel x, the Problems view shows that the
Input_Components property for the Hourly Loader is blank. This missing value
will automatically be filled in when you add a DataLoad collector (as described
in the next section) and the error will be resolved.
Separating the data and executable directories
You may wish to separate the data and executable directories for your
DataChannel.
About this task
Note: Separating the data and executable directories is only possible during the
first install activity. After the installation, you cannot modify the topology to
separate the data and the executable directories.
If you wish to separate the data and executable directories for your DataChannel,
perform the following steps:
Procedure
1. Create two directories on the DataChannel host, for example, DATA_DIR to
hold the data and EXE_DIR to hold the executable.
2. Change the LOCAL_ROOT_DIRECTORY value on that host's Disk Usage
Server to the data root folder DATA_DIR.
In the Host advanced properties you will see the DATA_DIR value propagated
to all DC folder values for the host.
3. Change DC_ROOT_EXE_DIRECTORY to the executable directory EXE_DIR.
This change will propagate to the DC conf directory, the DataChannel Bin
Directory, and the DataChannel executable file name.
Note: For advanced information about DataChannels, see Appendix B,
DataChannel architecture, on page 171.
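Step 1 of the procedure above can be sketched as follows; the helper name and the example paths are placeholders chosen for illustration:

```shell
# Sketch: create the data and executable directories on the DataChannel host
# before changing the topology properties. make_dc_dirs and the example
# paths below are illustrative, not product-defined.
make_dc_dirs() {
    mkdir -p "$1" "$2"
    # on the real host, give ownership to the Tivoli Netcool Performance
    # Manager Unix user, for example:  chown pvuser "$1" "$2"
}
# Example: make_dc_dirs /raid_4/dc_data /opt/dc_exe   # DATA_DIR, EXE_DIR
```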
Add a DataChannel Remote (DCR)
A DataChannel is a software module that receives and processes network statistical
information from both SNMP and non-SNMP (BULK) sources. A DataChannel
Remote is a DataChannel installation configuration in which the subchannel, CME,
and FTE components are installed and run on one host, while the Loader
components are installed and run on another host.
About this task
In a DataChannel remote configuration, the subchannel hosts can continue
processing data and detecting threshold violations, even while disconnected from
the Channel Manager server.
The following task assumes that the LDR and DLDR are placed on the current
host (for example, hostname1), and that the subchannel, CME, and FTE are
placed on another host (for example, hostname2).
To add a remote DataChannel:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, select From database and click Next.
4. In the Physical view, add a new host, hostname2, to the downloaded topology.
5. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
6. Using the drop-down list of available hosts, select the machine that will host
the DataChannel, hostname1.
7. Accept the default channel number (for example, 1).
8. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel2)
to the Logical view.
9. Right-click on the new DataChannel, DataChannel2, and select Add SNMP
Collector.
10. Select server hostname2 as the host.
The Collector 2.2 is added.
11. Right-click on the Complex Metric Engine.2.2 and choose Change Host.
12. Select server hostname2 as the host.
The File Transfer Engine 2.2 is added to hostname2.
Results
The setup should look as follows:
v DataChannel 2 - hostname1
v Collector 2.2 - hostname1
v Collector SNMP - hostname2
v Complex Metric Engine.2.2 - hostname2
v File Transfer Engine 2.2 - hostname2
v Daily Loader 2 - hostname1
v Hourly Loader 2 - hostname1
This places the FTE and CME on one server and the LDR and DLDR on another
server.
Add a Collector
Collectors collect and process raw statistical data about network devices obtained
from various network resources.
The collectors send the received data through a DataChannel for loading into the
Tivoli Netcool Performance Manager database. Note that collectors do not need to
be on the same machine as the Oracle server and DataMart.
Collector types
Collector types and their description, plus the steps required to associate a
collector with a Technology Pack.
About this task
There are two basic types of collectors:
v SNMP collector - Collects data using SNMP polling directly to network services.
Specify this collector type if you plan to install a Tivoli Netcool Performance
Manager SNMP technology pack. These technology packs operate in networking
environments where the associated devices on which they operate use an SNMP
protocol.
v Bulk DataLoad collector - Imports data from files. The files can have multiple
origins, including log files generated by network devices, files generated by
SNMP collectors on remote networks, or files generated by a non-Tivoli Netcool
Performance Manager network management database.
There are two types of bulk collectors:
v UBA. A Universal Bulk Adapter (UBA) Collector that handles bulk input files
generated by non-SNMP devices. Specify this collector type if you plan to install
a Tivoli Netcool Performance Manager UBA technology pack, including Alcatel
5620 NM, Alcatel 5620 SAM, and Cisco CWM.
v BCOL. A bulk Collector that retrieves and interprets the flat file output of
network devices or network management systems. This collector type is not
recommended for Tivoli Netcool Performance Manager UBA technology packs,
and is used in custom technology packs.
If you are creating a UBA collector, you must associate it with a specific technology
pack. For this reason, IBM recommends that you install the relevant technology
pack before creating the UBA collector. Therefore, you would perform the following
sequence of steps:
Procedure
1. Install Tivoli Netcool Performance Manager, without creating the UBA collector.
2. Download and install the technology pack.
3. Open the deployed topology file to load the technology pack and add the UBA
collector for it.
Note: For detailed information about UBA technology packs and the
installation process, see the Technology Pack Installation Guide. Configure the
installed pack by following the instructions in the pack-specific user's guide.
Restrictions
There are a number of collector restrictions that must be noted.
Note the following restrictions:
v The maximum collector identification number is 999.
v There is no relationship between the channel number and the collector number
(that is, there is no predefined range for collector numbers based on channel
number). Therefore, collector 555 could be attached to DataChannel 7.
v Each database channel can have a maximum of 40 subchannels (and therefore,
40 collectors).
96 IBM Tivoli Netcool Performance Manager: Installation Guide
Creating an SNMP collector
How to create an SNMP collector.
About this task
To add an SNMP collector:
Procedure
1. In the Logical view, right-click the DataChannel x folder.
The pop-up menu lists the following options:
v Add Collector SNMP - Creates an SNMP collector.
v Add Collector UBA - Creates a UBA collector.
v Add Collector BCOL - Creates a BCOL collector. This collector type is used
in custom technology packs. DataMart must be added to the topology before
a BCOL collector can be added.
Select Add Collector SNMP. The Configure Collector window opens.
2. Using the drop-down list of available hosts on the Configure Collector window,
select the machine that will host the collector (for example, corinth).
3. Accept the default collector number (for example, 1).
4. Click Finish.
The Topology Editor displays the new collector under the DataChannel x folder
in the Logical view.
5. Highlight the collector to view its properties. The Topology Editor displays
both the SNMP collector core parameters and the SNMP technology
pack-specific parameters. The core parameters are configured with all SNMP
technology packs. You can specify an alternate installation user for the SNMP
collector by changing the values of the pv_user, pv_user_group and
pv_user_password properties in the Advanced Properties tab. Review the
values for the parameters to make sure they are valid.
Note: For information about the core parameters, see the IBM Tivoli Netcool
Performance Manager: Property Reference Guide.
Results
The collector has two components:
v Complex Metric Engine x - Performs calculations on the collected data.
v File Transfer Engine (FTE) x - Transfers files from the collector's output
directories and places them in the input directory of the CME.
The FTE writes data to the file /var/adm/wtmpx on each system that hosts a
collector. As part of routine maintenance, check the size of this file to prevent it
from growing too large.
Note: Your Solaris version can be configured with strict access default settings for
secure environments. Strict FTP access settings might interfere with automatic
transfers between a DataChannel subchannel and the DataLoad server. Check for
FTP lockouts in /etc/ftpd/ftpusers, and check for strict FTP rules in
/etc/ftpd/ftpaccess.
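The routine checks described in the preceding notes can be combined into a short maintenance script. This is a sketch only; the paths are the Solaris defaults mentioned above and may be absent on other platforms.

```shell
# Routine maintenance sketch for a collector host. The paths are the
# Solaris defaults named above and may not exist on other platforms.
WTMPX=/var/adm/wtmpx
if [ -f "$WTMPX" ]; then
  ls -lh "$WTMPX"          # watch this file's size over time
fi
for f in /etc/ftpd/ftpusers /etc/ftpd/ftpaccess; do
  if [ -f "$f" ]; then
    echo "--- $f ---"
    cat "$f"               # look for lockouts and strict FTP rules
  fi
done
MAINT_STATUS="maintenance check complete"
echo "$MAINT_STATUS"
```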
Note: The Topology Editor includes the channel and collector numbers in the
element names. For example, DataChannel 1 could have Collector SNMP 1.1, with
Complex Metric Engine 1.1 and File Transfer Engine 1.1.
Add a Cross Collector CME
The steps required to add a Cross Collector CME.
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
Cross Collector CME from the menu. The Specify the Cross Collector CME
details window is displayed.
2. Using the drop-down list of available hosts, select the machine that will host
the Cross-Collector CME (for example, corinth).
3. Select the desired Disk Usage Server on the selected host.
4. Select the desired channel number (for example, 1).
5. Click Finish.
The Topology Editor adds the new Cross-Collector CME (for example,
Cross-Collector CME 2000) to the Logical view.
6. Highlight the Cross-Collector CME to display its properties.
Note: The Cross-Collector CME always installs and runs as the default user for
the host (the Tivoli Netcool Performance Manager Unix username, pvuser).
7. Review the other property values to make sure they are valid. For the complete
list of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide.
8. After running the deployer to install the Cross-Collector CME, you will need to
restart the CMGR process.
Note: You will notice that dccmd start all will not start the Cross-Collector
CME at this point.
9. You must first deploy a formula against the Cross-Collector CME using the
DataChannel frmi tool.
Run the frmi tool. The following is an example command:
frmi ecma_formula.js -labels formula_labels.txt
Where:
v The format of formula_labels.txt is two columns separated by an "=" sign.
v The first column is the full path to the formula.
v The second column is the number of the Cross-Collector CME.
v The file formula_labels.txt is of the format:
Path_to_ECMA_formulas~Formula1Name=2000
Path_to_ECMA_formulas~Formula2Name=2001
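The labels file can be created with a here-document, as in the following sketch; the formula paths and CME numbers are placeholders taken from the format description above, not real formula names.

```shell
# Hypothetical formula_labels.txt; the formula paths and CME numbers
# are illustrative placeholders only.
cat > /tmp/formula_labels.txt <<'EOF'
Path_to_ECMA_formulas~Formula1Name=2000
Path_to_ECMA_formulas~Formula2Name=2001
EOF
# The frmi tool would then be run against it, for example:
#   frmi ecma_formula.js -labels /tmp/formula_labels.txt
LABEL_COUNT=$(wc -l < /tmp/formula_labels.txt)
echo "$LABEL_COUNT"
```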
Note: When a Cross-Collector CME (CC-CME) is installed on the system and
formulas are applied against it, the removal of collectors that the CC-CME
depends on is not supported. This is an exceptional case, that is, if you have
not installed a CC-CME, collectors can be removed.
Adding multiple Cross Collectors
About this task
To add multiple Cross Collectors:
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
multiple Cross Collectors from the menu. The Add Cross Collector CME
window is displayed.
2. (Optional) Click Add Hosts to add to the set of Cross Collector hosts. Only
hosts that have a Disk Usage Server (DUS) can be added.
Note: It is recommended that you have 20 Cross Collector CMEs spread across
the set of topology hosts.
3. Set the number of Cross Collector CMEs for the set of hosts. There are two
ways to do this:
v Click Calculate Defaults to use the wizard to calculate the recommended
spread across the added hosts. This will set the number of Cross Collector
CMEs to the default value.
v To manually set the number of Cross Collector CMEs for each host, use the
drop-down menu opposite each host name.
4. Click Finish.
Saving the topology
When you are satisfied with the infrastructure, verify that all the property values
are correct and that any problems have been resolved, then save the topology to an
XML file.
About this task
To save the topology as an XML file:
Procedure
1. In the Topology Editor, select Topology then either Save Topology As or Save
Topology.
Click Browse to navigate to the directory in which to save the file. By default,
the topology is saved as topology.xml in the topologyEditor directory.
2. Accept the default value or choose another name or location, then click OK to
close the file browser window.
3. The file name and path are displayed in the original window. Click Finish to
save the file and close the window.
You are now ready to deploy the topology file (see Starting the Deployer on
page 100).
Note: Until you actually deploy the topology file, you can continue making
changes to it as needed by following the directions in Opening an existing
topology file on page 100.
See Chapter 6, Modifying the current deployment, on page 117 for more
information about making changes to a deployed topology file.
Note: Only when you begin the process of deploying a topology is it saved to
the database. For more information, see the section Deploying the Topology.
Opening an existing topology file
As you create the topology, you can save the file and update it as needed.
About this task
To open a topology file that exists but that has not yet been deployed:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, click local then use Browse to navigate to the correct
directory and file. Once you have selected the file, click OK. The selected file is
displayed in the Open Topology window.
Click Finish.
The topology is displayed in the Topology Editor.
4. Change the topology as needed.
Starting the Deployer
The primary deployer is installed on the same machine as the Topology Editor. You
first run the topology file on the primary deployer, and then run secondary
installers on the other machines in the distributed environment.
See Resuming a partially successful first-time installation on page 109 for more
information about the difference between primary and secondary deployers.
Note: Before you start the deployer, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, Installing and
configuring the prerequisite software, on page 35 for more information.
Primary Deployer
The steps required to run the primary deployer from the Topology Editor.
Procedure
Click Run > Run Deployer for Installation.
Note: When you use the Run menu options (install or uninstall), the deployer uses
the last saved topology file, not the current one. Be sure to save the topology file
before using a Run command.
Secondary Deployers
A secondary deployer is only required if remote installation using the primary
deployer is not possible.
About this task
For more information on why you may need to use a secondary deployer, see
Appendix A, Remote installation issues, on page 167.
To run a secondary deployer:
Procedure
v To run a secondary deployer from the launchpad:
1. On the launchpad, click Start the Deployer.
2. On the Start Deployer page, click the Start Deployer link.
v To run a secondary deployer from the command line:
1. Log in as root.
2. Change to the directory containing the deployer within the downloaded
Tivoli Netcool Performance Manager distribution:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer/
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer/
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer/
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading
the Tivoli Netcool Performance Manager distribution to disk on page 48.
3. Enter the following command:
# ./deployer.bin
Note: See Appendix D, Deployer CLI options, on page 191 for the list of
supported command-line options.
Pre-deployment check
The Deployer will fail if the required patches listed in the check_os.ini file are not installed.
About this task
The Deployer checks the operating system version and verifies that the
minimum required packages are installed. The packages checked are those
listed in the relevant check_os.ini file.
The check_os.ini file can be found at the following locations:
v Solaris: /SOLARIS/Install/SOL10/deployer/proviso/bin/Check/check_os.ini
v AIX: /AIX/Install/deployer/proviso/bin/Check/check_os.ini
v Linux: /RHEL/Install/deployer/proviso/bin/Check/check_os.ini
Procedure
v To check if the required packages are installed:
1. Click Run > Run Deployer for Installation to start the Deployer.
2. Select the Check prerequisites check box.
3. Click Next.
The check will return a failure if any of the required files are missing.
v To repair a failure:
1. Log in as root.
2. Install the packages listed as missing.
3. (Linux only) If any openmotif package is listed as missing:
Install the missing openmotif package and update the package DB using the
command:
# updatedb
4. Rerun the check prerequisites step.
Deploying the topology
How to deploy your defined topology.
About this task
The deployer displays a series of pages to guide you through the Tivoli Netcool
Performance Manager installation. The installation steps are displayed in a table,
which enables you to run each step individually or to run all the steps at once. For
more information about the deployer interface, see Primary Deployer on page
100.
Important: By default, Tivoli Netcool Performance Manager uses Monday to
determine when a new week begins. If you wish to specify a different day, you
must change the FIRST_WEEK_DAY parameter in the Database Registry using the
dbRegEdit utility. This parameter can only be changed when you first deploy the
topology that installs your Tivoli Netcool Performance Manager environment, and
it must be changed BEFORE the Database Channel is installed. For more
information, see the Tivoli Netcool Performance Manager Registry and Space
Management Tech Note.
If you need to stop the installation, you can resume it at a later time. For more
information, see Resuming a partially successful first-time installation on page
109.
To deploy the Tivoli Netcool Performance Manager topology:
Procedure
1. The deployer opens, displaying a welcome page. Click Next to continue.
2. If you started the deployer from the launchpad or from the command line,
enter the full path to your topology file, or click Choose to navigate to the
correct location. Click Next to continue.
Note: If you start the deployer from within the Topology Editor, this step is
skipped.
The database access window prompts for the security credentials.
3. Enter the host name (for example, delphi) and database administrator
password (for example, PV), and verify the other values (port number, SID,
and user name). Note that if the database does not yet exist, these parameters
must match the values you specified when you created the database
configuration component (see Add a database configurations component on
page 86). Click Next to continue.
4. The node selection window shows the target systems and how the files will be
transferred (see Secondary Deployers on page 101 for an explanation of this
window). The table has one row for each machine where at least one Tivoli
Netcool Performance Manager component will be installed.
The default settings are as follows:
v The Enable checkbox is selected. If this option is not selected, no actions
will be performed on that machine.
v The Check prerequisites checkbox is not selected. If it is selected, scripts
are run to verify that the prerequisite software has been installed.
v Remote execution is enabled, using both RSH and SSH.
If remote execution cannot be enabled, perhaps due to a particular
customer's security protocols, see Appendix A, Remote installation issues,
on page 167 and Resuming a partially successful first-time installation on
page 109.
v File transfer using FTP is enabled.
If desired, reset the values as appropriate for your deployment.
Click Next to continue.
5. Provide media location details.
The Tivoli Netcool Performance Manager Media Location for components
window is displayed, listing component and component platform.
a. Click the Choose the Proviso Media button. You will be asked to
provide the location of the media for each component.
b. Enter the base directory in which your media is located. If any of the
component media is not within the directory specified, you will be asked
to provide media location detail for that component.
6. The deployer displays summary information about the installation. Review the
information, then click Next.
The deployer displays the table of installation steps (see Pre-deployment
check on page 101 for an overview of the steps table). Note the following:
v Regardless of whether the steps are run, or if they pass or fail, closing the
wizard will result in the topology being posted to the Tivoli Netcool
Performance Manager Database, assuming it exists.
v If an installation step fails, see Resuming a partially successful first-time
installation on page 109 for debugging information. Continue the
installation by following the instructions in Resuming a partially successful
first-time installation on page 109.
v If the TCR installation step fails, which can happen when there is not
enough space available in /usr and /tmp or directory cleanup has not been
carried out, run the tcrClean.sh script. To run this script:
a. Copy the tcrClean.sh script from the Primary Deployer (host where the
Topology Editor is installed) to the server where the TCR installation step
fails.
The tcrClean.sh script can be found on the Primary Deployer in the
directory:
/opt/IBM/proviso/deployer/proviso/bin/Util/
b. Run tcrClean.sh.
c. When prompted, enter the install location of TCR.
d. Continue the installation by following the instructions in Resuming a
partially successful first-time installation on page 109
7. Click Run All to run all the steps in sequence.
8. The deployer prompts you for the location of the setup files. Use the file
selection window to navigate to the top-level directory for your operating
system to avoid further prompts.
For example:
<DIST_DIR>/RHEL/
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading the
Tivoli Netcool Performance Manager distribution to disk on page 48.
Note: This assumes that the Tivoli Netcool Performance Manager distribution
was downloaded to the folder /var/tmp/cdproviso as per the instructions in
Downloading the Tivoli Netcool Performance Manager distribution to disk
on page 48.
If Tivoli Integrated Portal is configured to install on a remote host, the Run
Remote TIP Install step is included. This step will prompt the user to enter the
root password. The deployer requires this information in order to run as root
on the remote host and perform the Tivoli Integrated Portal installation.
9. When all the steps have completed successfully, click Done to close the
wizard.
10. Stop and start TCR:
a. Navigate to the <tip_install_dir>/products/tcr/bin/ directory.
b. Set the ORACLE_HOME environment variable. For example:
ORACLE_HOME=/opt/oracle/product/11.2.0/
export ORACLE_HOME
c. Execute the following:
LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
d. Run the following scripts:
v stopTCRserver.sh <username> <password>
v startTCRserver.sh
Note: These scripts must be run every time Tivoli Integrated Portal is
restarted.
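The environment setup in step 10 can be sketched as a shell fragment. The Oracle home is the example value above and must match your installation; the stop and start scripts are shown as comments because their location depends on your Tivoli Integrated Portal installation directory.

```shell
# Environment setup for the TCR stop/start scripts; the Oracle home is
# the example value from the text and must match your installation.
ORACLE_HOME=/opt/oracle/product/11.2.0/
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
# Then, from the products/tcr/bin directory of your TIP installation:
#   ./stopTCRserver.sh <username> <password>
#   ./startTCRserver.sh
```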
Note: The Topology Editor must be closed after every deployment.
Reuse an existing Tivoli Integrated Portal and Install DataView
using a non root user on a local host
If you are reusing an existing Tivoli Integrated Portal that was installed by a user
other than root, the default deployment of DataView will encounter problems.
About this task
This procedure describes how you install DataView to a local host should you
decide to reuse an existing Tivoli Integrated Portal that was installed by a user
other than root.
Only pursue these steps if you are installing DataView to an existing Tivoli
Integrated Portal that was installed by a user other than root.
Procedure
1. Install Topology Editor on the local host.
2. Execute the steps required to discover existing Tivoli Integrated Portals, as
described in Discovering existing Tivoli Integrated Portals on page 90
3. Configure Tivoli Integrated Portal and DataView in the Topology Editor on
the local host.
The values of the parameters for Tivoli Integrated Portal in the Topology
Editor need to match those of the Tivoli Integrated Portal previously installed
by OMNIbus. For example, check that the values of USER_INSTALL_DIR and
IAGLOBAL_WC_adminhost in the Topology Editor correspond to the Tivoli
Integrated Portal installed by OMNIbus.
4. Run the Deployer for installation from the Topology Editor.
5. Go through the screens as usual to the last run steps screen.
6. Mark the Install DataView and Register DataView steps as held.
Note: If any Tivoli Integrated Portal Install steps are listed, mark them as
Success.
7. Mark all other steps as ready.
8. Run the deployer so that all steps except Install DataView and Register
DataView have run and have a status of success.
9. Change to the Installer directory in the proviso media.
For example, ./proviso/RHEL/Install
10. Change to the DataView directory containing the sample DataView
configuration file, dvinstall.cfg.
For example, ./deployer/proviso/data/DeploymentPackage/DeploymentSteps/
DataView/templateDV
11. Manually configure the dvinstall.cfg for your environment.
You can right-click the Install DataView step in the deployer and open the
properties tab so that the values for DataView install are displayed. Use these
values to populate the dvinstall.cfg. See sample dvinstall.cfg provided
below.
12. Change to the DataView directory that contains install.sh in the TNPM
media.
For example, on a Linux environment this would be:
../proviso/RHEL/DataView/RHEL5
13. In a command terminal, as the same non-root user that installed OMNIbus,
set the PATH variable using the command:
export PATH=/opt/IBM/tivoli/tip/java/bin:${PATH}
Note: Check that the Java path is correct (that is, that
/opt/IBM/tivoli/tip/java/bin exists).
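The PATH setup and the Java-path check in the note can be combined as follows; the TIP location is the default from the text and should be adjusted if your installation differs.

```shell
# Sanity-check the TIP Java location before putting it on PATH;
# /opt/IBM/tivoli/tip is the default install directory from the text.
TIP_JAVA=/opt/IBM/tivoli/tip/java/bin
if [ -d "$TIP_JAVA" ]; then
  PATH=$TIP_JAVA:$PATH
  export PATH
  JAVA_STATUS="found"
else
  JAVA_STATUS="missing"    # adjust TIP_JAVA to your TIP install path
fi
echo "TIP Java $JAVA_STATUS at $TIP_JAVA"
```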
14. In the same terminal that you set the PATH variable in, run the command to
silently install DataView as the non-root user, pointing to the dvinstall.cfg
file.
For example, use the command:
./install.sh -i silent -f dvinstall.cfg
15. After the previous step has completed, open the Tivoli Integrated Portal URL
and check that DataView exists.
For example, open https://hostTipIsInstalledOn:16316/ibm/console and verify
that Performance > Network Resources > Defined Resource Views is visible.
16. Mark the Install DataView step as Success.
17. Mark the Register DataView step as ready.
18. Run the Register DataView step.
19. Click Done on the deployer.
Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2008. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# TIP installation location
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Password for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the Oracle driver
ORACLE_CLIENT_DIR=/opt/oracle/product/11.2.0-client32
# connection url to the database
TNPM_DATABASE_URL=jdbc:oracle:thin:@VOIPDEV3:1521:PV
# name of a valid database user (metric account: PV_LOUIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true then TIP is restarted after DataView is installed. Default is
# TRUE if RESTART_TIP is not set.
RESTART_TIP=yes
Reuse an existing Tivoli Integrated Portal and Install DataView
using a non root user on a remote host
If you are reusing an existing Tivoli Integrated Portal that was installed by a user
other than root, the default deployment of DataView will encounter problems.
About this task
This procedure describes how you install DataView to a remote host should you
decide to reuse an existing Tivoli Integrated Portal that was installed by a user
other than root.
Only pursue these steps if you are installing DataView to an existing Tivoli
Integrated Portal that was installed by a user other than root.
Procedure
1. Install Topology Editor on the local host.
2. Execute the steps required to discover existing Tivoli Integrated Portals, as
described in Discovering existing Tivoli Integrated Portals on page 90
3. Add DataView to the discovered Tivoli Integrated Portal on the remote host
using the Topology Editor.
4. Run the Deployer for installation from the Topology Editor.
5. Go through the screens as usual to the last run steps screen.
6. Mark the Run Remote DataView Install and Register Remote DataView steps
as held.
Note: If any Tivoli Integrated Portal Install steps are listed, mark them as
Success.
7. Mark all other steps as ready, including the Register DataView and Prepare
Remote DataView steps.
8. Run the deployer so that all steps except Run Remote DataView Install and
Register Remote DataView have run and have a status of success. The
Prepare Remote DataView Install step places the DataView files and
configuration files in the /tmp directory of the remote host.
9. Change to the runtime folder in the DataView_step folder on the remote host
in the tmp directory.
cd /tmp/ProvisoConsumer/Plan/MachinePlan_MachName/0000X_DataView_step/runtime
For example, /tmp/ProvisoConsumer/Plan/MachinePlan_voipdev4/
00003_DataView_step/runtime
10. As the root user, change the permissions of all files and folders in the
0000X_DataView_step folder.
Use the command:
chmod -R 777 *
11. As the non-root user that was used to install OMNIbus, run the run.sh script.
Use the command:
./run.sh
12. After the ./run.sh step has completed, open the Tivoli Integrated Portal URL
and check that DataView exists.
For example, open https://hostTipIsInstalledOn:16316/ibm/console and verify
that Performance > Network Resources > Defined Resource Views is visible.
13. Mark the Register DataView step as ready.
14. Run the Register DataView step.
15. Click Done on the deployer.
Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2008. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# TIP installation location
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Password for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the Oracle driver
ORACLE_CLIENT_DIR=/opt/oracle/product/11.2.0-client32
# connection url to the database
TNPM_DATABASE_URL=jdbc:oracle:thin:@VOIPDEV3:1521:PV
# name of a valid database user (metric account: PV_LOUIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true then TIP is restarted after DataView is installed. Default is TRUE
# if RESTART_TIP is not set.
RESTART_TIP=yes
Next steps
The steps to perform after deployment.
The next step is to install the technology packs, as described in Technology Pack
Installation Guide.
Once you have created the topology and installed Tivoli Netcool Performance
Manager, it is very easy to make changes to the environment. Simply open the
deployed topology file (loading it from the database), make your changes, and run
the deployer with the updated topology file as input. For more information about
performing incremental installations, see Chapter 6, Modifying the current
deployment, on page 117.
Note: After your initial deployment, always load the topology file from the
database to make any additional changes (such as adding or removing a
component), because it reflects the current status of your environment. Once you
have made your changes, you must deploy the updated topology so that it is
propagated to the database. To make any subsequent changes following this
deployment, you must load the topology file from the database again.
To improve performance, IBM recommends that you regularly compute the
statistics on metadata tables. You can compute these statistics by creating a cron
entry that executes the dbMgr (Database Manager Utility) analyzeMetaDataTables
command at intervals.
The following example shows a cron entry that checks statistics every hour at 30
minutes past the hour. Note that the ForceCollection option is set to N, so that
statistics will only be calculated when the internal calendar determines that it is
necessary, and not every hour:
30 * * * * [ -f /opt/DM/dataMart.env ] && [ -x /opt/DM/bin/dbMgr ] && .
/opt/DM/dataMart.env && dbMgr analyzeMetaDataTables A N
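A sketch of building such a cron entry for review before installing it. The paths and the hourly, 30-minutes-past schedule follow the description above; the crontab command itself is left as a comment so the entry can be inspected first.

```shell
# Build the dbMgr statistics cron entry (paths as described above;
# the schedule runs at 30 minutes past each hour).
ENTRY='30 * * * * [ -f /opt/DM/dataMart.env ] && [ -x /opt/DM/bin/dbMgr ] && . /opt/DM/dataMart.env && dbMgr analyzeMetaDataTables A N'
echo "$ENTRY"
# To install it for the current user after reviewing it:
#   ( crontab -l 2>/dev/null; echo "$ENTRY" ) | crontab -
```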
For more information on dbMgr and the analyzeMetaDataTables command, see the
Tivoli Netcool Performance Manager dbMgr Reference Guide.
For each new SNMP DataLoad, change the env file of the TNPM user to add the
directory containing the OpenSSH libcrypto.so to the LD_LIBRARY_PATH (or LIBPATH).
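This change can be sketched as follows. The libcrypto directory shown is an assumption, so locate the OpenSSH libcrypto.so on your system first.

```shell
# Append the OpenSSH libcrypto.so directory to the library path in the
# TNPM user's env file. LIBCRYPTO_DIR is a hypothetical location; find
# yours first, for example:  find / -name 'libcrypto.so*' 2>/dev/null
LIBCRYPTO_DIR=/usr/local/ssl/lib
LD_LIBRARY_PATH=$LIBCRYPTO_DIR:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH     # on AIX, set LIBPATH instead
echo "$LD_LIBRARY_PATH"
```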
Resuming a partially successful first-time installation
Should you quit during an installation, this section describes how to resume
the installation process.
About this task
In this scenario, you try deploying a Tivoli Netcool Performance Manager topology
for the first time. You define the topology and start the installation. Although some
of the components of the Tivoli Netcool Performance Manager topology are
installed successfully, the overall installation does not complete successfully.
In addition, it is possible to skip a section of the installation. For example, a remote
node might not be accessible for some reason. After skipping this portion of the
installation, resume the installation to continue with the remaining steps. The
deployer will list only those steps needed to complete the installation on the
missing node.
For example, suppose that during the first installation, Oracle wasn't running, so
the database check failed. Stop the installation, start Oracle, then resume the
installation.
To resume a partial installation:
Procedure
1. After correcting the problem, restart the deployer from the command line using
the following command:
./deployer.bin -Daction=resume
Using the resume switch enables you to resume the installation exactly where
you left off.
Note: If you are asked to select a topology file in order to resume your
installation, select the topology file you saved before beginning the install.
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the default location of the base installation directory of the Oracle JDBC
driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click Choose to
navigate to another directory. Click Next to continue.
4. The steps page shows the installation steps in the very same state they were in
when you stopped the installation (with the completed steps marked Success,
the failed step marked Error, and the remaining steps marked Held).
5. Select the step that previously failed, reset it to Ready, then click Run Next.
Verify that this installation step now completes successfully.
6. Run any remaining installation steps, verifying that they complete successfully.
7. At the end of the installation, the deployer loads the updated topology
information into the database.
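The check-then-resume flow from this procedure can be sketched as follows. The tnsping call is replaced by a stub so the sketch is self-contained; on a real system you would call Oracle's actual tnsping utility and then run the deployer command shown in step 1:

```shell
# Stub standing in for Oracle's tnsping utility so this sketch is
# self-contained; on a real system, call the actual tnsping.
tnsping() { return 0; }

if tnsping PV >/dev/null 2>&1; then
  result="resume"
  echo "database reachable - run: ./deployer.bin -Daction=resume"
else
  result="fix-oracle"
  echo "start Oracle first, then resume the deployer"
fi
```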
Chapter 4. Installing in a distributed environment 109
Chapter 5. Installing as a minimal deployment
This chapter describes how to install Tivoli Netcool Performance Manager as a
minimal deployment.
Overview
A minimal deployment installation is used primarily for demonstration or evaluation
purposes, and installs the product on the smallest number of machines possible,
with minimal user input.
This installation type installs all the Tivoli Netcool Performance Manager
components on the local host using a predefined topology file to define the
infrastructure. The minimal deployment installation also installs the MIB-II SNMP
technology pack.
When you perform a minimal deployment installation, the Tivoli Netcool
Performance Manager components are installed on the server you are running the
deployer from.
Before you begin
Before installing Tivoli Netcool Performance Manager, you must have installed the
prerequisite software.
For detailed information, see Chapter 3, Installing and configuring the prerequisite
software, on page 35.
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See
Chapter 3, Installing and configuring the prerequisite software, on page 35
for information about tnsping.
Minimal Installation Process:
If you are setting up a demonstration or evaluation system, you can install
all Tivoli Netcool Performance Manager components on a single server running
Linux, Solaris, or AIX. In this case, your installation follows the process
described in the remainder of this chapter.
Special consideration
By default, Tivoli Netcool Performance Manager uses Monday to determine when
a new week begins.
If you wish to specify a different day, you must change the FIRST_WEEK_DAY
parameter in the Database Registry using the dbRegEdit utility. This parameter can
only be changed when you first deploy the topology that installs your Tivoli
Netcool Performance Manager environment, and it must be changed BEFORE the
Database Channel is installed. For more information, see the Tivoli Netcool
Performance Manager Database Administration Guide.
No resume of partial install available
There is no resume functionality for a minimal deployment installation. As a
result, a minimal deployment installation must be carried out in full each time
it is attempted.
Overriding default values
When performing a minimal deployment installation, you must accept all default
values, with the following exceptions:
v The location of the Oracle JDBC driver.
The default is /opt/oracle/product/11.2.0/jdbc/li
v The Tivoli Netcool Performance Manager installation destination folder.
The default is /opt/proviso
v Oracle server parameters. The defaults are:
Oracle Base: /opt/oracle/
Oracle home: /opt/oracle/product/11.2.0/
Oracle Port: 1521
Installing a minimal deployment
This section provides step-by-step instructions for installing Tivoli Netcool
Performance Manager on a single Solaris, AIX or Linux server.
Download the MIB-II files
The minimal deployment version installs the MIB-II Technology Pack.
About this task
Before beginning the installation, you must download both the Technology Pack
Installer and the MIB-II jar files.
To download these files, access either of the following distributions:
Procedure
v The product distribution site: https://www-112.ibm.com/software/howtobuy/
softwareandservices
Located on the product distribution site are the ProvisoPackInstaller.jar file,
the bundled jar file, and individual stand-alone technology pack jar files.
v (Optional) The Tivoli Netcool Performance Manager CD distribution, which
contains the ProvisoPackInstaller.jar file and the jar files for the Starter Kit
components.
See your IBM customer representative for more information about obtaining
software.
Note: The Technology Pack Installer and the MIB-II jar files must be in the
same directory (for example, AP), and no other application jar files should be
present. If any other jar files are in that directory, the installation step
fails with a "too many jars" error. In addition, you must add the AP directory
to the Tivoli Netcool Performance Manager distribution's directory structure.
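A quick sanity check for the pack directory can be sketched as follows. The directory and the MIB-II jar file name here are illustrative stand-ins, not the actual distribution names:

```shell
# Illustrative pack directory; on a real system this is your AP directory.
apdir=$(mktemp -d)
# The installer jar name is from the text; the MIB-II jar name is a stand-in.
touch "$apdir/ProvisoPackInstaller.jar" "$apdir/mibii-pack.jar"
count=$(ls "$apdir"/*.jar | wc -l | tr -d ' ')
echo "jar files found: $count"   # the installation step expects only these two
```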
Starting the Launchpad
The steps required to start the launchpad.
About this task
To start the launchpad:
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See Setting up a remote X Window display on page 36.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/opt/mozilla/mozilla
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER assignment cannot include any spaces around the equals
sign.
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
5. Enter the following command to start the launchpad:
# ./launchpad.sh
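As the note in step 3 warns, spaces around the equals sign change the meaning of the assignment. A minimal sketch:

```shell
# Correct: no spaces around the equals sign.
BROWSER=/usr/bin/firefox
export BROWSER
# Incorrect: 'BROWSER = /usr/bin/firefox' would run a command named
# BROWSER with two arguments instead of setting the variable.
echo "$BROWSER"
```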
Start the installation
Steps required to install.
About this task
A minimal deployment installation uses a predefined topology file.
To start the installation:
Procedure
1. On the launchpad, click the Install Tivoli Netcool Performance Manager 1.3.2
for Minimal Deployment option in the list of tasks, then click the Install
Tivoli Netcool Performance Manager 1.3.2 for Minimal Deployment link to
start the deployer.
Alternatively, you can start the deployer from the command line, as follows:
a. Log in as root.
b. Set and export your DISPLAY variable (see Setting up a remote X
Window display on page 36).
c. Change directory to the directory that contains the deployer:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer
d. Enter the following command:
# ./deployer.bin -Daction=poc -DPrimary=true
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the terms of the license agreement, then click Next.
4. Accept the default location of the base installation directory of the Oracle
JDBC driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click
Choose to navigate to another directory. Click Next to continue.
5. The deployer prompts for the directory in which to install Tivoli Netcool
Performance Manager. Accept the default value (/opt/proviso) or click
Choose to navigate to another directory. Click Next to continue.
6. Verify the following additional information about the Oracle database:
v Oracle Base. The base directory for the Oracle installation (for example,
/opt/oracle/). Accept the provided path or click Choose to navigate to
another directory.
v Oracle Home. The root directory of the Oracle database (for example,
/opt/oracle/product/11.2.0-client32/). Accept the provided path or click
Choose to navigate to another directory.
v Oracle Port. The port used for Oracle communications. The default value is
1521.
Click Next to continue.
7. The node selection window shows the target system and how the files will be
transferred. These settings are ignored for a minimal deployment installation
because all the components are installed on a single server.
Click Next to continue.
8. Provide media location details. The Tivoli Netcool Performance Manager
Media Location for components window is displayed, listing component and
component platform.
a. Click the Choose the Proviso Media button. You are asked to provide the
location of the media for each component.
b. Enter the base directory in which your media is located. If any of the
component media is not within the specified directory, you are asked to
provide media location details for that component.
9. The deployer displays summary information about the installation. Review the
information, then click Next to begin the installation.
The deployer displays the table of installation steps (see Pre-deployment
check on page 101 for an overview of the steps table). Note the following:
v If an installation step fails, see Appendix I, Error codes and log files, on
page 219 for debugging information. Continue the installation by following
the instructions in Resuming a partially successful first-time installation
on page 109
v Some of the installation steps can take a long time to complete. However, if
an installation step fails, it will fail in a short amount of time.
10. Click Run All to run all the steps in sequence.
11. When all the steps have completed successfully, click Done to close the
wizard.
12. Run chmod -R 777 on /opt/IBM/tivoli in order to make all files in the TIP
directory structure accessible.
Your installation is complete. See The post-installation script on page 116 for
information about the post-installation script, or Next steps on page 116 for
what to do next.
The post-installation script
The post-installation script is run automatically when installation is complete.
About this task
For a minimal deployment the script performs four actions:
Procedure
1. Starts the DataChannel.
2. Starts the DataLoad SNMP Collector, if it is not already running.
3. Creates a DataView user named tnpm.
4. Gives the poc user permission to view reports under the NOC Reporting group,
with the default password of tnpm.
Results
The script writes a detailed log to the file
/var/tmp/poc-post-install.${TIMESTAMP}.log.
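To find the most recent post-install log, you can sort the timestamped file names. This sketch uses a temporary directory with sample names standing in for /var/tmp:

```shell
# Temporary directory standing in for /var/tmp, with sample timestamped names.
logdir=$(mktemp -d)
touch "$logdir/poc-post-install.20120101120000.log"
touch "$logdir/poc-post-install.20120102120000.log"
# Timestamps in the file names sort lexically, so the last entry is newest.
latest=$(ls "$logdir"/poc-post-install.*.log | sort | tail -1)
echo "$latest"
```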
Next steps
The steps to be performed following the deployment of your system.
When the installation is complete, you are ready to perform the final configuration
tasks that enable you to view reports on the health of your network. These steps
are documented in detail in the Tivoli Netcool Performance Manager
documentation set.
For information about the MIB-II Technology Pack, see the MIB-II Technology Pack
User's Guide.
For information about installing additional technology packs, see the Technology
Pack Installation Guide.
For each new SNMP DataLoad, change the env file of the TNPM user to add the
directory that contains the OpenSSH libcrypto.so library to the LD_LIBRARY_PATH
(or LIBPATH) variable.
Chapter 6. Modifying the current deployment
This chapter describes how to modify an installation of Tivoli Netcool Performance
Manager.
It is possible to modify Tivoli Netcool Performance Manager after it has been
installed. To add, delete or upgrade components, load the deployed topology from
the database, make your changes, and run the deployer with the updated topology
as input.
Note: You must run the updated topology through the deployer in order for your
changes to take effect.
Note the following:
v After your initial deployment, always load the topology from the database to
make any additional changes (such as adding or removing a component),
because it reflects the current status of your environment. Once you have made
your changes, you must deploy the updated topology so that it is propagated to
the database. To make any subsequent changes following this deployment, you
must load the topology from the database again.
v You might have a situation where you have modified a topology by both adding
new components and removing components (marking them "To Be Removed").
However, the deployer can work in only one mode at a time - installation mode
or uninstallation mode. In this situation, first run the deployer in uninstallation
mode, then run it again in installation mode.
For information about deleting components from an existing topology, see
Removing a component from the topology on page 159.
Migrating between platforms
It is possible to migrate components between platforms when performing the
Tivoli Netcool Performance Manager upgrade.
The only supported migration path is from Solaris to AIX. There is no support for
any other platform migration scenario.
Opening a deployed topology
Once you have installed Tivoli Netcool Performance Manager, you can perform
incremental installations by modifying the topology that is stored in the database.
About this task
You retrieve the topology, modify it, then pass the updated data to the deployer.
When the installation is complete, the deployer stores the revised topology data in
the database.
To open a deployed topology:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, make your selection and click Next.
4. Verify that all of the fields for the database connection are filled in with the
correct values:
v Database hostname - The name of the database host. The default value is
localhost.
v Port - The port number used for communication with the database. The
default value is 1521.
v Database user - The user name used to access the database. The default
value is PV_INSTALL.
v Database Password - The password for the database user account. For
example, PV.
v SID - The SID for the database. The default value is PV.
If desired, click Save as defaults to save these values for future incremental
installations.
Click Finish.
Results
The topology is retrieved from the database and is displayed in the Topology
Editor.
Adding a new component
After you have deployed your topology, you might need to make changes to it.
About this task
For example, you might want to add another SNMP collector.
To add a new component to the topology:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the folder for the
component you want to add.
4. Select Add XXX from the pop-up menu, where XXX is the name of the
component you want to add.
5. The Topology Editor prompts for whatever information is needed to create the
component. See the appropriate section for the component you want to add:
v Add the hosts on page 84
v Add a database configurations component on page 86
v Add a DataMart on page 87
v Add a Discovery Server on page 89
v Add a Tivoli Integrated Portal on page 90
v Add a DataView on page 91
v Add the DataChannel administrative components on page 92
v Add a DataChannel on page 93
v Add a Collector on page 95
Note: If you add a collector to a topology that has already been
deployed, you must manually bounce the DataChannel management
components (cnsw, logw, cmgrw, amgrw). For more information, see Manually
starting the Channel Manager programs on page 176.
6. The new component is displayed in the Logical view of the Topology Editor.
7. Save the updated topology. You must save the topology after you add the
component and before you run the deployer. This step is not optional.
8. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology as input.
The deployer can determine that most of the components described in the
topology are already installed, and installs only the new component.
9. When the installation ends successfully, the deployer uploads the updated
topology into the database.
For information about removing a component from the Tivoli Netcool
Performance Manager environment, see Removing a component from the
topology on page 159.
Example
In this example, you update the installed version of Tivoli Netcool Performance
Manager to add a new DataChannel and two SNMP DataLoaders to the existing
system.
To update the Tivoli Netcool Performance Manager installation:
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the DataChannels folder.
4. Select Add Data Channel from the pop-up menu. Following the directions in
Add a DataChannel on page 93, add the following components:
a. Add a new DataChannel (Data Channel 2) with two different SNMP
DataLoaders to the topology. The Topology Editor creates the new
DataChannel.
b. Add two SNMP collectors to the channel structure created by the Topology
Editor. The editor automatically creates a Daily Loader component, an
Hourly Loader component, and two Sub Channels with an FTE component
and a CME component.
5. Save the updated topology.
6. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology as input.
The deployer can determine that most of the components described in the
topology are already installed, and installs only the new components (in the
example, DataChannel 2 with two new Sub Channels and DataLoaders).
7. When the installation ends, successful or not, the deployer uploads the updated
topology into the database.
Changing configuration parameters of existing Tivoli Netcool
Performance Manager components
Configuration information is stored in the database. This enables the
DataChannel-related components to retrieve the configuration from the database at
run time.
You set the configuration information using the Topology Editor. As with the other
components, if you make changes to the configuration values, you must pass the
updated topology data to the deployer to have the changes propagated to both the
environment and the database.
Note: After the updated configuration has been stored in the database, you must
manually start, stop, or bounce the affected DataChannel component to have your
changes take effect.
Moving components to a different host
You can use the Topology Editor to move components between hosts.
About this task
You can move all components between hosts when they have not yet been
installed and are in the configured state. You can move SNMP and UBA collectors
when they are in the configured state or after they have been deployed and are in
the installed state.
If the component in the topology has not yet been deployed and is in the configured
state, the Topology Editor provides a Change Host option in the pop-up menu
when you click the component name in the Logical view. This option allows you to
change the host associated with the component prior to deployment.
If the component is an SNMP or UBA collector that was previously deployed and
is in the installed state, the Topology Editor provides a Migrate option in the
pop-up menu. This option instructs the deployer to uninstall the component from
the previous host and re-install it on the new system.
For instructions on moving deployed SNMP and UBA collectors after deployment,
see Moving a deployed collector to a different host on page 121. For instructions
on moving components that have not yet been deployed, see the information
below.
Note: Movement of installed DataChannel Remote components is not
supported. All other components can be moved.
To change the host associated with a component before deployment:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the component's current host (see Starting the Topology Editor
on page 83 and Opening a deployed topology on page 117).
2. In the Logical view, navigate to the name of the component to move.
3. Right-click the component name, then click Change Host from the pop-up
menu.
The Migrate Component dialog appears, containing a drop-down list of hosts
where you can move the component.
4. Select the name of the new host from the list, then click Finish.
The name of the new host appears in the Properties tab.
Moving a deployed collector to a different host
You can move a deployed SNMP or UBA collector to a different host. The
instructions for doing so differ for SNMP collectors and UBA collectors.
After you move a collector to a new host, it may take up to an hour for the change
to be registered in the database.
Moving a deployed SNMP collector
The steps required to move a deployed SNMP collector to a different host.
About this task
Note: To avoid the loss of collected data, leave the collector running on the
original host until you complete Step 7 of this procedure.
To move a deployed SNMP collector to a different host:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the collector's current host (see Starting the Topology Editor on
page 83 and Opening a deployed topology on page 117).
2. In the Logical view, navigate to the name of the collector to move. For
example, if you are moving SNMP 1.1, navigate as follows:
DataChannels > DataChannel 1 > Collector 1.1 > Collector SNMP.1.1
3. Right-click the collector name (for example, Collector SNMP 1.1), then click
Migrate from the pop-up menu.
The Migrate Collector dialog appears, containing a drop-down list of hosts
where you can move the collector.
Note: If you are moving a collector that has not been deployed, select Change
host from the pop-up menu (Migrate is grayed out). After the Migrate Collector
dialog appears, continue with the steps below.
4. Select the name of the new host from the list, then click Finish.
In the Physical view, the status of the collector on the new host is Configured.
The status of the collector on the original host is To be uninstalled. You will
remove the collector from the original host in Step 9.
Note: If you are migrating a collector that has not been deployed, the name of
the original host is automatically removed from the Physical view.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
Starting the Deployer on page 100.
The deployer installs the collector on the new host and starts it.
Note: Both collectors are now collecting data - the original collector on the
original host, and the new collector on the new host.
7. Before continuing with the steps below, note the current time, and wait until a
time period equivalent to two of the collector's collection periods elapses. Doing
so guards against data loss between collections on the original host and the
start of collections on the new host.
Because data collection on the new host is likely to begin sometime after the
first collection period begins, the data collected during the first collection
period will likely be incomplete. By waiting for two collection time periods to
elapse, you can be confident that data for one full collection period will be
collected.
The default collection period is 15 minutes. You can find the collection period
for the sub-element, sub-element group, or collection formula associated with
the collector in the DataMart Request Editor. For information on viewing and
setting a collection period, see the Tivoli Netcool Performance Manager DataMart
Configuration and Operation Guide.
8. Bounce the FTE for the collector on the collector's new host, as in the following
example:
./dccmd bounce FTE.1.1
The FTE now recognizes the collector's configuration on the new host, and will
begin retrieving data from the collector's output directory on the new host.
9. In the current Topology Editor session, click Run > Run Deployer for
Uninstallation to remove the collector from the original host, passing the
updated topology as input. For more information, see Removing a component
from the topology on page 159.
Note: This step is not necessary if you are moving a collector that has not been
deployed.
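The waiting rule in step 7 can be expressed as simple arithmetic, assuming the default 15-minute collection period described there:

```shell
# Default collection period from the text; adjust to your configuration.
COLLECTION_PERIOD_MIN=15
# Wait two full collection periods to be sure one complete period is captured.
WAIT_MIN=$((2 * COLLECTION_PERIOD_MIN))
echo "wait at least $WAIT_MIN minutes before bouncing the FTE"
```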
Moving a deployed SNMP collector to or from a HAM
environment
If you move a deployed SNMP collector into or out of a High Availability Manager
(HAM) environment, you must perform the steps in this section.
About this task
To move a deployed SNMP collector to or from a HAM environment:
Procedure
1. Move the collector as described in Moving a deployed SNMP collector on
page 121.
Note: If you are moving a spare collector out of the HAM environment, the
navigation path is different than the path shown in Step 2 of the above
instructions. For example, suppose you have a single HAM environment with a
cluster MyCluster on host MyHost, and you are moving the second SNMP
spare out of the HAM. The navigation path to the spare would be as follows:
DataChannels > Administrative Components > High Availability Managers >
HAM MyServer.1 > MyCluster > Collector Processes > Collection Process
SNMP Spare 2.
2. Log in as the Tivoli Netcool Performance Manager UNIX user, pvuser, on the
collector's new host.
3. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
4. Source the DataLoad environment:
. ./dataLoad.env
5. Stop the SNMP collector:
pvmdmgr stop
6. Edit the file dataLoad.env and set the field DL_HA_MODE as follows:
v Set DL_HA_MODE=true if you moved the collector onto a HAM host.
v Set DL_HA_MODE=false if you moved the collector off of a HAM host.
7. Source the DataLoad environment again:
. ./dataLoad.env
8. Start the SNMP collector:
pvmdmgr start
Note: If you move an SNMP collector to or from a HAM host, you
must bounce the HAM. For information, see Stopping and restarting modified
components on page 151.
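The DL_HA_MODE edit in step 6 can be scripted. This sketch runs against a temporary stand-in for dataLoad.env rather than the real file:

```shell
# Temporary stand-in for dataLoad.env; on a real system edit the file
# in your DataLoad directory instead.
envfile=$(mktemp)
echo 'DL_HA_MODE=false' > "$envfile"
# Flip the flag to true (moving onto a HAM host); use false when moving off.
sed 's/^DL_HA_MODE=.*/DL_HA_MODE=true/' "$envfile" > "$envfile.new"
mode=$(grep '^DL_HA_MODE=' "$envfile.new")
echo "$mode"
```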
Moving a deployed UBA bulk collector
The steps required to move a deployed UBA collector to a different host.
About this task
Note: You cannot move BCOL collectors, or UBA collectors that have a BLB or
QCIF subcomponent. If you want to move a UBA collector that has these
subcomponents, you must manually remove it from the old host in the topology
and then add it to the new host.
To move a deployed UBA collector to a different host:
Procedure
1. Log in as pvuser to the DataChannel host where the UBA collector is running.
2. Change to the directory where DataChannel is installed. For example:
cd /opt/datachannel
3. Source the DataChannel environment:
. dataChannel.env
4. Stop the collector's UBA and FTE components. For example, to stop these
components for UBA collector 1.1, run the following commands:
dccmd stop UBA.1.1
and...
dccmd stop FTE.1.1
For information on the dccmd command, see the Tivoli Netcool Performance
Manager Command Line Interface Guide.
Note: Some technology packs have additional pack-specific components that
must be shut down - namely, BLB (bulk load balancer) and IF (inventory file)
components. IF component names have the format xxxIF, where xxx is a
pack-specific name. For example, Cisco CWM packs have a CWMIF
component, Alcatel 5620 SAM packs have a SAMIF component, and Alcatel
5620 NM packs have a QCIF component. Other packs do not use these
technology-specific components.
5. Tar up the UBA collector's UBA directory. You will copy this directory to the
collector's new host later in the procedure (Step 13).
For example, to tar up the UBA directory for UBA collector 1.1, run the
following command:
tar -cvf UBA_1_1.tar ./UBA.1.1/*
Note: This step is not necessary if the collector's current host and the new
host share a file system.
Note: Some technology packs have additional pack-specific directories that
need to be moved. These directories have the same names as the
corresponding pack-specific components described in Step 4.
6. Start the Topology Editor (if it is not already running) and open the topology
that includes the collector's current host (see Starting the Topology Editor on
page 83 and Opening a deployed topology on page 117).
7. In the Logical view, navigate to the name of the collector to move - for
example, Collector UBA.1.1.
8. Right-click the collector name and select Migrate from the pop-up menu.
The Migrate Collector dialog appears, containing a drop-down list of hosts
where you can move the collector.
9. Select the name of the new host from the list, then click Finish.
In the Physical view, the status of the collector on the new host is Configured.
The collector is no longer listed under the original host.
Note: If the UBA collector was the only DataChannel component on the
original host, the collector will be listed under that host, and its status will be
"To be uninstalled." You can remove the DataChannel installation from the
original host after you finish the steps below. For information on removing
DataChannel from the host, see Removing a component from the topology
on page 159.
10. Click Topology > Save Topology to save the topology.
11. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
Starting the Deployer on page 100.
If DataChannel is not already installed on the new host, this step installs it.
12. Click Run > Run Deployer for Uninstallation to remove the collector from
the original host, passing the updated topology as input. For more
information, see Removing a component from the topology on page 159.
13. Copy any directory you tarred in Step 5 and the associated JavaScript files to
the new host.
Note: This step is not necessary if the collector's original host and the new
host share a file system.
For example, to copy UBA_1_1.tar and the JavaScript files from the collector's
original host:
a. Log in as pvuser to the UBA collector's new host.
b. Change to the directory where DataChannel is installed. For example:
cd /opt/datachannel
c. FTP to the collector's original host.
d. Run the following commands to copy the tar file to the new host. For
example:
cd /opt/datachannel
get UBA_1_1.tar
bye
tar -xvf UBA_1_1.tar
e. Change to the directory where the JavaScript files for the technology pack
associated with the collector are located:
cd /opt/datachannel/scripts
f. FTP the JavaScript files from the /opt/datachannel/scripts directory on the
original host to the /opt/datachannel/scripts directory on the new host.
14. Log in as pvuser to the Channel Manager host where the Administrator
Components (including CMGR) are running.
15. Stop and restart the Channel Manager by performing the following steps:
a. Change to the $DC_HOME directory (typically, /opt/datachannel).
b. Source the DataChannel environment:
. dataChannel.env
c. Get the CMGR process ID by running the following command:
ps -ef | grep CMGR
The process ID appears in the output immediately after the user ID, as in
the following sample output (the grep process itself also appears):
pvuser 6561 6560 0 Aug 21 ? 3:04 /opt/datachannel/bin/CMGR_visual -nologo /opt/datachannel/bin/dc.im -a CMGR
pvuser 25976 24244 0 11:39:38 pts/7 0:00 grep CMGR
d. Stop the CMGR process. For example, if 6561 is the CMGR process ID:
kill -9 6561
e. Change to the $DC_HOME/bin directory (typically, /opt/datachannel/bin).
f. Restart CMGR by running the following command:
./cmgrw
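Steps 15c and 15d can also be scripted. The following is a minimal sketch of one way to extract the CMGR process ID from ps output; the bracketed grep pattern keeps the grep process itself out of the results. The sample ps line below is illustrative data, not live output.

```shell
# Sketch: extract the CMGR process ID (second field of ps -ef output).
# The pattern '[C]MGR' does not match the grep command line itself,
# so the grep process is excluded from the results.
cmgr_pid() {
  grep '[C]MGR' | awk '{print $2}' | head -1
}

# Example against a captured ps -ef line (sample data):
sample='pvuser 6561 6560 0 Aug 21 ? 3:04 /opt/datachannel/bin/CMGR_visual /opt/datachannel/bin/dc.im -a CMGR'
pid=$(printf '%s\n' "$sample" | cmgr_pid)
echo "$pid"
```

In a live session you would pipe ps -ef into the function and pass the result to kill -9, as in step 15d.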
16. Log in as pvuser to the UBA collector's new host and change to the
$DC_HOME/bin directory (typically, /opt/datachannel/bin).
17. Run the following command to verify that Application Manager (AMGR) is
running on the new host:
./findvisual
If the AMGR process is running, you will see output that includes an entry
like the following:
pvuser 6684 6683 0 Aug 21 ? 3:43 /opt/datachannel/bin/AMGR_visual -nologo /opt/datachannel/bin/dc.im -a AMGR -lo
Note: If AMGR is not running on the new host, do not continue. Verify that
you have performed the preceding steps correctly.
18. Start the collector's UBA and FTE components on the new host. For example,
to start these components for collector 1.1, run the following commands:
./dccmd start UBA.1.1
and:
./dccmd start FTE.1.1
Note: If any pack-specific components were shut down on the old host (see
Step 4), you must also start those components on the new host.
Changing the port for a collector
You can use the Topology Editor to change the port associated with a collector.
About this task
To change the port associated with a collector after deployment:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
(see Starting the Topology Editor on page 83 and Opening a deployed
topology on page 117).
2. In the Logical view, navigate to the collector.
3. Highlight the collector to view its properties.
The Topology Editor displays both the collector core parameters and the
technology pack-specific parameters.
4. Edit the SERVICE_PORT parameter in the list, then click Finish.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input.
7. When deployment is complete, log onto the server hosting the collector.
8. Log in as the Tivoli Netcool Performance Manager UNIX user, pvuser, on the
collector's host.
9. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
10. Source the DataLoad environment:
. ./dataLoad.env
11. Stop the SNMP collector:
pvmdmgr stop
12. Edit the file dataLoad.env and set the field DL_ADMIN_TCP_PORT.
For example:
DL_ADMIN_TCP_PORT=8800
13. Source the DataLoad environment again:
. ./dataLoad.env
14. Start the SNMP collector:
pvmdmgr start
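Steps 12 and 13 edit dataLoad.env by hand; the same change can be made with sed. The following is a minimal sketch against a sample file — the temporary file and the old port value 8700 are assumptions for illustration, and env_file would point at the real dataLoad.env in your DataLoad directory.

```shell
# Sketch: set DL_ADMIN_TCP_PORT in a dataLoad.env-style file.
# A sample file is created here; point env_file at the real
# dataLoad.env instead.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
DL_ADMIN_TCP_PORT=8700
EOF

# Rewrite the port assignment (8800 matches the example in step 12).
sed 's/^DL_ADMIN_TCP_PORT=.*/DL_ADMIN_TCP_PORT=8800/' "$env_file" > "$env_file.new" &&
  mv "$env_file.new" "$env_file"
cat "$env_file"
```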
Modifying Tivoli Integrated Portal and Tivoli Common Reporting ports
You can update the ports used by Tivoli Integrated Portal and Tivoli Common
Reporting.
The Tivoli Integrated Portal-specific ports that are defined and used to build
the topology.xml file are as follows:
WAS_WC_defaulthost 16710
COGNOS_CONTENT_DATABASE_PORT 1557
IAGLOBAL_LDAP_PORT 389
Changing ports for the Tivoli Common Reporting console
You can assign new ports to an installed Tivoli Common Reporting console.
Procedure
1. Create a properties file containing values, such as host name, that match
your environment. The example properties file below uses default values;
modify the values to match your environment. Save the file in any location.
WAS_HOME=C:/ibm/tivoli/tip22
was.install.root=C:/ibm/tivoli/tip22
profileName=TIPProfile
profilePath=C:/ibm/tivoli/tipv2/profiles/TIPProfile
templatePath=C:/ibm/tivoli/tipv2/profileTemplates/default
nodeName=TIPNode
cellName=TIPCell
hostName=your_TCR_host
portsFile=C:/ibm/tivoli/tipv2/properties/TIPPortDef.properties
2. Edit the TCR_install_dir\properties\TIPPortDef.properties file to contain the
desired port numbers.
3. Stop the Tivoli Common Reporting server by navigating to the following
directory in the command-line interface:
v Windows: TCR_component_dir\bin, and running the stopTCRserver.bat command.
v UNIX and Linux: TCR_component_dir/bin, and running the stopTCRserver.sh command.
Important: To stop the server, you must log in with the same user that you
used to install Tivoli Common Reporting.
4. In the command-line interface, navigate to the TCR_install_dir\bin directory.
5. Run the following command:
ws_ant.bat -propertyfile C:\temp\tcrwas.props -file "C:\IBM\tivoli\tipv2\profileTemplates\default\actions\updatePorts.ant"
where C:\temp\tcrwas.props is the path to the properties file created in
Step 1.
6. Change the port numbers in IBM Cognos Configuration:
a. Open IBM Cognos Configuration by running TCR_component_dir\cognos\
bin\tcr_cogconfig.bat for Windows operating systems and
TCR_install_dir/cognos/bin/tcr_cogconfig.sh for Linux and UNIX.
b. In the Environment section, change the port numbers to the desired values,
as in Step 2.
c. Save your settings and close IBM Cognos Configuration.
7. Start the Tivoli Common Reporting server by navigating to the following
directory in the command-line interface:
v Windows: TCR_component_dir\bin, and running the startTCRserver.bat command.
v UNIX and Linux: TCR_component_dir/bin, and running the startTCRserver.sh command.
Important: To start the server, you must log in with the same user that you
used to install Tivoli Common Reporting.
Port assignments
The application server requires a set of sequentially numbered ports.
The sequence of ports is supplied during installation in the response file. The
installer checks that the required number of ports (starting with the initial
port value) is available before assigning them. If one of the ports in the
sequence is already in use, the installer automatically terminates the
installation process, and you must specify a different range of ports in the
response file.
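The installer's pre-check can be approximated as follows. This sketch tests a range of sequentially numbered ports against a list of ports already in use; in practice the busy list would come from netstat or a similar tool, and the starting port 16310 is only an example.

```shell
# Sketch: verify that `count` sequential ports starting at `start`
# are all absent from a newline-separated list of busy ports.
ports_free() {
  start=$1; count=$2; in_use=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    p=$((start + i))
    if printf '%s\n' "$in_use" | grep -qx "$p"; then
      echo "port $p already in use"
      return 1
    fi
    i=$((i + 1))
  done
  echo "ports $start-$((start + count - 1)) available"
}

# Example: ports 80 and 443 are busy; the requested range is still free.
result=$(ports_free 16310 5 '80
443')
echo "$result"
```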
Viewing the application server profile
Open the application server profile to review the port number assignments and
other information.
About this task
The profile of the application server is available as a text file on the computer
where it is installed.
Procedure
1. Locate the /opt/IBM/tivoli/tipv2/profiles/TIPProfile/logs directory.
2. Open AboutThisProfile.txt in a text editor.
Example
This is the profile for an installation as it appears in
/opt/IBM/tivoli/tipv2/profiles/TIPProfile/logs/AboutThisProfile.txt:
Application server environment to create: Application server
Location: /opt/IBM/tivoli/tcr/profiles/TIPProfile
Disk space required: 200 MB
Profile name: TIPProfile
Make this profile the default: True
Node name: TIPNode
Host name: tivoliadmin.usca.ibm.com
Enable administrative security (recommended): True
Administrative console port: 16315
Administrative console secure port: 16316
HTTP transport port: 16310
HTTPS transport port: 16311
Bootstrap port: 16312
SOAP connector port: 16313
Run application server as a service: False
Create a Web server definition: False
What to do next
If you want to see the complete list of defined ports on the application server, you
can open /opt/IBM/tivoli/tipv2/properties/TIPPortDef.properties in a text
editor:
#Create the required WAS port properties for TIP
#Mon Oct 06 09:26:30 PDT 2008
CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS=16323
WC_adminhost=16315
DCS_UNICAST_ADDRESS=16318
BOOTSTRAP_ADDRESS=16312
SAS_SSL_SERVERAUTH_LISTENER_ADDRESS=16321
SOAP_CONNECTOR_ADDRESS=16313
ORB_LISTENER_ADDRESS=16320
WC_defaulthost_secure=16311
CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS=16322
WC_defaulthost=16310
WC_adminhost_secure=16316
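A quick way to review these assignments is to sort the properties file by port number. The sketch below runs against an inline sample copied from the listing above rather than the real /opt/IBM/tivoli/tipv2/properties/TIPPortDef.properties file.

```shell
# Sketch: list TIP port properties in ascending port order.
# Sample data stands in for TIPPortDef.properties.
props=$(mktemp)
cat > "$props" <<'EOF'
#Create the required WAS port properties for TIP
WC_adminhost=16315
BOOTSTRAP_ADDRESS=16312
WC_defaulthost=16310
EOF

# Skip comment lines, then sort numerically on the value field.
sorted=$(grep -v '^#' "$props" | sort -t= -k2 -n)
printf '%s\n' "$sorted"
```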
Chapter 7. Using the High Availability Manager
This chapter describes the optional Tivoli Netcool Performance Manager High
Availability Manager (HAM), including how to set up a HAM environment.
Overview
The High Availability Manager (HAM) is an optional component for large
installations that want to use redundant SNMP collection paths.
The HAM constantly monitors the availability of one or more SNMP collection
hosts, and switches collection to a backup host (called a spare) if a primary host
becomes unavailable.
The following figure shows a simple HAM configuration with one primary host
and one spare. In the panel on the left, the primary host is operating normally.
SNMP data is being collected from the network and channeled to the primary host.
In the panel on the right, the HAM has detected that the primary host is
unavailable, so it dynamically unbinds the collection path from the primary host
and binds it to the spare.
HAM basics
An SNMP collector collects data from a specific set of network resources according
to a set of configuration properties.
A collector has two basic parts: the collector process running on the host computer,
and the collector profile that defines the collector's properties.
Note: Do not confuse a "collector profile" with an "inventory profile." A collector
profile contains properties used in the collection of data from network resources -
properties such as collector number, polling interval, and output directory for the
collected data. An inventory profile contains information used to discover network
resources - properties such as the addresses of the resources to look for and the
mode of discovery.
A collector that is not part of a HAM environment is static - that is, the collector
process and the collector profile are inseparable. But in a HAM environment, the
collector process and collector profile are managed as separate entities. This means
that if a collector process is unavailable (due to a collector process crash or a host
machine outage), the HAM can dynamically reconfigure the collector, allowing
data collection to continue. The HAM does so by unbinding the collector profile
from the unavailable collector process on the primary host, and then binding the
collector profile to a collector process on a backup (spare) host.
Note: It may take several minutes for the HAM to reconfigure a collector,
depending on the amount of data being collected.
The parts of a collector
Collector parts and their description.
When you set up a HAM configuration in the Topology Editor, you manage the
two parts of a collector - the collector process and the collector profile - through
the following folders in the Logical view:
v Collector Processes. A collector process is a Unix process representing a runtime
instance of a collector. A collector process is identified by the name of the host
where the process is running and by the collector process port (typically 3002).
A host can have just one SNMP collector process.
v Managed Definitions. A managed definition identifies a collector profile through
the unique collector number defined in the profile.
Every managed definition has a default binding to a host and to the collector
process on that host. The default host and collector process are called the
managed definition's primary host and collector process.
A host that you designate as a spare host has a collector process but no default
managed definition.
The following figure shows the parts of a collector that you manage through the
Collector Process and Managed Definition folders. In the figure, the HAM
dynamically unbinds the collector profile from the collector process on the primary
host, and then binds the profile to the collector process on the spare. This dynamic
re-binding of the collector is accomplished when the HAM binds the managed
definition - in this case, represented by the unique collector ID, Collector 1 - to the
collector process on the spare.
Clusters
A HAM environment can consist of a single set of hosts or multiple sets of hosts.
Each set of hosts in a HAM environment is called a cluster.
A cluster is a logical grouping of hosts and collector processes that are managed by
a HAM.
The use of multiple clusters is optional. Whether you use multiple clusters or just
one has no effect on the operation of the HAM. Clusters simply give you a way to
separate one group of collectors from another, so that you can better deploy and
manage your primary and spare collectors in a way that is appropriate for your
needs.
Multiple clusters may be useful if you have a large number of SNMP collector
hosts to manage, or if the hosts are located in various geographic areas.
The clusters in a given HAM environment are distinct from one another. In other
words, the HAM cannot bind a managed definition in one cluster to a collector
process in another.
HAM cluster configuration
For host failover to occur, a HAM cluster must have at least one available spare
host.
The cluster can have as few as two hosts - one primary and one spare. Or, it can
have multiple primary hosts with one or more spares ready to replace primary
hosts that become unavailable.
The ratio of primary hosts to spare hosts is expressed as p + s. For example, a
HAM cluster with four primary hosts and two spares is referred to as a 4+2
cluster.
Types of spare hosts
There are two types of spare hosts:
v Designated spare. The sole purpose of this type of spare in a HAM cluster is to
act as a backup host.
A designated spare has a collector process, but no default managed definition.
Its collector process remains idle until the HAM detects an outage on one of the
active hosts, and binds that host's managed definition to the spare's collector
process.
A HAM cluster must have at least one designated spare.
v Floating spare. This type of spare is a primary host that can also act as a backup
host for one or more managed definitions.
Types of HAM clusters
The types of HAM clusters that can be created.
When the HAM binds a managed definition to a spare (either a designated spare
or a floating spare), the spare becomes an active component of the collector. It
remains so unless you explicitly reassign the managed definition back to its
primary host or to another available host in the HAM cluster. This is an important
fact to consider when you plan the hosts to include in a HAM cluster.
There are two types of HAM clusters:
v Fixed spare cluster. In this type of cluster, failover can occur only to designated
spares. There are no floating spares in this type of cluster.
When the HAM binds a managed definition to the spare, the spare temporarily
takes the place of the primary that has become unavailable. When the primary
becomes available again, you must reassign the managed definition back to the
primary (or to another available host). The primary then resumes its data
collection operations, and the spare resumes its role as backup host.
If you do not reassign the managed definition back to the primary, the primary
cannot participate in further collection operations. Since the primary is not
configured as a floating spare, it also cannot act as a spare now that its collector
process is idle. As a result, the HAM cluster loses its failover capabilities if no
other spare is available.
Note: A primary host cannot act as a spare unless it is configured as a floating
spare.
v Floating spare cluster. This type of cluster has one or more primary hosts that
can also act as a spare. Failover can occur to a floating spare or to a designated
spare.
You do not need to reassign the managed definition back to this type of primary,
as you do with primaries in a fixed spare cluster. When a floating spare primary
becomes available again, it assumes the role of a spare.
You can designate some or all of the primaries in a HAM cluster as floating
spares. If all the primaries in a HAM cluster are floating spares, you should
never have to reassign a managed definition to another available host in order to
maintain failover capability.
Note: IBM recommends that all the primaries in a cluster be of the same type -
either all floating spares or no floating spares.
Example HAM clusters
Examples of HAM cluster options.
The Tivoli Netcool Performance Manager High Availability Manager feature is
designed to provide great flexibility in setting up a HAM cluster. The following
illustrations show just a few of the possible variations.
1 + 1, fixed spare
A fixed spare cluster with one primary host and one designated spare.
The figure below shows a fixed spare cluster with one primary host and one
designated spare:
v In the panel on the left, Primary1 is functioning normally. The designated spare
is idle.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds
the collector from Primary1 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary1 returns to service. For failover to be possible
again, you must reassign Collector 1 to Primary1. This idles the collector process
on the spare, making it available for the next failover operation if Primary 1 fails
again.
Note: When a designated spare serves as the only spare for a single primary, as in
a 1+1 fixed spare cluster, the HAM pre-loads the primary's collector definition on
the spare. This results in a fast failover with a likely loss of no more than one
collection cycle.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector      Possible Host Bindings                          Host Bindings Not Possible
Collector 1    Primary1 (default binding), Designated spare    -
2 + 1, fixed spare
A fixed spare cluster with two primary hosts and one designated spare.
The figure below shows a fixed spare cluster with two primary hosts and one
designated spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary2 returns to service. For failover to be possible
again, you must reassign Collector 2 to Primary2. This idles the collector process
on the spare, making it available for the next failover operation.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector      Possible Host Bindings                          Host Bindings Not Possible
Collector 1    Primary1 (default binding), Designated spare    Primary2
Collector 2    Primary2 (default binding), Designated spare    Primary1
2 + 1, both primaries are floating spares
Both primaries are floating spares.
The figure below shows a floating spare cluster with two primary hosts and one
designated spare, with each primary configured as a floating spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v When Primary2 returns to service, it will assume the role of spare, meaning its
collector process remains idle. The host originally defined as the dedicated spare
continues as the active platform for Collector 2.
v The following figure shows the same cluster after Primary2 has returned to
service. In the panel on the left, Primary2 is idle, prepared to act as backup if
needed.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds
the collector from Primary1 and binds it to the floating spare, Primary2.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector      Possible Host Bindings                                    Host Bindings Not Possible
Collector 1    Primary1 (default binding), Primary2, Designated spare    -
Collector 2    Primary1, Primary2 (default binding), Designated spare    -
3 + 2, fixed spares
A fixed spare cluster with three primary hosts and two designated spares.
The figure below shows a fixed spare cluster with three primary hosts and two
designated spares:
v In the panel on the left, all three primaries are functioning normally. The
designated spares are idle.
v In the panel on the right, Primary3 experiences an outage. The HAM unbinds
the collector from Primary3 and binds it to Designated Spare 2. The HAM chose
Designated Spare 2 over Designated Spare 1 because the managed definition for
Collector 3 set the failover priority in that order.
Note: Each managed definition sets its own failover priority. Failover priority
can be defined differently in different managed definitions.
v With one spare in use and one other spare available (Designated Spare 1),
failover is now limited to the one available spare - even after Primary3 returns
to service. For dual failover to be possible again, you must reassign Collector 3
to Primary3.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector      Possible Host Bindings                                                Host Bindings Not Possible
Collector 1    Primary1 (default binding), Designated Spare 1, Designated Spare 2    Primary2, Primary3
Collector 2    Primary2 (default binding), Designated Spare 1, Designated Spare 2    Primary1, Primary3
Collector 3    Primary3 (default binding), Designated Spare 1, Designated Spare 2    Primary1, Primary2
3 + 2, all primaries are floating spares
A floating spare cluster with three primary hosts and two designated spares, with
each primary configured as a floating spare.
The figure below shows a floating spare cluster with three primary hosts and two
designated spares, with each primary configured as a floating spare:
v In the panel on the left, Primary3 had previously experienced an outage. The
HAM unbound its default collector (Collector 3) from Primary3, and bound the
collector to the first available spare in the managed definition's priority list,
which happened to be Designated Spare 2. Now that Primary3 is available
again, it is acting as a spare, while Designated Spare 2 remains the active
collector process for Collector 3.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
Collector 2 from Primary2, and binds it to the first available spare in the
managed definition's priority list. This happens to be the floating spare
Primary3.
v When Primary2 becomes available again, there will once more be two spares
available - Primary2 and Designated Spare 1.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector      Possible Host Bindings                                                                    Host Bindings Not Possible
Collector 1    Primary1 (default binding), Primary2, Primary3, Designated Spare 1, Designated Spare 2    -
Collector 2    Primary1, Primary2 (default binding), Primary3, Designated Spare 1, Designated Spare 2    -
Collector 3    Primary1, Primary2, Primary3 (default binding), Designated Spare 1, Designated Spare 2    -
Resource pools
When you configure a managed definition in the Topology Editor, you specify the
hosts that the HAM can bind to the managed definition, and also the priority order
in which the hosts are to be bound. This list of hosts is called the resource pool for
the managed definition.
A resource pool includes:
v The managed definition's primary host and collector process (that is, the host
and collector process that are bound to the managed definition by default).
v Zero or more other primary hosts in the cluster.
If you add a primary host to a managed definition's resource pool, that primary
host becomes a floating spare for the managed definition.
v Zero or more designated spares in the cluster.
Typically, each managed definition includes one or more designated spares in its
resource pool.
Note: If no managed definitions include a designated spare in their resource
pools, there will be no available spares in the cluster, and therefore failover
cannot occur in the cluster.
How the SNMP collector works
The SNMP collector capability and behaviour.
The SNMP collector is state-based and designed both to perform initialization and
termination actions, and to "change state" in response to events generated by the
HAM or as a result of internally-generated events (like a timeout, for example).
The following table lists the events that the SNMP collector understands and
indicates whether they can be generated by the HAM.
Event      HAM-Generated    Description
Load       Yes              Load collection profile; do not begin scheduling collections.
Pause      Yes              Stop scheduling collections; do not unload profile.
Reset      Yes              Reset expiration timer.
Start      Yes              Start scheduling collections.
Stop       Yes              Stop scheduling collections; unload profile.
Timeout    No               Expiration timer expires; start scheduling collections.
The SNMP collector can reside in one of the following states, as shown in the
following table:
SNMP Collector State    Event         Description
Idle                    N/A           Initial state; a collector number may or may not be assigned; the collection profile has not been loaded.
Loading                 Load          Intermediate state between Idle and Ready. Occurs after a Load event. Collector number is assigned, and the collection profile is being loaded.
Ready                   N/A           Collector number assigned, profile loaded, but not scheduling requests or performing collections.
Starting                Start         Intermediate state between Idle and Running. Occurs after a Start event. Collector number assigned, and profile is being loaded.
Running                 N/A           Actively performing requests and collections.
Stopping                Stop/Pause    Intermediate state between Running and Idle.
The following state diagram shows how the SNMP collector transitions through its
various states depending upon events or time-outs:
How failover works with the HAM and the SNMP collector
The following tables illustrate how the HAM communicates with the SNMP
collectors during failover for a 1+1 cluster and a 2+1 cluster.
Table 8. HAM and SNMP Collector in a 1+1 Cluster
State of Primary    State of Spare    Events and Actions
Running             Idle              The HAM sends the spare the Load event for the specified collection profile.
Running             Ready             The HAM sends a Pause event to the spare to extend the timeout. Note: If the timeout expires, the spare will perform start actions and transition to a Running state.
Running             Running           The HAM sends a Pause event to the collector process that has been in a Running state for a shorter amount of time.
No response         Ready             The HAM sends a Start event to the spare.
Table 9. HAM and SNMP Collector in a 2+1 Cluster
State of Primary    State of Spare    Events and Actions
Running             Idle              No action
Running             Ready             No action
Running             Running           The HAM sends a Stop event to the collector process that has been in a Running state for the shorter amount of time.
No Response         Idle              The HAM sends a Start event to the spare.
No Response         Ready             The HAM sends a Start event to the spare.
Because more than one physical system may produce SNMP collections, the File
Transfer Engine (FTE) must check every capable system for a specific profile. The
FTE retrieves all output for the specific profile. Any duplicated collections are
reconciled by the Complex Metrics Engine (CME).
Obtaining collector status
How to get the status of a collector.
To obtain status on the SNMP collectors managed by the HAM, enter the following
command on the command line:
$ dccmd status HAM.<hostname>.1
The dccmd command returns output similar to the following:
COMPONENT APPLICATION HOST STATUS ES DURATION EXTENDED STATUS
HAM.DCAIX2.1 HAM DCAIX2 running 10010
1.1 Ok: (box1:3012 -> Running 1.1 for 5h2m26s); No avail spare; Check: dcaix2:3002, birdnestb:3002
1.2 Ok: (box2:3002 -> Running 1.2 for 5h9m36s); No avail spare; Check: box4:3002, box5:3002
1.3 Not Running; No avail spare; Check: box4:3002, box5:3002
The following list describes the EXTENDED STATUS information:
v 1.1 - The load (collection profile) number; in this case, collection
profile 1.1.
v Ok: - The status of the load. Ok means data is being properly collected;
Not Running indicates a severe problem (data loss).
v (box1:3012 -> Running 1.1 for 5h2m26s) - The collector that is currently
performing the load, with its status and uptime.
v No avail spare - The list of possible spares to use if something happens to
the collector currently performing the load. In this example there is no
spare available, so a failover would fail. A list of host:port entries would
indicate the possible spare machines.
v Check: box4:3002, box5:3002 - Indicates what is currently wrong with the
system or configuration. Machines box4:3002 and box5:3002 should be spares,
but are either not running or not reachable. Check these machines.
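Because a Not Running load can mean data loss, the status output is worth checking automatically. This sketch counts Not Running entries in captured dccmd output; the sample text below is modeled on the example above, not taken from a live query.

```shell
# Sketch: count loads reported as "Not Running" in captured
# `dccmd status` output (sample data, not a live query).
out_file=$(mktemp)
cat > "$out_file" <<'EOF'
1.1 Ok: (box1:3012 -> Running 1.1 for 5h2m26s); No avail spare
1.3 Not Running; No avail spare; Check: box4:3002, box5:3002
EOF

not_running=$(grep -c 'Not Running' "$out_file")
echo "$not_running load(s) not running"
```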
For a 1-to-1 failover configuration, the dccmd command might return output like
the following:
$ dccmd status HAM.SERVER.1
COMPONENT APPLICATION HOST STATUS ES DURATION EXTENDED STATUS
HAM.SERVER.1 HAM SERVER running 10010
1.1 Ok: (box1:3002 -> Running 1.1 for 5h2m26s); 1 avail spare: (box2:3002 -> Ready 1.1)
The preceding output shows that Collector 1.1 is in a Running state on Box1, and
that the Collector on Box2 is in a Ready state, with the profile for Collector 1.1
loaded.
Creating a HAM environment
This section describes the steps required to create a 3+1 HAM environment with a
single cluster, and with all three primaries configured as floating spares.
About this task
This is just one of the many variations a HAM environment can have. The
procedures described in the following sections indicate the specific steps where
you can vary the configuration.
Note: If you are setting up a new Tivoli Netcool Performance Manager
environment and plan to use a HAM in that environment, perform the following
tasks in the following order:
Procedure
1. Install all collectors.
2. Configure and start the HAM.
3. Install all technology packs.
4. Perform the discovery.
Topology prerequisites
The minimum component prerequisite.
A 3+1 HAM cluster requires that you have a topology with the following
minimum components:
v Three hosts, each bound to an SNMP collector. These will act as the primary
hosts. You will create a managed definition for each of the primary hosts.
v One additional host that is not bound to an SNMP collector. This will act as the
designated spare.
For information on installing these components, see Adding a new component
on page 118.
Procedures
The general procedures for creating a single-cluster HAM with one designated
spare and three floating spares.
Create the HAM and a HAM cluster
To create a High Availability Manager with a single cluster
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
where you want to add the HAM (see Starting the Topology Editor on page
83 and Opening a deployed topology on page 117).
2. In the Logical view, right-click High Availability Managers, located at
DataChannels > Administrative Components.
3. Select Add High Availability Manager from the pop-up menu.
The Add High Availability Manager Wizard appears.
4. In the Available hosts field, select the host where you want to add the HAM.
Note: You can install the HAM on a host where a collector process is installed,
but you cannot install more than one HAM on a host.
5. In the Identifier field, accept the default identifier.
The identifier has the following format:
HAM.<HostName>.<n>
where HostName is the name of the host you selected in Step 4, and n is a
HAM-assigned sequential number, beginning with 1, that uniquely identifies
this HAM from others that may be defined on other hosts.
6. Click Finish.
The HAM identifier appears under the High Availability Managers folder.
7. Right-click the identifier of the HAM you just created.
8. Select Add Cluster from the pop-up menu.
The Add Cluster Monitor Wizard appears.
9. In the Identifier field, type a name for the cluster and click Finish.
The cluster name appears under the HAM identifier folder you added in
Step 6. The following folders appear under the cluster name:
v Collector Processes
v Managed Definitions
Note: To add additional clusters to the environment, repeat Step 7 through
Step 9.
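The identifier format described in Step 5 can be illustrated with a short shell sketch; the host name and sequence number below are examples only (the sample host dcsol1b appears elsewhere in this chapter):

```shell
# Compose a HAM identifier: HAM.<HostName>.<n>
host=dcsol1b   # example host name
n=1            # HAM-assigned sequence number, beginning with 1
echo "HAM.${host}.${n}"
```

This prints HAM.dcsol1b.1, matching the form of the identifiers used in the examples later in this chapter.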
Add the designated spare
How to create and add a designated spare.
About this task
To create a designated spare, you must have a host defined in the Physical view
with no SNMP collector assigned to it. For information on adding a host to a
topology, see Add the hosts on page 84.
To add a designated spare to a cluster:
Procedure
1. In the Logical view, right-click the Collector Processes folder that you created
in Step 9 of the previous section, Create the HAM and a HAM cluster on
page 144.
2. Select Add Collection Process SNMP Spare from the pop-up menu.
The Add Collection Process SNMP Spare - Configure Collector Process SNMP
Spare dialog appears.
3. In the Available hosts field, select the host that you want to make the
designated spare.
This field contains the names of hosts in the Physical view that do not have
SNMP collectors assigned to them.
4. In the Port field, specify the default port number, 3002, for the spare's collector
process, then click Finish.
Under the cluster's Collector Processes folder, the entry Collection Process
SNMP Spare <n> appears, where n is a HAM-assigned sequential number,
beginning with 1, that uniquely identifies this designated spare from others
that may be defined in this cluster.
Note: Repeat Step 1 through Step 4 to add an additional designated spare to
the cluster.
What to do next
If you are making changes to an existing configuration, make sure that the
dataLoad.env file contains the correct settings:
1. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
2. Source the DataLoad environment:
. ./dataLoad.env
3. Make sure that the DL_HA_MODE field in the dataLoad.env file is set to
DL_HA_MODE=true.
4. Source the DataLoad environment again:
. ./dataLoad.env
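The check in step 3 can be rehearsed with a minimal shell sketch. The file contents and scratch location below are illustrative only, not the real /opt/dataload environment:

```shell
# Create a sample dataLoad.env in a scratch directory (illustrative contents)
tmpdir=$(mktemp -d)
cat > "$tmpdir/dataLoad.env" <<'EOF'
DL_HA_MODE=true
export DL_HA_MODE
EOF

# Source the file, then verify that HA mode is enabled
. "$tmpdir/dataLoad.env"
if [ "$DL_HA_MODE" = "true" ]; then
    echo "DL_HA_MODE is set correctly"
else
    echo "edit dataLoad.env and set DL_HA_MODE=true" >&2
fi
rm -rf "$tmpdir"
```

On a real system you would source the actual /opt/dataload/dataLoad.env instead of a sample file.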
Add the managed definitions
A managed definition allows the HAM to bind a collector profile to a collector
process.
About this task
Note: When you add a managed definition to a HAM cluster, the associated
collector process is automatically added to the cluster's Collector Processes folder.
To add a managed definition to a HAM cluster:
Procedure
1. In the Logical view, right-click the Managed Definitions folder that you
created in Create the HAM and a HAM cluster on page 144.
2. Select Add Managed Definition from the pop-up menu.
The Add Managed Definition - Choose Managed Definition dialog appears.
3. In the Collector number field, select the unique collector number to associate
with this managed definition.
4. Click Finish.
The following entries now appear for the cluster:
v Under the cluster's Managed Definitions folder, the entry Managed
Definition <n> appears, where n is the collector number you selected in Step
3.
v Under the cluster's Collector Processes folder, the entry Collector Process
[HostName] appears, where HostName is the host that will be bound to the
SNMP collector you selected in Step 3. This host is the managed definition's
primary host.
Note: Repeat Step 1 through Step 4 to add another managed definition to the
cluster.
Example
When you finish adding managed definitions for a 3+1 HAM cluster, the Logical
and Physical views reflect the complete configuration. In this example, the hosts
dcsol1a, dcsol1b, and docserver1 are the primaries, and docserver2 is the
designated spare.
Define the resource pools
A resource pool is a list of the spares, in priority order, that the HAM can bind to a
particular managed definition.
About this task
When you create a managed definition, the managed definition's primary host is
the only host in its resource pool. To enable the HAM to bind a managed
definition to other hosts, you must add more hosts to the managed definition's
resource pool.
To add hosts to a managed definition's resource pool:
Procedure
1. Right-click a managed definition in the cluster's Managed Definitions folder.
2. Select Configure Managed Definition from the pop-up menu.
The Configure Managed Definition - Collector Process Selection dialog appears.
In this example, the resource pool being configured is for Managed Definition 1
(that is, the managed definition associated with Collector 1).
3. In the Additional Collector Processes list, check the box next to each host to
add to the managed definition's resource pool.
Typically, you will add at least the designated spare (in this example,
docserver2) to the resource pool. If you add a primary host to the resource
pool, that host becomes a floating spare for the managed definition.
Note: You must add at least one of the hosts in the Additional Collector
Processes list to the resource pool.
Since the goal in this example is to configure all primaries as floating spares,
the designated spare and the two primaries (docserver1 and dcsol1a) will be
added to the resource pool.
4. When finished checking the hosts to add to the resource pool, click Next.
Note: If you add just one host to the resource pool, the Next button is not
enabled. Click Finish to complete the definition of this resource pool. Return to
Step 1 to define a resource pool for the next managed definition in the cluster,
or skip to Save and start the HAM on page 149 if you are finished defining
resource pools.
The Configure Managed Definition - Collector Process Order dialog appears.
5. Specify the failover priority order for this managed definition. To do so:
a. Select a host to move up or down in the priority list, then click the Up or
Down button until the host is positioned where you want.
b. Continue moving hosts until the priority list is ordered as you want.
c. Click Finish.
In this example, if the primary associated with Managed Definition 1 fails, the
HAM will attempt to bind the managed definition to the floating spare dcsol1a.
If dcsol1a is in use or otherwise unavailable, the HAM attempts to bind the
managed definition to docserver1. The designated spare docserver2 is last in
priority.
6. Return to Step 1 to define a resource pool for the next managed definition in
the cluster, or continue with the next section if you are finished defining
resource pools.
Save and start the HAM
When you finish configuring the HAM as described in the previous sections, you
are ready to save the configuration and start the HAM.
About this task
To save and start the HAM:
Procedure
1. Click Topology > Save Topology to save the topology file containing the HAM
configuration.
2. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology file as input.
3. Open a terminal window on the DataChannel host.
4. Log in as pvuser.
5. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
6. Bounce (stop and restart) the Channel Manager. For instructions, see Step 15 on
page 111.
7. Run the following command:
dccmd start ham
Monitoring of the HAM environment begins.
For information on using dccmd, see the Tivoli Netcool Performance Manager
Command Line Interface Guide.
Creating an additional HAM environment
How to create an additional HAM environment.
Typically, one HAM is sufficient to manage all the collectors you require in your
HAM environment. But for performance reasons, very large Tivoli Netcool
Performance Manager deployments involving dozens or hundreds of collector
processes might benefit from more than one HAM environment.
HAM environments are completely separate from one another. A host in one HAM
environment cannot fail over to a host in another HAM environment.
To create an additional HAM environment, perform all of the procedures described
in Creating a HAM environment on page 143.
Modifying a HAM environment
How to modify a HAM environment.
You can modify a HAM environment by performing any of the procedures in
Creating a HAM environment on page 143. For example, you can add collectors,
add clusters, configure a primary host as a floating spare, change the failover
priority order of a resource pool, and make a number of other changes to the
environment, including moving collectors into or out of a HAM environment.
For information on moving a deployed SNMP collector into or out of a HAM
environment, see Moving a deployed SNMP collector to or from a HAM
environment on page 122.
You can also modify the configuration parameters of the HAM components that
are writable. For information on modifying configuration parameters, see Changing
configuration parameters of existing Tivoli Netcool Performance Manager
components.
Removing HAM components
How to remove HAM components.
You can remove HAM components from the environment by right-clicking the
component name and selecting Remove from the pop-up menu. The selected
component and any subcomponents will be removed.
Before you can remove a designated spare (Collection Process SNMP Spare), you
must remove the spare from any resource pools it may belong to. To remove a
designated spare from a resource pool, open the managed definition that contains
the resource pool, and clear the check box next to the name of the designated spare
to remove. For information about managing resource pools, see Define the
resource pools on page 147.
Stopping and restarting modified components
How to stop and restart modified components.
About this task
If you change the configuration of a HAM or any HAM components, or if you add
or remove an existing collector to or from a HAM environment, you must bounce
(stop and restart) the Tivoli Netcool Performance Manager components you
changed. This is generally true for all Tivoli Netcool Performance Manager
components that you change, not just HAM.
To bounce a component:
Procedure
1. Open a terminal window on the DataChannel host.
2. Log in as pvuser.
3. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
4. Run the bounce command in the following format:
dccmd bounce <component>
For example:
v To bounce the HAM with the identifier HAM.dcsol1b.1, run:
dccmd bounce ham.dcsol1b.1
v To bounce all HAMs in the topology, run:
dccmd bounce ham.*.*
v To bounce the FTE for collector 1.1 that is managed by a HAM, run:
dccmd bounce fte.1.1
You do not need to bounce the HAM that the FTE and collector are in.
For information on using dccmd, see the Tivoli Netcool Performance Manager
Command Line Interface Guide.
5. Bounce the Channel Manager. For instructions, see Step 15.
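The wildcard forms shown in step 4 can be visualized with ordinary shell glob matching. The component identifiers below are illustrative, and the shell `case` pattern merely stands in for dccmd's own matching; it is not how dccmd is implemented:

```shell
# Which identifiers would "ham.*.*" select? (glob matching as an analogy)
for comp in ham.dcsol1b.1 ham.dcsol1a.1 fte.1.1 cme.1.1; do
    case "$comp" in
        ham.*.*) echo "would bounce: $comp" ;;
        *)       echo "skipped:      $comp" ;;
    esac
done
```

Only the two HAM identifiers match the pattern; the FTE and CME components are left alone.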
Viewing the current configuration
During the process of creating or modifying a HAM cluster, you may find it useful
to check how the individual collector processes and managed definitions are
currently configured.
About this task
To view the current configuration of a collector process or managed definition:
Procedure
1. Right-click the collector process or managed definition to view.
2. Select Show from the pop-up menu.
The Show Collector Process... or Show Managed Definition... dialog appears.
The following sections describe the contents of these dialogs.
Show Collector Process... dialog
Dialog box description.
The following figure shows a collector process configured with three managed
definitions.
The configuration values are described as follows:
v dcsol1a. The primary host where this collector process runs.
v 3002. The port through which the collector process receives SNMP data.
v 3 2 (Primary) 1. The managed definitions that the HAM can bind to this
collector process. The values have the following meanings:
3. The managed definition for Collector 3.
2 (Primary). The managed definition for Collector 2. This is the default
managed definition for the collector process.
1. The managed definition for Collector 1.
Show Managed Definition... dialog
Dialog box description.
The Show Managed Definition... dialog contains the resource pool for a particular
managed definition.
This dialog contains the same information that appears in the Show Collector
Process... dialog, but for multiple hosts instead of just one. As such, this dialog
gives you a broader view of the cluster's configuration than a Show Collector
Process... dialog.
The following figure shows a managed definition's resource pool configured with
four hosts:
Note the following about this managed definition's resource pool:
v The priority order of the hosts is from top to bottom - therefore, the first
collector process that the HAM will attempt to bind to this managed definition
is the one on host dcsol1a. The collector process on host docserver2 is last in the
priority list.
v The first three hosts are floating spares. They are flagged as such by each having
a primary managed definition.
v The host docserver2 is the only designated spare in the resource pool. It is
flagged as such by not having a primary managed definition.
Chapter 8. Enabling Common Reporting on Tivoli Netcool
Performance Manager
Additional software is required if you want to use the Tivoli Common Reporting
server installed with Tivoli Netcool Performance Manager.
IBM Tivoli Common Reporting is an enterprise reporting solution that delivers a
common reporting platform across the Tivoli portfolio. To make full use of
Common Reporting, you must install Model Maker IBM Cognos Edition in your
system.
The Model Maker IBM Cognos Edition tooling greatly simplifies the task of
reporting on network performance by using Tivoli Common Reporting. Through
the creation, deployment, and management of common packs, Model Maker
simplifies the process of publishing report packages for Network Performance
Management on Tivoli Common Reporting.
Note: Before setting up Model Maker, ensure that you have installed the
Oracle 11g Instant Client, as described in Installing Oracle 11g Instant Client
(Standalone Server or Database Server combined with Tivoli Common Reporting).
For more information about Model Maker IBM Cognos Edition, see
http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.tnpm.doc/welcome_tnpm.html.
Model Maker
The Model Maker tooling greatly simplifies the task of reporting on the
performance of wireless and wireline networks. Through the creation, deployment,
and management of common packs, Model Maker simplifies the process of
publishing report packages for network Performance Management on Tivoli
Common Reporting.
Common packs extend technology pack data models, for example the Summary
and Busy Hour data models, to Tivoli Common Reporting on Tivoli Netcool
Performance Manager. After you install a technology pack, you install the
corresponding common pack to enable Tivoli Common Reporting. Common packs
can be delivered in the COTS program with the corresponding technology packs,
or created and customized by customers and business partners.
You must install the Model Maker components on a Tivoli Netcool Performance
Manager 1.3.2 system in the following sequence:
1. Install the Common Pack Service on the Tivoli Common Reporting server.
2. Install Model Maker on a Windows computer.
3. Install a technology pack and the corresponding common packs for each
technology you require.
System requirements
For full details of prerequisite software and the system requirements, see the
following guides:
v IBM Tivoli Netcool Performance Manager: Model Maker: Installation and User Guide
This guide is available at http://publib.boulder.ibm.com/infocenter/tivihelp/
v8r1/index.jsp?topic=/com.ibm.tnpm.doc/welcome_tnpm.html
The following platforms are supported.
The Common Pack Service
The following server operating systems are supported:
v IBM AIX
v Oracle Solaris (SPARC)
v Red Hat Linux
Model Maker
The following client operating systems are supported:
v Microsoft Windows XP Professional, x86-32 or later.
The Base Common Pack Suite
The Base Common Pack Suite is a set of generic Base Common Packs (BCPs), some
of which are mandatory requirements for working with common packs. All
common packs have a dependency on at least one BCP, the TCR Time BCP. For
Wireless users, BCPs supply the cross-vendor technology support provided by
GOMs and GOMlets for technology packs.
The Base Common Pack Suite is updated periodically with the latest versions of
the BCPs. Before you install or create common packs, you must download the
latest version of the Base Common Pack Suite, and install the BCPs you require.
The Base Common Pack Suite consists of an archive file containing a set of
common pack JAR files, which you must download and extract before installing.
v The TCR Time BCP is a mandatory dependent pack for all wireless and wireline
common packs, and provides a common time dimension for reporting. It must
be installed before you can work with any common packs.
v The Wireline Common BCP is a mandatory dependent pack for all wireline
common packs. It must be installed before you can work with wireline common
packs.
v Typically, a number of Wireless BCPs are dependent packs for a wireless
common pack. Wireless BCPs support a number of Global Object Model (GOM)
and GOMlet technology packs. Refer to individual wireless common pack
release notes to see the list of dependent packs for a particular pack.
For current version information about the Base Common Pack Suite, see the Known
issues with Tivoli Netcool Performance Manager 1.3.2 technote in the Support
knowledge base.
Installing the BCP package from the distribution
The Tivoli Netcool Performance Manager 1.3.2 distribution CD also contains the
BCP Suite (1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz).
About this task
Install a number of Base Common Packs (BCPs) from the Base Common Pack
Suite. Base Common Packs (BCPs) are themselves common packs and you install
them exactly as you install any other common packs.
Procedure
1. Download and extract the Base Common Pack Suite to a location of your
choice. See The Base Common Pack Suite on page 156.
2. Install the Base Common Pack Suite.
For instructions on installing Base Common Packs, see Installing common packs
in IBM Tivoli Netcool Performance Manager: Model Maker 1.2.0 Installation and User
Guide.
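The download-and-extract in step 1 can be sketched as follows. The archive here is a dummy built in a scratch directory, and the JAR file names are illustrative assumptions, not the real pack names:

```shell
work=$(mktemp -d)

# Stand-in for the downloaded suite: an archive of common-pack JAR files
mkdir -p "$work/bcp-suite"
touch "$work/bcp-suite/TCR_Time_BCP.jar" "$work/bcp-suite/Wireline_Common_BCP.jar"
tar -czf "$work/1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz" -C "$work" bcp-suite

# Extract to a location of your choice, then install the packs it contains
mkdir -p "$work/extracted"
tar -xzf "$work/1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz" -C "$work/extracted"
jars=$(ls "$work/extracted/bcp-suite")
echo "$jars"
rm -rf "$work"
```

On a real system, you would extract the suite archive from the distribution CD to a working directory and then install the BCPs through Model Maker as described above.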
Chapter 9. Uninstalling components
This section provides information on how to uninstall components.
When you perform an uninstall, the "uninstaller" is the same deployer used to
install Tivoli Netcool Performance Manager.
Removing a component from the topology
How to remove an installed component from the topology.
You might have a situation where you have modified a topology by both adding
new components and removing components (marking them "To Be Removed").
However, the deployer can work in only one mode at a time - installation mode or
uninstallation mode. In this situation, first run the deployer in uninstallation mode,
then run it again in installation mode.
Note: After the deployer has completed an uninstall, you must open the topology
(loaded from the database) in the Topology Editor before performing any
additional operations.
Restrictions and behavior
Restrictions and behaviour to observe when performing an uninstall.
Before you remove a component, note the following:
v You can only remove a component from the topology if you have uninstalled the
components that depend on it. To uninstall a component, you must remove it
from the topology and then run the deployer in uninstall mode and execute the
action(s) related to uninstalling the component.
v You can remove a host only if no components are configured or installed on it.
v If you remove a component and redeploy the file, the Topology Editor view is
not refreshed automatically. Reload the topology file from the database to view
the updated topology.
Note: Once components are marked for deletion, the topology must be consumed
by the deployer to propagate the required changes and load the updated file in the
database. When you open the database version of the topology, the "removed"
component will disappear from the topology.
To remove one or more components from the topology where the host system no
longer exists or is unreachable on the network, do the following:
1. Open the Topology Editor and remove all components related to the host
system.
2. Remove the host system from the topology.
3. Redeploy the topology, ignoring any messages related to the non-existent or
unreachable host.
4. At deployment, the modified topology is saved to the database without the
components that were previously installed on the host system.
DataChannel restrictions:
v You can remove the DataChannel Administrative Component only after all the
DataChannels have been removed.
v If you are uninstalling a DataChannel component, the component should first be
stopped. If you are uninstalling all DataChannel components on a host, then you
should remove the DataChannel entries from the crontab.
v If you delete a DataChannel or collector, the working directories (such as the
FTE and CME) are not removed; you must delete these directories manually.
v When a Cross-Collector CME (CC-CME) is installed on the system and formulas
are applied against it, the removal of collectors that the CC-CME depends on is
not supported. This is an exceptional case, that is, if you have not installed a
CC-CME, collectors can be removed.
DataView restrictions:
Uninstall DataView manually if other products are installed in the same Tivoli
Integrated Portal instance. Use the following procedure to uninstall a DataView
component:
1. Run the uninstall command:
<tip_location>/products/tnpm/dataview/bin/uninstall.sh <tip_location>
<tip_administrator_username> <tip_administrator_password>
2. Remove the DataView directory:
rm -rf <tip_location>/products/tnpm/dataview
3. In the Topology Editor:
a. Remove the DataView component.
b. Save the topology.
c. Run the deployer for uninstallation.
d. Mark the DataView step successful.
e. Run the unregister DataView step.
Note: After this manual uninstall is completed, the DataView instance remains
in the topology. This is not usually the case for uninstalled components.
Removing a component
How to remove a component from the topology.
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the component you want
to delete and select Remove from the pop-up menu.
4. The editor marks the component as "To Be Removed" and removes it from the
display.
5. Save the updated topology.
6. Run the deployer (see Starting the Deployer on page 100).
Note: If you forgot to save the modified topology, the deployer will prompt
you to save it first.
The deployer can determine that most of the components described in the
topology file are already installed, and removes the component that is no
longer part of the topology.
7. The deployer displays the installation steps page, which lists the steps required
to remove the component. Note that the name of the component to be removed
includes the suffix "R" (for "Remove"). For example, if you are deleting a
DataChannel, the listed component is DCR.
8. Click Run All to run the steps needed to delete the component.
9. When the installation ends successfully, the deployer uploads the updated
topology file into the database. Click Done to close the wizard.
Note: If you remove a component and redeploy the file, the Topology Editor
view is not refreshed automatically. Reload the topology file from the database
to view the updated topology.
What to do next
If you have uninstalled DataChannel components, you must bounce CMGR after
you have run the deployer, so that it picks up the updated configuration and
recognizes that the components have been removed. If you do not bounce CMGR
after the deployer runs, you may get errors when the components are restarted.
Uninstalling the entire Tivoli Netcool Performance Manager system
How to uninstall the entire system.
To uninstall Tivoli Netcool Performance Manager, you must have the CD or the
original electronic image. The uninstaller will prompt you for the location of the
image.
Order of uninstall
The order in which you must uninstall components.
About this task
For all deployments, you must use the Topology Editor to uninstall the Tivoli
Netcool Performance Manager components in the following order:
Procedure
1. DataLoad and DataChannel
When uninstalling DataChannel from a host, you must run ./dccmd stop all,
disable or delete the dataload cron processes and manually stop (kill -9) any
running channel processes (identified by running findvisual). See Appendix B,
DataChannel architecture, on page 171 for more information about the
findvisual command.
2. DataMart
3. DataChannel Administrative Components and any remaining DataChannel
components.
4. DataView. Also remove Tivoli Integrated Portal/Tivoli Common Reporting.
5. Tivoli Netcool Performance Manager Database (remove only after all the other
components have been removed). The database determines the operating
platform of the Tivoli Netcool Performance Manager environment.
Restrictions and behavior
Before you uninstall Tivoli Netcool Performance Manager you must note the
following restrictions and behavior:
v If you need to stop the uninstallation before it is complete, you can resume it.
The uninstaller relies on the /tmp/ProvisoConsumer directory to store the
information needed to resume an uninstall. However, if the ProvisoConsumer
directory is removed for any reason, the -Daction=resume command will not
work.
Note: When you reboot your server, the contents of /tmp might get cleaned out.
v When you run the uninstaller, it finds the components that are marked as
"Installed", marks them as "To Be Removed", then deletes them in order. The
deployer is able to determine the correct steps to be performed. However, if the
component is not in the Installed state (for example, the component was not
started), the Topology Editor deletes the component from the topology - not the
uninstaller.
v When the uninstallation is complete, some data files still remain on the disk. You
must remove these files manually. See Residual files on page 164 for the list of
files that must be deleted manually.
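The resume behavior described in the first restriction can be rehearsed in a scratch directory; the path a real uninstall uses is /tmp/ProvisoConsumer, simulated here so nothing real is touched:

```shell
# The uninstaller can resume only while its state directory survives
scratch=$(mktemp -d)
state="$scratch/ProvisoConsumer"   # stand-in for /tmp/ProvisoConsumer

mkdir -p "$state"
[ -d "$state" ] && echo "resume possible: ./deployer.bin -Daction=resume"

rm -rf "$state"                    # e.g. cleaned out by a server reboot
[ -d "$state" ] || echo "state lost: -Daction=resume will not work"
rm -rf "$scratch"
```

The guard illustrates why a reboot that clears /tmp forces you to restart the uninstall from the beginning rather than resume it.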
Performing the uninstall
How to uninstall the Tivoli Netcool Performance Manager installation.
Before you begin
The uninstall procedure assumes that you have updated the topology file:
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click each component you
want to delete and select Remove from the pop-up menu.
4. The editor marks all components you selected for removal as "To Be Removed"
and removes them from the display.
5. Save the updated topology.
About this task
To remove a Tivoli Netcool Performance Manager installation:
Procedure
1. You can start the uninstaller from within the Topology Editor or from the
command line.
To start the uninstaller from the Topology Editor:
v Select Run > Run Deployer for Uninstallation.
To start the uninstaller from the command line:
a. Log in as root.
b. Set and export your DISPLAY variable (see Setting up a remote X Window
display on page 36).
c. Change directory to the directory that contains the deployer. For example:
# cd /opt/IBM/proviso/deployer
d. Enter the following command:
# ./deployer.bin -Daction=uninstall
2. The uninstaller opens, displaying a welcome page. Click Next to continue.
3. Accept the default location of the base installation directory of the Oracle JDBC
driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click Choose to
navigate to another directory. Click Next to continue. A dialog opens, asking
whether you want to load a topology from disk.
4. Click No.
A dialog box opens, asking you to enter the details of the updated topology
file.
5. Enter the name of the topology file you updated as described in the "Before
you begin" section.
6. The database access window prompts for the security credentials. Enter the
host name (for example, delphi) and database administrator password (for
example, PV), and verify the other values (port number, SID, and user name).
Click Next to continue. The topology as stored in the database is then
compared with the topology loaded from the file.
7. The uninstaller displays several status messages, then displays a message
stating that the environment status was successfully downloaded and saved to
the file /tmp/ProvisoConsumer/Discovery.xml. Click Next to continue.
8. Repeat the process on each machine in the deployment.
Note: After the removal of each component using the Topology Editor, you
must reload the topology from the database.
Uninstalling the topology editor
How to uninstall the Topology Editor.
About this task
To uninstall the Topology Editor, follow the instructions in this section. Do not
simply delete the /opt/IBM directory. Doing so will cause problems when you try
to reinstall the Topology Editor. If the /opt/IBM directory is accidentally deleted,
perform the workaround documented in Installing the Topology Editor on page
82.
Note: Uninstall Tivoli Netcool Performance Manager before uninstalling the
Topology Editor.
To uninstall the Topology Editor:
Procedure
1. Log in as root.
2. Set and export your DISPLAY variable (see Setting up a remote X Window
display on page 36).
3. Change directory to the install_dir/uninstall directory. For example:
# cd /opt/IBM/proviso/uninstall
4. Enter the following command:
# ./Uninstall_Topology_Editor
5. The Uninstall wizard opens. Click Uninstall to uninstall the Topology Editor.
6. When the script is finished, click Done.
Residual files
How to remove the possible residual files that may exist after the uninstall process.
About this task
When you uninstall Tivoli Netcool Performance Manager, some of the files remain
on the disk and must be removed manually. After you exit from the deployer (in
uninstall mode), you must delete these residual files and directories manually.
Perform the following steps:
Procedure
1. Log in as oracle.
2. Enter the following commands to stop Oracle:
sqlplus "/ as sysdba"
shutdown abort
exit
lsnrctl stop
3. As root, enter the following commands to delete these files and directories:
rm -fR /tmp/PvInstall
rm -fR /var/tmp/PvInstall
rm -fR /opt/Proviso
rm -fR /opt/proviso
rm -fR $ORACLE_BASE/admin/PV
rm -fR $ORACLE_BASE/admin/skeleton
rm -fR $ORACLE_HOME/dbs/initPV.ora
rm -fR $ORACLE_HOME/dbs/lkPV
rm -fR $ORACLE_HOME/dbs/orapwPV
rm -fR $ORACLE_HOME/lib/libpvmextc.so
rm -fR $ORACLE_HOME/lib/libmultiTask.so
rm -fR $ORACLE_HOME/lib/libcmu.so
rm -fR $ORACLE_HOME/bin/snmptrap
rm -fR $ORACLE_HOME/bin/notifyDBSpace
rm -fR $ORACLE_HOME/bin/notifyConnection
where $ORACLE_BASE is /opt/oracle/ and $ORACLE_HOME is
/opt/oracle/product/11.2.0/.
4. Enter the following commands to clear your Oracle mount points and remove
any files in those directories:
rm -r /raid_2/oradata/*
rm -r /raid_3/oradata/*
5. Enter the following command to delete the temporary area used by the
deployer:
rm -fr /tmp/ProvisoConsumer
6. Delete the installer file using the following command:
rm /var/.com*
7. Delete the startup file, netpvmd.
v For Solaris, use the command:
rm /etc/init.d/netpvmd
v For AIX, use the command:
rm /etc/rc.d/init.d/netpvmd
v For Linux, use the command:
rm /etc/init.d/netpvmd
Note: The netpvmd startup and stop files are also present in /etc/rc2.d and
/etc/rc3.d as S99netpvmd and K99netpvmd. These files must also be removed.
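A minimal sketch of this cleanup as one guarded loop, covering the init.d script and the rc2.d/rc3.d copies from the note. It runs against a scratch directory tree (ROOT is an illustrative stand-in) so it is safe to try anywhere; on a live system the paths sit directly under /etc.

```shell
# Scratch tree standing in for the real /etc hierarchy.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/init.d" "$ROOT/etc/rc2.d" "$ROOT/etc/rc3.d"
touch "$ROOT/etc/init.d/netpvmd" \
      "$ROOT/etc/rc2.d/S99netpvmd" "$ROOT/etc/rc2.d/K99netpvmd" \
      "$ROOT/etc/rc3.d/S99netpvmd" "$ROOT/etc/rc3.d/K99netpvmd"

# Remove the startup script plus every S99/K99 copy, reporting each file.
for f in "$ROOT"/etc/init.d/netpvmd "$ROOT"/etc/rc?.d/[SK]99netpvmd; do
    [ -e "$f" ] && rm -f "$f" && echo "removed: $f"
done
```

On AIX, add /etc/rc.d/init.d/netpvmd to the first glob as shown in the procedure above.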
What to do next
Following TCR uninstallation:
To prevent any possible system instability caused by residual processes
post-uninstall of TCR, run the tcrClean.sh script on all systems where TCR has
been uninstalled:
1. On each host where TCR was uninstalled, change to the directory
containing tcrClean.sh:
cd /opt/IBM/proviso/deployer/proviso/bin/Util/
2. Run tcrClean.sh
3. When prompted, enter the location where TCR was installed.
Note: If you have uninstalled TCR on a remote host, send the tcrClean.sh file
to that host (for example, by using FTP) and run it there.
Appendix A. Remote installation issues
Remote installation of all Tivoli Netcool Performance Manager components is
supported.
A remote installation refers to installation on any host that is not the primary
deployer, that is, the host running the Topology Editor. For some systems, security
settings may not allow for components to be installed remotely. Before deploying
on such a system, you must be familiar with the information in this appendix.
When remote install is not possible
What to do when remote installation is not possible.
A remote host may not support FTP or the remote execution of files. Your
topology may include hosts on which:
v FTP is possible, but REXEC/RSH are not.
v Neither FTP nor REXEC/RSH is possible.
This section describes how to deploy in these situations.
FTP is possible, but REXEC or RSH are not
How to deploy when FTP is possible but remote execution (REXEC) and remote
shell (RSH) are not.
About this task
For any remote host where FTP is possible but REXEC or RSH is not, deploy
the required component or components by using one of the following two
options.
Procedure
v Option 1:
1. Unselect the Remote Command Execution option during the installation. The
deployer creates and transfers the directory with the required component
package in it.
2. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in Installing on a remote host using a secondary
deployer on page 168.
Neither FTP nor REXEC/RSH are possible
How to deploy when neither FTP, remote execution, nor remote shell is
possible.
About this task
For any remote host where neither FTP nor REXEC/RSH is possible, deploy the
required component or components by using one of the following two options.
Procedure
v Option 1:
1. Unselect the FTP option during the installation. The deployer creates a
directory containing the required component package.
2. Copy the required component directory to the target system.
3. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in Installing on a remote host using a secondary
deployer.
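Option 1's manual copy step can be sketched as follows, using local scratch directories in place of real hosts: pkg stands in for the deployer-created component directory and remote for the target system's staging area. The run.sh contents here are a stub; on a real deployment you would move the archive by whatever means the target allows (removable media, a one-way file drop, and so on) and run the real run.sh as root.

```shell
# Scratch stand-ins for the deployer host and the target host.
WORK=$(mktemp -d)
mkdir -p "$WORK/pkg" "$WORK/remote"
printf '#!/bin/sh\necho installing component\n' > "$WORK/pkg/run.sh"
chmod +x "$WORK/pkg/run.sh"

# Bundle the component directory created by the deployer...
tar -C "$WORK" -cf "$WORK/pkg.tar" pkg

# ...then, on the target system, unpack it and run the installer script.
tar -C "$WORK/remote" -xf "$WORK/pkg.tar"
"$WORK/remote/pkg/run.sh"
```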
Installing on a remote host using a secondary deployer
The general procedure for installing a component on a remote host using a
secondary deployer.
About this task
A secondary deployer is used when the host you wish to install on does not
support remote installation.
The following steps describe how to install a Tivoli Netcool Performance Manager
component using a secondary deployer. For the purposes of clarity, we will name
the primary deployer host delphi, and the host on which we want to install a
component using the secondary deployer we will name corinth.
Procedure
1. Copy the Tivoli Netcool Performance Manager distribution to the server on
which you would like to set up the secondary deployer, that is, copy the
distribution to corinth. For more information on copying the Tivoli Netcool
Performance Manager distribution to a server, see Downloading the Tivoli
Netcool Performance Manager distribution to disk on page 48.
2. Open the Topology Editor on the primary deployer host, that is, on delphi, and
add the remote component to the topology definition.
You may have completed this task already when creating your original
topology definition. If you have already added the remote component to your
topology definition, skip to the next step.
3. Deploy the new topology containing the added component using the Topology
Editor. This is done by clicking Run > Run Deployer for Installation. This will
push the edited topology to the database.
4. Open the Deployer on corinth by doing the following:
a. Connect to corinth, change to the directory where you have downloaded the
product distribution, and launch the deployer either in graphical mode (by
starting the Launchpad and clicking Start Deployer) or CLI mode (by
navigating to the directory containing the deployer and entering the
command ./deployer.bin).
b. Enter the database credentials when prompted. The deployer connects to
the database.
For more information on how to run a secondary deployer, see Secondary
Deployers on page 101.
Note: Due to Step 3, the secondary deployer sees the topology data and
knows that the required component is still to be installed on corinth.
5. Follow the on screen instructions to install the desired component.
Note: You cannot launch the deployer simultaneously from two different hosts.
Only one deployer can be active at any given time.
Appendix B. DataChannel architecture
This section provides detailed information about the DataChannel architecture.
Data collection
DataChannel data collection.
A Tivoli Netcool Performance Manager DataChannel consists of a number of
components, including the following:
v File Transfer Engine (FTE)
v Complex Metric Engine (CME)
v Daily Database Loader (DLDR)
v Hourly Database Loader (LDR)
v Plan Builder (PBL)
v Channel Manager
The FTE, DLDR, LDR, and PBL components are assigned to each configured
DataChannel. The FTE and CME components are assigned to one or more
Collector subchannels.
Data is produced by Tivoli Netcool Performance Manager DataLoad Collectors.
Data from both SNMP and Bulk Collectors feeds into a subchannel's channel
processor.
Data moves through the CME and is synchronized in the Hourly Loader. The
Hourly Loader computes group aggregations from resource aggregation records.
The Daily Loader provides statistics on metric channel tables and metric
tablespaces and inserts data into the database.
Data is moved from one channel component to another as files. These files are
written to and read from staging directories between each component. Within each
staging directory there are subdirectories named do, output, and done. The do
subdirectory contains files that are waiting to be processed by a channel
component. The output subdirectory stores data for the next channel component to
work on. After files are processed, they are moved to the done directory. All file
movement is accomplished by the FTE component.
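A sketch of the staging layout described above, built in a scratch directory. The do/output/done names come from the text; DCHOME and the CME.1.1 component name are illustrative. The find command is a quick way to count files still waiting to be processed across all components.

```shell
# Scratch stand-in for the DataChannel home (/opt/datachannel on a real host).
DCHOME=$(mktemp -d)
mkdir -p "$DCHOME/CME.1.1/do" "$DCHOME/CME.1.1/output" "$DCHOME/CME.1.1/done"
touch "$DCHOME/CME.1.1/do/metrics_01.dat"   # a file waiting to be processed

# Count files sitting in any component's do directory.
pending=$(find "$DCHOME" -type f -path '*/do/*' | wc -l | tr -d ' ')
echo "pending files: $pending"
```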
Data aggregation
A DataChannel aggregates data collected by collectors for eventual use by
DataView reports.
The DataChannel provides online statistical calculations of raw collected data, and
detects real-time threshold violations.
Aggregations include:
v Resource aggregation for every metric and resource
v Group aggregation for every group
v User-defined aggregation computed from raw data
Threshold detections in real time include:
v Raw data violating configured thresholds
v Raw data violating configured thresholds and exceeding the threshold during a
specific duration of time
v Averaged data violating configured thresholds
Management programs and watchdog scripts
The DataChannel management programs and their corresponding watchdog scripts.
The following table lists the names and corresponding watchdog scripts for the
DataChannel management programs running on different DataChannel hosts.
Table 10: Programs and Scripts

Component             Program Executable*   Watchdog Script   Notes
Channel Name Server   CNS                   cnsw              Runs on the host running the Channel Manager.
Log Server            LOG                   logw
Channel Manager       CMGR                  cmgrw
Application Manager   AMGR                  amgrw             One per subchannel host and one on the Channel Manager host.

* The actual component's executable file seen in the output of ps -ef is named XXX_visual,
where XXX is an entry in this column. For example, the file running for CMGR is seen as
CMGR_visual.
The watchdog scripts run every few minutes from cron. Their function is to
monitor their corresponding management component, and to restart it if necessary.
You can add watchdog scripts for the Channel Manager programs to the crontab
for the pvuser on each host on which you installed a DataChannel component.
To add watchdog scripts to the crontab:
1. Log in as pvuser. Make sure this login occurs on the server running the Channel
Manager components.
2. At a shell prompt, go to the DataChannel conf subdirectory. For example:
$ cd /opt/datachannel/conf
3. Open the file dc.cron with a text editor. (The dc.cron files differ for different
hosts running different DataChannel programs. The following example shows
the dc.cron file for the host running the Channel Manager programs.)
0,5,10,15,20,25,30,35,40,45,50,55 1-31 1-12 0-6 /opt/datachannel/bin/cnsw >
/dev/null 2>&1
1,6,11,16,21,26,31,36,41,46,51,56 1-31 1-12 0-6 /opt/datachannel/bin/logw >
/dev/null 2>&1
2,7,12,17,22,27,32,37,42,47,52,57 1-31 1-12 0-6 /opt/datachannel/bin/cmgrw >
/dev/null 2>&1
3,8,13,18,23,28,33,38,43,48,53,58 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
4. Copy the lines in the dc.cron file to the clipboard.
5. At another shell prompt, edit the crontab for the current user.
export EDITOR=vi
crontab -e
A text editor session opens, showing the current crontab settings.
6. Paste the lines from the dc.cron tab into the crontab file. For example:
0 * * * * [ -f /opt/datamart/dataMart.env ] && [ -x /opt/datamart/bin/pollinv ]
&& ....
0,5,10,15,20,25,30,35,40,45,50,55 1-31 1-12 0-6 /opt/datachannel/bin/cnsw >
/dev/null 2>&1
1,6,11,16,21,26,31,36,41,46,51,56 1-31 1-12 0-6 /opt/datachannel/bin/logw >
/dev/null 2>&1
2,7,12,17,22,27,32,37,42,47,52,57 1-31 1-12 0-6 /opt/datachannel/bin/cmgrw >
/dev/null 2>&1
3,8,13,18,23,28,33,38,43,48,53,58 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
7. Save and exit the crontab file.
8. Repeat steps 1 to 7 on each DataChannel host, with this difference:
The dc.cron file on collector and loader hosts will have only one line, like this
example:
0,5,10,15,20,25,30,35,40,45,50,55 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
On such hosts, this is the only line you need to add to the pvuser crontab.
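The crontab merge in steps 3 to 7 can also be done non-interactively. This sketch merges scratch files so it runs anywhere; on a real host you would replace the first file with the output of crontab -l and pipe the merged result into crontab -.

```shell
# Scratch stand-ins for the current crontab and the dc.cron file.
WORK=$(mktemp -d)
printf '0 * * * * /opt/datamart/bin/pollinv\n' > "$WORK/current"
printf '%s\n' \
  '0,5,10,15,20,25,30,35,40,45,50,55 1-31 1-12 0-6 /opt/datachannel/bin/amgrw > /dev/null 2>&1' \
  > "$WORK/dc.cron"

# Append the dc.cron entries to the existing crontab contents.
# Live equivalent: ( crontab -l; cat /opt/datachannel/conf/dc.cron ) | crontab -
cat "$WORK/current" "$WORK/dc.cron" > "$WORK/merged"
lines=$(wc -l < "$WORK/merged" | tr -d ' ')
echo "merged crontab has $lines lines"
```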
DataChannel application programs
DataChannel Application Program names and descriptions.
The DataChannel subchannel application programs are listed in Table 11.
Table 11: DataChannel Subchannel Application Program Names

DataChannel Program*   Description                                                Example
BCOL.n.c               Bulk Collector process for channel n, Collector number c   BCOL.1.2
UBA.n.c                UBA Bulk Collector process for channel n, Collector
                       number c                                                   UBA.1.100
CME.n.s                Complex Metric Engine for channel n, Collector number s    CME.2.1
DLDR.n                 Daily Loader for channel n                                 DLDR.1
LDR.n                  Hourly Loader for channel n                                LDR.2
FTE.n.                 File Transfer Engine for channel n                         FTE.1.1
PBL.n.                 Plan Builder for channel n                                 PBL.1

* The actual application's executable file visible in the output of ps -ef is named
XXX_visual, where XXX is an entry in this column.
Note: For historical reasons, the SNMP DataLoad collector is managed by Tivoli
Netcool Performance Manager DataMart, and does not appear in Table 11.
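The XXX_visual naming convention can be captured in a tiny helper (illustrative, not part of the product) that maps a program name from Table 10 or Table 11 to the process name to look for in ps -ef output:

```shell
# Hypothetical helper: map a DataChannel program name to its process name.
visual_name() {
    echo "${1}_visual"
}

visual_name CMGR      # management program from Table 10
visual_name CME.2.1   # subchannel application from Table 11
```

On a live host, ps -ef | grep '[_]visual' then lists every running component at once.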
Starting the DataChannel management programs
How to check if DataChannel management programs are running and then start
application programs.
Procedure
v Verify that the DataChannel management programs are running:
1. Log in as pvuser on each DataChannel host.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the findvisual command:
$ ./findvisual
In the resulting output, look for:
The AMGR process on every DataChannel host
The CNS, CMGR, LOG, and AMGR processes on the Channel Manager
host
v If the DataChannel management programs are running on all DataChannel
hosts, start the application programs on all DataChannel hosts by following
these steps:
1. Log in as pvuser. Make sure this login occurs on the host running the
Channel Manager programs.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the following command to start all DataChannel applications on all
configured DataChannel hosts:
./dccmd start all
The command shows a success message like the following example.
Done: 12 components started, 0 components already running
See the IBM Tivoli Netcool Performance Manager: Command Line Interface Guide
for information about the dccmd command.
There is a Java process associated with the LOG server that must be stopped
should the proviso.log need to be re-created:
1. To find this Java process, enter the following:
ps -eaf | grep LOG
The output should be similar to the following:
pvuser 15774 15773 0 Nov 29 ?
7:59 java -Xms256m -Xmx384m com.ibm.tivoli.analytics.Main -a LOG
2. Kill this process using the command:
kill -9 15774
where 15774 is the PID of the Java process discovered by using the grep
command.
3. Restart the LOGW process.
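Steps 1 and 2 above can be combined into one guarded command. This sketch assumes pgrep is available on your platform; the [L]OG bracket in the pattern keeps the command from ever matching its own command line. The kill -9 runs only when a matching process is actually found.

```shell
# Find the LOG server's java process by its full argument list (-f).
LOG_PID=$(pgrep -f 'com\.ibm\.tivoli\.analytics\.Main -a [L]OG' || true)

if [ -n "$LOG_PID" ]; then
    kill -9 $LOG_PID
    echo "killed LOG java process: $LOG_PID"
else
    echo "no LOG java process found"
fi
```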
Starting the DataLoad SNMP collector
How to start the DataLoad SNMP Collector
About this task
After you start the DataChannel components, check every server that hosts a
DataLoad SNMP collector to make sure the collectors are running. To check
whether a collector is running, run the following command:
ps -ef | grep -i pvmd
If the collector is running, you will see output similar to the following:
pvuser 27118 1 15 10:03:27 pts/4 0:06 /opt/dataload/bin/pvmd -nologo
-noherald /opt/dataload/bin/dc.im -headless -a S
If a collector is not running, perform the following steps:
Procedure
1. Log into the server that is running Tivoli Netcool Performance Manager SNMP
DataLoad by entering the username and password you specified when
installing SNMP DataLoad.
2. Source the DataLoad environment file by entering the following command:
. $DLHOME/dataLoad.env
where $DLHOME is the location where SNMP DataLoad is installed on the system
(/opt/dataload, by default).
Note: If DataLoad shares the same server as DataMart, make sure you unset
the environment variable by issuing the following command from a BASH shell
command line:
unset PV_PRODUCT
3. Change to the DataLoad bin directory by entering the following command:
cd $PVMHOME/bin
4. Start the DataLoad SNMP collector using the following command:
pvmdmgr start
The command displays the following message when the SNMP collector has
been successfully started:
PVM Collecting Daemon is running.
Results
The pvmdmgr script, which controls the starting and stopping of SNMP
collectors, prevents multiple collector instances from running simultaneously.
If a user starts a second instance, that second instance exits by itself in
under two minutes, without ever contacting or confusing the relevant watchdog
script.
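The check-then-start sequence can be sketched as one guarded command (a sketch, not a product script): the [p]vmd bracket keeps grep from matching its own command line, and PVMHOME defaults to the documented /opt/dataload location.

```shell
# Default DataLoad location; override PVMHOME for a non-default install.
PVMHOME=${PVMHOME:-/opt/dataload}

# The [p]vmd pattern matches pvmd processes but never the grep itself.
if ps -ef | grep -i '[p]vmd' > /dev/null; then
    echo "collector already running"
else
    echo "collector not running; start it with: $PVMHOME/bin/pvmdmgr start"
fi
```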
DataChannel management components in a distributed configuration
A description of the DataChannel management components.
Two channels running on the same system share a common Application Manager
(AMGR) that has a watchdog script, amgrw. The AMGR is responsible for starting,
monitoring through watchdog scripts, and gathering status for each application
server process for the system it runs on. Application programs include the FTE,
CME, LDR, and DLDR programs.
An example of multiple processes running on the same host is:
v Application Manager (AMGR)
v Complex Metric Engines (CME)
v File Transfer Engines (FTE)
v Hourly Data Loaders (LDR)
v Daily Data Loaders (DLDR)
Each program has its own set of program and staging directories.
Manually starting the Channel Manager programs
If you need to manually start the Channel Manager programs, you must do so in a
certain order.
About this task
After a manual start, the program's watchdog script restarts the program as
required.
Procedure
v To start the Channel Manager programs manually:
1. Log in as pvuser on the host running the Channel Manager programs.
2. At a shell prompt, change to the DataChannel bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Enter the following commands at a shell prompt, in this order:
For the Channel Name Server, enter:
./cnsw
For the Log Server, enter:
./logw
For the Channel Manager, enter:
./cmgrw
For the Application Manager, enter:
./amgrw
v To manually start the DataChannel programs on all hosts in your DataChannel
configuration:
1. Start the Channel Manager programs, as described in the previous section.
2. On each DataChannel host, start the amgrw script.
3. On the Channel Manager host, start the application programs as described in
Starting the DataChannel management programs on page 174.
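The required start order can be expressed as a loop that stops at the first failure. This sketch creates stub scripts in a scratch DCHOME so it runs anywhere; on a real host, point DCHOME at /opt/datachannel and remove the stub setup.

```shell
# Scratch DCHOME with stub watchdog scripts, standing in for the real ones.
DCHOME=$(mktemp -d)
mkdir -p "$DCHOME/bin"
for prog in cnsw logw cmgrw amgrw; do
    printf '#!/bin/sh\necho started %s\n' "$prog" > "$DCHOME/bin/$prog"
    chmod +x "$DCHOME/bin/$prog"
done

# Start the Channel Manager programs in the documented order;
# stop the sequence at the first failure.
for prog in cnsw logw cmgrw amgrw; do
    "$DCHOME/bin/$prog" || { echo "failed to start $prog"; break; }
done
```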
Adding DataChannels to an existing system
How to add DataChannels to an existing system.
About this task
If you add and configure a new remote DataChannel using the Topology Editor
after the initial deployment of your topology, the system will not pick up
these changes unless you manually stop and restart the relevant processes, as
explained in Chapter 6, Modifying the current deployment, on page 117.
To shut down the DataChannel:
Note: The DataChannel CMGR, CNS, AMGR, and LOG visual processes must
remain running until you have gathered the DataChannel parameters from your
environment.
Procedure
1. On the DataChannel host, log in as the component user, such as pvuser.
2. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default) by using the following command:
$ cd /opt/datachannel/bin
3. Shut down the DataChannel FTE.
Prior to shutting down all DataChannel components, some DataChannel work
queues must be emptied.
To shut down the DataChannel FTE and empty the work queues:
$ ./dccmd stop FTE.*
4. Let all DataChannel components continue to process until the .../do directories
for the FTE and CME components contain no files.
The .../do directories are located in the subdirectories of $DCHOME (typically,
/opt/datachannel) that contain the DataChannel components - for example,
FTE.1.1, CME.1.1.
5. Shut down all CMEs on the same hour, so that the operator state files stay
in sync with each other. To accomplish this:
a. Identify the leading CME, either by looking at the do and done directories
in each CME and the DAT files inside them, or by using dccmd status all to
see which CME is reporting the latest hour in its processing status.
b. Stop each CME as it reaches that hour, using the same approach to find
the hour being processed, until all CMEs are stopped. CMEs are stopped by
using the command:
$ ./dccmd stop CME
6. Use the following dccmd commands to stop the DataChannel applications:
$ ./dccmd stop DLDR
$ ./dccmd stop LDR
$ ./dccmd stop FTE
$ ./dccmd stop DISC
$ ./dccmd stop UBA (if required)
Note: For details on how to restart a DataChannel, see Manually starting the
Channel Manager programs on page 176.
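Step 4's wait for the do directories to empty can be sketched as a polling loop. The scratch DCHOME, the FTE.1.1/CME.1.1 names, and the 30-second interval are illustrative; here the directories start empty, so the loop exits immediately.

```shell
# Scratch stand-in for $DCHOME (/opt/datachannel on a real host).
DCHOME=$(mktemp -d)
mkdir -p "$DCHOME/FTE.1.1/do" "$DCHOME/CME.1.1/do"

# True when no files remain in any FTE or CME do directory.
drained() {
    [ -z "$(find "$DCHOME"/FTE.*/do "$DCHOME"/CME.*/do -type f 2>/dev/null)" ]
}

until drained; do
    echo "waiting for work queues to drain..."
    sleep 30
done
echo "queues empty; safe to stop the remaining components"
```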
DataChannel terminology
Terms used throughout the Tivoli Netcool Performance Manager DataChannel
installation procedure.
v Collector Subchannel: A subdivision of a DataChannel, with each Collector
subchannel associated with a single Collector and CME. The division into
Collector subchannels helps eliminate latency or loss of data caused by delayed
Collectors. If a Collector subchannel disconnects for a period of time, only that
Collector is affected, and all other Collector subchannels continue processing.
The number of Collector subchannels per DataChannel differs according to the
needs of a particular deployment. See the IBM Tivoli Netcool Performance Manager:
Configuration Recommendations Guide for information related to system
configuration requirements. The terms Collector and Collectors are used to refer
to Collector subchannel and Collector subchannels.
v Complex Metric Engine (CME): A DataChannel program that performs
on-the-fly calculations on raw metrics data for a DataChannel. These calculations
include time aggregations for resources, as well as statistical calculations using
raw data, thresholds, properties, and constants as inputs. If CME formulas are
defined for the incoming metrics data, the values are processed by those
formulas. The CME synchronizes metadata at the beginning of each hour, and
only processes the metadata that exists for the hour.
v CORBA (Common Object Request Broker Architecture): An industry-standard
programming architecture that enables pieces of programs, called objects, to
communicate with one another regardless of the programming language that
they were written in or the operating system they are running on.
v Daily Database Loader (DLDR): A DataChannel program that gathers statistical
data processed by a DataChannel's CME and inserts it into the Tivoli Netcool
Performance Manager database. There is one Daily Loader for each
DataChannel; it is part of the channel processor component of the DataChannel.
v DataChannel Remote (DCR): A DataChannel installation configuration in which
the subchannel, CME and FTE components are installed and run on one host,
while the Loader components are installed and run on another host. In this
configuration, the subchannel hosts can continue processing data and detecting
threshold violations, even while disconnected from the Channel Manager server.
v DataChannel Standard: A DataChannel installation configuration in which all
component programs of each subchannel are installed and run on the same
server. DataChannel Standard installation is described in this chapter.
v DataLoad Bulk Collector: A DataLoad program that processes different data
formats and resource files supplied by network devices, network management
platforms, network probes, and other types of resources such as BMC Patrol.
The Bulk Collector translates bulk statistics provided in flat files into Tivoli
Netcool Performance Manager metadata and data. If operating in synchronized
inventory mode, the Bulk Collector passes the resources and properties to the
Tivoli Netcool Performance Manager DataMart Inventory application.
v DataLoad SNMP Collector: A DataLoad program that collects data from
network resources via SNMP polling. The SNMP Collector provides raw data
files to the DataChannel for processing by the CME.
v DataLoad UBA Bulk Collector: A DataLoad program that imports data from
files (referred to as Bulk input files) generated by non-SNMP devices, including
Alcatel 5620 NM, Alcatel SAM, and Cisco CWM. These Bulk input files contain
formats that the Bulk Collector is unable to handle.
v Discovery Server (DISC): A DataChannel program that runs as a daemon to
perform an inventory of SNMP network devices from which to gather statistical
data.
v Hourly Database Loader (LDR): A DataChannel program that serves as the
point of data synchronization and merging, and of late data processing, for each
DataChannel. The Hourly Loader gathers files generated by the CME, computes
group aggregations from the individual resource aggregation records, and loads
the data into the Tivoli Netcool Performance Manager database.
v File Transfer Engine (FTE): A DataChannel program that periodically scans the
Collector output directories, examines the global execution plan to determine
which computation requests require the data, then sorts the incoming data for
storage.
v Next-Hour Policy: Specifies the number of seconds to wait past the hour for files
to arrive before the next hourly directory is created. The default value causes the
DataChannel to wait until 15 minutes after the hour before it starts processing
data for the next hour. To avoid losing data, you need to set a percentage and a
time-out period during the configuration of the CME.
v Plan Builder (PBL): A DataChannel program that creates the metric data routing
and processing plan for the other components in the DataChannel.
Appendix C. Aggregation sets
This appendix describes how to configure and install aggregation sets.
Overview
An aggregation set is a grouping of network management raw data and computed
statistical information stored in the Tivoli Netcool Performance Manager database
for a single timezone.
For example, if your company provides network services to customers in both the
Eastern and Central US timezones, you must configure two aggregation sets.
Because each aggregation set is closely linked with a timezone, aggregation
sets are sometimes referred to as timezones in the Tivoli Netcool Performance
Manager documentation. However, the two concepts are separate.
Note: "Aggregation set" is abbreviated to "Aggset" in some setup program menus.
Configuring aggregation sets
How to configure aggregation sets.
About this task
When you configure an aggregation set, the following information is stored in the
database:
v The timezone ID number associated with this aggregation set.
v The timezone offset from GMT, in seconds.
v Optionally, the dates that Daylight Saving Time (DST) begins and ends in the
associated timezone for each year from the present through 2010. (Or you can
configure an aggregation set to ignore DST transitions.)
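The stored offset is simply the GMT offset converted to seconds, which is worth sanity-checking by hand. For Eastern Standard Time (GMT-5, or GMT-4 while DST is in effect) the arithmetic matches the -18000 and -14400 values shown in the sample output later in this appendix:

```shell
std_hours=-5    # EST offset from GMT, in hours
dst_hours=-4    # EST offset while DST is in effect
std_offset=$((std_hours * 3600))
dst_offset=$((dst_hours * 3600))
echo "EST offset:     $std_offset seconds"   # -18000
echo "EST DST offset: $dst_offset seconds"   # -14400
```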
You configure an aggregation set either by creating a new set or by modifying an
existing set. The first aggregation set is installed by default when you install Tivoli
Netcool Performance Manager Datamart, so if your network will monitor only one
timezone, you need only to configure the existing set.
To configure an aggregation set:
Procedure
1. Log in as root. (Remain logged in as root for the remainder of this appendix.)
2. At a shell prompt, change to the directory where Tivoli Netcool Performance
Manager DataMart program files are installed. For example:
# cd /opt/datamart
3. Load the DataMart environment variables into your current shell's
environment using the following command:
# . ./dataMart.env
4. Change to the bin directory:
# cd bin
5. Enter the following command:
# ./create_modify_aggset_def
The following menu is displayed:
--------------------------------------------------
Tivoli Netcool Performance Manager Database
Date: <Current Date> <Current Time>
Script name: create_modify_aggset_def
Script revision: <revision_number>
- Aggregation set creation
- Aggregation set modification
- DST configuration for an aggregation set
--------------------------------------------------
Database user................. : [ PV_ADMIN ]
Database user password........ : [ ]
Menu :
1. Input password for PV_ADMIN.
2. Configure an aggset.
0. Exit
Choice : 1
6. Type 1 at the Choice prompt and press Enter to enter the password for
PV_ADMIN. The script prompts twice for the password you set up for
PV_ADMIN.
==> Enter password for PV_ADMIN : PV
==> Re-enter password : PV
Note: The script obtains the DB_USER_ROOT setting from the Tivoli Netcool
Performance Manager database configured in previous chapters, and
constructs the name of the Tivoli Netcool Performance Manager database
administrative login name, PV_ADMIN, from that base. If you set a different
DB_USER_ROOT setting, the "Database user" entry reflects your settings. For
example, if you previously set DB_USER_ROOT=PROV, this script would
generate the administrative login name PROV_ADMIN.
7. To configure the first aggregation set, type 2 at the Choice prompt and press
Enter twice.
The script shows the current settings for the aggregation set with ID 0
(configured by default):
The following Time Zones are defined into the Database :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
0 | 1970/01/01 00:00:00 | 0 | GMT | Aggset created
==> Press <Enter> to continue ....
You can use this aggregation set as-is, or modify it to create a new timezone.
8. Press Enter. A list of predefined timezones and their timezone numbers is
displayed:
Num | OffSet | Time zone Name | Short | Long
| Hours | | Description | Description
___________________________________________________________________________________
[ 1] : -10:00 | America/Adak | HAST | Hawaii-Aleutian Standard Time
[ 2] : -10:00 | Pacific/Rarotonga | CKT | Cook Is. Time
[ 3] : -09:00 | America/Anchorage | AKST | Alaska Standard Time
[ 4] : -09:00 | AST | AKST | Alaska Standard Time
[ 5] : -08:00 | PST | PST | Pacific Standard Time
[ 6] : -07:00 | MST | MST | Mountain Standard Time
[ 7] : -06:00 | America/Mexico_City| CST | Central Standard Time
[ 8] : -06:00 | CST | CST | Central Standard Time
[ 9] : -05:00 | EST | EST | Eastern Standard Time
[10] : -04:00 | America/Santiago | CLT | Chile Time
[11] : -03:00 | America/Sao_Paulo | BRT | Brazil Time
[12] : -01:00 | Atlantic/Azores | AZOT | Azores Time
[13] : 000:00 | Europe/London | GMT | Greenwich Mean Time
[14] : +01:00 | Europe/Paris | CET | Central European Time
[15] : +01:00 | ECT | CET | Central European Time
[16] : +02:00 | Africa/Cairo | EET | Eastern European Time
[17] : +02:00 | Europe/Helsinki | EET | Eastern European Time
[18] : +02:00 | Europe/Bucharest | EET | Eastern European Time
[19] : +03:00 | Asia/Baghdad | AST | Arabia Standard Time
[20] : +03:00 | Europe/Moscow | MSK | Moscow Standard Time
[21] : +04:00 | Asia/Baku | AZT | Azerbaijan Time
[22] : +05:00 | Asia/Yekaterinburg | YEKT | Yekaterinburg Time
[23] : +06:00 | Asia/Novosibirsk | NOVT | Novosibirsk Time
[24] : +07:00 | Asia/Krasnoyarsk | KRAT | Krasnoyarsk Time
[25] : +08:00 | Asia/Irkutsk | IRKT | Irkutsk Time
[26] : +09:00 | Asia/Yakutsk | YAKT | Yakutsk Time
[27] : +10:00 | Australia/Sydney | EST | Eastern Standard Time (New
South Wales)
[28] : +11:00 | Pacific/Noumea | NCT | New Caledonia Time
[29] : +12:00 | Pacific/Auckland | NZST | New Zealand Standard Time
[30] : +12:00 | Asia/Anadyr | ANAT | Anadyr Time
==> Select Time Zone number [1-30 ] (E : Exit) : 9
9. Type the number of the timezone you want to associate with aggregation set
0. For example, type 9 for Eastern Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 0
To associate the specified timezone, EST, with the database's default
aggregation set, type 0.
10. The script asks whether you want your aggregation set to include Daylight
Saving Time (DST) transition dates:
Does your Time Zone manage DST [Y/N] : Y
For most time zones, type Y and press Enter.
11. The script displays the results:
Appendix C. Aggregation sets 183
Complete with Success ...
The following Time Zone has been modified:
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
0 | 1970/01/01 00:00:00 | 0 | GMT | Aggset created
0 | 2004/09/29 22:00:00 | -14400 | EST_2004_DST | Aggset created
0 | 2004/10/31 06:00:00 | -18000 | EST_2004 | Aggset created
0 | 2005/04/03 07:00:00 | -14400 | EST_2005_DST | Aggset created
0 | 2005/10/30 06:00:00 | -18000 | EST_2005 | Aggset created
0 | 2006/04/02 07:00:00 | -14400 | EST_2006_DST | Aggset created
0 | 2006/10/29 06:00:00 | -18000 | EST_2006 | Aggset created
0 | 2007/04/01 07:00:00 | -14400 | EST_2007_DST | Aggset created
0 | 2007/10/28 06:00:00 | -18000 | EST_2007 | Aggset created
0 | 2008/04/06 07:00:00 | -14400 | EST_2008_DST | Aggset created
0 | 2008/10/26 06:00:00 | -18000 | EST_2008 | Aggset created
0 | 2009/04/05 07:00:00 | -14400 | EST_2009_DST | Aggset created
0 | 2009/10/25 06:00:00 | -18000 | EST_2009 | Aggset created
0 | 2010/04/04 07:00:00 | -14400 | EST_2010_DST | Aggset created
0 | 2010/10/31 06:00:00 | -18000 | EST_2010 | Aggset created
==> Press <Enter> to continue ....
Note: The dates that appear in your output will most likely be different from
the dates that appear in the example.
12. Press Enter to return to the script's main menu.
13. To configure a second aggregation set, type 2 at the Choice prompt and press
Enter three times.
14. Specify the timezone number of your second timezone. For example, type 8 to
specify Central Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 1
If you enter a set number that does not exist in the database, the script creates
a new aggregation set with that number. Type the next available set number,
1.
15. Respond Y to the timezone management query.
The script shows the results of creating the second aggregation set:
The following Time Zone has been modified :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
1 | 2004/09/29 23:00:00 | -18000 | CST_2004_DST | Aggset created
1 | 2004/10/31 07:00:00 | -21600 | CST_2004 | Aggset created
1 | 2005/04/03 08:00:00 | -18000 | CST_2005_DST | Aggset created
1 | 2005/10/30 07:00:00 | -21600 | CST_2005 | Aggset created
1 | 2006/04/02 08:00:00 | -18000 | CST_2006_DST | Aggset created
1 | 2006/10/29 07:00:00 | -21600 | CST_2006 | Aggset created
1 | 2007/04/01 08:00:00 | -18000 | CST_2007_DST | Aggset created
1 | 2007/10/28 07:00:00 | -21600 | CST_2007 | Aggset created
1 | 2008/04/06 08:00:00 | -18000 | CST_2008_DST | Aggset created
1 | 2008/10/26 07:00:00 | -21600 | CST_2008 | Aggset created
1 | 2009/04/05 08:00:00 | -18000 | CST_2009_DST | Aggset created
1 | 2009/10/25 07:00:00 | -21600 | CST_2009 | Aggset created
1 | 2010/04/04 08:00:00 | -18000 | CST_2010_DST | Aggset created
1 | 2010/10/31 07:00:00 | -21600 | CST_2010 | Aggset created
==> Press <Enter> to continue ....
16. Press Enter to return to the main menu, where you can add more aggregation
sets, or type 0 to exit.
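The aggregation-set listings in this procedure store UTC offsets in seconds (for example, -18000 for EST standard time and -14400 for EST daylight time). As a quick sanity check of that conversion, a one-line sketch (not part of the product scripts):

```shell
# Convert a stored offset in seconds to whole hours; -18000 is EST
# standard time (-5h), -14400 is EST daylight time (-4h).
offset=-18000
hours=$((offset / 3600))
echo "UTC offset: ${hours}h"
```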
184 IBM Tivoli Netcool Performance Manager: Installation Guide
The next step is to install the aggregation sets on the server on which you
installed Tivoli Netcool Performance Manager DataMart.
Installing aggregation sets
How to install aggregation sets.
When you install DataMart, aggregation set 0 is automatically installed. If you
configured only the default aggregation set (in Configuring aggregation sets on
page 181), you can skip this section. However, if you configured timezones for
additional aggregation sets, you must install the non-zero sets using the steps in
this section.
Start the Tivoli Netcool Performance Manager setup program
How to start the Tivoli Netcool Performance Manager Setup Program.
About this task
Start the setup program by following these steps:
Procedure
1. Make sure your EDITOR environment variable is set.
2. Change to the /opt/Proviso directory:
cd /opt/Proviso
3. Start the setup program:
./setup
The setup program's main menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Main Menu]
1. Install
2. Upgrade
3. Uninstall
0. Exit
Choice [1]> 1
4. Type 1 at the Choice prompt and press Enter. The Install menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Install]
1. Tivoli Netcool Performance Manager Database Configuration
0. Previous Menu
Choice [1]> 1
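The launch sequence in steps 1 through 3 above can be sketched as follows; the fallback to vi is an assumption for illustration, not a product requirement:

```shell
# Ensure EDITOR is set before starting setup; fall back to vi (an
# assumed default) so the parameters file can be edited later.
editor="${EDITOR:-vi}"
EDITOR="$editor"; export EDITOR
echo "EDITOR=$EDITOR"
# cd /opt/Proviso && ./setup   # then run the setup program interactively
```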
Set aggregation set installation parameters
How to set the aggregation set installation parameters.
Procedure
1. Type 1 at the Choice prompt and press Enter. Setup displays the installation
environment menu:
Tivoli Netcool Performance Manager Database Configuration <version number> - [installation environment]
1. PROVISO_HOME : /opt/Proviso
2. DATABASE_DEF_HOME : -
3. CHANNELS_DEF_HOME : -
4. AGGRSETS_DEF_HOME : -
5. Continue
0. Exit
Choice [5]> 5
Note: Menu options 2, 3, and 4 are used later in the installation process.
2. Make sure the value for PROVISO_HOME is the same one you used when
you installed the database configuration. If it is not, type 1 at the Choice
prompt and correct the directory location.
3. The script displays the component installation menu:
Tivoli Netcool Performance Manager Database Configuration <version number> - [component installation]
1. Database
2. Channel
3. Aggregation set
0. Exit
Choice [1]> 3
4. Type 3 at the Choice prompt and press Enter. The script displays the
installation environment menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> - [installation environment]
1. PROVISO_HOME : /opt/Proviso
2. ORACLE_HOME : /opt/oracle/product/11.2.0/
3. ORACLE_SID : PV
4. DB_USER_ROOT : -
5. Continue
0. Previous Menu
Choice [5]> 4
5. Type 4 at the Choice prompt and press Enter to specify the same value for
DB_USER_ROOT that you specified in previous chapters. This manual's
default value is PV.
Enter value for DB_USER_ROOT [] : PV
6. Make sure that the values for PROVISO_HOME, ORACLE_HOME, and
ORACLE_SID are the same ones you entered in previous chapters. Correct the
values if necessary.
7. Type 5 at the Choice prompt and press Enter. Setup displays the Aggregation
Set installation options menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> - [installation options]
1. List of configured aggregation sets
2. List of installed aggregation sets
3. Number of the aggregation set to install : -
4. Channel where to install aggregation set : (all)
5. Start date of aggregation set : <Current Date>
6. Continue
0. Back to options menu
Choice [6]>
Note: Do not change the value for option 4. Retain the default value, "all."
8. The first time you use any menu option, the script prompts for the password
for PV_ADMIN:
Enter password for PV_ADMIN : PV
9. Use menu option 1 to list the aggregation sets you configured in Configuring
aggregation sets on page 181. The script displays a list similar to the
following:
============= LIST OF CONFIGURED AGGREGATION SETS ============
Num Effect Time Name Time lag
---- --------------------- ------------------------------------------- --------
0 01-01-1970 00:00:00 GMT +0h
04-01-2007 07:00:00 EST_2007_DST -4h
04-02-2006 07:00:00 EST_2006_DST -4h
04-03-2005 07:00:00 EST_2005_DST -4h
04-04-2010 07:00:00 EST_2010_DST -4h
04-05-2009 07:00:00 EST_2009_DST -4h
04-06-2008 07:00:00 EST_2008_DST -4h
09-29-2004 22:00:00 EST_2004_DST -4h
10-25-2009 06:00:00 EST_2009 -5h
10-26-2008 06:00:00 EST_2008 -5h
10-28-2007 06:00:00 EST_2007 -5h
10-29-2006 06:00:00 EST_2006 -5h
10-30-2005 06:00:00 EST_2005 -5h
10-31-2004 06:00:00 EST_2004 -5h
10-31-2010 06:00:00 EST_2010 -5h
1 04-01-2007 08:00:00 CST_2007_DST -5h
04-02-2006 08:00:00 CST_2006_DST -5h
04-03-2005 08:00:00 CST_2005_DST -5h
04-04-2010 08:00:00 CST_2010_DST -5h
04-05-2009 08:00:00 CST_2009_DST -5h
04-06-2008 08:00:00 CST_2008_DST -5h
09-29-2004 23:00:00 CST_2004_DST -5h
10-25-2009 07:00:00 CST_2009 -6h
10-26-2008 07:00:00 CST_2008 -6h
10-28-2007 07:00:00 CST_2007 -6h
10-29-2006 07:00:00 CST_2006 -6h
10-30-2005 07:00:00 CST_2005 -6h
10-31-2004 07:00:00 CST_2004 -6h
10-31-2010 07:00:00 CST_2010 -6h
2 aggregation sets configured
Press enter...
10. Select option 2 to list the aggregation sets already installed. The output is
similar to the following:
============== LIST OF CREATED AGGREGATION SETS ==============
============ X: created ==== #: partially created ============
Channels 0
| 1
AggSets -----------------------------------------------------------------------
| 0 X
Press enter...
Remember that aggregation set 0 is installed automatically when you install
the database channel, and it remains installed even if you modified set 0 by
assigning a different timezone.
11. Select option 3 to designate the aggregation set to install. In the examples
above, set 0 is already installed, but set 1 is waiting to be installed. Thus, enter
1 at the prompt:
Enter Aggregation Set number between 1 and 998 : 1
12. By default, the date to start collecting data on the designated aggregation set
is today's date. You can instead use menu option 5 to designate a future date
to start collecting data. Set an appropriate future date for your installation.
Enter start date (GMT) using Oracle format yyyy.mm.dd-hh24 : 2009.08.31-00
WARNING! Start date is set in the future.
No loading is allowed until start date (GMT) is reached.
Do you confirm the start date (Y/N) [N] ? y
13. When all menu parameters are set, type 6 at the Choice prompt and press
Enter.
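The GMT start date in step 12 uses the Oracle yyyy.mm.dd-hh24 format. Today's date in that form can be generated with the standard date command, for example to check the format before entering it:

```shell
# Print the current GMT date/hour in the Oracle yyyy.mm.dd-hh24 format
# that the aggregation-set installer expects for the start date.
start_date=$(date -u +%Y.%m.%d-%H)
echo "$start_date"
```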
Edit aggregation set parameters file
How to edit the aggregation set parameters file
About this task
Procedure
1. The script prompts that it will start the editor specified in the EDITOR
environment variable and open the aggregation set parameters file. Press Enter.
An editing session opens containing the aggsetreg.udef configuration file, as
shown in this example:
#
# Tivoli Netcool Performance Manager Datamart
# <Current Date>
#
#
# Channel C01: GROUPS DAILY aggregates storage
#
[AGGSETREG/C01/1DGA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLE/HISTORIC]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLESPACE/CURRENT]
CREATION_PATH=/raid_2/oradata
EXTENT_SIZE=64K
SIZE=10M
#
[AGGSETREG/C01/1DGA/TABLESPACE/HISTORIC]
CREATION_PATH=/raid_3/oradata
EXTENT_SIZE=64K
SIZE=10M
#
# Channel C01: RESOURCES DAILY aggregates storage
#
[AGGSETREG/C01/1DRA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
...
2. Do not change this file unless you have explicit instructions from
Professional Services. If Professional Services provided guidelines for
advanced configuration of your aggregation sets, make the suggested edits.
Save and close the file.
3. When you close the configuration file, the script checks the file parameters and
starts installing the aggregation set. The installation takes three to ten minutes,
depending on the speed of your server.
A message like the following is displayed when the installation completes:
P R O V I S O A g g r e g a t i o n S e t <version number>
||||||||||||||||||||||||
AggregationSet installed
Tivoli Netcool Performance Manager Aggregation Set 1 on Channel 1 successfully installed !
Press Enter...
Linking DataView groups to timezones
Once you have configured and installed the aggregation sets, you must link
DataView groups to a timezone.
About this task
You can link a defined timezone to a calendar you create in the DataView GUI, or
the CME Permanent calendar (a 24-hour calendar).
When you link a group to a specific timezone and calendar, all subgroups inherit
the same timezone and calendar.
Procedure
v Best practice:
Use a separate calendar for each timezone. If you link multiple timezones to the
same calendar, a change to one timezone calendar setting will affect all the
timezones linked to that calendar.
v To link a group to a timezone:
1. Create a calendar with the DataView GUI, or use the default CME
Permanent calendar.
2. Create a text file (for example, linkGroupTZ.txt) with the following format:
Each line has three fields separated by |_|.
The first field is a DataView group name.
The second field is a timezone name from the Tivoli Netcool Performance
Manager internal timezone list. See Configuring aggregation sets on
page 181 for a list of timezone names.
The third field is the name of the calendar you create, or CME Permanent.
The following example line demonstrates the file format:
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
Enter as many lines as you have timezone entries in your aggregation set
configuration.
3. At a shell prompt, enter a command similar to the following, which uses the
Resource Manager's CLI to link the group to the timezone:
resmgr -import segp -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To unlink a timezone:
Use the resmgr command. For example:
resmgr -delete linkGroupSE_TZC -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To review timezone to group associations:
Use the resmgr command. For example:
resmgr -export segp -colNames "name tz.name cal.name" -file link.txt
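The mapping file from step 2 can be generated and sanity-checked before it is fed to resmgr. In this sketch the group name is illustrative; the timezone and calendar names must match your aggregation-set configuration, and the resmgr invocation itself is the one shown above:

```shell
# Build a sample group-to-timezone mapping file for resmgr -import.
# The group name here is illustrative only.
cat > linkGroupTZ.txt <<'EOF'
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
EOF
# Each line must contain exactly three |_|-delimited fields.
fields=$(awk -F'[|]_[|]' 'NR==1 {print NF-1}' linkGroupTZ.txt)
echo "fields=$fields"
# resmgr -import segp -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
```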
Appendix D. Deployer CLI options
Deployer CLI options and their descriptions.
To run the deployer from the command line, enter the following command:
# ./deployer.bin [options]
For example, the following command performs a minimal deployment installation:
# ./deployer.bin -Daction=poc
The deployer.bin command accepts the following options:
Option Description
-Daction=mib
Used with -Daction=poc to complete a
minimal deployment installation on an AIX
system. For example:
./deployer.bin -Daction=mib -Daction=poc
-Daction=patch
Performs a patch installation of Tivoli
Netcool Performance Manager. See
Appendix H, Installing an interim fix, on
page 215 for more information.
-Daction=poc
Performs a minimal deployment installation.
See Chapter 5, Installing as a minimal
deployment, on page 111 for more
information.
-Daction=resume
Resumes an interrupted installation at the
current step. This option works only when
the /tmp/ProvisoConsumer
directory is available. See Resuming a
partially successful first-time installation on
page 109 for more information.
-Daction=uninstall
Uninstalls all components marked "To Be
Removed" in the current topology file. See
Uninstalling the entire Tivoli Netcool
Performance Manager system on page 161
for more information.
-DCheckUser=true
Specifies whether the deployer checks to see
if it is running as root before performing
install operations. Possible values are true
and false. For most install scenarios,
running the deployer as the operating
system root is required. You can use this
option to override root user checking.
Default is true.
-DOracleClient=oracle_client_home
Enables you to specify the Oracle client
home, so the wizard screen that prompts
you for that information is skipped.
-DOracleServerHost=hostname
Specifies the hostname or IP address where
the Oracle server resides.
-DOracleServerPort=port
Specifies the communication port used by
the Oracle server. Default is 1521.
Copyright IBM Corp. 2006, 2012 191
-DOracleSID=sid
Specifies the Oracle server ID. Default is PV.
-DOracleAdminUser=admin_user
Specifies the administrator username for the
Oracle server. Default is PV_INSTALL.
-DOracleAdminPassword=admin_password
Specifies the administrator password for the
Oracle server. Default is PV.
-DPrimary=true
Indicates that the deployer is running on the
primary server. This option is used by the
Topology Editor to invoke the deployer. Use
this option to force a channel configuration
update in the database.
-DTarget=id
Instructs the deployer to install or uninstall
the component specified using the id
parameter, regardless of the current status of
the component in the topology. Use this
option to force an install or uninstall of a
component in a high-availability (HA)
environment, or when fixing an incomplete
or damaged installation. Table 12 contains a
list of possible values for the id parameter.
-DTopologyFile=topology_file_path
Tells the deployer to use the specified
topology file instead of prompting for the
file.
-DTrace=true
Causes the deployer to log additional
diagnostic information.
-DUsehostname=hostname
Enables you to override the hostname that
the deployer uses to define where it is
running. This option is useful when
hostname aliasing is used and none of the
hostnames listed in the topology.xml file
match the hostname of the machine where
the deployer is running.
Using the -DTarget option
How to use the -DTarget option to force an install or uninstall of a component or
damaged installation.
You can use the -DTarget option to force an install or uninstall of a component in a
high-availability (HA) environment, or when fixing an incomplete or damaged
installation. The -DTarget option uses the following syntax:
deployer.bin -DTarget=id
where id is a supported target identifier code.
If you are using the -DTarget option to force the uninstall of a component, you
must also specify the -Daction=uninstall option when you run the deployer
application. The following example shows how to force the uninstallation of
DataMart on the local system:
deployer.bin -Daction=uninstall -DTarget=DMR
Table 12 shows the possible values for the id parameter.
Table 12: Target Identifier Codes
Value Description
DB Instructs the deployer to install the database
setup components on the local machine.
DM Instructs the deployer to install the
DataMart component on the local machine.
DV Instructs the deployer to install the
DataView component on the local machine.
DC Instructs the deployer to install the
DataChannel component on the local
machine.
DL Instructs the deployer to install the
DataLoad component on the local machine.
TIP Instructs the deployer to install TIP on the
local machine.
DBR Instructs the deployer to remove the
database setup components from the local
machine. Requires the -Daction=uninstall
option.
DMR Instructs the deployer to remove the
DataMart component from the local
machine. Requires the -Daction=uninstall
option.
DVR Instructs the deployer to remove the
DataView component from the local
machine. Requires the -Daction=uninstall
option.
DCR Instructs the deployer to remove the
DataChannel component from the local
machine. Requires the -Daction=uninstall
option.
DLR Instructs the deployer to remove the
DataLoad component from the local
machine. Requires the -Daction=uninstall
option.
TIPR Instructs the deployer to remove TIP
from the local machine. Requires the
-Daction=uninstall option.
DBU Instructs the deployer to upgrade the
database setup components on the local
machine.
DMU Instructs the deployer to upgrade the
DataMart component on the local machine.
DVU Instructs the deployer to upgrade the
DataView component on the local machine.
DCU Instructs the deployer to upgrade the
DataChannel component on the local
machine.
DLU Instructs the deployer to upgrade the
DataLoad component on the local machine.
When you run the deployer using the -DTarget option, note the following:
v The deployer does not perform component registration in the versioning tables
of the database.
v The deployer does not upload modified topology information to the database.
v The deployer does not allow you to select nodes other than the local
node in the Node Selection panel.
v In the case of an uninstall, the deployer does not remove the component from
the topology.
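The pairing rule described above — removal targets (the *R codes) must be combined with -Daction=uninstall — can be captured in a small wrapper. The deployer.bin name and the DMR code come from this appendix; the wrapper itself is an illustrative sketch:

```shell
#!/bin/sh
# Illustrative wrapper: target codes ending in R uninstall components,
# so pair them with -Daction=uninstall automatically.
target="DMR"
action=""
case "$target" in
  *R) action="-Daction=uninstall" ;;
esac
cmd="./deployer.bin $action -DTarget=$target"
echo "$cmd"
# eval "$cmd"   # uncomment to run the deployer for real
```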
Appendix E. Secure file transfer installation
This section describes the OpenSSH installation, configuration, and testing process
in detail for each platform.
Overview
How to install OpenSSH for Secure File Transfer (SFTP) among Tivoli Netcool
Performance Manager components.
This document explains how to install OpenSSH for Secure File Transfer (SFTP)
among Tivoli Netcool Performance Manager components. You must be proficient in
your operating system and have a basic understanding of public/private key
encryption when working with SFTP. For the purposes of this document, an SFTP
"client" is the node that initiates the SFTP connection and login attempt, while the
SFTP "server" is the node that accepts the connection and permits the login
attempt. This distinction is important for generating public/private keys and
authorization, as the SFTP server should have the public key of the SFTP client in
its authorized keys file (authorized_keys). This process is described in more detail later.
For Tivoli Netcool Performance Manager to use SFTP for the remote execution of
components and file transfer, OpenSSH must be configured for key-based
authentication when connecting from the Tivoli Netcool Performance Manager
account on the client (the host running the Tivoli Netcool Performance Manager
process that needs to use SFTP) to the account on the server. In addition, the host
keys must be established such that the host key confirmation prompt is not
displayed during the connection.
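The client-side behavior described above (key-based login with no host-key confirmation prompt) corresponds to ssh_config options like the following. This fragment is a sketch with an assumed host name; whether you set these options globally or per host is a site decision:

```
# ~/.ssh/config on the SFTP client (host name is illustrative)
Host datachannel-host
    PubkeyAuthentication yes
    StrictHostKeyChecking yes   # host key must already be in known_hosts
    BatchMode yes               # fail rather than prompt for input
```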
Enabling SFTP
The use of SFTP is supported in Tivoli Netcool Performance Manager.
Tivoli Netcool Performance Manager SFTP can be enabled for a single component,
set of components, or all components as needed. This table shows the Tivoli
Netcool Performance Manager components that support SFTP:
Client                      Server                      Description
Node on which DataChannel   All other DataChannel       Installer can use SFTP to
resides                     nodes to be installed       install Tivoli Netcool
                                                        Performance Manager software
                                                        to remote locations.
Bulk Collector              DataMart Inventory          Transfer of inventory files.
FTE                         Bulk Collector              FTE transfers files from BCOL
                                                        to CME.
FTE                         DataLoad SNMP collector     Transfer of SNMP data.
CME/LDR                     Remote CME                  Downstream CME and LDR both
                                                        transfer files from remote CME.
Note: This document is intended only as a guideline to installing OpenSSH. Tivoli
Netcool Performance Manager calls the ssh binary file directly and uses the SFTP
protocol to transfer files, so the essential Tivoli Netcool Performance Manager
requirement is that OpenSSH is used and public key authentication is enabled. The
following procedures are examples of one method of installing and configuring
OpenSSH. The precise method and final configuration for your site should be
decided by your local operating system security administrator.
For detailed information about OpenSSH and its command syntax, visit the
following URL:
http://www.openssh.com/manual.html
Installing OpenSSH
This section describes the steps necessary to install OpenSSH on AIX, Solaris and
Linux.
Note: The following sections refer to the earliest supported version of the required
packages. Refer to the OpenSSH documentation for information about updated
versions.
AIX systems
To install OpenSSH on AIX systems you must follow all steps described in this
section.
Download the required software packages
How to download the required packages.
Procedure
1. In your browser, enter the following URL:
http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html
2. From the AIX Toolbox for Linux Applications page, follow the instructions to
download the following files to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
v prngd - Pseudo Random Number Generation Daemon (prngd-0.9.29-
1.aix5.1.ppc.rpm or later).
v zlib - zlib compression and decompression library (zlib-1.2.2-
4.aix5.1.ppc.rpm or later).
3. From the AIX Toolbox for Linux Applications page, click the AIX Toolbox
Cryptographic Content link.
4. Download the following file to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
openssl-0.9.7g-1.aix5.1.ppc.rpm or later
5. In your browser, enter the following URL:
http://sourceforge.net/projects/openssh-aix
6. From the OpenSSH on AIX page, follow the instructions to search for and
download the following file to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
openssh-4.1p1_53.tar.Z or later
Install the required packages
How to install the required packages on each Tivoli Netcool Performance Manager
system where SFTP is to be used.
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded by using the following command:
# cd /download/location
3. Run the RPM Packaging Manager for each package, in the specified order,
using the following commands:
# rpm -i zlib
# rpm -i prngd
# rpm -i openssl
4. Uncompress and untar the openssh tar file by entering the following
commands:
# uncompress openssh-4.1p1_53.tar.Z
# tar xvf openssh-4.1p1_53.tar
5. Using the System Management Interface Tool (SMIT), install the openssh
package.
6. Exit from SMIT.
Configure OpenSSH server to start up on system boot
After installing the OpenSSH server and client, you must configure the OpenSSH
server to start up on system boot.
Procedure
To configure the server to start on system boot, modify the /etc/rc.d/rc2.d/Ssshd
init script as follows:
#! /usr/bin/sh
#
# start/stop the secure shell daemon
case "$1" in
start)
# Start the ssh daemon
if [ -x /usr/local/sbin/sshd ]; then
echo "starting SSHD daemon"
/usr/local/sbin/sshd &
fi
;;
stop)
# Stop the ssh daemon
kill -9 `ps -eaf | grep /usr/local/sbin/sshd | grep -v grep |
awk '{print $2}' | xargs`
;;
*)
echo "usage: sshd {start|stop}"
;;
esac
Solaris systems
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager
on Solaris systems.
The version of SSH installed with the Solaris 10 operating system is not supported.
Note: The following sections refer to the current version of the required packages.
Refer to the OpenSSH documentation for information about updated versions.
To install OpenSSH on Solaris systems, follow all steps described in this section.
Download the required software packages
How to download the required packages.
Procedure
1. In your browser, enter the following URL: http://www.sunfreeware.com
2. From the Freeware for Solaris page, follow the instructions to download the
following files to each Tivoli Netcool Performance Manager system where SFTP
is to be used. Ensure that you download the correct files for your version of
Solaris.
v gcc - Compiler. Ensure that you download the full Solaris package and not
just the source code (gcc-3.2.3-sol9-sparc-local.gz or later).
v openssh - SSH client (openssh-4.5p1-sol-sparc-local.gz or later).
v openssl - SSL executable files and libraries (openssl-0.9.8d-sol9-sparc-local.gz
or later).
v zlib - zlib compression and decompression library (zlib-1.2.3-sol9-sparc-
local.gz or later).
What to do next
Note: To use OpenSSH on Solaris, ensure that libcrypto.so.0.9.8 is present,
not libcrypto.so.1.0.0.
Install the required software packages
How to install the required software packages.
About this task
To install the required packages, do the following on each Tivoli Netcool
Performance Manager system where SFTP is to be used:
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded using the following command:
# cd /download/location
3. Copy the downloaded software packages to /usr/local/src, or a similar
location, using the following commands:
# cp gcc-version-sparc-local.gz /usr/local/src
# cp zlib-version-sparc-local.gz /usr/local/src
# cp openssl-version-sparc-local.gz /usr/local/src
# cp openssh-version-sparc-local.gz /usr/local/src
4. Change your working directory to /usr/local/src using the following
command:
# cd /usr/local/src
5. Install the gcc compiler:
a. Uncompress gcc using the following command:
gunzip gcc-version-sparc-local.gz
b. Add the gcc package using the following command:
pkgadd -d gcc-version-sparc-local
6. Install the zlib compression library:
a. Uncompress zlib using the following command:
gunzip zlib-version-sparc-local.gz
b. Add the zlib package using the following command:
pkgadd -d zlib-version-sparc-local
7. Install the OpenSSL executable and binary files:
a. Uncompress OpenSSL using the following command:
gunzip openssl-version-sparc-local.gz
b. Add the OpenSSL package using the following command:
pkgadd -d openssl-version-sparc-local
8. Install the OpenSSH client:
a. Uncompress OpenSSH using the following command:
gunzip openssh-version-sparc-local.gz
b. Add the OpenSSH package using the following command:
pkgadd -d openssh-version-sparc-local
c. Create a group and user for sshd using the following commands:
groupadd sshd
useradd -g sshd sshd
9. Optional: Remove Sun SSH from the path and link OpenSSH:
a. Change your working directory to /usr/bin using the following command:
cd /usr/bin
b. Move the Sun SSH files and link the OpenSSH files using the following
commands:
# mv ssh ssh.sun
# mv ssh-add ssh-add.sun
# mv ssh-agent ssh-agent.sun
# mv ssh-keygen ssh-keygen.sun
# mv ssh-keyscan ssh-keyscan.sun
# ln -s /usr/local/bin/ssh ssh
# ln -s /usr/local/bin/ssh-add ssh-add
# ln -s /usr/local/bin/ssh-agent ssh-agent
# ln -s /usr/local/bin/ssh-keygen ssh-keygen
# ln -s /usr/local/bin/ssh-keyscan ssh-keyscan
Configure OpenSSH server to start up on system boot
After installing the OpenSSH server and client, you must configure the OpenSSH
server to start up on system boot.
About this task
To configure the server to start on system boot:
Procedure
1. Create or modify the /etc/init.d/sshd init script as follows:
#! /bin/sh
#
# start/stop the secure shell daemon
case "$1" in
start)
# Start the ssh daemon
if [ -x /usr/sbin/sshd ]; then
echo "starting SSHD daemon"
/usr/sbin/sshd &
fi
;;
stop)
# Stop the ssh daemon
/usr/bin/pkill -x sshd
;;
*)
echo "usage: /etc/init.d/sshd {start|stop}"
;;
esac
2. Check that /etc/rc3.d/S89sshd exists (or any sshd startup script exists) and is
a soft link to /etc/init.d/sshd.
If not, create it using the following command:
ln -s /etc/init.d/sshd /etc/rc3.d/S89sshd
Linux systems
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager.
OpenSSH is installed by default on any RHEL system.
Configuring OpenSSH
This section describes how to configure the OpenSSH server and client.
Configuring the OpenSSH server
How to configure the OpenSSH server on Linux.
About this task
To configure the OpenSSH Server, follow these steps on each Tivoli Netcool
Performance Manager system where SFTP is to be used:
Procedure
1. Log in to the system as root.
2. Change your working directory to the location of the OpenSSH server
configuration file (/usr/local/etc/sshd_config by default) by using the
following command:
# cd /usr/local/etc
3. Using the text editor of your choice, open the sshd_config file. The following
is an example of an sshd_config file:
#***************************************************************************
# sshd_config
# This is the sshd server system-wide configuration file. See sshd(8)
# for more information.
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
Port 22
Protocol 2
ListenAddress 0.0.0.0
HostKey /usr/local/etc/ssh_host_dsa_key
SyslogFacility AUTH
LogLevel INFO
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
RhostsAuthentication no
RhostsRSAAuthentication no
HostbasedAuthentication no
PasswordAuthentication yes
ChallengeResponseAuthentication no
Subsystem sftp /usr/local/libexec/sftp-server
#****************************************************************
4. Locate the Protocol parameter. For security purposes, it is recommended that
this parameter is set to Protocol 2 as follows:
Protocol 2
5. Locate the HostKeys for protocol version 2 parameter and ensure that it is
set as follows:
HostKey /usr/local/etc/ssh_host_dsa_key
6. Locate the PubkeyAuthentication parameter and ensure that it is set as follows:
PubkeyAuthentication yes
7. Locate the PasswordAuthentication parameter and ensure that it is set as
follows:
PasswordAuthentication yes
8. Locate the Subsystem parameter and ensure that the SFTP subsystem and path
are correct. Using defaults, the Subsystem parameter appears as follows:
Subsystem sftp /usr/local/libexec/sftp-server
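Steps 4 through 8 can be verified mechanically. The following sketch greps an sshd_config file for each required setting; a sample file is written here so the check is self-contained, but on a real system you would point CONF at /usr/local/etc/sshd_config instead:

```shell
# Sample sshd_config so the check can be tried anywhere; on a
# real system set CONF=/usr/local/etc/sshd_config instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
Protocol 2
HostKey /usr/local/etc/ssh_host_dsa_key
PubkeyAuthentication yes
PasswordAuthentication yes
Subsystem sftp /usr/local/libexec/sftp-server
EOF

# Each required parameter must appear uncommented with the expected value.
for setting in 'Protocol 2' \
               'HostKey /usr/local/etc/ssh_host_dsa_key' \
               'PubkeyAuthentication yes' \
               'PasswordAuthentication yes' \
               'Subsystem sftp /usr/local/libexec/sftp-server'; do
    if grep -q "^$setting$" "$CONF"; then
        echo "OK: $setting"
    else
        echo "MISSING: $setting"
    fi
done
```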
Configuring OpenSSH client
How to configure OpenSSH client.
The OpenSSH client requires no configuration if it is used in its default form.
The default location for the OpenSSH client configuration file is
/usr/local/etc/ssh_config.
Generating public and private keys
By default, OpenSSH generates public and private keys for the root user. You must
generate public and private keys with the Tivoli Netcool Performance Manager
user for the SFTP functions to work in Tivoli Netcool Performance Manager.
About this task
To generate public and private keys:
Procedure
1. Log in as pvuser on the node that will be the SFTP client. This node is
referred to as SFTPclient in these instructions, but you must replace
SFTPclient with the name of your node.
2. Create an .ssh directory in the home directory of the Tivoli Netcool
Performance Manager user, set permissions to read, write, and execute for
the owner only (700), then change to the directory using the following commands:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cd ~/.ssh
3. Generate a DSA public and private key with no passphrase (DSA encryption
is used as an example). The following example shows a UNIX server called
SFTPclient:
$ /usr/local/bin/ssh-keygen -t dsa -f SFTPclient -P ""
Generating public/private dsa key pair.
Your identification has been saved in SFTPclient.
Your public key has been saved in SFTPclient.pub.
The key fingerprint is: 77:67:2f:34:d4:2c:66:db:9b:1f:9a:36:fe:c7:07:c6 pvuser@SFTPclient
4. The previous command generates two files, SFTPclient (the private key) and
SFTPclient.pub (the public key). Copy the private key to id_dsa in the ~/.ssh
directory by entering the following command:
$ cp -p ~/.ssh/SFTPclient ~/.ssh/id_dsa
id_dsa identifies the node when it contacts other nodes.
5. To permit Tivoli Netcool Performance Manager components on SFTPclient to
communicate, you must append the contents of the SFTPclient.pub key file to
the file authorized_keys in the ~/.ssh directory by using the following
commands:
cd ~/.ssh
cat SFTPclient.pub >> authorized_keys
6. Log on to the other node that will be the SFTP server. This node is referred to
as SFTPserver in these instructions, but you must replace SFTPserver with the
name of your node.
7. Repeat Step 1 through Step 5 on the SFTPserver node, replacing SFTPclient
with SFTPserver.
8. Copy (with FTP, scp, or a similar utility) the public key from SFTPclient to
SFTPserver and append the contents of the key file to the file authorized_keys
in the ~/.ssh directory. If you cut and paste lines, be careful not to introduce
carriage returns.
Use the following FTP session as an example:
SFTPserver:~/.ssh> ftp SFTPclient
Connected to SFTPclient.
220 SFTPclient FTP server (SunOS 5.8) ready.
Name (SFTPclient:pvuser): pvuser
331 Password required for pvuser.
Password:
230 User pvuser logged in.
ftp> bin
200 Type set to I.
ftp> get .ssh/SFTPclient.pub
200 PORT command successful.
150 Binary data connection for .ssh/SFTPclient.pub
226 Binary Transfer complete.
local: .ssh/SFTPclient.pub remote: .ssh/SFTPclient.pub
ftp> quit
221 Goodbye.
SFTPserver:~/.ssh> cat SFTPclient.pub >> authorized_keys
9. Optional: If you want to set up bidirectional SFTP, repeat Step 8, but from the
SFTPserver node to the SFTPclient node.
Note: This step is not needed for Tivoli Netcool Performance Manager.
Use the following FTP session as an example:
SFTPclient:~/.ssh> ftp SFTPserver
Connected to SFTPserver.
220 SFTPserver FTP server (SunOS 5.8) ready.
Name (SFTPserver:pvuser): pvuser
331 Password required for pvuser.
Password:
230 User pvuser logged in.
ftp> bin
200 Type set to I.
ftp> get .ssh/SFTPserver.pub
200 PORT command successful.
150 Binary data connection for .ssh/SFTPserver.pub
226 Binary Transfer complete.
local: .ssh/SFTPserver.pub remote: .ssh/SFTPserver.pub
ftp> quit
221 Goodbye.
SFTPclient:~/.ssh> cat SFTPserver.pub >> authorized_keys
10. When finished, the SFTPclient and SFTPserver should look similar to the
following:
SFTPclient:~/.ssh> ls -al ~/.ssh
total 10
drwx------ 2 pvuser pvuser 512 Nov 25 16:56 .
drwxr-xr-x 28 pvuser pvuser 1024 Nov 25 15:25 ..
-rw------- 1 pvuser pvuser 883 Nov 25 15:21 id_dsa
-rw-r--r-- 1 pvuser pvuser 836 Nov 25 16:33 known_hosts
SFTPserver:~/.ssh> ls -al ~/.ssh
total 10
drwx------ 2 pvuser pvuser 512 Nov 25 16:56 .
drwxr-xr-x 28 pvuser pvuser 1024 Nov 25 15:25 ..
-rw------- 1 pvuser pvuser 883 Nov 25 15:21 id_dsa
-rw-r--r-- 1 pvuser pvuser 836 Nov 25 16:33 known_hosts
The important files are:
v authorized_keys, which contains the public keys of the nodes that are
authorized to connect to this node
v id_dsa, which contains the private key of the node it is on
v known_hosts, which contains the public keys of the node that you want to
connect to
For security, the private key (id_dsa) should be -rw------- (600). Likewise, the
public key Node<number>.pub, authorized_keys, and known_hosts should be
-rw-r--r-- (644).
The directory itself should be drwx------ (700).
Note: The directory that contains the .ssh directory might also need to be
writable only by the owner.
11. The first time you connect using SSH or SFTP to the other node, it will ask if
the public key fingerprint is correct, and then save that fingerprint in
known_hosts. Optionally, you can manually populate the client's known_hosts
file with the server's public host key (by default, /usr/local/etc/
ssh_host_dsa_key.pub).
For large-scale deployments, a more efficient and reliable procedure is:
a. From one host, ssh to each SFTP server and accept the fingerprint. This
builds a master known_hosts file with all the necessary hosts.
b. Copy that master file to every other SFTP client.
Note: If the known_hosts file has not been populated and secure file
transfer (SFTP) is attempted through Tivoli Netcool Performance Manager,
SFTP fails with vague errors.
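Steps 2 through 5, together with the permission settings described in step 10, can be collected into one script. This is a sketch only: it runs against a scratch HOME so it can be tried safely, and it generates an RSA key because recent OpenSSH releases no longer generate DSA keys (the guide's own example uses DSA; the node name SFTPclient is carried over from the text):

```shell
# Scratch HOME so the sketch does not touch a real account; on a
# live system, run this as pvuser against the real home directory.
HOME=$(mktemp -d)
NODE=SFTPclient

# Step 2: create ~/.ssh with owner-only permissions.
mkdir "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
cd "$HOME/.ssh"

# Step 3: generate a key pair with no passphrase (RSA here; the
# guide uses DSA, which newer OpenSSH releases no longer generate).
ssh-keygen -q -t rsa -f "$NODE" -P ""

# Step 4: install the private key under the name the ssh client
# looks for (id_rsa for an RSA key, id_dsa for a DSA key).
cp -p "$NODE" id_rsa
chmod 600 id_rsa

# Step 5: authorize the node's own public key.
cat "$NODE.pub" >> authorized_keys
chmod 644 authorized_keys

ls -al "$HOME/.ssh"
```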
Testing OpenSSH and SFTP
How to test OpenSSH and SFTP.
About this task
For the following tests, the commands normally work without asking for a
password. If you are prompted for a password, public/private key encryption is
not working.
Ensure that you specify the full path to the ssh and sshd binary files. Otherwise,
you might use another previously installed SSH client or server.
To test OpenSSH and SFTP:
Procedure
1. On both nodes, kill any existing sshd processes and start the sshd process from
the packages you installed, by entering the following commands:
pkill -9 sshd
/usr/local/sbin/sshd &
The path can be different depending on the installation.
2. From SFTPclient, run the following command:
/usr/local/bin/ssh SFTPserver
3. From SFTPclient, run the following command:
/usr/local/bin/sftp SFTPserver
4. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/ssh SFTPclient
5. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/sftp SFTPclient
6. If all tests allow you to log in without specifying a password, follow the Tivoli
Netcool Performance Manager instructions on how to enable SFTP in each
Tivoli Netcool Performance Manager component. Make sure to specify the full
path to SSH in the Tivoli Netcool Performance Manager configuration files. In
addition, make sure the user that Tivoli Netcool Performance Manager is run as
is the same as the user that you used to generate keys.
Troubleshooting
How to troubleshoot OpenSSH and its public keys.
About this task
If you find that OpenSSH is not working properly with public keys:
Procedure
1. Check the ~/.ssh/known_hosts file on the node acting as the SSH client and
make sure that the server host name and key information are present and correct.
2. Check the ~/.ssh/authorized_keys file on the node acting as the SSH server
and make sure that the client public key is present and correct. Ensure that the
permissions are -rw-r--r--.
3. Check the ~/.ssh/id_dsa file on the node acting as the SSH client and make
sure that the client's private key is present and correct. Ensure that the
permissions are -rw-------.
4. Check the ~/.ssh directory on both nodes to ensure that the permissions on
the directories are -rwx------.
5. Check for syntax errors (common ones are misspelling authorized_keys and
known_hosts without the "s" at the end). In addition, if you copied and pasted
keys into known hosts or authorized keys files, you probably have introduced
carriage returns in the middle of a single, very long line.
6. Check the ~ (home directory) permissions to ensure that they are only writable
by owner.
7. If the permissions are correct, kill the sshd process and restart in debug mode
as follows:
pkill -9 sshd
/usr/local/sbin/sshd -d
8. Test SSH again in verbose mode on the other node by entering the following
command:
/usr/local/bin/ssh -v SFTPserver
9. Read the debugging information about both client and server and
troubleshoot from there.
10. Check the log file /var/adm/messages for additional troubleshooting
information.
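The cut-and-paste problem described in step 5 can be detected mechanically. The following sketch flags carriage returns in a key file; the sample file created here deliberately contains one, and on a real system you would point FILE at ~/.ssh/authorized_keys or ~/.ssh/known_hosts:

```shell
# Sample key file with a deliberate stray carriage return; point
# FILE at ~/.ssh/authorized_keys on a real system.
FILE=$(mktemp)
printf 'ssh-dss AAAAB3...example\r\nkey pvuser@SFTPclient\n' > "$FILE"

CR=$(printf '\r')
if grep -q "$CR" "$FILE"; then
    echo "carriage returns found in $FILE - rejoin the split key line"
else
    echo "no carriage returns in $FILE"
fi
```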
Netcool/Proviso SFTP errors
Errors that you may encounter.
In the Tivoli Netcool Performance Manager log files, you might see the following
errors:
v [DC10120] FTPERR error: incompatible version, result: sftp status:
SSH2_FX_FAILURE:: incompatible version, log:
This error indicates that the SSH server (sshd) is SSH2 rather than OpenSSH.
OpenSSH is required for Tivoli Netcool Performance Manager to function
correctly.
v [DC10120] FTPERR error: bad version msg, result: sftp status:
SSH2_FX_NO_CONNECTION:: connection not established - check ssh
configuration, log:
This error indicates that the SSH configuration is incorrect or the wrong version
of the SSH server (sshd) is running. OpenSSH is required for Tivoli Netcool
Performance Manager to function correctly.
Appendix F. LDAP integration
Detailed information on how to configure LDAP (Lightweight Directory Access
Protocol) as a default authentication/authorization mechanism for Tivoli Netcool
Performance Manager.
Supported LDAP servers
A list of LDAP servers supported by Tivoli Netcool Performance Manager.
v Domino 6.5.4, 7.0
v IBM Tivoli Directory Server 6.3
v IBM z/OS Security Server 1.6, 1.7
v IBM z/OS.e Security Server 1.6, 1.7
v Microsoft Active Directory 2000, 2003
v Novell eDirectory 8.7.3, 8.8
LDAP configuration
The configuration of LDAP as a default authentication and authorization
mechanism for Tivoli Netcool Performance Manager is achieved using the
Topology Editor.
Enable LDAP configuration
The LDAP configuration option becomes available after adding a Tivoli Integrated
Portal to your topology.
About this task
Note: The process of adding a Tivoli Integrated Portal to the topology using the
Topology Editor is described in Add a Tivoli Integrated Portal on page 90.
To enable LDAP configuration:
Procedure
1. In the Logical view of the Topology Editor, select the Tivoli Integrated Portal
that you have added.
2. Open the Advanced Properties tab.
3. Select the check box opposite the
IAGLOBAL_USER_REGISTRY_LDAP_SELECTED property. This enables LDAP.
4. In the Advanced Properties tab, enter the LDAP connection details. This requires
that you populate the following fields:
v WAS_USER_NAME: This is the name you have registered as the Tivoli
Integrated Portal user. For example, "tipadmin".
v IAGLOBAL_LDAP_BIND_DN: The username specified must have read and
write permissions on the LDAP directory. Typically this will be an LDAP
administrator username. For example, "cn=Directory Manager".
v IALOCAL_LDAP_BIND_PASSWORD: This is the password for the Bind
Distinguished Name specified.
v IAGLOBAL_LDAP_NAME: This is the LDAP server host name. Should the
LDAP server be behind a firewall, make sure that this host is allowed to
connect to it.
v IAGLOBAL_LDAP_PORT: For example, "1389".
v IAGLOBAL_LDAP_REPOSITORY_ID: This is a string used to identify the
LDAP repository, which can be set to the string of your choice.
v IAGLOBAL_LDAP_BASE_ENTRY: The distinguished name of a base entry in
LDAP.
For example, for IBM the base entry is o=IBM, c=US.
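Taken together, a completed set of these advanced properties might look like the following sketch (every value is illustrative, not a default; substitute the details of your own LDAP server):

```
WAS_USER_NAME=tipadmin
IAGLOBAL_USER_REGISTRY_LDAP_SELECTED=true
IAGLOBAL_LDAP_BIND_DN=cn=Directory Manager
IALOCAL_LDAP_BIND_PASSWORD=secret
IAGLOBAL_LDAP_NAME=ldap.example.com
IAGLOBAL_LDAP_PORT=1389
IAGLOBAL_LDAP_REPOSITORY_ID=TNPMLDAP
IAGLOBAL_LDAP_BASE_ENTRY=o=IBM,c=US
```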
Verifying the DataView installation
Verify that the correct users are within the correct groups.
About this task
When the DataView installation is complete, it should have created two users and
two groups in LDAP:
Users:
v tnpm
v tnpmScheduler
Groups:
v tnpmUsers
v tnpmAdministrators
Procedure
Verify from the UI that the users tnpm and tnpmScheduler are members of the
tnpmAdministrators group.
Assigning Tivoli Netcool Performance Manager roles to LDAP
users
To successfully authenticate your LDAP user, you need to assign them to one of
the appropriate roles
Procedure
To successfully authenticate your LDAP user, you need to assign them to one of
the following roles:
v tnpmUser
v tnpmAdministrator
This can be done by the tipadmin user, by navigating to Users and Groups > User
Roles, and assigning the correct roles.
Alternatively, the tipcli.sh command can be used to assign roles to the user.
<tip_location>/profiles/TIPProfile/bin/tipcli.sh MapRolesToUser --username
<tip_admin_user> --password <tip_admin_password> --userID <userUniqueId> --rolesList <roleName>
where <userUniqueId> is the concatenation of username and realm in which user
information is stored.
For example:
<tip_location>/profiles/TIPProfile/bin/tipcli.sh MapRolesToUser --username
<tip_admin_user> --password <tip_admin_password> --userID
uid=<user_name>,dc=<server>,dc=<server>,dc=<company>,dc=com --rolesList
tnpmUser
Roles specific to an application, such as tnpmUser for Tivoli Netcool Performance
Manager, are not stored in LDAP. Roles are stored in a flat XML file in the TIP
directory. For example, if you assign a Tivoli Netcool Performance Manager role to
an LDAP user on tip_instance1, you must also assign the same role to the user on
tip_instance2. Otherwise, the user cannot authenticate on tip_instance2.
Alternatively, you can assign the tnpmUser role to the tnpmUsers group on
tip_instance1. If the user is a member of this group, the user can then
authenticate on tip_instance2.
Appendix G. Using silent mode
This appendix describes how to use silent mode to run the deployer or to install
the Topology Editor.
Sample properties files
The location and contents of the sample properties files.
The Silent subdirectory under the directory that contains the deployer.bin file
(for example, /opt/IBM/proviso/deployer/proviso/data/Silent), contains the
following sample properties files:
v Fresh.properties runs the deployer in standard mode.
v POC.properties runs the deployer in minimal deployment mode.
v topologyEditor.properties runs the Topology Editor installation in silent mode.
The Deployer
How to run the deployer in silent mode.
Running the Deployer in silent mode
How to use the Fresh.properties file to run the deployer.
About this task
Use the Fresh.properties file to run the deployer in standard mode, or the
POC.properties file to run the deployer in minimal deployment mode.
For example, to perform a silent fresh installation:
Procedure
1. Log in as root.
2. Log in to the machine on which you want to run the silent installation.
3. In a text editor, open the Fresh.properties file and make the following edits:
a. Set and verify that the Oracle client path is correct.
b. Set the DownloadTopology flag to True (1) or False (0).
c. If you set DownloadTopology flag to False, set the TopologyFilePath to the
location of your topology.xml file.
d. If you are running the deployer application on the same system where the
Topology Editor is installed, set the Primary flag to true.
e. Set and verify that the Database Access Information is correct.
f. Set and verify the PACKAGE_PATH variable for the relevant system:
On Solaris systems:
<DIST_DIR>/proviso/SOLARIS
On AIX systems:
<DIST_DIR>/proviso/AIX
On Linux systems:
<DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading
the Tivoli Netcool Performance Manager distribution to disk on page 48.
Your edited file will look similar to the following:
#Oracle client JDBC driver path
#------------------------------
OracleClient=/opt/oracle/product/11.2.0-client32/jdbc/lib
#Download Topology from Proviso database
# 1 is true
# 0 is false
#-------------
DownloadTopology=0
#Primary
# Specify if the configuration has to be updated
# Specify true if running the deployer on the same
# system where the Topology Editor is installed.
# true or false
#-------------
Primary=false
#Topology file
# If DownloadTopology=1 this parameter is ignored
#-------------
TopologyFilePath=/tmp/ProvisoConsumer/Topology.xml
#Database access information
#---------------------------
OracleServerHost=lab238053
OracleServerPort=1521
OracleSID=PV
OracleAdminUser=PV_INSTALL
OracleAdminPassword=PV
#Check Prerequisites Flag(true/false)
#Use true only for first time install
#-------------------------------------
CHECK_PREREQ=true
# Tivoli Netcool Performance Manager installation packages path
#---------------------------
PACKAGE_PATH=/cdrom/SOLARIS
#Silver Stream installation packages path
#-------------------------------------
SS_BUNDLE=/cdrom/exteNd40k
g. Write and quit the file.
4. Change to the /opt/IBM/proviso/deployer directory.
5. Run the following command:
./deployer.bin -i silent -f propertyFileWithPath
For example:
./deployer.bin -i silent -f /opt/IBM/proviso/deployer/proviso/data/Silent/Fresh.properties
Confirming the status of a silent install
How to verify a successful installation from the log status messages.
To verify a successful installation, you must analyze the /tmp/ProvisoConsumer/
log.txt file:
For a successful silent install, /tmp/ProvisoConsumer/log.txt will have the
following as the second last line:
ConsumerSilent null CMW3019I Silent installation completed
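That check can be scripted. The following sketch inspects a log file for the CMW3019I success message; a sample log is written here so the sketch is self-contained, and on a real system LOG would be /tmp/ProvisoConsumer/log.txt:

```shell
# Sample log standing in for /tmp/ProvisoConsumer/log.txt.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
...earlier installer output...
ConsumerSilent null CMW3019I Silent installation completed
InstallEngine exiting
EOF

# The success message is expected as the second-to-last line.
if tail -2 "$LOG" | grep -q 'CMW3019I Silent installation completed'; then
    echo "silent installation succeeded"
else
    echo "silent installation failed - inspect $LOG"
fi
```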
If the installation has not worked and one of the steps fails, you will see one of
the following errors:
v Product images not found during silent installation
v An installation step has failed during silent installation
v Silent installation suspended because a reboot is needed
v Engine main loop internal error
Restrictions
Deployer restrictions.
Note the following restrictions:
v The silent deployer does not support remote installations. You must manually
invoke the script on each machine.
v Silent resume is not supported. If you need to resume a partial silent installation,
use the -Daction=resume option to complete the installation using graphical
mode (the steps table). The step that originally failed might have been in the
middle of a step sequence that cannot be re-created by a subsequent -i silent
invocation.
The Topology Editor
You can also install the Topology Editor in silent mode.
About this task
The Topology Editor is installed with the installer named installer.bin, located
in:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/topologyEditor/Disk1/InstData/VM
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/topologyEditor/Disk1/InstData/VM
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM
<DIST_DIR> is the directory on the hard drive where you copied the contents of the
Tivoli Netcool Performance Manager distribution in Downloading the Tivoli
Netcool Performance Manager distribution to disk on page 48.
To install the Topology Editor in silent mode:
Procedure
1. Log in as root to the server on which you want to run the silent installation.
2. Change to the directory that contains the deployer.bin file (for example,
/opt/IBM/proviso/deployer), then change to the /proviso/data/Silent
subdirectory.
3. Using a text editor, open the topologyEditor.properties file and make the
following edits:
a. Set and verify that the Oracle client path is correct.
b. Set the DownloadTopology flag to True (1) or False (0).
c. If you set DownloadTopology flag to False, set the TopologyFilePath to the
location of your topology.xml file.
d. Set and verify that the Database Access Information is correct.
e. Set and verify the PACKAGE_PATH variable.
f. Write and quit the file.
4. Run the following command:
./installer.bin -i silent -f ..../silent/topologyEditor.properties
Appendix H. Installing an interim fix
This appendix describes how to install an interim fix (or patch) release of Tivoli
Netcool Performance Manager.
Overview
Interim fix installation overview.
Unlike major, minor, and maintenance releases, which are planned, patch releases
(interim fixes and fix packs) are unscheduled and are delivered under the
following circumstances:
v A customer is experiencing a "blocking" problem and cannot wait for a
scheduled release for the fix.
v The customer's support contract specifies a timeframe for delivering a fix for a
blocking problem and that timeframe does not correspond with a scheduled
release.
v Development determines that a patch is necessary.
Note: Patches are designed to be incorporated into the next scheduled release,
assuming there is adequate time to integrate the code.
Installation rules
Rules that apply to the installation of patches.
Note the following installation rules for patch installations:
v Apply fix to Database before any other components.
v Fixes for the Database and DataMart must be installed on that host.
v Fixes for the DataChannel, DataLoad, DataMart and DataView can be installed
remotely from the local host in a distributed system.
v Fix packs are installed on general availability (GA) products.
v Sequentially numbered fix packs can be installed on any fix pack with a lower
number.
v Interim fixes must be installed on the absolute fix pack.
The patch installer verifies that your installation conforms to these rules.
Behavior and restrictions
Behaviour and restrictions that apply to the installation of patches.
If remote installation of a component is not possible, the deployer grays out any
remote component host on the node selection page.
The maintenance deployer must run locally on each DataMart host to apply a
patch.
Before you begin
What you must do before you begin a patch installation.
A patch release updates the file system for the component that the patch is
intended for and updates the versioning information in the database.
To verify that the versioning was updated correctly for the components in the
database, you can run several queries both before and after the installation and
compare the results. For detailed information, see the Tivoli Netcool Performance
Manager Technical Note: Tools for Version Reporting document.
Installing a patch
How to install a patch.
About this task
To install a patch:
Procedure
1. You must have received or downloaded the maintenance package from IBM
Support. The maintenance package contains the Maintenance Descriptor File,
an XML file that describes the contents of the fix pack. Follow the instructions
in the README for the fix pack release to obtain the maintenance package
and unzip the files.
Note: For each .tar.gz file, you must unzip it, and then un-tar it. For
example:
gunzip filename.tar.gz
tar -xvf filename.tar
2. Log in as root.
3. Set and export your DISPLAY environment variable (see Setting up a remote
X Window display on page 36).
4. Start the patch deployer using one of the following methods:
From the launchpad:
a. Click the Start Tivoli Netcool Performance Manager Maintenance
Deployer option in the list of tasks.
b. Click the Start Tivoli Netcool Performance Manager Maintenance
Deployer link.
From the command line:
v Run the following command:
# ./deployer.bin -Daction=patch
5. The deployer displays a welcome page. Click Next to continue.
6. Accept the default location of the base installation directory of the Oracle
JDBC driver (/opt/oracle/product/11.2.0-client32), or click Choose to
navigate to another directory. Click Next to continue.
7. On the patch folder page, click Choose to select the patch you want to install.
8. Navigate to the directory that contains the files for the fix pack, and click into
the appropriate directory. Click Select to select that directory, then click Next
to continue.
9. A pop-up window asks whether you want to download the topology file.
Click Yes.
10. Verify that all of the fields for the database connection are filled in with the
correct values:
v Database hostname - Enter the name of the database host.
v Port - Specifies the port number used for communication with the database.
The default value is 1521.
v Database user - Specifies the username used to access the database. The
default value is PV_INSTALL.
v Database Password - Enter the password for the database user account (for
example, PV).
v SID - Specifies the SID for the database. The default value is PV.
Click Next.
11. When the topology has been downloaded from the database, click Next.
12. The node selection window shows the target systems and how the files will be
transferred. The table has one row for each machine where at least one Tivoli
Netcool Performance Manager component will be installed. Verify the settings,
then click Next to continue.
13. The deployer displays summary information about the installation. Review the
information, then click Next.
The deployer displays the table of installation steps.
14. Run through each installation step just as you would for a normal installation.
15. When all the steps have completed successfully, click Done to close the
wizard.
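The unpacking described in step 1 can be scripted when the maintenance package contains several .tar.gz files. This sketch builds a scratch package directory with one sample archive so it can be tried anywhere; on a real system you would instead cd into the directory where you saved the maintenance package:

```shell
# Scratch package directory with one sample archive, standing in
# for a real maintenance package.
PKG=$(mktemp -d)
mkdir "$PKG/payload"
echo "sample fix contents" > "$PKG/payload/readme.txt"
tar -cf "$PKG/fix1.tar" -C "$PKG" payload
gzip "$PKG/fix1.tar"
rm -r "$PKG/payload"

# For each .tar.gz file: gunzip it, then un-tar the result.
cd "$PKG"
for f in *.tar.gz; do
    gunzip "$f"
    tar -xf "${f%.gz}"
done
ls "$PKG"
```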
Appendix I. Error codes and log files
This appendix lists the Tivoli Netcool Performance Manager error messages and
log files.
See Appendix J, Troubleshooting, on page 239 for information about
troubleshooting problems with the Tivoli Netcool Performance Manager
installation.
Error codes
The following sections describe the error messages generated by the Deployer, the
Topology Editor, and InstallAnywhere.
Deployer messages
The Deployer messages.
Table 13 lists the error messages returned by the Tivoli Netcool Performance
Manager deployer.
Table 13: Deployer Messages
Error Code Description User Action
DataView Messages
GYMCI5000E A system command failed. A
standard UNIX system
command failed. These
commands are used for
standard system operations,
such as creating directories,
changing file permissions,
and removing files.
See the installation log for
more details.
GYMCI5002E The operating system is not
at the prerequisite patch
level. Some required
operating system patches are
not installed.
See the installation log for
details. Install the required
patches, then try the
installation again.
GYMCI5003E The Oracle configuration file,
tnsnames.ora, does not
include an entry for
SilverMaster.
Add an entry for
SilverMaster to the
tnsnames.ora file, then try
the installation again.
GYMCI5004E The Oracle configuration file,
tnsnames.ora, was not found.
The tnsnames.ora file must
be created and stored in the
$TNS_ADMIN directory.
Ensure that the file exists in
the correct location.
GYMCI5005E Unable to connect to the
Oracle database. It is possible
that a specified connection
parameter is incorrect, or the
Oracle server might not be
available.
See the installation log for
more details. Ensure that the
connection parameters you
are using are correct and that
the Oracle server is up and
running.
Copyright IBM Corp. 2006, 2012 219
Error Code Description User Action
GYMCI5006E An error occurred while
running the
DVOptimizerToRule.sql
script to initialize the
database. It is possible that
the Oracle database and
listener are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5007E An error occurred while
trying to remove entries for a
resource from a database
table. It is possible that the
Oracle database and listener
are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5008E An error occurred while
trying to remove version
information from a database
table. It is possible that the
Oracle database and listener
are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5009E An error occurred while
reading the configuration
file. The name of a parameter
or the format of the file is
incorrect.
Contact IBM Software
Support.
GYMCI5010E The file system does not
have sufficient free space to
complete the installation.
See the installation log for
more details. Ensure that you
have sufficient space on the
file system before retrying
the installation.
GYMCI5011E The DataView license file is
missing. The license file was
not found, but this file
should not be required. The
installation log will contain
more details of the error.
Contact IBM Software
Support.
GYMCI5012E A configuration file or
directory is missing.
See the installation log for
more details.
GYMCI5013E An error occurred while
creating a configuration file.
The file could not be created.
The installer failed to create
one of the required
configuration files.
See the installation log for
more details.
GYMCI5014E An error occurred while
updating a configuration file.
The file could not be
modified. The installer failed
to make a required
modification to one of the
configuration files.
See the installation log for
more details.
DataMart Messages
GYMCI5101E The DataMart installation
failed.
See the DataMart installer
logs for details.
220 IBM Tivoli Netcool Performance Manager: Installation Guide
Error Code Description User Action
Database Configuration Messages
GYMCI5201E The database installation
failed. See the installation log
for details.
See the root_install_dir/
database/install/log/
Oracle_SID/install.log file.
GYMCI5202E The database uninstallation
script failed because of a
syntax error. This script must
be run as oracle. For
example: ./uninstall_db
/var/tmp/PvInstall/
install.cfg.silent
Check the syntax and run
the script again.
GYMCI5204E The database could not be
removed because some
Oracle environment variables
are not correctly set. Some or
all of the Oracle environment
variables are not set (for
example, ORACLE_HOME,
ORACLE_SID, or
ORACLE_BASE).
Check that all the required
Oracle variables are set and
try again.
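A quick way to spot which variables are missing is to loop over them in the oracle user's shell. This is a sketch only; it checks exactly the variables named in the message, and any others your environment needs can be added to the list.

```shell
# List any required Oracle environment variables that are unset or empty.
# Run this in the oracle user's shell before retrying the removal.
check_oracle_env() {
  missing=""
  for var in ORACLE_HOME ORACLE_SID ORACLE_BASE; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Unset or empty:$missing"
  else
    echo "All required Oracle variables are set"
  fi
}

check_oracle_env
```

Any variable reported as unset or empty should be defined in the oracle user's environment file (for example, .profile or .bash_profile) before the uninstallation is retried.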
GYMCI5205E An error occurred when
trying to start the Oracle
database.
See the Oracle alert file for
possible startup errors.
Resolve any problems
reported in the log and try
again.
GYMCI5206E An error occurred when
trying to shut down the
Oracle database.
See the Oracle alert file for
possible shutdown errors.
Resolve any problems
reported in the log and try
again.
GYMCI5207E An error occurred while
querying the database to
determine the data files that
are owned by the database.
See the Oracle alert file for
details of errors. You might
need to manually delete
Oracle data files using
operating system commands.
DataChannel Messages
GYMCI5301E The database channel
installation failed. See the
installation log for details.
See the file
root_install_dir/channel/
install/log/Oracle_SID/
install.log.
GYMCI5401E An error occurred while
running a script.
See the message produced
with the error code for more
details.
GYMCI5402E Unable to find an expected
file.
See the message produced
with the error code for more
details.
GYMCI5403E The data in one of the files is
not valid.
See the message produced
with the error code for more
details.
GYMCI5404E Unable to find an expected
file or expected data.
See the message produced
with the error code for more
details.
GYMCI5405E Scripts cannot function
correctly because the
LD_ASSUME_KERNEL
variable is set.
Unset the variable and try
again.
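The variable can be cleared for the current session as sketched below; also remove any line that exports it from the user's shell profile so that it does not return at the next login.

```shell
# Clear LD_ASSUME_KERNEL for the current shell session.
if [ -n "${LD_ASSUME_KERNEL:-}" ]; then
  echo "LD_ASSUME_KERNEL was set to '${LD_ASSUME_KERNEL}'; unsetting it"
  unset LD_ASSUME_KERNEL
fi

# Verify that it is gone before re-running the scripts
if [ -z "${LD_ASSUME_KERNEL:-}" ]; then
  echo "LD_ASSUME_KERNEL is not set"
fi
```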
GYMCI5406E An action parameter is
missing.
See the message produced
with the error code for more
details.
GYMCI5407E An error occurred while
processing the tar command.
See the message produced
with the error code for more
details.
GYMCI5408E The product version you are
trying to install seems to be
for a different operating
system.
See the message produced
with the error code for more
details.
GYMCI5409E Unable to locate installed
package information for the
operating system.
See the message produced
with the error code for more
details.
GYMCI5410E A file has an unexpected
owner, group, or
permissions.
See the message produced
with the error code for more
details.
GYMCI5411E A problem was found by the
PvCheck module when
checking the environment.
See the message produced
with the error code for more
details.
GYMCI5412E The installation module
failed.
See the messages in standard
error for more details.
GYMCI5413E The patch installation failed. See the messages in standard
error for more details.
GYMCI5414E The remove action failed. See the messages in standard
error for more details.
GYMCI5415E An unrecoverable error
occurred while running the
script.
See the message produced
with the error code for more
details.
Dataload Messages
GYMCI5501E An error occurred when
running the script.
See the message produced
with the error code for more
details.
GYMCI5502E Unable to find an expected
file.
See the message produced
with the error code for more
details.
GYMCI5503E The data in one of the files is
not valid.
See the message produced
with the error code for more
details.
GYMCI5504E Unable to find an expected
file or expected data.
See the message produced
with the error code for more
details.
GYMCI5505E Scripts cannot function
correctly because the
LD_ASSUME_KERNEL
variable is set.
Unset the variable and try
again.
GYMCI5506E An action parameter is
missing.
See the message produced
with the error code for more
details.
GYMCI5507E An error occurred while
processing the tar command.
See the message produced
with the error code for more
details.
GYMCI5508E The product version you are
trying to install seems to be
for a different operating
system.
See the message produced
with the error code for more
details.
GYMCI5509E Unable to locate installed
package information for the
operating system.
See the message produced
with the error code for more
details.
GYMCI5510E A file has an unexpected
owner, group, or permissions.
See the message produced
with the error code for more
details.
GYMCI5511E A problem was found by the
PvCheck module when
checking the environment.
See the message produced
with the error code for more
details.
GYMCI5512E The installation module
failed.
See the message produced
with the error code for more
details.
GYMCI5513E The patch installation failed. See the messages in standard
error for more details.
GYMCI5514E The remove action failed. See the messages in standard
error for more details.
GYMCI5515E An unrecoverable error
occurred while running the
script.
See the message produced
with the error code for more
details.
Prerequisite Checkers: Operating System
GYMCI6001E The syntax of the check_os
script is not correct. The
specified component does
not exist. The syntax is:
check_os
PROVISO_COMPONENT
where
PROVISO_COMPONENT is
DL, DC, DM, DB, or DV.
Correct the syntax and try
again.
GYMCI6002E This version of IBM Tivoli
Netcool Performance
Manager is not supported on
the host operating system.
See the check_os.ini file for a
list of supported operating
systems.
GYMCI6003E The specified component
does not exist or is not
supported on this operating
system.
Ensure that you have
specified the correct
component. If you have, the
operating system must be
upgraded before the
component can be installed.
GYMCI6004E The operating system is not
at the prerequisite patch
level. Some required
operating system patches are
not installed.
Check the product
documentation for a list of
required patches. Apply any
missing patches and try
again.
GYMCI6005E The host operating system is
not supported for this
installation.
Perform the installation on a
supported operating system.
GYMCI6006E In the /etc/security/limits
file, some values are missing
or incorrect. Values must not
be lower than specified in
the check_os.ini file.
Check the values in the
check_os.ini and edit the
default stanza in the
/etc/security/limits file so
that valid values are
specified for all required
limits.
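On AIX, these limits are kept in the default stanza of the /etc/security/limits file. The stanza below is illustrative only: the values shown are placeholders, and the authoritative minimums are those listed in the check_os.ini file.

```
default:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = 2000
```

A value of -1 means unlimited. After the file is edited, the affected users must log out and log back in for the new limits to take effect.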
Prerequisite Checkers: Database
GYMCI6101E The syntax of the check_db
script is not correct. The
syntax is: check_db [client |
server] [new | upgrade]
[ORACLE_SID or
tnsnames.ora entry]
Correct the syntax and try
again.
GYMCI6102E The host operating system is
not supported for this
installation.
Perform the installation on a
supported operating system.
GYMCI6103E This version of the IBM
Tivoli Netcool
Performance Manager
database is not supported on
the current version of the
host operating system.
See the check_os.ini file for a
list of supported operating
system versions.
GYMCI6104E Some required Oracle
variables are missing or
undefined.
Check the Oracle users
environment files (for
example, .profile and
.bash_profile).
GYMCI6105E An Oracle binary is missing
or not valid.
Ensure that Oracle is
correctly installed.
GYMCI6106E The instance of Oracle
installed on the host is not at
a supported version.
Check the list of supported
Oracle versions in the
check_db.ini file.
GYMCI6107E Unable to contact the Oracle
server using the tnsping
utility with the specified
ORACLE_SID.
Check that your Oracle
listener is running on the
database server. Start the
listener if it is not running.
GYMCI6108E An Oracle instance is
running on the host where
you have requested a new
server installation.
Check whether you have
selected the correct host for a
new Oracle server
installation. If the selected
host is correct, remove the
existing Oracle instance first.
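One way to check for a running instance is to look for the Oracle pmon background process. The snippet below is a sketch and assumes the standard ora_pmon_SID process naming convention.

```shell
# Look for Oracle pmon background processes. The [o] bracket keeps grep
# from matching its own entry in the process list.
running=$(ps -ef | grep '[o]ra_pmon' || true)

if [ -n "$running" ]; then
  echo "Oracle instance(s) appear to be running:"
  echo "$running"
else
  echo "No running Oracle instance found on this host"
fi
```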
GYMCI6109E The number of bits (32 or 64)
for the Oracle binary does
not match the values defined
in the check_db.ini file.
Check the list of supported
Oracle versions in the
check_db.ini file.
GYMCI6110E The installation method
passed to the script is not
valid. Valid installation
methods are New and
Upgrade.
Pass the New or Upgrade
option to the script.
GYMCI6111E The installation type passed
to the script is not valid.
Valid installation methods
are Client and Server.
Pass the Client or Server
option to the script.
GYMCI6112E The script was run with
options set for a new server
installation, but an Oracle
instance configuration file
(init.ora) already exists for
the specified SID. The
presence of the init.ora file
indicates the presence of an
Oracle instance.
Check that a new server
installation is the correct
action for this SID. If it is,
remove the existing Oracle
instance configuration files.
GYMCI6113E A symbolic link was found
in the Oracle home path. The
Oracle home path cannot
contain any symbolic links.
Remove any symbolic links.
Specify the Oracle home path
using only real directories.
GYMCI6114W Cannot contact the Oracle
Listener. The tnsping utility
was run to check the Oracle
Listener status, but the
Listener could not be
contacted.
Check that the Oracle
Listener is running. Start it if
necessary.
GYMCI6115E The Solaris semaphore and
shared memory check failed.
The sysdef command was
used to check the values for
semaphores and shared
memory. The command did
not report the minimum
value for a particular
semaphore or shared
memory.
Check that the required
/etc/system parameters are
set up for Oracle. Check that
the values of these
parameters meet the
minimum values listed in the
check_db.ini file.
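On Solaris, these kernel parameters are set in the /etc/system file. The entries below illustrate the format only; the values are placeholders, the actual minimums to use are those listed in the check_db.ini file, and the system must be rebooted after the file is changed.

```
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
```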
GYMCI6116E Could not find the bos.adt.lib
package in the COMMITTED
state. The package might not
be installed. The package is
either not installed or not in
a COMMITTED state.
Ensure that the bos.adt.lib
package is installed and
committed and then try
again.
GYMCI6117E Could not log in to the
database. The verify base
option was used. The option
attempts to log into the
database to ensure it is
running. However, the script
could not log in to the
database.
Check that the database and
Oracle Listener are up and
running. If not, start them.
GYMCI6118E The checkextc script failed.
The verify base option was
used. The option runs the
checkextc script to ensure
external procedure calls can
be performed.
Check that the Tivoli Netcool
Performance Manager
database was created
properly.
GYMCI6119E The tnsnames.ora file is
missing. A tnsnames.ora file
should exist in the
ORACLE_HOME/network/
admin directory.
Check that the tnsnames.ora
file exists in the
ORACLE_HOME/network/
admin directory. If it does
not, create it.
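If the file has to be created by hand, a minimal entry has the following shape. The SID (PV), host name, and port shown are placeholders; substitute the values used by your database.

```
PV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PV)
    )
  )
```

Running tnsping against the entry name then confirms that the entry resolves and that the listener responds.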
Minimal Deployment: Post-Installation Messages
GYMCI7500E An internal processing error
occurred in the script.
Check the logs and the
output from the script. Look
for incorrect configuration or
improper invocation.
GYMCI7501E The required configuration
or messages files for the
poc-post-install script are not
in the same directory as the
script. These files should be
unpacked by the installer
together with the script.
Check for errors that
occurred during the
installation steps.
GYMCI7502E An environment file is
missing or is in the wrong
location.
Check the poc-post-install
configuration file. The
missing environment file and
expected path will be
identified in the log file.
GYMCI7503E The SNMP DataLoad did not
start. The SNMP DataLoad
process (pvmd) failed to
start.
Check the SNMP DataLoad
log for errors during startup.
GYMCI7504E The network inventory
failed. New devices cannot
be discovered unless the
inventory runs successfully.
Check the inventory log for
errors. Ensure the DISC
server and SNMP DataLoad
(Collector) processes are
running.
GYMCI7505E The Report Grouping
operation failed. This action
does not depend on any
external application
processes. The database must
be running, and correct
DataMart grouping rule
definitions are required.
Check the inventory log file
for more details of the
Report Grouping failure.
GYMCI7506E The DataChannel command
line failed. It is possible that
the CNS, CMGR, and AMGR
processes are not running.
Ensure that the required
processes are running. Check
the proviso.log for details of
the failure.
GYMCI7507E The Report User was not
created. The Web user will
not be able to view reports.
The DataMart resmgr utility
is used to add this
configuration to the
database. It is possible that
the database is not running.
Ensure that the database is
running, and check for error
logs in the DataMart logs
directory.
GYMCI7508E Failed to associate a Report
User to a group. The report
user is associated with a
group to allow the user to
view reports. The DataMart
resmgr utility is used to add
this configuration to the
database. It is possible that
the database is not running.
Ensure that the database is
running, and check for error
logs in the DataMart logs
directory. Ensure that the
specified report group exists.
GYMCI7509E A report user could not be
deleted from the database.
Check for error and trace
logs in the DataMart logs
directory.
GYMCI7510E Failed to create a Web User.
The user will not be able to
authenticate with the
Web/application server.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running.
GYMCI7511E The Web group could not be
created, and the Web user
might not be properly
configured to view reports.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running.
GYMCI7512E Failed to associate the Web
User with a group. The Web
user might not be properly
configured to view reports
unless successfully associated
with a group.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running. This step relies on
the database component
only.
GYMCI7513E Failed to delete Web Users.
Web user authentication was
not removed.
Check the Web/application
server logs.
GYMCI7514E The Channel Naming Service
failed to start.
Cross-application
communication cannot
function.
Check for walkback or error
files in the DataChannel log
or state directory.
GYMCI7515E The central LOG server
failed to start. Logging for
DataChannel will be
unavailable.
Check for walkback or error
files in the DataChannel log
or state directory.
GYMCI7516E The Channel Manager failed
to start. DataChannel
applications cannot be
started or stopped.
Application status will be
unavailable.
Check the proviso.log file for
errors. Check for walkback
or error files in the
DataChannel log or state
directory.
GYMCI7517E The Application Manager
failed to start. DataChannel
applications cannot be
started or stopped.
Application status will be
unavailable.
Check the proviso.log file for
errors. Check for walkback
or error files in the
DataChannel log or state
directory.
GYMCI7518E Failed to create the DV user
group. The DV user will
remain in the Orphans
group.
Check the poc-post-install
log in /var/tmp for more
details on the error
condition.
GYMCI7519E Failed to associate the DV
user to the DV group. The
DV user will remain in the
Orphans group.
Check the poc-post-install
log in /var/tmp for more
details on the error
condition.
GYMCI7520E The Web Application server
is not running or took too
long to start up.
Start up the Web Application
server as documented.
GYMCI7597E The MIB-II Technology Pack
jar file was not found in the
specified directory.
Add the MIB-II Technology
Pack jar to the directory.
Remove other jar files and
try again.
GYMCI7598E Too many jar files are present
in the specified directory.
Only two jar files can be
present in the directory: the
ProvisoPackInstaller.jar and
the MIB-II Technology Pack
jar.
Remove the other jar files
and try again.
GYMCI7599E The Technology Pack
installer failed. Check the
Technology Pack installer
logs for details.
Installer Action Messages and IA Flow Messages
GYMCI9998E Unable to find a message for
the key. The message was
not retrieved from the
message catalog.
See the installation log for
more details.
GYMCI9999E An unknown error occurred
for the component name
with the error code code. The
message could not be
retrieved from the catalog.
See the installation log for
more details.
GYMCI9001E An error occurred during
installation. An exception has
been generated during an
installation step.
See the installation log for
more details.
GYMCI9002E An unrecoverable error
occurred when running the
command command.
See the installation log for
more details.
GYMCI9003E An unrecoverable error
occurred while running a
command.
See the installation log for
more details.
GYMCI9004E An error occurred while
connecting to the database.
See the installation log for
more details.
GYMCI9005E An error occurred while
performing a database
operation.
See the installation log for
more details.
GYMCI9006E Remote File Transfer has
been disabled.
To continue, change the step
property to Allow Remote
Execution and run the step
again, or manually transfer
the directory to the host.
When the transfer is
completed, change the step
status to Success and
continue the installation.
GYMCI9007E An error occurred while
remotely connecting to
target. There are connection
problems with the host.
See the installation log for
more details.
GYMCI9008E An error occurred while
connecting to target. There
are connection problems with
the host.
See the installation log for
more details.
GYMCI9009E An error occurred while
copying install_dir.
See the installation log for
more details.
GYMCI9010E Remote Command Execution
has been disabled.
To continue, change the step
property to Set Allow
Remote Execution and run
the step again. Alternatively,
manually transfer the
directory to the host. When
the transfer is completed,
change the step status to
Success and continue the
installation.
GYMCI9011E An error occurred during file
creation.
See the installation log for
more details.
GYMCI9012E An error occurred while
loading the discovered
topology file.
See the installation log for
more details.
GYMCI9013E An error occurred while
loading the topology file.
See the installation log for
more details.
GYMCI9014E The installation engine
encountered an
unrecoverable error.
See the installation log for
more details.
GYMCI9015E An error occurred while
saving the topology file.
See the installation log for
more details.
GYMCI9016E The installer cannot proceed
with the installation because
there is insufficient disk
space on the local host.
See the installation log for
more details.
GYMCI9017E The installer cannot
download the topology from
the specified database. Verify
that the Tivoli Netcool
Performance Manager
database exists and that it
has been started. If it does
not exist, launch the installer,
providing a topology file.
Ensure that the correct host
name, port, and SID were
specified and that the
database has been started.
GYMCI9018E The installer cannot connect
to the specified database
because of
incorrect credentials.
Ensure that you provide the
correct user name and
password.
GYMCI9019W The installer could not
establish a connection to the
specified database. Check
that the Tivoli Netcool
Performance Manager
database can be contacted.
Click Next to proceed
without checking the current
environment status.
Check that the Tivoli Netcool
Performance Manager
database can be contacted.
GYMCI9020E The database connection
parameters do not match
those in the topology file.
Ensure that you provide the
correct parameters.
GYMCI9021E An error occurred while
loading the Oracle client jar.
See the installation log for
more details.
GYMCI9022E The configuration file name
was not found. The step
cannot run.
See the installation log for
more details.
GYMCI9023W There appear to be no
differences between the
desired topology state and
the current state of the Tivoli
Netcool Performance
Manager installation. The
installer shows this message
when it determines there is
no work that it can do.
Normally, this occurs when
the Tivoli Netcool
Performance Manager system
is already at the desired
state. However, it can also
occur when there are
component dependencies
that are not satisfied.
See the installation log for
more details.
GYMCI9024E The operating system
specified for this node in the
topology file is not correct.
Correct the topology file.
GYMCI9025E The path is not valid or you
do not have permissions to
write to it.
Correct the parameter and
try again.
GYMCI9026E The path is not a valid
Oracle path. The sqlplus
command could not be
found.
Correct the parameter and
try again.
GYMCI9027E The specified port is not
valid.
Correct the parameter and
try again.
GYMCI9028E At least one parameter is
null.
Specify values for the
required parameters.
GYMCI9029E The specified host name
contains unsupported
characters.
Ensure that host names
include only supported
characters.
GYMCI9030E The specified host cannot be
contacted.
Ensure that the host name is
correct and check that the
host is available.
GYMCI9031E The path does not exist on
the local system.
Correct the path and try
again.
GYMCI9032E An error occurred while
saving the topology. It has
not been uploaded to the
Tivoli Netcool Performance
Manager database. This error
occurs when there is a
database connection error or
when the Tivoli Netcool
Performance Manager
database has not yet been
created.
See the log file for further
details.
GYMCI9033E One of the following
parameters must be set to 1:
param1 param2
Check the log file for further
details. Redefine the
parameters and try again.
GYMCI9034E An error occurred while
creating mount point
directories.
See the log file for further
details.
GYMCI9035E An error occurred while
changing the ownership or
the group of mount point
directories.
See the log file for further
details.
GYMCI9036E The machine hostname was
not found in the Tivoli
Netcool Performance
Manager model
(topology.xml file). The
machine where the installer
is running is not part of the
Tivoli Netcool Performance
Manager topology.
If a host name alias is used,
make the machine host name
match the host name in the
model. Alternatively, use the
option
-DUsehostname=hostname to
override the machine host
name used by the installer.
GYMCI9037E The Deployer version you
are using is not compatible
with the component that you
are trying to install.
Use a Deployer at a version
that supports the
deployment of the
component you are trying to
install.
GYMCI9038E The XML file cannot be read
or cannot be parsed.
Ensure the file is not
corrupted. See the log file for
more details.
GYMCI9039E The deployment cannot
proceed, because an error
occurred while the deployment
plan was being generated.
See the log file for more
details. Check that there is
sufficient disk space and that
the Deployer images are not
corrupted.
GYMCI9040E The Deployer cannot manage
the indicated component on
the specified node.
See the log file for more
details about the condition
that was detected.
GYMCI9041E The user ID you specified is
not defined on the target
system.
Check that you have
specified the correct user ID.
GYMCI9042E You specified a host that is
running on an unsupported
platform.
Check that you have
specified the correct host
name.
GYMCI9043E The value you specified is
not supported.
Specify one of the supported
values.
Topology Editor messages
The Topology Editor messages.
Table 14 lists the error messages returned by the Topology Editor.
Table 14: Topology Editor Messages
Error Code Description User Action
GYM0001E A connection error was
caused by an SQL failure
when running the report.
Details are logged in the
trace file. There is a
connection problem with the
database. Possible problems
include: The database is not
running. The database
password provided when the
engine was created is wrong
or has been changed.
Check the error log and trace
files for the possible cause of
the problem. Check that the
database is up and that the
connection credentials are
correct. Correct the problem
and try the operation again.
GYMCI0000E Folder name containing
technology pack metadata
files was not found. The
specified folder does not
exist.
Ensure that you have the
correct location for the
technology pack metadata
files and try the operation
again.
GYMCI0001E An internal error, associated
with the XML parser
configuration, occurred.
Contact IBM Software
Support.
GYMCI0002I No item has been found that
satisfies the filtering criteria.
Ensure that you enter the
correct filtering criteria and
try the operation again.
GYMCI0003E An error occurred when
reading XML file name. The
XML file might be corrupt or
in an incorrect format.
Ensure that you have
selected the correct file and
try the operation again.
GYMCI0004E The input value must be an
integer.
Correct the input value and
try the operation again.
GYMCI0005E An unexpected element was
found when reading the
XML file.
Ensure that you have
selected the correct file and
try the operation again.
GYMCI0006E A value must be specified. Correct the input value and
retry the operation.
GYMCI0007E The value must represent a
log filter matching regular
expression expression.
Correct the input value and
try the operation again.
GYMCI0008E Metadata file name was not
found. The specified file does
not exist.
Ensure that you have the
correct file name and path
and retry the operation.
GYMCI0009E Metadata file name is
corrupted.
Contact IBM Software
Support.
GYMCI0010E Metadata file name was
already imported. Do you
want to replace it?
Click Yes to replace the file
or No to cancel the
operation.
GYMCI0011E Object name was not found
in the repository. The
specified object does not
exist.
Ensure that you have the
correct object name and try
the operation again.
GYMCI0012E The specified value must
identify an existing directory.
The specified directory does
not exist.
Ensure that you have the
correct directory name and
try the operation again.
GYMCI0013E Removing object from host in
Physical View.
No user action required.
GYMCI0014E File name does not exist. Ensure that you have the
correct file name and try the
operation again.
GYMCI0015E An unexpected error
occurred writing file name.
See the trace file for details.
Ensure that there is sufficient
space to write the file in the
file system where the
Topology Editor is running.
GYMCI0016E The user or password that
you specified is wrong.
Correct the login credentials
and try the operation again.
GYMCI0017E The value specified for at
least one of the following
fields is not valid: host name,
port, or SID.
Correct the input value or
values and try the operation
again.
GYMCI0018E The file name is corrupted. Select a valid XML file.
GYMCI0019E An unexpected error
occurred when retrieving
data from the database. See
the trace file for details.
Ensure that the database is
up and running and that you
can connect to it.
GYMCI0020E An unexpected error
occurred when parsing file
name. See the trace file for
details.
Select a valid XML file.
GYMCI0021E An unexpected error
occurred. See the trace file
for details.
Contact IBM Software
Support.
GYMCI0022E The input value must be a
boolean.
Correct the input value and
try the operation again.
GYMCI0023E The specified value must be
one of the following
operating systems: AIX,
SOLARIS, or Linux.
Correct the input value and
try the operation again.
GYMCI0024E The value must be a software
version number in the format
n.n.n or n.n.n.n. For example
7.1.2, or 7.1.2.1.
Correct the input value and
try the operation again.
GYMCI0025E The value must be an integer
in the range minValue to
maxValue, inclusive.
Correct the input value and
try the operation again.
GYMCI0026E The value must be a
comma-separated list of
strings.
Correct the input value and
try the operation again.
GYMCI0027E The value must be a file size
expressed in kilobytes. For
example, 1024K.
Correct the input value and
try the operation again.
GYMCI0028E The value must be a file size
expressed in megabytes. For
example, 512M.
Correct the input value and
try the operation again.
GYMCI0029E The value must be a file size
expressed in kilobytes or
megabytes. For example
1024K or 512M.
Correct the input value and
try the operation again.
GYMCI0030E The value must be an FTP or
SFTP connection string. For
example,
ftp://
username:password@hostname/directory.
Correct the input value and
try the operation again.
GYMCI0031E The value must be a
comma-separated list of
directories. For example,
/opt, /var/tmp, /home.
Correct the input value and
try the operation again.
GYMCI0032E The value cannot be a fully
qualified domain name, an
IP address, or a name
containing a hyphen or
period.
Supply the unqualified host
name without the domain.
Do not use the IP address or
a name that contains
hyphens.
GYMCI0033E Metadata file name contains
a technology pack with the
wrong structure.
Contact IBM Software
Support.
GYMCI0034E The value must be in the
format YYYY-MM-DD and
cannot be earlier than
1970-01-01 or later than the
current date.
Specify a date that is within
the range and in the correct
format.
GYMCI0035E The meta-data file contains
a technology pack with the
wrong structure.
Obtain a valid meta-data file
and try again.
GYMCI0036E The value must be in the
format YYYY-MM-DD and
cannot be earlier than
1970-01-01 or later than the
current date.
Correct the input value and
retry the operation.
GYMCI0037E The operation failed because
the specified file does not
exist.
Ensure that the file name
and path you specified is
correct and retry the
operation.
GYMCI0038E The operation failed because
of an error while validating
the host name mappings file.
See the trace file for more
details.
GYMCI0039E The host name retrieved by
the upgrade process is not
valid. Fully qualified host
names, IP addresses, and
names containing hyphens or
periods are not supported.
Correct the entry for the
specified host name in the
topology definition.
GYMCI0040E The upgrade process
retrieved two entries for the
specified host name. The
fully qualified host name is
not supported.
Remove the entry for the
fully qualified host name.
GYMCI0040W The upgrade process did not
retrieve a valid value for the
specified property. A default
value has been used.
Check that the default
assigned is appropriate and
change it if necessary.
GYMCI0041E No component is present on
the specified host.
Specify a host where at least
one component is present.
GYMCI0042E The operation failed because
the input value is not the
correct data type. The correct
data type is Long.
Correct the input value and
retry the operation.
GYMCI0043E The operation failed because
the input value is not valid.
Correct the input value and
retry the operation.
GYMCI0044W The upgrade process did not
retrieve a valid value for the
specified property. A default
value has been used.
Check that the default
assigned is appropriate and
change it if necessary.
InstallAnywhere messages
The InstallAnywhere messages.
Table 14 lists the InstallAnywhere messages.